Digiphile

Digiphile - Data advice that is Simple. Strategic. Actionable.

About us

Digiphile is a challenger law firm specialising in UK and EU data protection and cybersecurity advice. Our clients are global technology companies at the cutting edge of what they do. Law is complex, but it doesn’t have to be. Lawyers all too often provide data protection advice that is riddled with jargon, does not take account of the bigger picture, and is impossible to implement in practice. That’s not what we do. Digiphile’s advice always follows three guiding principles – to be “Simple. Strategic. Actionable.”

Website: www.digiphile.com
Industry: Law Practice
Company size: 2-10 employees
Headquarters: London
Type: Privately Held
Founded: 2023


Updates

  • Digiphile

    Today’s post is a deep dive into the key requirements of #NIS2 and how they impact your business.

    🔐 Cybersecurity 🔐 #NIS2 mandates a comprehensive risk management strategy that requires Essential and Important entities to assess cyber risks, run cybersecurity audits, maintain a business continuity plan to mitigate potential disruptions, verify the security of their supply chain, and much more.

    📣 Incident reporting 📣 #NIS2 requires Essential and Important entities to be on the lookout for ‘significant incidents’ and ‘cyber threats’. The former must be reported to competent authorities within 24 hours by submitting an early warning, after which a strict timeline applies to keep the authorities apprised.

    📬 Customer Notifications 📬 #NIS2 also requires Essential and Important entities to inform their customers of both significant incidents and cyber threats without undue delay.

    If you’re new to #NIS2, then be sure to also check out our earlier #NIS2 posts, which provide a brief overview of #NIS2 and its aims here: https://lnkd.in/eMkkWt8C and explain the types of entities it applies to here: https://lnkd.in/eJRyP6P5

    Thanks to our #NIS2 expert Marco Piana for his insights in preparing this post!
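    To make the reporting clock concrete, here is a minimal Python sketch of the deadline arithmetic for a ‘significant incident’. It assumes the Directive’s staged timeline (a 24-hour early warning, a fuller incident notification at 72 hours, and a final report within one month, approximated here as 30 days); exact deadlines depend on national implementing law, so treat this as illustrative only.

    ```python
    from datetime import datetime, timedelta

    # Illustrative sketch of the NIS2 staged reporting timeline for a
    # 'significant incident'. Deadlines run from the moment the entity
    # becomes aware of the incident. 'One month' is approximated as
    # 30 days; national implementing laws may differ. Not legal advice.

    def nis2_reporting_deadlines(aware_at: datetime) -> dict[str, datetime]:
        return {
            "early_warning": aware_at + timedelta(hours=24),
            "incident_notification": aware_at + timedelta(hours=72),
            "final_report": aware_at + timedelta(days=30),
        }

    if __name__ == "__main__":
        aware = datetime(2024, 7, 1, 9, 30)
        for step, due in nis2_reporting_deadlines(aware).items():
            print(f"{step}: due by {due:%Y-%m-%d %H:%M}")
    ```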

  • Digiphile

    Step up, step up - understand your #AIAct incident reporting responsibilities here 👇

    Phil Lee

    Managing Director, Digiphile - Data advice that is Simple. Strategic. Actionable.

    How does incident reporting work under the #AIAct? It's a bit more complex than you might imagine. The precise rules vary depending on whether the AI system is "high risk" or not, whether the incident itself is "serious", whether you are a provider or deployer, and, if a provider, whether you provide the impacted AI system or a #GPAI model integrated into it. Got all that? Don't worry if not - the Digiphile infographic below should help:

    • [Infographic: #AIAct incident reporting responsibilities]
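    As a rough way to see the branching, here is a toy Python sketch that walks through the questions listed above. The factor names mirror the post; the outcome strings are simplified placeholders for the regimes sketched in the infographic, not a statement of the legal rules.

    ```python
    from dataclasses import dataclass

    # Toy sketch of the #AIAct incident-reporting branching factors
    # described above. Outcome strings are simplified placeholders;
    # consult the infographic and the Act itself for the actual duties.

    @dataclass
    class Incident:
        high_risk_system: bool     # is the AI system "high risk"?
        serious: bool              # is the incident "serious"?
        role: str                  # "provider" or "deployer"
        gpai_model_provider: bool  # do you provide a GPAI model integrated into the system?

    def reporting_route(i: Incident) -> str:
        if not (i.high_risk_system and i.serious):
            return "Outside the core serious-incident regime (check other duties)"
        if i.role == "deployer":
            return "Deployer route: escalate to the provider (or onward as required)"
        if i.gpai_model_provider:
            return "GPAI-model provider route: a separate reporting track applies"
        return "Provider route: report to the relevant authority within the prescribed deadlines"

    if __name__ == "__main__":
        print(reporting_route(Incident(True, True, "provider", False)))
    ```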
  • Digiphile reposted this

    Michael Brown

    Privacy, Technology and AI Lawyer / Legal Consultant

    Are you interested in the likely trajectory of UK policy-making and law on data protection, AI and digital regulation? I expect the answer is a resounding yes! Therefore, here’s a quick and hopefully helpful overview of the Labour Party’s general election manifesto (https://lnkd.in/ecf4r2db) on these issues. The manifesto was published on Friday and, according to almost all polls, the party is expected to form a majority government following the general election on 4 July.

    1. Data Protection – the manifesto is mostly silent on the topic. For example, it makes no reference to resurrecting the Data Protection and Digital Information Bill, which failed to be enacted prior to the dissolution of Parliament. That said, the document does flag that “regulators are currently ill-equipped to deal with the dramatic development of new technologies” and so proposes the creation of a new “Regulation Innovation Office” which will “help regulators update regulation, speed up approval timelines, and co-ordinate issues that span existing boundaries”. Given the consistent and rapid development of data-driven technologies, it seems likely that the UK Information Commissioner’s Office will work closely with this new governmental office. The manifesto also suggests the creation of a new “National Data Library” to combine existing research programmes, enable access to public sector data and assist in the delivery of data-driven public services.

    2. AI – the Labour Party seems reasonably bullish on AI-related opportunities, e.g. by discussing the removal of planning barriers to the building of new data centres and highlighting the transformative impact of AI on ill-health detection and diagnosis. Notably, the manifesto highlights the party’s intent to introduce “binding regulation on the handful of companies developing the most powerful AI models”. No mention is made of any deeper or more cross-cutting AI legislation equivalent to the EU AI Act. The document also proposes a prohibition on the creation of sexually explicit deepfakes.

    3. Digital regulation – the manifesto describes the party’s plans to “build on” the Online Safety Act, accelerating the implementation of its provisions and exploring further measures to enable online safety, especially in relation to social media. The Labour Party further plans to provide coroners with “more powers to access information held by technology companies after a child’s death.”

    My plan for change – The Labour Party
    https://labour.org.uk

  • Digiphile

    The EU-US Data Privacy Framework (DPF) and UK-US Extension have been up and running, and working well, for some time now - but the Swiss-US DPF has been the outlier. What’s been going on?

    In order for transfers to be made under the Swiss-US DPF, the Swiss Federal Council needs to recognise the Swiss-US DPF as adequate. And, in order for *that* to happen, the US Attorney General first needed to designate Switzerland as a ‘qualifying state’ for the purposes of US Executive Order 14086. This designation would allow Switzerland to benefit from the redress mechanisms under EO 14086 (i.e. to raise DPF complaints to the US Civil Liberties Protection Officer and, if needed, the Data Protection Review Court).

    It now seems that Switzerland has achieved its ‘qualifying state’ designation from the US AG - see below and here (https://lnkd.in/e_YuBmHr) - so, with any luck, Swiss adequacy recognition for the DPF should happen soon. 🤞

    Sincere thanks to our good friend David Rosenthal at VISCHER for bringing this to our attention.

  • Digiphile reposted this

    Phil Lee

    Managing Director, Digiphile - Data advice that is Simple. Strategic. Actionable.

    Under Article 50(2) of the #AIAct, providers of generative AI systems have to ensure that any audio, image, video or text content they generate is “marked in a machine-readable format and detectable as artificially generated or manipulated”. We can expect regulatory guidance in the form of Codes of Practice to be produced by the AI Office. It’ll still be a couple of years before this provision comes into effect, but I saw an early indication of how it is likely to work just yesterday, following a post I had shared (https://lnkd.in/emWzK27d) in which I used a cartoon graphic of a robot standing next to an explosion, which I’d generated using ChatGPT.

    When I looked at the post later in the day, I noticed it sported a little “CR” icon in its top left corner. Curious as to what this was, I clicked on it and it opened a “Content credentials” dialogue box, indicating that the image had been generated by OpenAI. What’s most remarkable about this is that I didn’t download the image from ChatGPT (because the download was in a .webp format, not recognised by LinkedIn), but had instead taken a screenshot of it and saved it as a .png file. Despite this, LinkedIn still recognised it as AI generated.

    Following links in this content credentials dialogue box took me to a page explaining that LinkedIn has adopted the Coalition for Content Provenance and Authenticity (C2PA) standard to help identify AI-generated content, where it has been “cryptographically signed using C2PA Content Credentials”. The goal of C2PA “is to enable consumers to trace the source and authenticity of media content, including when generative AI use is detected.” If you’re interested in reading more, see here: https://lnkd.in/euXPjUvm

    So this is what the future of AI-generated content labelling will likely look like, and you should expect to see this type of label cropping up more often across digital media. In the meantime, good on OpenAI and LinkedIn for proactively adopting these measures now, rather than waiting until it becomes a legal obligation.

    • [Images: the post’s “CR” icon and the “Content credentials” dialogue box]
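    If you’d like to poke at these Content Credentials yourself, the C2PA project publishes an open-source command-line tool, c2patool, that reads a file’s manifest. Below is a minimal Python sketch that shells out to it; the tool’s exact flags and JSON output shape vary between versions, so treat the details as assumptions and check the project’s documentation.

    ```python
    import json
    import subprocess

    # Minimal sketch: inspect a file's C2PA Content Credentials using
    # c2patool, the C2PA project's open-source CLI (assumed installed
    # and on PATH). Output shape may vary between tool versions.

    def read_content_credentials(path: str) -> dict | None:
        result = subprocess.run(
            ["c2patool", path],   # prints the file's manifest store as JSON
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            return None           # no manifest found, or tool error
        return json.loads(result.stdout)

    if __name__ == "__main__":
        # 'robot.png' is a hypothetical file name for illustration.
        manifest = read_content_credentials("robot.png")
        if manifest is None:
            print("No Content Credentials found.")
        else:
            print(json.dumps(manifest, indent=2))
    ```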
  • Digiphile reposted this

    Phil Lee

    Managing Director, Digiphile - Data advice that is Simple. Strategic. Actionable.

    "It's been emotional", says Vinnie Jones towards the end of Lock, Stock and Two Smoking Barrels. Those words kept running through my mind as I read the AI Act's rules on emotional inference over the past few days. The AI Act defines the concept of an 'emotion recognition system' in Art 3(39) as (paraphrasing) an AI system that identifies or infers emotions "on the basis of ... biometric data". So far, so good. So where is this term used? Surely among the prohibited AI practices listed in Art 5 you might think, which expressly prohibit "AI systems to infer emotions of a natural person in the areas of workplace and education institutions"? Perhaps too in the high-risk AI systems listed in Annex III, which prohibit "AI systems intended to be used for emotion recognition" (i.e. not just in the workplace or education)? But here's the thing: while these provisions refer to "infer[ring] emotions" and "emotion recognition", neither use the defined term 'emotion recognition system' (which, remember, is defined by reference to the use of biometrics). This raises the questions: (a) whether this omission is intentional, and (b) if so, whether these provisions therefore capture AI systems enabling emotional inference *without* using biometric data - perhaps, say, by inferring emotion within written text? This thought seems easier to shoot down for Annex III (whose reference to "AI systems intended to be used for emotion recognition" is parked under an overall high-risk heading of 'Biometrics'). So an AI system that enables emotion recognition is high-risk only if it uses biometrics - or, in other words, an 'emotion recognition system'. To answer this question for prohibited AI systems, you have to read back into the recitals. Recital 44 seemingly addresses the point, saying: 👉 "There are serious concerns about the scientific basis of AI systems aiming to identify or infer emotions...", 👉 because "AI systems identifying or inferring emotions ... on the basis of their biometric data may lead to discriminatory outcomes", 👉 noting that "such systems" could lead to detrimental or unfavourable treatment of individuals and 👉 "Therefore, ... the use of AI systems intended to be used to detect the emotional state of individuals in situations related to the workplace and education should be prohibited". Stringing this all together, it seems the intention is only to prohibit emotional inference in the workplace on the basis of biometrics - again, an 'emotion recognition system'. (I'd also talk about some language in Recital 54 too but LinkedIn character limits prevent me from going into that.) In fact, for reasons unknown, the defined term 'emotion recognition system' is used in only 4 places in the Act (2x in the recitals, 1x in the definition, and 1x in Art 50), while references to emotional inference are used more broadly, introducing unhelpful ambiguity. Why not just use the defined term? It's been emotional indeed.

  • Digiphile

    In our previous post, we explained the purpose and key impacts of #NIS2 for organisations (https://lnkd.in/eMkkWt8C). Today, let’s look at which entities fall under the scope of its provisions.

    💯 Essential and Important Entities 💯 Entities that fall within the scope of #NIS2 are divided into two categories: ‘Essential Entities’ and ‘Important Entities’. This categorisation is based upon the criticality of their sector, the type of service they provide, and their size.

    🌓 What’s the difference between the two categories? 🌓 Both ‘Essential Entities’ and ‘Important Entities’ are subject to the same cybersecurity standards and the same cyber-incident reporting requirements (which we will analyse in more detail in our next post). So the question is: why distinguish between these two categories? Let’s have a look at who qualifies as an Essential Entity or an Important Entity and why that distinction matters…
