05 Jun 2023

Data Protection update - May 2023


Welcome to the Stephenson Harwood Data Protection bulletin, covering the key developments in data protection law from May 2023.

The month of May marked the fifth anniversary of the EU GDPR and it was commemorated with a bang.

Just days before the GDPR’s official birthday, Meta was served a record €1.2 billion fine for data protection breaches. The fine, the largest ever imposed under the GDPR, came after Ireland’s Data Protection Commission found the tech giant had violated the law by transferring personal data of EU Facebook users to the US without appropriate safeguards. 

Meta has six months to remediate the unlawful processing, including storage, in the US of personal data of European users. Andrea Jelinek, chair of the European Data Protection Board, said the “unprecedented fine” sends a “strong signal to organisations that serious infringements have far-reaching consequences.”

Still, Meta hasn’t taken the decision lying down. In a statement, the tech company vowed to appeal the ruling, which it says could have implications for thousands of businesses which rely on the ability to transfer data between the EU and US in order to operate.

May also saw the Court of Justice of the European Union hand down four pivotal preliminary rulings related to the application of the EU GDPR. The rulings clarified the law in relation to four legal issues: the accountability principle, the right of access under Article 15 EU GDPR, compensation under Article 82 EU GDPR and joint controllers.

In this month’s issue:

Data Protection

Artificial Intelligence

Cyber Security

Enforcement and Civil Litigation

Data Protection

Meta receives largest GDPR fine to date

On 22 May, the Irish Data Protection Commission ("DPC") announced that Meta has been fined €1.2 billion – the largest fine to date issued under the EU General Data Protection Regulation ("EU GDPR").

The DPC's decision against Meta has three parts:

  1. An order requiring Meta to suspend future transfers of personal data to the US within the period of five months from the date of the decision.
  2. A fine of €1.2 billion.
  3. An order requiring Meta to bring its processing operations into compliance with the EU GDPR by ceasing the unlawful processing, including storage, in the US of personal data of EU/EEA users transferred in violation of the EU GDPR, within six months of the decision.

With the EU-US draft adequacy agreement still not in place (the European Parliament voted against the proposed agreement in a non-binding resolution earlier in May), the DPC's decision leaves Meta's US-EU data transfers in a difficult, uncertain position. The decision also has profound ramifications for anyone transferring personal data to the US under the EU GDPR, as it demonstrates that it may be very difficult to do so lawfully under any of the existing legal mechanisms and derogations, in light of the incompatibility of US law with European fundamental rights. The issue is especially acute for transfers to any electronic communications service provider (such as Meta) that may be required to hand over European data to US national security agencies under the US federal law FISA.

For further analysis of the DPC's decision and what it means for any business making overseas transfers, look out for our upcoming Insight deep dive on our data protection hub.

CJEU hands down four preliminary rulings on GDPR

On 4 May, the Court of Justice of the European Union ("CJEU") handed down four preliminary rulings relating to the application of the EU GDPR.

The CJEU considered:

  • The accountability principle. The CJEU found that a controller's failure to comply with Article 26 (the obligations of joint controllers) and Article 30 (the obligation to maintain a record of processing) EU GDPR did not in itself constitute unlawful processing.
  • The right of access. A data subject's right to a copy of their personal data under Article 15(3) EU GDPR includes the right to copies of documents where this is essential to enable the data subject to exercise their rights.
  • Compensation. Non-material damage need not meet a certain threshold of seriousness in order to confer a right to compensation.
  • Joint controllers. The CJEU explained that joint controllership can exist without a formal agreement or common decision between controllers.

For more information on these decisions, read our Insight.

UK government scales back Retained EU Law Bill

On 15 May, the UK government announced that it is scaling back the Retained EU Law (Revocation and Reform) Bill ("REUL Bill"). The government provided a revised list outlining which pieces of legislation are being revoked with justifications provided for each.

Since Brexit, over 1,000 EU laws have been revoked or reformed in the UK. The REUL Bill will revoke a further 600 laws, in addition to the 500 pieces of legislation that will be revoked by the Financial Services and Markets Bill and the Procurement Bill. The government justifies this decision by stating that it will lighten the regulatory burden for businesses and encourage economic growth.

This decision reflects a scaled-down ambition compared with the government's initial plans to scrap thousands of EU laws by the end of this year. However, in its press release, the government outlined plans to continue reviewing the remaining EU laws in order to identify further opportunities for reform. The REUL Bill creates a mechanism to enable this ongoing revocation of EU law.

Some minor pieces of data protection legislation will be revoked by the REUL Bill, such as the Data Retention and Acquisition Regulations 2018. However, more significantly, the government has stated that it will remove the current interpretive principles and the structure providing for the supremacy of all EU law. This means UK courts could be permitted to overrule EU precedents and there will be significant uncertainty as to how to interpret terms from retained EU laws. In the context of data protection, there may be uncertainty as to the supremacy and interpretation of the UK General Data Protection Regulation ("UK GDPR").

The REUL Bill will return to the House of Commons after the House of Lords concludes its debate.

Stay tuned for further updates on how post-Brexit regulatory reform will affect data protection in the UK.

Data Protection and Digital Information (No. 2) Bill debated at Committee stage

On 17 April, the Data Protection and Digital Information (No. 2) Bill ("DPDI Bill") had its second reading in the House of Commons. This provided us with our first opportunity to hear what MPs had to say about the DPDI Bill. Their primary concerns were the UK retaining adequacy with the EU and the struggle to balance the interests of big tech and consumers. For more information on the second reading, read our Insight.

Following this, the DPDI Bill moved to Committee stage. This stage involves a House of Commons committee hearing evidence and conducting a detailed examination of a bill. On 10 May, a House of Commons committee heard evidence from 23 witnesses. John Edwards, the UK Information Commissioner, was among those providing evidence.

Edwards assisted the committee with a forensic analysis of the wording of the DPDI Bill. He argued that phrases such as 'high-risk activities' do not give decision-makers sufficient clarity when interpreting the legislation, and that the ICO and other decision-makers would welcome further, clear criteria to assist them in issuing guidance and interpreting the legislation. In his view, the aim should be to remove as much uncertainty as possible from the DPDI Bill, as this will enable greater efficiency. Edwards also raised concerns about the future role of ministers: the current DPDI Bill gives ministers scope to overrule the ICO and to refuse to publish its statutory codes, threatening to undermine the independence of the ICO.

Other witnesses expressed concerns relating to the DPDI Bill's provisions on automated decision-making and its impact on the UK retaining adequacy with the EU.

The DPDI Bill will now move to its third reading, representing the House of Commons' final chance to debate the contents of the bill and vote on its approval. If approved, the DPDI Bill will move to the House of Lords for consideration.

Five leading EU tech groups call for amendment to proposed EU Data Act

On 4 May, leaders of some of Europe's largest technology companies wrote to the European Commission outlining their concerns regarding the EU's forthcoming Data Act.

As we previously reported, the Data Act will bring in a new landscape for data access, data portability and data sharing. It includes provisions that introduce common rules on the sharing of data generated by connected products or related services and will compel data holders to make data available to public bodies without undue delay where there is an exceptional need for the public body to use the data. The European Commission is adamant that the Data Act will ensure fairness in the digital environment, stimulate a competitive data market, open opportunities for data-driven innovation and make data more accessible for all.

However, the concerns raised in this letter from the technology companies suggest that not all stakeholders agree on whether the Data Act is on track to achieve its aims. The letter was organised by DigitalEurope and is signed by chief executives of Siemens, Siemens Healthineers, SAP, Brainlab and Datev. The letter expressed concerns around supporting European competitiveness and protecting businesses against cyber attacks and data breaches. The letter outlined three key concerns:

  1. The Data Act provides insufficient safeguards regarding the security of trade secrets, cybersecurity and health and safety. The letter argues that this lack of safeguards puts Europe's competitiveness and its resilience against hybrid threats at risk. Stefan Vilsmeier, CEO of Brainlab, argues that the Data Act will weaken Europe's economy and harm its ability to compete with China by forcing companies to reveal an unprecedented level of insight into their business practices and value chains.
  2. The Data Act's provisions on business-to-government data sharing are vague. This increases the risk of data breaches and the misuse of data.
  3. Regarding cloud switching (the moving of data between cloud platforms), the Data Act moves against contractual freedom, limiting customers' power to access the best deals possible.

Executives at SAP say that they welcome the objectives of the Data Act to create a common EU regulatory framework and facilitate data sharing. However, they insist that the Data Act needs further amendments in order to preserve contractual freedom, allowing providers and customers to agree on terms that reflect business needs.

The letter asks the European Commission to pause the process, enabling changes to the proposed Act. Time will tell whether the Data Act will be further delayed in the face of these concerns. The Swedish presidency entered into negotiations (or 'trilogue') with the European Parliament on the final version of the Data Act in March and further trilogues are expected to take place in May and beyond.

ICO issues new guidance on employee DSARs

The ICO, the UK Data Protection Authority ("DPA"), issued new guidance for businesses and employers on Employee Data Subject Access Requests ("DSARs").

Data subjects have the right of access under the UK GDPR, meaning they can request a copy of their personal information from organisations. This is a right often exercised by employees against their employers or former employers. Employees can request any personal data held by the employer, such as attendance details, sickness records or personal development and other HR records. The ICO reported in its press release that it received 15,848 complaints relating to DSARs between April 2022 and March 2023. In light of this, it has now released new, enhanced guidance on how employers should respond to DSARs.

The new guidance covers key issues, including the following points:

  • A data subject's right to obtain a copy of their personal data cannot be overridden by a settlement or non-disclosure agreement. If any settlement agreement attempts to waive an employee's right of access, it is likely that this element of the agreement will be unenforceable under data protection legislation.
  • Emails that an employee is merely copied into may in some circumstances be disclosable in a DSAR. Data subjects are only entitled to personal data relating to them, but this may well be contained in emails that also discuss business matters. An exercise must be carried out to determine whether some or all of such emails must be disclosed in order to comply with the DSAR.
  • Searches must also be carried out across social media channels if an employer uses such channels for business purposes, as in these contexts the employer will be a controller of the information processed on those pages.
  • Though data subjects may use a DSAR to gather evidence for an ongoing grievance or tribunal process, this does not provide employers with grounds to refuse to comply with the DSAR.
  • A DSAR may be manifestly unfounded if the data subject clearly has no intention to exercise their right of access or if the request is malicious in intent. Malicious intent can be inferred from the subject making unsubstantiated accusations against the organisation or targeting specific employees against whom the subject has a personal grudge.

For more information, you can access the ICO's full guidance here.

Artificial Intelligence

UK Government set to chart an alternative approach to AI regulation

Amid growing anxiety across the tech industry about the potential impact of AI, and stark warnings from industry experts, including Geoffrey Hinton (the so-called "godfather of AI"), that the recent rapid development in the capabilities of AI may pose an existential risk to humankind unless urgent action is taken, Prime Minister Rishi Sunak appears to be contemplating an alternative approach to the UK's regulation of AI. There are reports that the government is considering tighter regulation, and talk of a new global regulator (or at least the creation of a new UK AI-focused watchdog).

Back in March, we reported that the UK Government had published a white paper outlining its plans to regulate AI (the "AI White Paper").  The government's intention was for the AI White Paper to foster a pro-innovation approach to AI regulation which focusses on its benefits and potential whilst avoiding unnecessary burdens to business and economic growth.  The AI White Paper is currently open for consultation, which is set to conclude on 21 June, although industry figures have warned that the AI White Paper is now already out of date.

The government may concede that there has been a shift in its approach since the AI White Paper was published, with government insiders reportedly insisting that they "want to stay nimble, because the technology is changing so fast", and expressing their wish to avoid a product-by-product regulatory regime such as that envisaged by the EU's AI Act.

It appears that Sunak may also be applying pressure on the UK's allies, seeking to construct an international agreement in relation to how to develop AI capabilities, which could entail the establishment of a global regulator.  Given that the EU has been unable to reach an agreement since the draft AI Act was published over two years ago, Sunak's plan to formulate and subsequently agree such an international agreement in a short period of time appears somewhat optimistic.

Domestically, MPs from both the Conservative and Labour parties are calling for an AI bill to be passed, which might set certain conditions for companies seeking to create and develop AI in the UK and lead to the creation of a UK regulator.  It remains to be seen what approach the government will take to regulating AI in the UK and what aspiration it has to lead on such regulation on the global stage.

Biden Administration announces measures to promote responsible American AI development

Over in the US, American lawmakers are arguing that federal regulation of AI is necessary for innovation.  Speaking at an event in Washington, DC, on 9 May, US Representative Jay Obernolte said that regulation to mitigate potential harms and provide customer protection is something which "is very clearly necessary when it comes to AI."  Obernolte further stressed that regulation of data privacy and AI must coincide, given the vast amounts of information AI models require to learn and AI's ability to pierce digital data privacy, reaggregate personal data and build behavioural models to predict and influence behaviour. 

In early May, the Biden Administration (the "Administration") announced new actions which it says are set to further promote responsible American innovation in AI as well as protect people's rights and safety.  Emphasising the need to place people and communities at the centre of AI development, by supporting responsible innovation that serves the public good, the Administration said that companies have a fundamental responsibility to ensure that their products are safe before they are deployed for public use.

The Administration has also announced an independent commitment from leading AI developers including Google, Microsoft, NVIDIA and OpenAI to participate in a thorough public evaluation of AI systems. These actions all contribute to a broader and ongoing effort for the Administration to engage with a variety of stakeholders on critical AI issues. 

Australian lawmakers warned by Microsoft that government-only AI response would be "unworkable"

Belinda Dennett, Microsoft Corporate Affairs Director, spoke to members of Australia's parliament at a parliamentary hearing on 3 May, to communicate her view that the government should collaborate with industry and society on principles-based measures or co-regulation with regard to AI, rather than taking a more targeted and direct regulatory response. 

Dennett's comments reflect Microsoft's view that there is a risk in seeking to regulate what is known today in relation to generative AI technologies, as that knowledge can rapidly go out of date.  The effect of this risk is such that any policy seeking to regulate generative AI would soon find itself trailing behind the development of the technology being regulated.

In making her remarks, Dennett specifically referred to the recent rapid enhancement in the capabilities of generative AI technologies such as ChatGPT and explained that "this was innovation we weren't expecting for another ten years."  Dennett also praised calls which have been made for summits and various other discussions around the generative AI boom on the basis that, for AI, "society needs to decide where those guardrails should be." 

Microsoft's comments come as Australia joins other jurisdictions needing to act quickly to determine how best to regulate AI, and generative AI in particular, which we considered in our April 2023 bulletin.

Cyber Security

Cybersecurity industry to be reshaped by former Uber security officer's sentencing

In October 2022, Joseph Sullivan, Uber Technologies' former security chief, was convicted of obstruction of a federal proceeding and of concealing or failing to report a felony.  Sullivan's conviction arose in connection with a 2016 cyber breach that affected 57 million Uber drivers and riders.  In response to the breach, Sullivan devised a scheme by which the hackers who had breached Uber's network were paid $100,000 through the company's 'bug bounty' scheme and induced to sign a non-disclosure agreement, so that Uber's legal team and officials at the US Federal Trade Commission would not find out.

Sentenced in early May, Sullivan was handed a three-year term of probation and ordered to pay a fine of $50,000. Although Sullivan has avoided time in prison, US District Judge William Orrick made clear that if he were to preside over a similar case in the future "even if the character is that of Pope Francis, they should expect custody."  Sullivan's case illustrates that chief information security officers ("CISOs") should work with lawyers to establish whether a breach has occurred and whether it should be reported. It has also accelerated a transition whereby CISOs report breaches more directly to their organisation's senior executives.

Consequently, companies should now be reconsidering their processes for breach identification and the documentation of decisions regarding breaches in order to develop more robust breach response procedures.  This will allow companies to cultivate a culture of shared responsibility for taking decisions associated with cybersecurity breaches, which will, in turn, assist CISOs with avoiding personal liability.    

EU lawmakers' inquiry committee says spyware use must be restricted

Following a year-long inquiry into the abuse of spyware in the EU, the European Parliament's Committee of Inquiry has adopted its final report and recommendations.  The inquiry investigated the use of surveillance spyware such as "Pegasus", which can be covertly installed on mobile phones and is capable of reading text messages, tracking the device's location, accessing its microphone and camera, and harvesting information from apps.

MEPs stated that the use of spyware in Hungary constitutes "part of a calculated and strategic campaign to destroy media freedom and freedom of expression by the government", and in Poland the use of Pegasus has been part of "a system for the surveillance of the opposition and critics of the government – designed to keep the ruling majority and the government in power".  To remedy these major violations of EU law, the MEPs called on Hungary and Poland to comply with European Court of Human Rights ("ECHR") judgments, restore judicial independence and oversight institutions as well as launch credible investigations into abuse cases to help ensure citizens have access to proper legal redress.  In Greece, where spyware "does not seem to be part of an integral authoritarian strategy, but rather a tool used on an ad hoc basis for political and financial gains", MEPs called on the government to repeal export licences that are not in line with EU export control legislation.  Elsewhere across the EU in Spain, although the country has "an independent justice system with sufficient safeguards", MEPs called on Spanish authorities to ensure "full, fair and effective" investigations.    

To stop illicit spyware practices immediately, MEPs recommended that spyware should only be used in member states where allegations of spyware abuse have been thoroughly investigated, where national legislation is in line with the recommendations of the Venice Commission and with CJEU and ECHR case law, where Europol is involved in investigations, and where export licences not in line with export controls are repealed.  MEPs further recommended that the Commission should assess whether these conditions are met by member states by December 2023.  To prevent attempts to justify abuses, the MEPs also called for a common legal definition of 'national security' as grounds for surveillance.

The text outlining the recommendations is expected to be voted on by the full Parliament during the plenary session starting on 12 June.

Toyota reveals decade long vehicle data exposure

In a statement released earlier this month, Toyota Motor Corporation ("Toyota") confirmed that human error had rendered the vehicle data of around 2.15 million customers publicly accessible for a period spanning almost a decade, from November 2013 to April 2023.

The incident, which Toyota states was caused by a "misconfiguration of the cloud environment" – the cloud system having been accidentally set to public rather than private – meant that data including vehicle identification numbers and vehicle location data was potentially accessible to the public.  Toyota has said that the accessible data alone was not sufficient to identify the affected data subjects and that there have been no reports of malicious use of the data.

Although it has confirmed that the data in question is confined to that of its Japanese customers, the number of potentially affected customers constitutes almost the entirety of the customer base that had signed up for Toyota's main cloud service platforms since 2012, which are essential to its autonomous driving and other AI-based offerings.  Affected customers include users of the T-Connect service, which provides a range of services such as AI-voice driving assistance, as well as users of G-Link, a similar service for owners of Lexus vehicles.

The incident was only recently discovered by Toyota as it targets an expansion of its connectivity services.  Toyota said that the "lack of active detection mechanisms, and activities to detect the presence or absence of things that become public" was the cause of the failure to identify the issue earlier.  Toyota has stated that it will take a series of measures to prevent a recurrence of the incident, including implementing a system to audit cloud settings, establishing a system to continuously monitor settings, and educating employees on data handling rules.
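The kind of automated audit Toyota describes – periodically checking that no cloud storage entry is flagged public – can be sketched in a few lines. The configuration schema below is hypothetical and purely illustrative (a real audit would query the cloud provider's API for bucket access settings rather than a local dictionary):

```python
# Illustrative sketch of an automated cloud-settings audit.
# The configuration schema here is hypothetical; a real audit would
# query the cloud provider's API (e.g. storage bucket ACLs) instead.

def find_public_buckets(config: dict) -> list[str]:
    """Return the names of storage entries whose access level is public."""
    return [
        name
        for name, settings in config.items()
        if settings.get("access", "private") == "public"
    ]

# Hypothetical snapshot of an environment's storage settings.
cloud_config = {
    "telematics-data": {"access": "public"},   # misconfigured: should be private
    "vehicle-logs": {"access": "private"},
    "customer-records": {},                    # no explicit setting: treated as private
}

print(find_public_buckets(cloud_config))
```

A continuous monitor of the kind Toyota mentions would run such a check on a schedule and alert whenever the returned list is non-empty.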

The Japanese Personal Information Protection Commission has been informed of the incident but has not provided comment at this stage.  However, the Japanese automaker subsequently announced that customer information in some countries throughout Oceania and Asia may also have been left publicly accessible from October 2016 to May 2023.  In this instance, potentially leaked customer data may include names, addresses, phone numbers and email addresses.

You can read Toyota's statement in Japanese here.

Enforcement and Civil Litigation

UK High Court rejects Google/DeepMind data-related class action

The High Court has brought Prismall v Google and DeepMind to an early conclusion, ruling that Andrew Prismall and the 1.6 million class members he represents cannot go to trial.

Andrew Prismall sued Google and DeepMind under the tort of misuse of private information on behalf of 1.6 million NHS patients after, in 2016, it was revealed that DeepMind transferred the patients' data without their knowledge or consent. To make this claim, Prismall was required to show that the class of patients had a reasonable expectation of privacy and that DeepMind deliberately and without justification obtained and used the data. Prismall also had to show that all members of the class had the same interest. This follows the principle set out in Lloyd v Google that a representative action cannot succeed if it requires an individualised assessment of class members' loss.

Prismall argued that, without needing an individualised assessment, he could show that each class member had a reasonable expectation of privacy in relation to the relevant personal data, this expectation was unjustifiably interfered with and such interference entitled them to an award of more than trivial damages. However, the court ruled that there was no realistic prospect of the class members meeting these requirements. The court found that:

  • The class members did not have a realistic prospect of demonstrating a reasonable expectation of privacy in relation to the medical records. To show that each class member had the same interest, Prismall advanced his case based on a 'minimum scenario'. This meant his case ignored any special circumstances of each class member and assumed, for example, that every class member attended the hospital only once and that only limited information was recorded about them. This minimum scenario meant that the Claimant's argument did not pass the de minimis threshold for showing a reasonable expectation of privacy.
  • The class members did not have a viable claim for more than trivial damages for the loss of control of their personal data.
  • There was no other compelling reason to permit the claim to proceed to trial. Any information that the Claimant would provide before trial was deemed unlikely to affect the merits of his case.
  • The issues with Prismall's arguments were inherent to the claim and, as such, Prismall would not be permitted to amend and continue his case. Prismall was also refused permission to appeal by the High Court.

Mrs Justice Williams struck out the case and ruled that a summary judgment should be entered in favour of Google and DeepMind.

The case was one of the few opt-out class actions that continued after the Lloyd v Google ruling narrowed the options for bringing such claims under the UK GDPR. It appears that misuse of private information was not a viable alternative in this case.

For more information, you can access the full judgment here.

Belgian DPA stops US FATCA transfers

A Belgian data subject complained to the Belgian DPA after being informed of his obligations under the US Foreign Account Tax Compliance Act ("FATCA") by his bank. The Belgian DPA has now ordered Belgium's Federal Public Service Finance to stop processing the complainant's data in relation to FATCA transfers, finding that such transfers breach the EU GDPR.

FATCA's aim is to combat tax fraud, money laundering and tax evasion. 87 countries have entered FATCA agreements with the US. Under FATCA, non-US banks must send information about any accounts held by American citizens to the corresponding non-US government, which then shares the information with the US Internal Revenue Service (the "IRS"). This information constitutes personal data under the EU GDPR.

The Belgian DPA originally decided that the FATCA transfers did not breach the EU GDPR and that Schrems II did not apply. However, the Belgian DPA's litigation arm disagreed. It found that data subjects are not able to understand the purposes of processing in relation to FATCA transfers and concluded that FATCA transfers breach the EU GDPR's purpose limitation, data minimisation and proportionality principles. The Federal Public Service Finance had also failed to carry out a data protection impact assessment in relation to the transfers. In addition, the FATCA transfers were found not to be subject to appropriate safeguards. As a result, the Belgian DPA ordered that transfers of personal data to the US under the FATCA regime must cease.

This does not represent the only challenge to FATCA. A US-born data subject now residing in the UK has complained to the High Court that FATCA transfers are disproportionate and breach her rights under the EU GDPR. However, the impact of ceasing FATCA transfers is questionable. American Citizens Abroad, a non-profit organisation, commented that the Belgian DPA decision will not get rid of US tax problems for expats. It argued that the IRS has an obligation to enforce US tax laws and if the required information cannot be provided via FATCA transfers, it will come to light another way.

US Federal Trade Commission accuses Meta of violating its $5 billion Cambridge Analytica settlement

The US Federal Trade Commission ("FTC") filed a complaint against Meta in 2011, resulting in a 2012 privacy order barring Meta from misrepresenting its privacy practices. After a subsequent complaint from the FTC, relating to Meta's misrepresentations that fed into the Cambridge Analytica scandal, Meta agreed to another privacy order in 2020. This 2020 order compelled Meta to pay a $5 billion penalty.

In a press release dated 3 May, the FTC claims that Meta has now violated the privacy promises that it made in the 2020 privacy order. The FTC's claim is based on the following points:

  • An independent assessor identified weaknesses in Meta's privacy practices.
  • Meta gives app developers access to users' personal data, despite earlier promises not to share personal data with apps that users had not used for 90 days.
  • From late 2017 to mid-2019, Meta is accused of misrepresenting that parents could control with whom their children communicated through Facebook Messenger.

As a result, the FTC proposes to make the following changes and extensions to the privacy order:

  • Meta will be prohibited from profiting from the data it collects (including via virtual reality products) from users under the age of 18.
  • Meta will be prohibited from launching new products and services without confirmation from its assessor that Meta's privacy practices are in full compliance with the FTC's order.
  • Compliance requirements will be extended to any companies that Meta merges with or acquires.
  • Meta's future use of facial recognition technology will be limited. Under this provision, Meta would be required to disclose and obtain users' consent for the future use of this technology.
  • Further measures would apply, including more in-depth privacy reviews and reporting obligations, in addition to compelling Meta to provide additional protections for users.

The FTC has requested that Meta respond to these claims within 30 days. Meta has pledged to fight the action robustly, labelling it a political stunt.

DPAs take action against Clearview AI

May saw the latest enforcement action against Clearview AI, following numerous recent sanctions against the facial recognition platform.

On 9 May, the Austrian DPA found that Clearview AI was not complying with the EU GDPR. Following a request for access, a data subject discovered that their image data had been processed by Clearview AI. The Austrian DPA found that Clearview AI had processed the personal data in breach of the lawfulness, fairness and transparency principles, and had breached data retention rules by storing the data indefinitely. In addition, Clearview AI's processing served a purpose different from that for which the data subject's personal data was originally published. The Austrian DPA ordered Clearview AI to erase the complainant's personal data and to designate a representative in the EU.

In another decision handed down in May, the Australian Administrative Appeals Tribunal ruled that Clearview AI's collection of Australian facial images without consent breached the country's privacy standards. As a result, Clearview AI was ordered to cease its collection of images from Australia and to delete all Australian images that it had gathered.

This follows action taken against Clearview AI in April. The French DPA fined Clearview AI €5.2 million for its failure to comply with the DPA's earlier order to stop collecting and processing personal data of individuals located in France.

This wave of enforcement action reflects the ongoing challenge of applying data protection requirements to ever-evolving AI technologies.

Gatekeeper designation process under EU Digital Markets Act kicks in

We reported in March that Marc Van der Woude, president of the EU's General Court, warned that a wave of Digital Markets Act ("DMA") litigation was looming. The DMA places obligations on Big Tech platforms (referred to as "Gatekeepers") to create a fairer environment for business users and to ensure that consumers can access better services and easily switch providers.

The first step of the DMA's implementation kicked off on 2 May. This step concerns the classification of certain platforms as Gatekeepers; any platform given this designation will be prohibited from engaging in certain behaviours and practices. Three main criteria determine whether a platform is a Gatekeeper:

  • The organisation has a size that impacts the internal market, assessed by reference to annual turnover within the EEA and whether the organisation provides a core platform service in at least three EU member states.
  • The organisation controls an important gateway for business users towards final customers. The threshold to meet here is providing a core platform service to more than 45 million monthly active end users located or established within the EU and to more than 10,000 yearly active business users established in the EU.
  • The organisation has an entrenched and durable position in the market. This is presumed where the organisation has met the second criterion in each of the last three financial years.

Any organisations designated as Gatekeepers will be subject to the DMA's list of dos and don'ts. For example, Gatekeepers must not prevent consumers from connecting with businesses outside the Gatekeeper's platform, nor prevent users from uninstalling any pre-installed software or app if they wish to.

By 3 July, potential Gatekeepers must notify their core platform services to the European Commission if they meet the DMA's thresholds. Following such a notification, the European Commission has 45 working days to assess whether the organisation is a Gatekeeper. Any designated Gatekeepers will then have six months to comply with the DMA's requirements.

Round-up of notable enforcement actions

Each month, we bring you a round-up of notable data protection enforcement actions.

| Company | Authority | Fine | Summary |
| --- | --- | --- | --- |
| Meta Ireland | Irish DPA | €1.2 billion | See our coverage of the Irish DPA's decision above. |
| GSMA | Spanish DPA | | GSMA failed to carry out a data protection impact assessment in relation to a data subject's special category data. |
| B2 Kapital | Croatian DPA | €2.26 million | Representing the Croatian DPA's highest EU GDPR fine to date, B2 Kapital was fined for failing to prevent data security breaches. |
| Clearview AI | French DPA | €5.2 million | See our coverage of Clearview AI's fine above. |
| Doctissimo | French DPA | | Doctissimo was fined for excessive data retention, failure to obtain data subjects' consent to data collection, and unlawful use of cookies. |