04 May 2023

Data Protection update - April 2023


Welcome to the Stephenson Harwood Data Protection bulletin, covering the key developments in data protection law from April 2023.

Cybersecurity was in the spotlight in April, as debtors lifted the lid on the inner workings of failed crypto exchange company FTX. The debtors' report — which identifies and discusses failures by FTX's previous management team — found the company had "grossly deprioritized and ignored cybersecurity controls" which meant it couldn’t respond to or recover from a significant cybersecurity incident, including the November 2022 breach. This, the debtors said, was "remarkable" given that FTX Group's "entire business – its assets, infrastructure, and intellectual property – consisted of computer code and technology".

The full report should serve as a cautionary tale for businesses and underscores the importance of having adequate cybersecurity controls in place. It lays bare the risks of viewing cybersecurity as an afterthought, rather than a central and important part of a business. The report found FTX Group lacked key personnel, departments and policies and had no dedicated personnel in cybersecurity. It also found that FTX Group failed to implement basic, widely accepted security controls to protect crypto assets. 

Elsewhere, cybersecurity software company Avast was fined €13.7 million by the Czech Republic's data protection authority for illegally processing customers' data. The fine, the highest GDPR fine imposed by the regulator, came after the authority found Avast had failed to sufficiently inform users about the processing of data from its then marketing analytics subsidiary Jumpshot. 

Data protection

Artificial Intelligence

Cyber security

Enforcement and civil litigation

Data protection

Second reading of the Data Protection and Digital Information (No. 2) Bill

On 17 April, the Data Protection and Digital Information (No. 2) Bill ("Bill") had its second reading in the House of Commons, providing us with the first opportunity to hear what MPs have to say about the Bill.

The MPs' debate flagged some key issues surrounding the Bill, including the importance of retaining adequacy with the EU and the difficulty in balancing the interests of big tech and consumers.

The Bill will now move to Committee stage for further examination and potential amendments.

For more information on the key topics discussed in the second reading, read our Insight.

EDPB adopts decision on Meta's data transfers

On 13 April, the European Data Protection Board ("EDPB") adopted a binding decision on Meta's data transfers to the US. The Irish Data Protection Commission has until mid-May to issue its final decision against Meta, after which the EDPB will publish its decision.

For more information about the binding decision's potential content and impact, read our Insight.

European Parliament committee rejects draft adequacy agreement between the EU and US

The European Parliament Committee on Civil Liberties, Justice and Home Affairs ("LIBE Committee") adopted a non-binding resolution (the "Resolution") that rejected the draft adequacy decision in respect of the proposed EU-US Data Privacy Framework ("EU-US DPF").

This follows the Schrems II case in which the Court of Justice of the European Union invalidated the EU-US Privacy Shield, the previous framework for transferring data between the EU and US. Following that decision, there is a clear need for any future framework to be sufficient to withstand legal challenges in order to provide certainty and stability to EU businesses and citizens.

Whilst the LIBE Committee acknowledged that the draft adequacy decision in respect of the EU-US DPF was an improvement on the previous framework, the Resolution states that the EU-US DPF proposals fail to create actual equivalence in the protection of EU and US citizens' privacy and do not provide for sufficient safeguards. The LIBE Committee's main points of issue with the EU-US DPF include:

  • The bulk collection of personal data is still permitted in some cases and the EU-US DPF does not make bulk data collection subject to any independent prior authorisation.
  • There are no clear rules on data retention.
  • Though the EU-US DPF establishes a Data Protection Review Court to provide a means of redress to EU data subjects, the decisions of this court would be secret and therefore would violate data subjects' right to access and rectify their personal data. In addition, the US President would have powers to dismiss the judges of this court and overrule the decisions of the court, suggesting the court would not be fully independent.

The Resolution was passed with 37 votes in favour, 0 against, and 21 abstentions. However, the Resolution is non-binding, and the future of the EU-US DPF will remain unclear until the European Parliament votes on the draft adequacy framework later in the summer.

For more information, you can read the European Parliament's press release and our blog post.

EDPB releases new guidelines on reporting data breaches

In a plenary session on 28 March, the EDPB adopted several new guidelines. These guidelines concerned data breach notifications, data subjects' rights of access, and how to identify a controller or processor's supervisory authority.

The new data breach notification guidelines (the "Breach Notification Guidelines") clarified the obligations applying to organisations that process personal data of EU residents but do not have an establishment in the EEA. In that scenario, it is the controllers, and not their representatives, that remain responsible for notifying any data breach, and any breach must be notified to each and every relevant data protection authority ("DPA"). This is a significant burden and disadvantage for non-EEA controllers.

The EDPB stated in its press release that these Breach Notification Guidelines are in line with the EU GDPR, which does not provide the usual 'one-stop-shop' system for controllers that are not established within the EEA. For EEA-established controllers undertaking cross-border processing, the 'one-stop-shop' system obliges companies to notify only the DPA in the jurisdiction where their establishment is located.

During the EDPB's consultation on the Breach Notification Guidelines, stakeholders were concerned that this obligation would lead to further compliance costs for businesses. In response to that feedback, the EDPB stated that it will publish a contact list for data breach notifications with relevant links and accepted languages for all EEA DPAs. The aim of this list is to make it easier for controllers to identify contact points and requirements for each DPA, enabling easier and faster compliance with the EU GDPR.

For more information, the EDPB's press release is here.

EDPB publishes report on the consistent approach for DPAs to take on NOYB complaints

In August 2020, in the aftermath of Schrems II, NOYB lodged 101 complaints with various European DPAs in relation to the use of Google Analytics and Facebook Business Tools. NOYB alleged that the use of these tools resulted in the unlawful transfer of personal data to the US.

In response, the EDPB set up a task force aimed at establishing a framework for DPAs to apply a consistent approach in handling the 101 identical complaints, based on an analysis of the steps DPAs had taken so far in relation to them. The EDPB has now published its report on the outcome of the task force's work (the "Report").

The Report outlines a controller's general obligation to examine, and provide evidence, that such tools can be used in compliance with data protection requirements. If a business cannot provide such evidence, it may be in breach of the EU GDPR principle of accountability. The Report also reinforces that both website operators and the providers of tools are responsible for ensuring continuous compliance with the EU GDPR. However, any degree of liability must be assessed on a case-by-case basis.

In its press release, the EDPB flags that the position of the DPAs expressed in the Report does not represent the position of the EDPB and will not prejudice any analysis that will be made by the DPAs when handling each complaint. The Report simply acts as a guide for DPAs to apply a consistent approach when handling these identical complaints.

For more information, you can read the Report here: EDPB's task force's report.

UK to join global CBPR forum

In April 2022, the US Department of Commerce announced the establishment of a Global Cross-Border Privacy Rules Forum ("Forum"). Susannah Storey, Director General for Digital Technologies and Telecoms, announced on 17 April 2023 that the UK has submitted its application to join the Forum.

As we reported last year, the Forum proposes an international certification system based on the existing APEC Cross Border Privacy Rules ("CBPR") and Privacy Recognition for Processors Systems. The Forum aims to support the free flow of data, facilitate trade and international data flows, and promote global cooperation by allowing organisations to demonstrate their data protection and privacy compliance to an internationally recognised standard.

The UK will be the first new jurisdiction to participate in the Forum since it was established. The original participants are the US, Canada, Japan, the Republic of Korea, the Philippines, Australia, Singapore, and Chinese Taipei. Any jurisdiction can seek membership if it agrees with the principles and objectives of the Forum, has a privacy enforcement authority participating in the Global Cooperation Arrangement for Privacy Enforcement, and either intends to make use of a Global CBPR Forum-recognised accountability agent or demonstrates that its domestic legal system recognises the Global CBPR system.

Neema Singh, Deputy Assistant Secretary of Commerce for Services in the US, commented that the US welcomes the UK's announcement and stated that the UK's contributions to the Forum will benefit all partners in their efforts to grow the global digital economy and ensure improved access to vital government and commercial services.

This announcement came as governments, regulators and industry stakeholders gathered in London for the third bi-annual Global CBPR Forum workshop.

Artificial Intelligence

All eyes on ChatGPT, as EDPB establishes task force

The EDPB announced on 13 April that it will launch a ChatGPT task force to facilitate collaboration among EU regulators.  The announcement comes amid growing privacy and data protection concerns associated with the use of ChatGPT, and follows several announcements by authorities around the world that they intend to launch investigations or propose measures in relation to the operation of ChatGPT: 

  • ChatGPT was temporarily banned by the Italian DPA, Garante, due to concerns surrounding the unlawful collection of personal data, which you can read more about in our blog. Garante requested that OpenAI, the developer of ChatGPT, adopt a series of measures by 30 April. These measures focussed on transparency, requiring OpenAI to notify users of what data it processes, how that information is processed, and the rights of users. In response, OpenAI confirmed that it was willing to collaborate in order to arrive at a positive solution. Additionally, by 15 May, OpenAI must run an information campaign in Italy across various forms of media to inform potential users about the use of personal data to train ChatGPT. It was announced on 2 May that OpenAI had complied with Garante's measures and that the temporary ban had been lifted.
  • The French DPA, CNIL, has confirmed that it has begun an investigation into a number of complaints it has received against ChatGPT, which allege that users are not informed that their name can be registered in the language model, that information generated by the model may be incorrect and that there appears to be a lack of a legal basis for data processing.
  • The Spanish DPA, AEPD, has stated that it believes a harmonised regulatory approach is required for operations which process data globally and have a significant impact on the rights of individuals.
  • The Office of the Privacy Commissioner of Canada confirmed that it has launched an investigation into OpenAI in response to a complaint it received alleging the collection, use and disclosure of personal data without consent.
  • The Privacy Commissioner of South Korea has also confirmed that the country will investigate data related issues connected to ChatGPT.
  • On a slightly different note, in Australia, OpenAI is set to face its first-ever defamation lawsuit after the whistle-blower in the country's 2009 Securency bribery scandal was falsely identified by ChatGPT as a guilty party. The whistle-blower's lawyers say this is a "critical error" and "an eye-opening example of the reputational harm that can be caused by AI systems such as ChatGPT, which has been shown in this case to give inaccurate and unreliable answers disguised as fact".

Outside of ChatGPT, China's Internet regulator has proposed a series of new obligations on Chinese AI-generated content producers and service providers, making those producers and service providers responsible for the source data and requiring that AI-generated content is authentic and accurate.

As jurisdictions all over the world grapple with the concern that heavy-handed legal restrictions will put the brakes on AI development and technological innovation, the EDPB's creation of the ChatGPT task force and the wider global regulatory response continue to raise the question of how best to regulate AI, especially AI that generates content on the basis of user inputs. A key issue is how and why users' personal data is processed.

ICO proposes eight questions for generative AI developers and users

On 3 April 2023, the Information Commissioner's Office ("ICO") published a blog titled "Generative AI: eight questions that developers and users need to ask", advising organisations to ask – and answer – key questions concerning the privacy impact of generative AI. The blog also highlighted the need for generative AI developers and users to exercise the "design by default approach" underpinning the UK GDPR.

In its blog, the ICO explained that generative AI, such as ChatGPT-type tools, must be used with care and that "there really can be no excuse for getting the privacy implications of generative AI wrong". The ICO stated that it would itself be asking these key questions of the relevant parties and taking action if deemed necessary.

In addition to its blog, the ICO also recently updated its guidance on AI and data protection. To find out more, please refer to our March 2023 DP Bulletin.

G7 digital meeting aims to draft first action plan on AI governance

Following a meeting held in Japan on 29 and 30 April, the G7's digital and technology ministers signed a declaration on AI governance. The meeting came in the wake of an open letter signed by 12 European Parliament legislators, which called on both European Commission President Ursula von der Leyen and US President Joe Biden to convene a "high-level global summit on artificial intelligence" to foster agreement on "a preliminary set of governing principles for the development, control and deployment of very powerful artificial intelligence".

With a growing number of countries taking an increasingly aggressive stance on regulatory enforcement against generative AI, it is no surprise that the G7 ministers address both the risks and potential of generative AI in their joint statement.

In the joint statement, the G7 ministers declared that AI policies and regulation should be risk-based and "forward-looking to preserve an open and enabling environment for AI development". The ministers also commented that deployment of AI should maximise the benefit of the technology for people and the planet, while mitigating its risks.

The statement additionally identified five principles for policymakers to consider when governing the use of AI and other emerging technologies. These are the rule of law, due process, democracy, respect for human rights and harnessing opportunities for innovation.

Looking to the future, the G7 leaders confirmed that a priority will be to discuss the need for shared standards designed to assess risks and provide users and companies a means of understanding what is prohibited under each regulatory regime. The leaders also spoke of the need to create guidance to ensure transparency around innovative technology.

Cyber Security

Debtors' report confirms FTX "grossly deprioritized and ignored" cyber controls, enabling $432 million hack

In November 2022, FTX was among the world's largest cryptocurrency exchanges on which users bought, sold and traded cryptocurrency.  On 9 April 2023, the first interim report of FTX's CEO was filed in Chapter 11 bankruptcy proceedings (the "Report").  In the introduction to the Report, it is explained that the challenge facing FTX's debtors in identifying the assets and liabilities of the estate was magnified by the fact that the debtors "took over amidst a massive cyberattack, itself a product of the FTX Group's lack of controls that drained approximately $432 million worth of assets on the date of the bankruptcy petition". 

On taking control of FTX, the debtors identified "extensive deficiencies" in FTX's "digital asset management, information security, and cybersecurity", which the Report notes is surprising given that the FTX Group's business and reputation depended on safeguarding crypto assets. In particular, the Report identifies a lack of key personnel, departments and policies, insufficient controls to effectively manage and protect crypto assets, a disorganised system for the management of private keys, a failure to implement basic controls relating to identity and access management, and a failure to implement appropriate controls with respect to cloud and infrastructure security.

These control failures exposed the crypto assets under FTX's control to a "grave risk of loss, misuse, and compromise".

The Report serves as a reminder of the importance of implementing and maintaining effective cybersecurity controls, which are of particular importance where a business is experiencing rapid growth and its success depends on appropriate cyber safeguards.

Data harvesting results in €13.7 million Czech EU GDPR fine for cybersecurity firm

Cybersecurity firm Avast has been fined €13.7 million for illegally processing its customers' personal data, including through the sale of browsing data. The illegal processing was first publicly reported by PCMag and Motherboard before Spanish non-profit Facua filed a complaint with the Spanish DPA. The matter was transferred to the Czech DPA, which supervises Avast because the company is headquartered in the Czech Republic.

In a statement published by Facua, it confirmed that it had filed the complaint with the Spanish DPA after verifying that Avast could be selling users' browsing data without their knowledge or express consent, data which in turn could be used to expose their identity. The Czech DPA had confirmed in a statement in 2020, when the investigation commenced, that there was a "suspicion of serious widespread breach of user privacy".

According to Facua, Jumpshot, a subsidiary of Avast, had been packaging the data into different products, which were sold to companies all over the world, including Google and Microsoft, on the basis that they could "provide companies with a more complete view of the entire online user journey". In 2020, Ondrej Vlcek, Avast's chief executive, said that Jumpshot's data collection had been "terminated" and that the company's operations were to be wound down, a move said to indicate how seriously Avast took the allegations.

The Czech authority ruled that Avast violated the EU GDPR, which requires that data subjects are properly informed about data processing, the reasons for that processing and its legal basis. Avast has not disputed Facua's statement, made on its website in late March.

You can read Facua's statement in Spanish here.

Thai government agencies shaken by hacker's threat to release personal data of up to 55 million citizens

A Thai hacker threatened to expose the personal data, including national ID numbers, birthdates and addresses, of 55 million Thai citizens.  The hacker claimed that the source of the data breach was "somewhere in the government" and urged the government agencies responsible to contact them before 4pm on 5 April, failing which the data would be released.  A message on the hacker's website suggested that the hack was politically motivated, stating "Almost election, decide wisely". The hacker also stressed that the incident demonstrates how concerned the Thai government should be about cyber security and privacy issues. 

In response, the Ministry of Digital Economy and Society ("MDES") requested that Thai internet service providers block the hacker's website and that the Thai privacy regulator, the Personal Data Protection Committee, determine whether any government agency had reported a data breach. Local Thai news outlets have since reported that the source of the data breach was the "Mor Prom" mobile application, which was originally designed to track Covid cases but was subsequently transformed into a national online health platform now estimated to have over 30 million users.

The hacker subsequently announced that the planned data leak was not going ahead, citing disagreements with their "sponsor". Police have since apprehended the perpetrator, an army officer, who is to be taken into custody after questioning.

MDES permanent secretary Wisit Wisitsorn-at stated at an urgent meeting of all relevant government bodies that "where data is leaked or there are vulnerabilities in the information technology system, agencies must urgently improve and fix them". 

Enforcement and civil litigation

Uber and Ola ordered to provide data to drivers

In three parallel judgments, the Amsterdam Court of Appeal found that Uber and Ola (the "Companies") did not sufficiently inform drivers about how their data was processed.

The drivers brought claims alleging that the Companies had made automated decisions about them without sufficient human intervention. The drivers complained that the Companies made automated decisions to terminate their accounts for alleged fraudulent activity and then denied them the information they requested under Article 15 of the EU GDPR.

Uber argued that there was human intervention in the decisions, as humans reviewed the drivers' 'fraud scores'. However, the court rejected this argument, as there was no evidence that the human intervention was more than a purely symbolic act. The court therefore found that the algorithmic decision-making was solely automated and that the drivers' rights under the EU GDPR were infringed.

The Companies also argued that providing the drivers with the requested information and explaining their automated decision-making systems would infringe their rights to protect trade secrets. The court found that the impact of the automated decisions on the drivers outweighed the Companies' rights to secrecy.

Uber and Ola were ordered to disclose information, including explaining how drivers' personal data is used in Uber's pricing systems and how Ola uses automated decision-making in drivers' earning profiles.

These outcomes act as a reminder that, under the EU GDPR, human intervention in automated decision-making must be more than a merely symbolic act or token gesture. The ICO advises that the question is whether a human reviews the decision before it is applied and has discretion to alter it, or whether they are simply applying a decision made by an automated system. Businesses also need to take care to uphold data subjects' rights to information about automated decisions.

Austrian regulator approves the use of cookie paywalls with caveats

In 2021, NOYB submitted multiple complaints to the Austrian DPA about various German and Austrian news websites' use of cookie paywalls, under which users are given a choice between buying a subscription or accessing content for free provided they allow cookies.

The Austrian DPA has now ruled that the cookie paywall operated by one Austrian news outlet, the Standard, was unlawful as it did not obtain individuals' consent to non-advertising cookies. However, the regulator confirmed that, in principle, 'pay or OK' cookie paywalls are viable. Users of websites must be able to decide whether or not to give their consent to specific data processing, and a cookie paywall that permits this would be acceptable under the EU GDPR.

NOYB argues that a system in which users must pay if they wish both to use a service and to withhold consent to their data being processed undermines key data protection rights that the EU GDPR provides. Under the EU GDPR, data subjects can consent to the processing of their personal data, but their consent must be freely given; there must be genuine, ongoing control and choice over how controllers use the data. The regulator, however, asserted that the EU GDPR does permit consumers to exchange data for certain services, provided that their consent to this exchange is valid. Any website with a cookie paywall that requires users to give blanket consent would therefore not be compliant.

NOYB has indicated that it wishes to appeal the decision. Any appeal must be lodged by mid-May to Austria's Federal Administrative Court.

Round-up of notable enforcement actions

Each month, we will bring you a round-up of notable data protection enforcement actions.


TikTok – ICO – £12.7 million

TikTok was in breach of a number of data protection laws, including failing to use children's personal data lawfully. Find out more here.

Join the Triboo – ICO

Join the Triboo Limited sent 107 million spam emails to 437,324 people in breach of the Privacy and Electronic Communications Regulations 2003 ("PECR").

Social Insurance Bank – Dutch DPA

Social Insurance Bank failed to carry out adequate identity checks when speaking to customers over the phone.

Vodafone España – Spanish DPA

Vodafone had an insufficient legal basis for the processing of data, in breach of Article 6(1) EU GDPR.

Avast – Czech DPA – €13.7 million

Avast was fined for illegally processing its customers' personal data, including the sale of browsing data.