EU’s ChatGPT taskforce offers first look at detangling the AI chatbot’s privacy compliance


After more than a year of deliberation, an EU data protection taskforce published its preliminary findings on Friday, examining how OpenAI’s wildly popular chatbot, ChatGPT, squares with the bloc’s privacy laws. The headline takeaway is that the working group of privacy enforcers remains divided on crunch legal questions, such as whether OpenAI’s data processing is lawful and fair.

The issue matters because penalties for confirmed breaches of the bloc’s privacy regime can reach up to 4% of global annual turnover. Data protection authorities can also order non-compliant processing to stop. So, in theory, OpenAI faces considerable regulatory risk at a time when dedicated AI laws are thin on the ground (and, even in the EU’s case, still years away from being fully operational).

But without clarity from EU data protection enforcers on how current privacy laws apply to ChatGPT, it’s a safe bet that OpenAI will feel empowered to continue business as usual, despite a growing number of complaints that its technology violates various provisions of the EU’s General Data Protection Regulation (GDPR).

For example, Poland’s data protection authority (DPA) opened an investigation following a complaint that the chatbot made up information about an individual and that OpenAI refused to correct the errors. A similar complaint was recently lodged in Austria.

Lots of GDPR complaints, a lot less enforcement

On paper, the GDPR applies whenever personal data is collected and processed — something large language models (LLMs) like OpenAI’s GPT, the AI model behind ChatGPT, are demonstrably doing at a vast scale when they scrape data off the public internet to train their models, including by siphoning people’s posts off social media platforms.

The EU regulation also empowers DPAs to order any non-compliant processing to stop. This could be a mighty lever for shaping how the AI giant behind ChatGPT can operate in the region if GDPR enforcers choose to pull it.

Indeed, we saw a glimpse of this last year when Italy’s privacy watchdog hit OpenAI with a temporary ban on processing the data of local users of ChatGPT. The action, taken using emergency powers contained in the GDPR, led to the AI giant briefly shutting down the service in the country.


ChatGPT was only able to resume operating in Italy after OpenAI made changes to the information and controls it provides users, in response to a list of demands from the DPA. But the Italian investigation into the chatbot continues, and it covers crucial issues such as the legal basis OpenAI claims for processing people’s data to train its AI models in the first place. In the EU, that question remains mired in legal uncertainty.

Under the GDPR, any entity that wants to process personal data about individuals needs a legal basis for doing so. The regulation lists six possible bases, though most are not available in OpenAI’s context. And the Italian DPA has already instructed the AI giant that it cannot rely on claiming a contractual necessity to process people’s data to train its AIs.

That leaves OpenAI with just two possible legal bases: either consent (i.e., asking users for permission to use their data), or a broad basis called legitimate interests (LI), which requires a balancing test and obliges the controller to let users object to the processing.

 

Since Italy’s intervention, OpenAI has shifted to claiming it has an LI for processing personal data used for model training. However, in January, the Italian DPA’s draft decision on its investigation found OpenAI had violated the GDPR. No details of the draft findings have been made public, so we have yet to see the authority’s full assessment on the legal basis point. A final decision on the complaint remains pending.

A precision ‘fix’ for ChatGPT’s lawfulness?

The task force’s report discusses this knotty lawfulness issue, pointing out that ChatGPT needs a valid legal basis for all stages of personal data processing: collection of training data; pre-processing of the data (such as filtering); training itself; prompts and ChatGPT outputs; and any training on ChatGPT prompts.

The first three of the listed stages carry what the task force couches as “peculiar risks” for people’s fundamental rights — with the report highlighting how the scale and automation of web scraping can lead to large volumes of personal data being ingested, covering many aspects of people’s lives. It also notes that scraped data may include the most sensitive types of personal data (which the GDPR refers to as “special category data”), such as health info, sexuality, political views, etc., which requires an even higher legal bar for processing than general personal data.

On special category data, the task force also asserts that just because data is public does not mean it can be considered to have been made “manifestly” public — which would trigger an exemption from the GDPR requirement for explicit consent to process this type of data. (“To rely on the exception laid down in Article 9(2)(e) GDPR, it is important to ascertain whether the data subject had intended, explicitly and by a clear affirmative action, to make the personal data in question accessible to the general public,” it writes on this.)

 

To rely on LI as its legal basis generally, OpenAI must demonstrate a need to process the data; the processing must also be limited to what is necessary for that need; and it must carry out a balancing test, weighing its legitimate interests in the processing against the rights and freedoms of data subjects (i.e., the people the data is about).

Here, the task force has another suggestion, writing that “adequate safeguards” — such as “technical measures”, defining “precise collection criteria” and blocking out specific data categories or sources (like social media profiles), to allow for less data to be collected in the first place to reduce impacts on individuals — could “change the balancing test in favour of the controller”, as it puts it.

This approach could force AI companies to care more about how and what data they collect to limit privacy risks.

“Furthermore, measures should be in place to delete or anonymise personal data collected via web scraping before the training stage,” the task force also suggests.

 

OpenAI is also seeking to rely on LI to process ChatGPT users’ prompt data for model training. On this, the report emphasizes the need for users to be “clearly and demonstrably informed” that such content may be used for training purposes — noting this is one of the factors that would be considered in the balancing test for LI.

 

It will be up to the individual DPAs assessing complaints to decide whether the AI giant has actually fulfilled the requirements to rely on LI. If it can’t, ChatGPT’s maker would be left with only one legal option in the EU: asking citizens for consent. And given how many people’s data is likely contained in training data sets, it’s unclear how workable that would be. (Nor would the deals the AI giant is fast cutting with news publishers to license their journalism translate into a template for licensing Europeans’ personal data, as the law doesn’t allow people to sell their consent; consent must be freely given.)

Fairness & transparency aren’t optional

Elsewhere, on the GDPR’s fairness principle, the task force’s report stresses that privacy risk cannot be transferred to the user by embedding a clause in T&Cs that “data subjects are responsible for their chat inputs”.

 

“OpenAI remains responsible for complying with the GDPR and should not argue that the input of certain personal data was prohibited in the first place,” it adds.

On transparency obligations, the task force appears to accept that OpenAI could make use of an exemption (GDPR Article 14(5)(b)) from the requirement to notify individuals about data collected about them, given the scale of the web scraping involved in acquiring data sets to train LLMs. However, its report reiterates the “particular importance” of informing users that their inputs may be used for training purposes.

The report also touches on the issue of ChatGPT ‘hallucinating’ (making information up), warning that the GDPR “principle of data accuracy must be complied with” — and emphasizing the need for OpenAI to, therefore, provide “proper information” on the “probabilistic output” of the chatbot and its “limited level of reliability”.

 

The task force also suggests OpenAI provide users with an “explicit reference” that generated text “may be biased or made up”.

 

On data subject rights, such as the right to rectification of personal data (the focus of several GDPR complaints about ChatGPT), the report describes it as “imperative” that people can easily exercise those rights. It also notes limitations in OpenAI’s current approach, including the fact that it does not let users have incorrect personal information generated about them corrected, but only offers to block the generation.

However, the task force does not offer clear guidance on how OpenAI can improve the “modalities” it offers users to exercise their data rights. It just makes a generic recommendation that the company apply “appropriate measures designed to implement data protection principles effectively” and “necessary safeguards” to meet the requirements of the GDPR and protect the rights of data subjects. Which sounds a lot like, ‘We don’t know how to fix this either.’

ChatGPT GDPR enforcement on ice?

The ChatGPT taskforce was established in April 2023, in the wake of Italy’s headline-grabbing intervention on OpenAI, with the aim of streamlining enforcement of the bloc’s privacy rules on the nascent technology. The taskforce operates within a regulatory body called the European Data Protection Board (EDPB), which steers the application of EU law in this area. But it’s important to note that DPAs remain independent and are competent to enforce the law on their own patch, since GDPR enforcement is decentralized.

 

Despite DPAs’ independence to enforce locally, watchdogs are evidently nervous about how to respond to a nascent technology like ChatGPT.

 

Earlier this year, when the Italian DPA announced its draft decision, it noted that its proceeding would “take into account” the work of the EDPB task force. There are other signs that watchdogs may be more inclined to wait for the working group to weigh in with a final report—maybe in another year’s time—before wading in with their enforcement. So, the task force’s mere existence may already influence GDPR enforcement on OpenAI’s chatbot by delaying decisions and putting investigations of complaints into the slow lane.

For example, in a recent interview in local media, Poland’s data protection authority suggested its investigation into OpenAI would need to wait for the task force to complete its work.

The watchdog did not respond when we asked whether it’s delaying enforcement because of the ChatGPT taskforce’s parallel workstream. A spokesperson for the EDPB told us the task force’s work “does not prejudge the analysis that each DPA will make in their respective, ongoing investigations”. However, they added, “While DPAs are competent to enforce, the EDPB has an important role in promoting cooperation between DPAs on enforcement.”

 

There is a considerable spectrum of views among DPAs on how urgently they should act on concerns about ChatGPT. So, while Italy’s watchdog made headlines for its swift interventions last year, Ireland’s (now former) data protection commissioner, Helen Dixon, told a Bloomberg conference in 2023 that DPAs shouldn’t rush to ban ChatGPT, arguing they needed to take time to figure out “how to regulate it properly.”

It is likely no accident that OpenAI moved to set up an EU operation in Ireland last fall. The move was quietly followed, in December, by a change to its T&Cs — naming its new Irish entity, OpenAI Ireland Limited, as the regional provider of services such as ChatGPT — setting up a structure whereby the AI giant was able to apply for Ireland’s Data Protection Commission (DPC) to become its lead supervisor for GDPR oversight.

 

This regulatory-risk-focused legal restructuring appears to have paid off for OpenAI: the EDPB ChatGPT taskforce’s report suggests the company was granted main establishment status as of February 15 this year, allowing it to take advantage of a mechanism in the GDPR called the One-Stop Shop (OSS). That means any cross-border complaints arising since then will get funnelled via a lead DPA in the country of main establishment (i.e., in OpenAI’s case, Ireland).

While all this may sound pretty wonky, it means the AI company can now dodge the risk of further decentralized GDPR enforcement, like we’ve seen in Italy and Poland, as Ireland’s DPC will get to decide which complaints are investigated, and how and when, going forward.

The Irish watchdog has gained a reputation for taking a business-friendly approach to enforcing the GDPR on Big Tech. In other words, ‘Big AI’ may be next in line to benefit from Dublin’s largess in interpreting the bloc’s data protection rulebook.

OpenAI was contacted for a response to the EDPB taskforce’s preliminary report but had not responded at press time.