
Italy gives OpenAI a first task list for lifting the suspension order for ChatGPT

by Ana Lopez

Italy’s data protection watchdog has outlined what OpenAI must do to overturn the suspension order against ChatGPT it issued late last month – when it said it suspected the AI chatbot service was in breach of the European Union’s General Data Protection Regulation (GDPR) and ordered the US-based company to stop processing local residents’ data.

The EU’s GDPR applies whenever personal data is processed, and there’s no doubt that large language models like OpenAI’s GPT have sucked up massive amounts of data from the public internet to train their generative AI models to respond in a human-like way to natural language prompts.

OpenAI responded to the Italian data protection authority’s order by quickly geoblocking access to ChatGPT. In a short public statement, OpenAI CEO Sam Altman tweeted confirmation that the company had stopped offering the service in Italy – alongside the usual Big Tech boilerplate caveat that it “think[s] we adhere to all privacy laws.”

The Italian Garante apparently thinks otherwise.

The short version of the regulator’s new compliance requirement is this: OpenAI must become transparent and publish an information notice describing the data processing; it should immediately introduce age restrictions to prevent minors from accessing the technology and move to more robust age verification measures; it needs to clarify the legal basis it claims for processing people’s data for training its AI (and can’t rely on the performance of a contract – meaning it has to choose between consent or legitimate interests); it must also provide users (and non-users) with ways to exercise rights over their personal data, including requesting corrections to misinformation generated about them by ChatGPT (or otherwise having their data deleted); it should also allow users to object to OpenAI processing their data for training its algorithms; and it must run a local awareness campaign to inform Italians that it is processing their information to train its AIs.

The DPA has given OpenAI an April 30 deadline to get most of that done. (The local radio, TV and internet awareness campaign has a slightly more generous deadline of May 15.)

There is also a little more time for the additional requirement to migrate from the immediately required (but weak) age-gating child safety technology to a harder-to-circumvent age verification system. OpenAI has been given until May 31 to submit a plan for implementing age verification technology to filter out users under the age of 13 (and users aged 13 to 18 for whom parental consent has not been obtained), with the deadline for having that more robust system in place set for September 30.

In a press release setting out what OpenAI must do for the temporary suspension of ChatGPT to be lifted – an order issued two weeks ago, when the regulator also announced it was opening a formal investigation into suspected GDPR breaches – it writes:

OpenAI will have to comply by April 30 with the measures set out by the Italian SA [supervisory authority] with regard to transparency, the rights of data subjects — including users and non-users — and the legal basis of the processing for algorithmic training relying on users’ data. Only in that case will the Italian SA lift its order that placed a temporary restriction on the processing of Italian users’ data, there no longer being the urgency underpinning the order, so that ChatGPT will once again be available from Italy.

The data protection authority elaborates on each of the required “concrete measures”, stipulating that the mandated information notice should “describe the arrangements and logic of the data processing necessary for the operation of ChatGPT, together with the rights granted to data subjects (users and non-users)”, adding that it “should be easily accessible and placed so that it can be read before logging into the service.”

Users in Italy must be shown this notice before signing up and must also confirm they are over 18 years old, it further requires. Users who registered before the DPA’s stop-data-processing order will need to see the notice when they access the reactivated service, and will also have to pass through an age gate that filters out underage users.

As for the legal basis OpenAI claims for processing people’s data to train its algorithms, the Garante has narrowed the available options to two: consent or legitimate interests – stipulating that it must immediately remove all references to performance of a contract, “in accordance with the [GDPR’s] accountability principle.” (OpenAI’s privacy policy currently cites all three grounds, but appears to lean most heavily on performance of a contract for providing services such as ChatGPT.)

“This is without prejudice to the exercise of the SA’s investigative and enforcement powers in this regard,” it adds, confirming that it is withholding judgment on whether the two remaining grounds can be used lawfully for OpenAI’s purposes.

In addition, the GDPR provides data subjects with a range of access rights, including the right to rectify or delete their personal data. That is why the Italian regulator has also demanded that OpenAI implement tools so that data subjects – both users and non-users – can exercise their rights and have falsehoods that the chatbot generates about them corrected. Or, if correcting AI-generated lies about said individuals proves “not technically feasible,” the DPA stipulates that the company must provide a way to delete their personal information.

“OpenAI will need to make easily accessible tools available to enable non-users to exercise their right to object to the processing of their personal data that is relied upon for the operation of the algorithms. Users should be given the same right if legitimate interest is chosen as the legal basis for processing their data,” it adds, referring to another right the GDPR grants to data subjects when legitimate interest is used as the legal basis for processing personal information.

All the measures announced by the Garante are contingent, based on its preliminary concerns. Its press release also notes that the formal investigation – “to identify possible breaches of law” – is continuing and could lead to a decision to “take additional or other action if deemed necessary after completion of the ongoing fact-finding process.”

We’ve reached out to OpenAI for comment, but the company had not responded to our email as of this writing.

