OpenAI Faces New Privacy Complaint in Europe Over ChatGPT's Hallucinations

Elliot Kim

March 20, 2025 · 4 min read

OpenAI, the developer of the popular AI chatbot ChatGPT, is facing another privacy complaint in Europe, this time from Norway, over the chatbot's tendency to hallucinate false information about individuals. The complaint, filed by privacy rights advocacy group Noyb, alleges that ChatGPT's generation of incorrect personal data violates the European Union's General Data Protection Regulation (GDPR).

The complaint stems from an incident where an individual in Norway discovered that ChatGPT had generated false information about him, claiming he had been convicted of murdering two of his children and attempting to kill the third. This is not an isolated incident, as Noyb points to other cases where ChatGPT has fabricated legally compromising information about individuals.

Under the GDPR, Europeans have a right to rectification of personal data, meaning they can demand that inaccurate information about them be corrected. However, OpenAI offers no way to correct such information, instead opting to block responses to prompts about certain individuals. Noyb argues this is insufficient, since the GDPR requires data controllers to ensure that the personal data they process is accurate.

"The GDPR is clear. Personal data has to be accurate," said Joakim Söderberg, data protection lawyer at Noyb. "If it's not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn't enough. You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true."

Confirmed breaches of the GDPR can lead to penalties of up to 4% of global annual turnover. Enforcement could also force changes to AI products, as seen in the case of Italy's data protection watchdog, which fined OpenAI €15 million for processing people's data without a proper legal basis.

So far, though, privacy watchdogs across Europe have taken a cautious approach to generative AI tools. Ireland's Data Protection Commission, which holds a lead GDPR enforcement role, has urged against rushing to ban GenAI products, arguing that regulators should instead take the time to work out how the law applies.

Noyb's new complaint is intended to shake privacy regulators awake to the dangers of hallucinating AIs. The nonprofit has filed the complaint against OpenAI with the Norwegian data protection authority, hoping the watchdog will decide it is competent to investigate.

OpenAI has been contacted for a response to the complaint. In the meantime, Noyb remains concerned that incorrect and defamatory information about individuals may have been retained within the AI model itself, potentially causing lasting reputational damage.

"Adding a disclaimer that you do not comply with the law does not make the law go away," noted Kleanthi Sardeli, another data protection lawyer at Noyb. "AI companies can also not just 'hide' false information from users while they internally still process false information."

As the investigation unfolds, it remains to be seen how regulators will respond to Noyb's complaint. One thing is clear, however: the issue of AI-generated false information is a pressing concern that requires urgent attention from policymakers, regulators, and tech companies alike.
