Privacy Warriors Whip Out GDPR After ChatGPT Wrongly Accuses Dad Of Child Murder

A Norwegian man was shocked when ChatGPT falsely claimed in a conversation that he had murdered his two sons and tried to kill a third, mixing in real details about his personal life.
Now, privacy lawyers say this blend of fact and fiction breaches GDPR rules.
Austrian non-profit None Of Your Business (noyb) filed a complaint [PDF] against OpenAI with Norway's data protection authority on Thursday, accusing the Microsoft-backed super-lab of violating Article 5 of Europe's General Data Protection Regulation (GDPR). The filing claims ChatGPT falsely portrayed Arve Hjalmar Holmen as a child murderer in its output, while mixing in accurate personal details such as his hometown and the number and gender of his children. Under those rules, personal data must be accurate, no matter how it's processed.
"The GDPR is clear," said noyb data-protection lawyer Joakim Söderberg. "Personal data has to be accurate. And if it's not, users have the right to have it changed to reflect the truth."
Getting that false information corrected is easier said than done, as noyb has previously argued. The group, led by privacy warrior Max Schrems, filed a similar complaint against OpenAI last year, claiming the outfit made it impossible to fix false personal data in ChatGPT's outputs.
In its statement on the latest complaint, noyb said OpenAI previously argued it couldn't correct false data in the model's output, which is generated on the fly using statistics and an element of randomness. Getting things wrong is inherent in the design of today's generative large neural networks.
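To see why, here's a minimal sketch in Python, not OpenAI's code and with invented probabilities, of how a generative model picks each token by weighted random sampling. The same prompt can yield different continuations, including an occasional false one, and there's no stored record anywhere to go back and correct.

```python
# Minimal sketch (not OpenAI's code) of why generated output can't be
# "corrected" like a database record: each token is sampled fresh from a
# probability distribution, so there is no stored fact to edit.
import random

# Hypothetical next-token probabilities; the numbers are invented purely
# for illustration.
next_token_probs = {
    "a": 0.45,
    "an": 0.25,
    "the": 0.20,
    "convicted": 0.10,  # a low-probability, false continuation
}

def sample_next_token(probs: dict, temperature: float = 1.0) -> str:
    """Sample one token; temperature supplies the 'element of randomness'."""
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

# The same prompt can produce different output on every run, which is why
# getting things wrong is inherent to the design rather than a fixable bug.
for _ in range(5):
    print(sample_next_token(next_token_probs, temperature=1.2))
```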
The lab said it could only "block" certain data, using a filter at the output and/or input, when specific prompts are used, leaving the system capable of spitting out wrong info. Under GDPR, noyb argues, it makes no difference whether bad output ever makes it through safeguards to the public or not. False information still violates Article 5's accuracy requirement.
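In other words, the filter sits in front of the model rather than inside it. A minimal sketch, assuming a simple blocklist approach (OpenAI's actual filtering mechanism is not public), shows why the underlying false output survives untouched:

```python
# Minimal blocklist sketch, assuming a hypothetical suppression list;
# OpenAI's real filtering mechanism is not public.
BLOCKED_NAMES = {"Arve Hjalmar Holmen"}

def filter_output(raw_model_output: str) -> str:
    """Suppress output that mentions a blocked name.

    The raw (possibly false) text still exists upstream of this function.
    The filter hides it from the user; it does not correct it, which is
    the crux of noyb's Article 5 argument.
    """
    if any(name in raw_model_output for name in BLOCKED_NAMES):
        return "I'm unable to share information about this person."
    return raw_model_output

print(filter_output("Arve Hjalmar Holmen is a convicted murderer."))  # suppressed
print(filter_output("The weather in Trondheim is mild."))             # passes through
```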
OpenAI has tried to sidestep its obligations by adding a disclaimer saying the tool "can make mistakes", noyb added, but the non-profit argues that doesn't get the multi-billion-dollar biz off the hook.
"Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn't enough," said Söderberg. "You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true."
This ain't the first time OpenAI has been accused of peddling defamation: a Georgia resident sued the outfit in 2023 after ChatGPT incorrectly told a journalist he had embezzled money from a gun rights group. The ChatGPT maker also ran into trouble in Australia when it falsely linked a mayor to a foreign bribery scandal. The same year, the US Federal Trade Commission opened a probe into OpenAI's handling of personal data and potential violations of consumer protection laws.
- Brits end probe into Microsoft's $13B bankrolling of OpenAI
- FTC urged to freeze OpenAI's 'biased, deceptive' GPT-4
- OpenAI asks Uncle Sam to let it scrape everything, stop other countries complaining
- GPT apps fail to disclose data collection, study finds
This latest complaint could result in OpenAI being ordered to update its model to somehow block hallucinated information, limit processing of Holmen's data, or pay a fine; it's up to regulators. But the AI giant may already have a partial out. noyb acknowledged that newer ChatGPT models, which now search the web for real-time information to incorporate into their output, no longer generate false claims about Holmen.
While the date of Holmen's defamatory conversation with ChatGPT is redacted in the complaint, the document notes that it took place before OpenAI's October 2024 release of ChatGPT models able to search the live internet.
"ChatGPT now also searches the internet for information about people, when it is asked who they are," noyb said. "For Arve Hjalmar Holmen, this luckily means that ChatGPT has stopped telling lies about him."
Nonetheless, the complaint notes that a web link to the original conversation still exists, indicating the false information remains within OpenAI's systems. noyb argues the data may have been used to further train the models, meaning the inaccuracies persist behind the scenes even if they're no longer shown to users, and thus the alleged GDPR violation remains relevant.
"AI companies can also not just 'hide' false information from users while they internally still process false information," said noyb data protection lawyer Kleanthi Sardeli.
The Register has asked OpenAI for comment. ®