AI

Dad Urges OpenAI to Remove False Accusation by ChatGPT Regarding His Children’s Deaths

Photo credit: arstechnica.com

Recent developments highlight pressing concerns about the accuracy of information generated by AI systems such as ChatGPT, and about the harm that misinformation can inflict on individuals. A notable case involves Arve Hjalmar Holmen, a Norwegian father whom ChatGPT falsely accused of killing his children, a fabrication that has continued to circulate despite recent updates aimed at improving the model’s reliability. Noyb, an organization focused on privacy rights, reported that “ChatGPT now also searches the Internet for information about people, when it is asked who they are.” Even so, Noyb warns that erroneous data may remain embedded in the system, which, in its view, violates the General Data Protection Regulation (GDPR).

Noyb emphasizes that the harm from false personal data extends beyond what is publicly shared, asserting that the GDPR’s safeguards apply to internally held datasets as well. This underscores a central principle of data privacy: companies must manage personal information responsibly regardless of how it is used or disclosed.

Challenges in Data Management for OpenAI

Holmen’s situation reflects a broader problem for ChatGPT users, as concerns about the accuracy of AI-generated content have prompted calls for accountability. Shortly after ChatGPT’s release in late 2022, an Australian mayor threatened to sue OpenAI for defamation over false claims that he had been imprisoned. A law professor was likewise falsely linked to a fabricated sexual harassment scandal, as reported by The Washington Post, and a radio host sued OpenAI over fictitious embezzlement allegations produced by the chatbot.

In response to these incidents, OpenAI has implemented filters to mitigate harmful outputs. Noyb points out, however, that filtering outputs does not remove false information from the AI’s training data. Data protection lawyer Kleanthi Sardeli argues that merely filtering outputs or appending disclaimers fails to shield individuals from reputational harm. “Adding a disclaimer that you do not comply with the law does not make the law go away,” Sardeli stated.

Sardeli further contends that AI companies must acknowledge their legal obligations under the GDPR rather than merely hiding false information while continuing to process it internally. The damage that AI “hallucinations” can cause presents not only a legal challenge but also an ethical imperative for technology firms to uphold integrity and accuracy in their systems.

Source
arstechnica.com
