Misinformation Researcher Admits to Using ChatGPT in Court Filing, Citing 'Hallucinations'

Alexis Rowe

December 04, 2024 · 3 min read

A prominent misinformation researcher has admitted to using ChatGPT, a popular AI language model, to help prepare citations for a court filing, a shortcut that introduced fabricated details and sparked controversy. Jeff Hancock, the founder of the Stanford Social Media Lab, submitted an affidavit in support of Minnesota's "Use of Deep Fake Technology to Influence an Election" law, which is being challenged in federal court.

Hancock's filing came under scrutiny when attorneys for the plaintiffs, conservative YouTuber Christopher Kohls and Minnesota state Rep. Mary Franson, noticed that citations in the document appeared to contain errors. The attorneys asked the court to exclude the filing from consideration, arguing it was unreliable. In response, Hancock filed a new declaration acknowledging that he had used ChatGPT while assembling the document's citations, but denying that he used it to write the declaration itself.

According to Hancock, he used Google Scholar and GPT-4o to identify relevant articles and build a citation list. He says he did not catch that the tool had generated "two citation errors, popularly referred to as 'hallucinations,'" or that it had added incorrect authors to another citation. Hancock expressed regret for any confusion and maintained that the substantive points of the declaration remain valid and are supported by the most recent scholarly research in the field.

This incident raises important questions about the use of AI-generated content in legal and academic settings. While AI language models like ChatGPT can be powerful tools for streamlining research and writing tasks, they are not infallible and can introduce errors or biases. The hallucinations in Hancock's filing are a cautionary tale about the need for human oversight and fact-checking, even when working with advanced AI tools.

The implications extend beyond this specific court case, highlighting the risks of relying on AI-generated content in high-stakes contexts. As AI becomes more deeply integrated into various industries, clear guidelines and best practices will be essential to capture its benefits while minimizing the risks of error and misinformation.

In the context of the ongoing debate about the use of deepfake technology to influence elections, Hancock's admission is a reminder of the need for vigilance and critical thinking in evaluating information, and of the importance of verifying sources in a digital age where misinformation spreads quickly.

Ultimately, the controversy surrounding Hancock's court filing is a wake-up call for researchers, policymakers, and the broader tech community to reexamine their approach to AI-generated content and to build more robust safeguards against misinformation and bias.

