The rise of generative AI models like ChatGPT has led many people to treat them as a convenient search engine, but a recent investigation by The Verge exposes the dangers of doing so. The investigation found that these models are prone to providing false information, often without clear sources or citations to back up their claims.
The investigation began when a commentator cited a pardon supposedly granted by President Woodrow Wilson to his brother-in-law, Hunter deButts. Further research revealed that no such pardon existed, and the commentator later attributed the mistake to ChatGPT. This sparked a deeper look into the reliability of generative AI models as a source of information.
When the investigator asked ChatGPT how many US presidents have pardoned their relatives, the model returned a mix of correct and incorrect answers. It correctly stated that Bill Clinton pardoned his half-brother Roger Clinton, but it also falsely claimed that George H.W. Bush pardoned his son Neil. Further research found no record of any such pardon; the claim appears to have been fabricated by ChatGPT.
ChatGPT offered other faulty answers as well, including the claim that Jimmy Carter pardoned his brother Billy, which traced back to an Esquire report later retracted because of an error. The model also cited Gerald Ford's pardon of Richard Nixon and Andrew Johnson's amnesty for former Confederate leaders as examples of presidents pardoning relatives, though neither case involved a relative at all.
The problem lies in how generative AI models produce answers. They generate text from patterns and associations in their training data rather than verifying claims against credible sources, so a fluent, confident answer can still be entirely false. That makes them an efficient vector for misinformation, because many people trust the answers these models provide without fact-checking them.
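The pattern-matching failure mode can be illustrated with a deliberately tiny sketch. Real models are vastly more sophisticated, but the toy bigram model below (the corpus and function names are invented for illustration) shows the core issue: it completes a prompt with whatever words most often followed similar words in training, with no notion of whether the resulting claim is true.

```python
from collections import Counter, defaultdict

# Toy "training corpus" (hypothetical sentences, not real quotes).
corpus = [
    "bill clinton pardoned his half-brother",
    "jimmy carter pardoned his brother",
    "the president pardoned his brother",
]

# Count which word most often follows each word (a bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def complete(prompt, steps=2):
    """Greedily extend the prompt with the most frequent next word."""
    words = prompt.split()
    for _ in range(steps):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

# The model happily extends a claim about *any* name it is given:
# it scores statistical plausibility, never factual truth.
print(complete("george pardoned"))  # → "george pardoned his brother"
```

Nothing in the corpus says anything about anyone named George, yet the model produces a confident-sounding completion anyway, because "pardoned his brother" is a common pattern in its data. That is the mechanism behind a hallucinated pardon, in miniature.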
Experts warn that using generative AI models as a search engine is risky, especially for anyone who is not cautious or persnickety about their research. Because these models rarely disclose the sources behind an answer, verifying its accuracy is difficult.
The degradation of our information environment is a growing concern, and the rise of generative AI models that spread misinformation only exacerbates the problem. As technology continues to advance, it is essential to design systems that prioritize accuracy and transparency, rather than convenience and speed.
In conclusion, the investigation highlights the dangers of relying on generative AI models like ChatGPT as a search engine. They may deliver quick, convenient answers, but those answers are prone to misinformation that further degrades our information environment. Approach these models with caution, and always fact-check what they tell you.