AI Transcription Tool Used in Hospitals Found to Hallucinate

Max Carter

October 27, 2024 · 2 min read

Researchers have made a concerning discovery about an AI transcription tool used in hospitals, powered by OpenAI's Whisper model: the tool frequently invents entire passages of text when presented with moments of silence, a phenomenon known as hallucination.
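For readers curious how such failures can surface in practice, below is a minimal, illustrative sketch using the open-source openai-whisper package. It is not Nabla's pipeline or the researchers' methodology; it simply flags segments where Whisper emitted text even though it judged the audio to contain no speech. The audio file name and the thresholds are hypothetical.

```python
# Illustrative only: flag transcript segments that Whisper produced over
# audio it considers likely silent -- a pattern consistent with hallucination.
import whisper

model = whisper.load_model("base")              # small open-source Whisper checkpoint
result = model.transcribe("consultation.wav")   # hypothetical audio file

for segment in result["segments"]:
    # no_speech_prob: model's estimate that the segment contains no speech
    # avg_logprob:    average log-probability of the decoded tokens
    suspicious = segment["no_speech_prob"] > 0.6 and segment["avg_logprob"] < -1.0
    if suspicious and segment["text"].strip():
        # Text was generated despite the model believing there is no speech,
        # so mark the span for human review rather than trusting it.
        print(f"[review] {segment['start']:.1f}-{segment['end']:.1f}s: {segment['text']!r}")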

The tool, developed by Nabla, has been used to transcribe over 7 million medical conversations and is employed by more than 30,000 clinicians and 40 health systems. While Nabla is reportedly aware of the issue and working to address it, the implications are significant, particularly for patients with language disorders such as aphasia.

The study, conducted by researchers from Cornell University, the University of Washington, and others, found that hallucinations occurred in about 1% of transcriptions, resulting in nonsensical phrases or even violent sentiments. The researchers also noted that the model had been trained on over a million hours of YouTube videos, which may have contributed to the problem.

OpenAI has responded to the findings, stating that they take the issue seriously and are working to improve the model, including reducing hallucinations. However, the incident raises important questions about the use of AI in high-stakes contexts, such as healthcare, and the need for rigorous testing and oversight.

The discovery is a timely reminder of the importance of addressing AI's hallucination problem, which has been observed in various applications, including Meta's AI model. As AI becomes increasingly integrated into critical systems, it is essential that developers and users alike are aware of these limitations and take steps to mitigate them.
