Google's recent announcement of its AI co-scientist, an artificial intelligence tool designed to help scientists generate hypotheses and research plans, has been met with skepticism by experts in the field. Despite Google's claims that the tool has the potential to uncover new knowledge, many researchers say it falls short of those promises and that Google has offered no empirical data to support its usefulness.
Sarah Beery, a computer vision researcher at MIT, doubted the tool would find serious use in the scientific community. "This preliminary tool, while interesting, doesn't seem likely to be seriously used," she said. Her skepticism was echoed by Favia Dubyk, a pathologist affiliated with Northwest Medical Center-Tucson in Arizona, who criticized the lack of detail in Google's blog post describing the AI co-scientist. "The lack of information provided makes it really hard to understand if this can truly be helpful," Dubyk said.
This is not the first time the tech giant has been criticized for trumpeting a supposed AI breakthrough without providing a means to reproduce the results. In 2020, Google claimed one of its AI systems, trained to detect breast tumors, achieved better results than human radiologists; researchers from Harvard and Stanford published a rebuttal in the journal Nature, arguing that the lack of detailed methods and code in Google's research "undermined its scientific value."
Experts also pointed out that tools like the AI co-scientist often perform well in controlled environments but may fail when applied at scale. Ashique KhudaBukhsh, an assistant professor of software engineering at Rochester Institute of Technology, emphasized the need for rigorous, independent evaluation of AI tools across diverse scientific disciplines. "We won't truly understand the strengths and limitations of tools like Google's 'co-scientist' until they undergo rigorous, independent evaluation," KhudaBukhsh said.
One of the significant challenges in developing AI tools to aid in scientific discovery is anticipating the untold number of confounding factors. AI might be useful in areas where broad exploration is needed, but it's less clear whether AI is capable of the kind of out-of-the-box problem-solving that leads to scientific breakthroughs. KhudaBukhsh noted that many important scientific advancements, like the development of mRNA vaccines, were driven by human intuition and perseverance in the face of skepticism.
Lana Sinapayen, an AI researcher at Sony Computer Science Laboratories in Japan, believes that tools like Google's AI co-scientist focus on the wrong kind of scientific legwork. She sees value in AI that could automate technically difficult or tedious tasks, like summarizing new academic literature or formatting work to fit a grant application's requirements. However, she doesn't think there's much demand within the scientific community for an AI co-scientist that generates hypotheses, as many researchers derive intellectual fulfillment from this task.
Beery also noted that often the hardest step in the scientific process is designing and implementing the studies and analyses to verify or disprove a hypothesis, which isn't necessarily within reach of current AI systems. AI can't use physical tools to carry out experiments, and it often performs worse on problems for which extremely limited data exists.
Furthermore, AI's technical shortcomings and risks, such as its tendency to hallucinate, make scientists wary of endorsing it for serious work. KhudaBukhsh fears AI tools could end up generating noise in the scientific literature rather than advancing it. A recent study found that AI-fabricated "junk science" is already flooding Google Scholar, Google's free search engine for scholarly literature.
Sinapayen likewise said she wouldn't trust today's AI to reliably execute tasks such as literature review and synthesis. "Those are things that various existing tools are claiming to do, but those are not jobs that I would personally leave up to current AI," she said.
For now, Google's AI co-scientist has generated excitement without winning over experts in the field. The absence of empirical data, the limitations of AI in scientific research, and the risk of AI-generated noise have all fed the skepticism surrounding the tool. As the scientific community continues to grapple with the potential and pitfalls of AI, it remains to be seen whether the AI co-scientist will be a genuine advance or just another overhyped product.