A recent dinner conversation with business leaders in San Francisco highlighted the controversy surrounding the potential of artificial intelligence (AI) to achieve human-like intelligence. The debate centers on large language models (LLMs), which power chatbots like ChatGPT and Gemini, and whether they can attain human-level or even superhuman intelligence in the near term.
On one side of the debate are tech CEOs like Dario Amodei, CEO of Anthropic, and Sam Altman, CEO of OpenAI, who argue that highly capable AI will bring about widespread societal benefits. Amodei predicts that exceptionally powerful AI could arrive as soon as 2026 and be "smarter than a Nobel Prize winner across most relevant fields." Altman claims that his company knows how to build "superintelligent" AI, which could "massively accelerate scientific discovery."
However, not everyone is convinced by these optimistic claims. A separate cohort of AI leaders, including Thomas Wolf, co-founder and chief science officer of Hugging Face, is skeptical that today's LLMs can reach human-like intelligence, let alone superintelligence, without novel innovations. Wolf believes that Nobel Prize-level breakthroughs don't come from answering known questions, but rather from asking questions no one has thought to ask, something current AI models are not capable of doing.
Wolf's views are shared by other AI leaders, such as Google DeepMind CEO Demis Hassabis, who reportedly told staff that the industry could be up to a decade away from developing human-like intelligence. Meta Chief AI Scientist Yann LeCun has also expressed doubts about the potential of LLMs, calling the idea that they could achieve human-like intelligence "nonsense" and advocating for entirely new architectures to serve as the bedrock of superintelligence.
Kenneth Stanley, a former OpenAI lead researcher and current executive at Lila Sciences, is working on extracting original, creative ideas from AI models, a subfield of AI research called open-endedness. Stanley believes that creativity is a key step along the path to human-like intelligence, but notes that building a "creative" AI model is easier said than done. He suggests that algorithmically replicating a human's subjective taste for promising new ideas is necessary to design truly intelligent AI models.
The debate highlights the complexity of achieving human-like intelligence in AI. Optimists point to methods like AI "reasoning" models as evidence that human-like intelligence is within reach, while skeptics argue that these models remain limited in their ability to think creatively and generate original ideas. The discussion underscores the need for a more nuanced understanding of what human-like intelligence in AI would actually require, and for candor about the obstacles that stand in the way.
The AI realists, as they might be called, are not trying to diminish the field's advances, but rather to spark a big-picture conversation about what separates today's AI models from human-like intelligence. By acknowledging the challenges and limitations of current models, these leaders aim to drive progress toward more advanced and capable AI systems.
The debate is far from over, and as the field continues to evolve, it will be worth watching how the discussion around human-like intelligence unfolds. One thing is clear, however: achieving human-like intelligence in AI will require a deeper understanding of what it means to be intelligent, and a more nuanced approach to building systems that can truly think and act like humans.