Former Google CEO Eric Schmidt Warns Against Pursuing 'Superhuman' AI, Citing Global Instability Risks

Jordan Vega

March 05, 2025 · 3 min read
In a stark warning, former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks have cautioned against the US pursuing a Manhattan Project-style push to develop artificial general intelligence (AGI) systems with "superhuman" intelligence. In a policy paper titled "Superintelligence Strategy," the co-authors argue that an aggressive bid to exclusively control AGI systems could prompt fierce retaliation from China, potentially in the form of a cyberattack, which could destabilize international relations.

The paper's authors contend that a US-led effort to develop AGI could be perceived as a threat by other nations, particularly China, and might trigger hostile countermeasures. They write, "[A] Manhattan Project [for AGI] assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it." Instead, the co-authors propose a more measured approach, prioritizing defensive strategies to deter other countries from creating AGI systems that could pose a threat to global stability.

The warning comes at a critical time: a US congressional commission recently proposed a "Manhattan Project-style" effort to fund AGI development, modeled after America's atomic bomb program in the 1940s, and US Secretary of Energy Chris Wright has stated that the US is at "the start of a new Manhattan Project" on AI. Schmidt, Wang, and Hendrycks argue that this approach could have unintended and far-reaching consequences.

The paper introduces the concept of Mutual Assured AI Malfunction (MAIM), under which governments would proactively disable threatening AI projects rather than wait for adversaries to weaponize AGI. To that end, the co-authors suggest that the US expand its arsenal of cyberattack capabilities to disable dangerous AI projects controlled by other nations, while also limiting adversaries' access to advanced AI chips and open-source models.

The paper's authors identify a dichotomy in the AI policy world: "doomers," who believe catastrophic outcomes from AI development are inevitable and advocate slowing AI progress, and "ostriches," who believe nations should accelerate AI development and hope for the best. The co-authors propose a third way: a measured approach to developing AGI that prioritizes defensive strategies.

The warning is particularly notable coming from Schmidt, who has previously been vocal about the need for the US to compete aggressively with China in developing advanced AI systems. As the co-authors note, however, America's decisions around AGI don't exist in a vacuum, and a more cautious approach may be wise in the face of potential global instability.

As the world watches the US push the limits of AI, Schmidt and his co-authors suggest that it may be wiser to take a defensive approach, prioritizing global stability over the pursuit of "superhuman" AI. The implications of this warning are far-reaching, and it remains to be seen how the US and other nations will respond to the challenges and risks posed by AGI development.
