The UK government has announced a significant pivot in its approach to artificial intelligence, renaming the AI Safety Institute to the AI Security Institute. The rebranded body will shift its focus away from areas such as existential risk and bias in large language models, and toward strengthening cybersecurity and protecting against AI-related national security risks.
The move is part of the government's broader effort to boost the economy and industry through AI, as outlined in its Plan for Change announced in January. The renamed institute will work closely with the national security community to tackle risks associated with AI, including criminal misuse.
Alongside the renaming, the government has also announced a new partnership with Anthropic, a company founded by former OpenAI executives. The partnership will explore the use of Anthropic's AI assistant, Claude, in public services, with the goal of enhancing efficiency and accessibility. Additionally, Anthropic will contribute to scientific research and economic modeling, and provide tools to evaluate AI capabilities in the context of identifying security risks.
Anthropic co-founder and CEO Dario Amodei expressed enthusiasm for the partnership, stating that "AI has the potential to transform how governments serve their citizens." The deal marks a significant step in the government's plan to work with various foundational AI companies, as announced by Secretary of State for Technology Peter Kyle in January.
The shift in focus from AI safety to security may come as no surprise, given the government's emphasis on development and growth in its Plan for Change. The plan notably omitted mentions of "safety," "harm," "existential," and "threat," suggesting a deliberate move away from cautionary approaches to AI development.
Instead, the government is promoting a more optimistic narrative around AI, touting its potential to modernize the economy and improve public services. Civil servants will have access to their own AI assistant, "Humphrey," and are being encouraged to share data and leverage AI to speed up their work. Consumers, meanwhile, will benefit from digital wallets for government documents, as well as chatbots for interacting with public services.
While AI safety issues have not been resolved, the government's message appears to be that they cannot be considered at the expense of progress. Secretary of State Peter Kyle emphasized that the renamed institute's focus on security will ensure citizens and allies are protected from those who would misuse AI against institutions and democratic values.
Ian Hogarth, chair of the AI Security Institute, added that the institute's focus on security has been present from the start, and that the new criminal misuse team and deepening partnership with the national security community mark the next stage of tackling risks.
The UK government's shift in focus comes as the AI Safety Institute in the US faces an uncertain future, with US Vice President J.D. Vance hinting at its potential dismantling in a recent speech in Paris.
The implications of this pivot will be closely watched, as the UK government seeks to balance its ambitions for AI-driven growth with the need to address emerging security risks.