OpenAI, a leading artificial intelligence research organization, has announced that it will not bring its deep research model to its developer API, citing concerns over the risks of AI-powered persuasion. In a whitepaper published on Wednesday, the company said it is revising its methods for assessing the risk that AI models could spread misleading information or manipulate users' beliefs.
The decision comes as concerns grow over the potential misuse of AI to spread misinformation and disinformation. Recent examples include the spread of political deepfakes and AI-generated audio that aim to sway public opinion. Additionally, AI is increasingly being used to carry out social engineering attacks, with consumers and corporations falling victim to fraudulent schemes.
The deep research model, a specialized version of OpenAI's o3 "reasoning" model, has proven effective at generating persuasive arguments and at convincing other models to take action. However, the company acknowledges that it needs to better understand the risks of deploying such a model at scale. OpenAI notes that the model's high computing costs and relatively slow speed make it a poor fit for mass misinformation or disinformation campaigns, but it still wants to explore factors such as the personalization of persuasive content before making the model available to developers.
In its whitepaper, OpenAI published the results of several tests of the deep research model's persuasiveness. The model performed well at generating persuasive arguments and at convincing another model to make a payment, but it struggled in other tests, such as persuading a model to reveal a codeword. The company notes that the results likely represent the "lower bounds" of the model's capabilities, and that additional scaffolding or improved capability elicitation could substantially increase observed performance.
The move is seen as a responsible step by OpenAI to address the risks of AI-powered persuasion. As AI becomes increasingly ubiquitous, it is essential to weigh the consequences of deploying such technologies at scale. OpenAI's decision to hold its deep research model back from its API reflects a recognition that caution and careful evaluation are needed in the development and deployment of AI models.
The story is still developing, and we have reached out to OpenAI for further information. We will update this post if we hear back from the company.
In the meantime, the decision underscores the importance of responsible AI development and the need for ongoing dialogue among researchers, developers, and policymakers to ensure that AI is built and deployed in a way that benefits society as a whole.