Signal President Meredith Whittaker has sounded the alarm on the risks of agentic AI, warning that it could compromise user privacy and security. Speaking at the SXSW conference in Austin, Texas, Whittaker cautioned that the use of AI agents raises a "profound issue" for both privacy and security.
Whittaker explained that AI agents are being marketed as a way to add value to users' lives by handling various online tasks on their behalf. For instance, AI agents could look up concerts, book tickets, schedule events on calendars, and message friends. However, to perform these tasks, the AI agent would need access to sensitive user data, including web browser history, credit card information, calendars, and messaging apps.
Whittaker warned that this level of access would require "something that looks like root permission, accessing every single one of those databases – probably in the clear, because there's no model to do that encrypted." This, she argued, would create a significant security risk, particularly if a model powerful enough to handle such tasks ends up processing that data in the cloud rather than on the device.
Whittaker's concerns are not limited to the security risks of agentic AI. She also warned that integrating AI agents with messaging apps like Signal would undermine the privacy of users' messages. The AI agent would need to access the app to text friends and pull data back to summarize those texts, compromising the confidentiality of the messages.
Whittaker's comments followed her earlier remarks on the AI industry's reliance on a surveillance model built on mass data collection. She argued that the "bigger is better AI paradigm" – one that depends on gathering ever more data – carries consequences she does not see as beneficial. With agentic AI, she warned, privacy and security would be further undermined in the name of convenience.
The implications of Whittaker's warnings are far-reaching. As AI agents become more prevalent, users may be unwittingly compromising their privacy and security in exchange for the convenience of having tasks performed on their behalf. Whittaker's comments serve as a timely reminder of the need for caution and consideration in the development and deployment of agentic AI.
Whittaker's warnings on the risks of agentic AI amount to a call for the tech industry to prioritize user privacy and security in the development of AI agents. As this new paradigm of computing takes shape, it is essential that privacy and security are not sacrificed for the sake of convenience.