China Develops AI-Powered Censorship System to Monitor Online Activity

Alexis Rowe

March 26, 2025 · 4 min read

A leaked database has revealed that China has developed a sophisticated AI-powered censorship system designed to monitor and flag sensitive online content, further expanding its already formidable censorship machine. The system appears primarily geared toward censoring Chinese citizens online, but it could also be put to other uses, such as sharpening the censorship built into Chinese AI models.

The database, seen by TechCrunch, contains over 133,000 examples of content considered sensitive by the Chinese government, including topics related to politics, social life, and the military. The system uses a large language model (LLM) to automatically flag content deemed "highest priority," which includes topics such as pollution and food safety scandals, financial fraud, and labor disputes. Any form of "political satire" is also explicitly targeted, with the system designed to flag content that uses historical analogies to make points about current political figures.
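The gap between this approach and traditional filtering can be sketched in rough terms. The snippet below is a hypothetical illustration, not code or prompts from the leaked dataset: the topic list merely echoes the priority categories reported above, and the function names are invented for the example.

```python
# Hypothetical sketch only: how an LLM-based flagger might be prompted to
# triage posts, compared with traditional keyword filtering. The topic
# strings echo categories reported from the leaked dataset; everything
# else here is illustrative.

PRIORITY_TOPICS = [
    "pollution and food safety scandals",
    "financial fraud",
    "labor disputes",
    "political satire or historical analogy",
]

def build_flagging_prompt(post: str) -> str:
    """Assemble the kind of triage prompt an LLM classifier might receive."""
    topics = "\n".join(f"- {t}" for t in PRIORITY_TOPICS)
    return (
        "Classify the post below. Reply 'FLAG' if it touches any "
        f"highest-priority topic:\n{topics}\n\nPost: {post}"
    )

def keyword_filter(post: str, blocklist: list[str]) -> bool:
    """Traditional approach: literal substring matching only."""
    return any(word in post for word in blocklist)

# A keyword filter misses indirect phrasing that an LLM could catch:
post = "This reminds me of a certain emperor from an earlier dynasty..."
print(keyword_filter(post, ["protest", "strike"]))  # False: no keyword hit
prompt = build_flagging_prompt(post)
```

A keyword filter can only match literal strings, while an LLM given such a prompt could recognize the dynastic allusion as a historical analogy about a current figure, which is precisely the jump in granularity that researchers quoted below warn about.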

The dataset was discovered by security researcher NetAskari, who found it stored in an unsecured Elasticsearch database hosted on a Baidu server. While there is no indication of who built the dataset, records show that the data is recent, with the latest entries dating from December 2024. The Chinese Embassy in Washington, D.C. has denied any involvement, stating that China attaches great importance to developing ethical AI.

Xiao Qiang, a researcher at UC Berkeley who studies Chinese censorship, has examined the dataset and believes it is "clear evidence" that the Chinese government or its affiliates want to use LLMs to improve repression. "Unlike traditional censorship mechanisms, which rely on human labor for keyword-based filtering and manual review, an LLM trained on such instructions would significantly improve the efficiency and granularity of state-led information control," Xiao told TechCrunch.

The development of this AI-powered censorship system adds to growing evidence that authoritarian regimes are quickly adopting the latest AI tech to monitor and control online activity. In February, OpenAI reported that it had caught multiple Chinese entities using LLMs to track anti-government posts and smear Chinese dissidents. The use of AI in censorship is particularly concerning, as it can make repression more efficient and sophisticated, allowing governments to target even subtle criticism at a vast scale.

Michael Caster, the Asia program manager of rights organization Article 19, explained that the dataset's reference to "public opinion work" suggests that it is intended to serve Chinese government goals, particularly in the realm of censorship and propaganda. The Cyberspace Administration of China (CAC), a powerful government regulator, oversees "public opinion work," which aims to ensure that Chinese government narratives are protected online, while alternative views are purged.

The implications of this development are far-reaching, with the potential to further erode online freedom and stifle dissent in China. As Xiao Qiang noted, "I think it's crucial to highlight how AI-driven censorship is evolving, making state control over public discourse even more sophisticated, especially at a time when Chinese AI models such as DeepSeek are making headwaves." The use of AI in censorship also raises concerns about the potential for other authoritarian regimes to adopt similar technologies, further threatening online freedom and democracy worldwide.

If you have any information about the use of AI in state oppression, you can contact TechCrunch securely via Signal or SecureDrop.

Copyright © 2024 Starfolk. All rights reserved.