Chinese AI Startup Sand AI Censors Politically Sensitive Images in Video-Generating Model

Taylor Brooks

April 22, 2025 · 3 min read

Chinese AI startup Sand AI has released an openly licensed video-generating AI model, Magi-1, which has garnered praise from entrepreneurs like Microsoft Research Asia founding director Kai-Fu Lee. However, TechCrunch's testing has revealed that the hosted version of the model censors images that might raise the ire of Chinese regulators, sparking concerns about information control and censorship.

Magi-1 is a significant achievement in AI-generated video, capable of producing high-quality, controllable footage that captures physics more accurately than rival open models. It is also demanding to run: the model has 24 billion parameters and requires between four and eight Nvidia H100 GPUs. As a result, many users rely on Sand AI's hosted platform to test-drive Magi-1.

However, TechCrunch's testing has shown that Sand AI's platform blocks uploads of politically sensitive images, including photos of Xi Jinping, Tiananmen Square, and Tank Man, as well as the Taiwanese flag and insignias supporting Hong Kong liberation. The filtering appears to operate on the image content itself; renaming image files does not circumvent the blocking.

This censorship is not unique to Sand AI. Hailuo AI, a Shanghai-based startup, also blocks photos of Xi Jinping on its generative media platform. Sand AI's filtering appears more aggressive, however: Hailuo, for instance, allows images of Tiananmen Square. The blocking of politically sensitive content is likely a response to China's stringent information controls, which require models to comply with laws forbidding content that "damages the unity of the country and social harmony."

Interestingly, while Chinese models tend to block political speech, they often apply fewer filters than their American counterparts to pornographic content. As 404 recently reported, a number of video generators released by Chinese companies lack basic guardrails against generating nonconsensual nudity, raising concerns about how inconsistently these information controls are applied and how easily they can be exploited.

The implications of Sand AI's censorship reach beyond a single model, with potential consequences for how AI models are developed and deployed in China. The incident highlights the tension between rapid innovation and responsible AI development, particularly in regions with strict information controls. As AI-generated video becomes more widespread, the ethical and societal implications of these models deserve scrutiny, as does the transparency of the guardrails placed around them.

In conclusion, Sand AI's Magi-1 is a notable technical achievement, but the censorship of politically sensitive content on its hosted platform raises important questions about information control. As the AI landscape continues to evolve, transparency, accountability, and ethical considerations must remain priorities in how such models are built and deployed.
