Meta and X Approved Hate Speech Ads Targeting Muslims and Jews Ahead of German Elections

Taylor Brooks

February 22, 2025 · 5 min read

A new study by Eko, a corporate responsibility non-profit campaign group, has found that social media giants Meta and X (formerly Twitter) approved ads targeting users in Germany with violent anti-Muslim and antisemitic hate speech in the run-up to the country's federal elections. The research raises serious concerns about the platforms' content moderation and about the effectiveness of the EU's Digital Services Act (DSA) in regulating online platforms.

The study tested whether the two platforms' ad review systems would approve or reject ads containing hateful and violent messaging targeting minorities, ahead of an election in which immigration has taken centre stage in mainstream political discourse. The results were alarming: X approved all ten of the hate speech ads submitted, while Meta approved half (five ads) for running on Facebook (and potentially also Instagram).

The approved ads included violent hate speech likening Muslim refugees to a "virus," "vermin" or "rodents," branding Muslim immigrants as "rapists," and calling for them to be sterilized, burnt or gassed. One ad even called for synagogues to be torched to "stop the globalist Jewish rat agenda." The researchers illustrated the ads with AI-generated imagery and did not label any of the images as artificially generated, yet Meta still approved half of the ten ads, despite the company's policy requiring disclosure of AI imagery in ads about social issues, elections or politics.

X, meanwhile, approved all five of these hateful ads, as well as a further five that contained similarly violent hate speech targeting Muslims and Jews. The additional approved ads included messaging attacking "rodent" immigrants that the ad copy claimed are "flooding" the country "to steal our democracy," and an antisemitic trope suggesting that Jews lie about climate change in order to destroy European industry and accrue economic power.

The findings are particularly concerning given the significant influence that social media platforms can have on democratic processes. Elon Musk, the owner of X, has used the platform to personally intervene in the German election, calling for German voters to back the far-right AfD party to "save Germany." The AfD has been accused of promoting xenophobic and anti-immigrant rhetoric.

The Eko researchers disabled all test ads before any that had been approved were scheduled to run, ensuring that no users of either platform were exposed to the violent hate speech. However, the tests highlight glaring flaws in the platforms' approach to ad moderation, suggesting that neither company is properly enforcing the bans on hate speech in ad content that both claim to apply in their own policies.

The findings also raise questions about the EU's Digital Services Act, which is intended to ensure that social media platforms are held accountable for the content they host. Eko's tests suggest that neither platform is complying with the DSA's rules on hate speech, and that the EU's enforcement of the regulation is inadequate.

The European Commission has opened investigations into Meta and X, including over concerns about election security and illegal content, but it has yet to conclude these proceedings. The EU's failure to take strong action against the platforms is particularly concerning, given the growing body of civil society research suggesting that the DSA has failed to shield the democratic process of the EU's largest economy from a range of tech-fuelled threats.

Eko's spokesperson warned that the EU is now facing pressure from the Trump Administration to soften its approach to regulating Big Tech, and that there is a real danger the Commission will not fully enforce these new laws as a concession to the U.S. The spokesperson called for regulators to take strong action, including pre-election mitigation measures to prevent the algorithmic amplification of borderline content, such as hateful material, in the run-up to elections.

The study's findings are a stark reminder of the need for stronger regulation and oversight of social media platforms, particularly in the context of democratic elections. As the EU continues to grapple with the challenges of regulating Big Tech, it is imperative that policymakers prioritize the protection of democratic processes and the safety of minority communities.
