Meta Abandons Fact-Checking, Leaving Users to Wade Through Hate and Disinformation

Riley King

January 08, 2025 · 4 min read

Meta, the parent company of Facebook, Instagram, and WhatsApp, has announced that it is phasing out its third-party fact-checking program, a move that experts warn could lead to a proliferation of disinformation and hate online. The program, launched in 2016, partnered with independent fact-checkers to identify and review misinformation across Meta's social media platforms.

The decision has sparked widespread criticism from researchers and advocates, who argue that the move will make it easier for false information to spread online. Angie Drobnic Holan, director of the International Fact-Checking Network (IFCN) at Poynter, warned that the program's demise would "hurt Meta's users first" because it had been effective in reducing the virality of hoax content and conspiracy theories.

In a video announcement, Meta CEO Mark Zuckerberg framed the decision as a matter of promoting free speech, while accusing fact-checkers of being "too politically biased." Holan countered that the video was "incredibly unfair" to fact-checkers, who had worked with Meta as partners for nearly a decade. Fact-checkers reviewed content and rated its accuracy, but Meta made the final call on removing content or limiting its reach.

The fact-checking program had been effective in serving as a "speed bump in the way of false information," Holan noted. Content that was flagged typically had a screen placed over it to let users know that fact-checkers found the claim questionable and asked whether they still wanted to see it. The program covered a broad range of topics, from false information about celebrities to claims about miracle cures.

Experts warn that the decision could have far-reaching consequences, particularly for already targeted communities. Nicole Sugerman, campaign manager at the nonprofit Kairos, warned that abandoning fact-checking could lead to unchecked hateful disinformation about Black, brown, immigrant, and trans people, which too often leads to offline violence.

Scientists and environmental groups are also concerned about the changes, which could lead to the proliferation of anti-scientific content on Meta platforms. Kate Cell, senior climate campaign manager at the Union of Concerned Scientists, warned that the decision would allow misinformation and disinformation to continue to spread unchecked.

Michael Khoo, a climate disinformation program director at Friends of the Earth, likened Community Notes, the crowdsourced moderation model Meta plans to rely on instead, to the fossil fuel industry's marketing of recycling as a solution to plastic waste. In reality, recycling has done little to stem the tide of plastic pollution, and the strategy puts the onus on consumers to deal with a company's waste. "[Tech] companies need to own the problem of disinformation that their own algorithms are creating," Khoo said.

The decision has also raised concerns about Meta's motivations, with some accusing the company of currying favor with President-elect Trump. Zuckerberg's video announcement was seen as an attempt to appease Trump, who has been critical of social media platforms' content moderation policies.

As Meta shifts responsibility to users to weed out lies on its platforms, experts warn that the consequences could be severe. The move could lead to a proliferation of disinformation and hate online, with real-world consequences for already targeted communities. It remains to be seen how Meta's decision will play out, but one thing is clear: the company's abandonment of fact-checking is a worrying development for online safety and accountability.
