Meta, the parent company of Instagram, has fixed an error that caused some users to see a flood of graphic and violent videos in their Instagram Reels feed, even with the platform's "Sensitive Content Control" feature enabled. The fix came after users reported that the recommended content violated Meta's own content moderation policies.
A Meta spokesperson confirmed the issue in a statement to CNBC, saying, "We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake." The company did not provide further details on how the error occurred or how many users were affected.
According to Meta's policy, content that includes "videos depicting dismemberment, visible innards or charred bodies" and "sadistic remarks towards imagery depicting the suffering of humans and animals" is prohibited. Yet users reported seeing videos that appeared to show dead bodies and graphic violence against both humans and animals, raising concerns about the platform's ability to enforce its own rules.
The error is particularly noteworthy given Meta's recent announcement that it would loosen its content moderation policies to better promote free speech. The move has been widely read as the company repositioning itself for the Trump presidency, and it has raised concerns among critics who argue that the platform is not doing enough to protect users from harmful content.
The incident highlights the ongoing challenge social media companies face in balancing free speech with the need to protect users from harmful or offensive content. While Meta's apology and quick fix are welcome, the episode raises questions about the company's ability to enforce its own policies, and it underscores the importance of robust content moderation on platforms like Instagram, where users are exposed to a vast array of recommended content. Incidents like this serve as a reminder of the need for vigilance and accountability in protecting users from harmful material.
In the aftermath, users and regulators will be watching closely to see how Meta implements its revised content moderation policies and keeps graphic and violent material out of users' feeds. How effectively the company addresses these concerns will have significant implications for the future of social media and the role of technology companies in shaping online discourse.