Meta issued an apology Thursday after a technical error caused Instagram users worldwide to encounter graphic violence, gore, and explicit content in their Reels feeds, despite many having strict content filters enabled.
Why it matters: The incident raises serious questions about Meta’s content moderation systems just weeks after the company announced major changes to its approach, including ending third-party fact-checking in favor of a community-driven model similar to X’s.
The technical failure: The error affected Instagram’s recommendation algorithm, causing Reels to surface content that violated Meta’s own policies against graphic material:
- Videos showing people being killed or severely injured
- Content labeled as “sensitive” appearing in regular feeds
- Material that should have been restricted appearing for all users
The user experience: The disturbing videos appeared suddenly and without warning. According to multiple reports across social media platforms, they surfaced in feeds that normally contained benign content:
- One Reddit user described seeing “street fights, school shootings, murder, and gruesome accidents”
- Another reported their feed abruptly shifting from “planes, watches, painting, cats” to “body horror and videos with Russian descriptions”
- Some users reported encountering child exploitation material
“We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended,” a Meta spokesperson said, according to CBS. “We apologize for the mistake.”
The company has not provided details about what caused the error or how many users were affected. Meta’s policies prohibit extremely graphic content, including “videos showing dismemberment, visible entrails, or charred bodies” and material featuring “sadistic remarks on imagery depicting human or animal suffering.”
This incident comes amid significant changes at Meta, including the elimination of its third-party fact-checking program in January and substantial workforce reductions. The company laid off approximately 21,000 employees across 2022 and 2023, many of them from trust and safety teams.
Meta has explicitly denied any connection between this incident and its recent policy changes regarding fact-checking. However, some users and industry observers have questioned whether reduced human oversight may have contributed to the failure.
Despite the controversy, Meta’s stock rose approximately 1.5% in premarket trading Thursday, continuing a trend that has seen shares gain nearly 40% over the past year.