Meta Apologizes After Technical Error Floods Instagram Reels with Violent Content

Meta apologizes after technical error causes Instagram users to see graphic violence and explicit content in Reels feeds despite content filters being enabled.

By Ryan Hansen

Image credit: Wikimedia

Key Takeaways

Meta issued an apology Thursday after a technical error caused Instagram users worldwide to encounter graphic violence, gore, and explicit content in their Reels feeds, despite many having strict content filters enabled.

Why it matters: The incident raises serious questions about Meta’s content moderation systems just weeks after the company announced major changes to its approach, including ending third-party fact-checking in favor of a community-driven model similar to X’s.

Technical Failure: The error affected Instagram’s recommendation algorithm, surfacing material that violated Meta’s own policies against graphic content. Users reported seeing:

  • Videos showing people being killed or severely injured
  • Content labeled as “sensitive” appearing in regular feeds
  • Material that should have been restricted appearing for all users

User Experience: According to multiple reports across social media platforms, the disturbing videos appeared suddenly and without warning in feeds that normally contained benign content:

  • One Reddit user described seeing “street fights, school shootings, murder, and gruesome accidents”
  • Another reported their feed abruptly shifting from “planes, watches, painting, cats” to “body horror and videos with Russian descriptions”
  • Some users reported encountering child exploitation material

“We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended,” a Meta spokesperson said, according to CBS. “We apologize for the mistake.”

The company has not provided details about what caused the error or how many users were affected. Meta’s policies prohibit extremely graphic content, including “videos showing dismemberment, visible entrails, or charred bodies” and material featuring “sadistic remarks on imagery depicting human or animal suffering.”

This incident comes amid significant changes at Meta, including the elimination of its third-party fact-checking program in January and substantial workforce reductions. The company laid off approximately 21,000 employees across 2022 and 2023, many from trust and safety divisions.

Meta has explicitly denied any connection between this incident and its recent policy changes regarding fact-checking. However, some users and industry observers have questioned whether reduced human oversight may have contributed to the failure.

Despite the controversy, Meta’s stock rose approximately 1.5% in premarket trading Thursday, continuing a trend that has seen shares gain nearly 40% over the past year.

