
Meta has announced new measures to curb the spread of “unoriginal” content on Facebook, targeting accounts that routinely repost others’ videos, photos, and text without significant changes or attribution.
In a statement issued on Monday, the social media giant said it has removed around 10 million accounts this year for impersonating popular content creators. A further 500,000 accounts have been penalised for spam-like behaviour or generating fake engagement.
“These accounts will face a reduction in content reach and be barred from accessing Facebook’s monetisation programmes,” Meta said, adding that repeat offenders may lose distribution privileges altogether.
The announcement comes shortly after YouTube clarified its own stance on reused and AI-generated content, amid growing concern over the proliferation of low-effort, mass-produced videos on digital platforms. Known as “AI slop,” these videos often feature stitched-together images, clips, or computer-generated voiceovers, flooding feeds with low-quality media.
Focus on Intent, Not Interaction
Meta clarified that its policy is not aimed at users engaging creatively with content, such as through reaction videos, commentary, or participation in online trends. The focus, instead, remains on accounts that republish others’ work without meaningful contribution or originality.
To address this, Facebook will begin demoting duplicate videos in users’ feeds to ensure that original creators receive rightful credit and visibility. The company is also testing a system that will add links to duplicate posts directing users back to the original content.
Push for Authenticity Amid AI Proliferation
While the company’s latest announcement does not explicitly mention artificial intelligence, it alludes to AI-generated content by urging creators to avoid stitching together clips or simply placing watermarks over others’ material.
Meta’s guidelines advise creators to focus on “authentic storytelling” and high-quality video captions — a possible critique of the growing use of unedited AI-generated subtitles. It has also reiterated its long-standing rule discouraging the cross-posting of content from other platforms without adaptation.
Users Raise Concerns Over Enforcement
The move comes amid heightened criticism over Meta’s content moderation policies, particularly on Instagram, where users claim wrongful account takedowns due to algorithmic errors and a lack of human support. A petition demanding improvements to Meta’s enforcement system has gained nearly 30,000 signatures, highlighting frustration among small business owners and content creators.
Although Meta has not yet publicly addressed these concerns, the company said new post-level insights will help users understand whether and why their content is being deprioritised. Creators can now track their content’s performance and receive alerts about potential penalties via the Professional Dashboard.
Tackling Fake Accounts at Scale
In its latest transparency figures, Meta reported that 3% of Facebook’s global monthly active users are fake accounts. From January to March 2025 alone, the company took action against one billion fake profiles.
The firm has also moved away from internal fact-checking and is instead piloting Community Notes in the United States, similar to the feature on X (formerly Twitter). This crowdsourced system lets users write and rate notes that add context to potentially misleading posts.
Meta says the rollout of these new content enforcement policies will be gradual, allowing creators time to adjust.