Meta has announced a significant shift in its content moderation strategy, replacing its fact-checking program with a community-driven model inspired by X’s Community Notes. The change is positioned as a way to reduce errors and prioritize freedom of expression, particularly in light of controversies over censorship during recent elections.
The new system will rely on user collaboration to assess the credibility of posts across Facebook, Instagram, and Threads. While Meta touts the model as empowering individuals over institutions, critics worry that misinformation and harassment could flourish under the new framework.
This decision comes just a day after Meta revealed that Dana White, CEO of the Ultimate Fighting Championship, is joining its board of directors. Alongside White, the company also added John Elkann, CEO of Exor, and tech investor Charlie Songhurst to its leadership team. White emphasized his belief that social media and artificial intelligence are key to the future, signaling Meta’s commitment to innovation.
Reaction to Meta’s moderation overhaul has been sharply divided. Republican leaders, including President-elect Donald Trump, have praised the move as a step toward safeguarding free speech. Meanwhile, tech advocacy groups and researchers warn of the risks of loosening controls. Some experts suggest that strengthening existing moderation tools might have been a more effective approach to balancing expression and safety.
As Meta carries out this transition, the decision may set a precedent for community-led moderation at scale—or expose the platforms to challenges the company has not anticipated.