However, the fact-checking initiative has faced criticism, with some arguing that it stifles diverse opinions, while others see it as necessary for public health and safety. In 2019, the company (then still Facebook) also created an Oversight Board to help navigate these complex issues. The board acts as an independent body that resolves disputes about content removal and provides guidance on difficult cases, especially those arising outside the United States.
Zuckerberg has now declared that Meta will shift back to a more free-speech-driven approach, distancing the company from its previous content moderation policies. While Meta remains legally obligated to remove certain content, such as material related to terrorism or child exploitation, it now aims to reduce its role as the “content cop” and instead let users flag inaccurate posts, similar to X’s Community Notes approach.
This move could have consequences: loosening the rules may foster a more toxic online environment, similar to what has been observed on X (formerly Twitter) under Elon Musk’s leadership. If the approach results in more harmful content, it could drive away users and advertisers, ultimately hurting Meta’s business. Long experience with managing online communities shows that without adequate moderation, harmful content can easily dominate, crowding out valuable conversations.
Meta’s pivot, then, may be a move to appease certain political factions, but it risks undermining the integrity and safety of its platforms, leaving advertisers and users to decide whether they are willing to tolerate a more chaotic online space.