TL;DR
- Meta is replacing its third-party fact-checking program with a community-driven system like X’s Community Notes, starting in the U.S. and expanding globally.
- The shift aims to simplify moderation, reduce mistakes, and prioritize user-reported context, while continuing strict moderation of high-severity violations like terrorism and exploitation.
- Meta is facing lawsuits in Kenya after firing moderators who tried to unionize. Allegations include poor working conditions and exposure to traumatic content.
- Kenya’s President William Ruto is proposing labor-law changes that would limit tech companies’ liability for work outsourced to local contractors, a move critics say undermines workers’ rights.
Meta has announced the end of its third-party fact-checking program, replacing it with a community-driven system similar to X’s Community Notes.
The change will first be rolled out in the U.S., with plans to expand to other markets in the near future.
This shift will affect Facebook, Instagram, and Threads, marking a major transformation in the way content is moderated across Meta’s platforms.
Mark Zuckerberg, Meta’s CEO, outlined this new approach in a video statement. “We’re going to get back to our roots and focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms,” he said. “More specifically, we’re going to get rid of fact-checkers and replace them with community notes similar to X, starting in the U.S.”
Meta’s Vision for Community Moderation
The community notes system will let users directly flag potentially misleading posts and attach context to them.
Zuckerberg stated that Meta aims to reduce the reliance on complex moderation systems that often result in errors.
“We built a lot of complex systems to moderate content, but the problem with complex systems is they make mistakes,” he noted. “Even if they accidentally censor just 1% of posts, that’s millions of people, and we’ve reached a point where it’s just too many mistakes and too much censorship.”
While emphasizing the value of free expression, Meta’s CEO stressed that aggressive moderation will continue for high-severity violations, such as content related to terrorism, drugs, and child exploitation.
Less severe cases, however, such as debates around hot-button issues like immigration and gender, will see reduced intervention, with the platform relying more on user-driven reporting.
Zuckerberg tied this decision to broader societal shifts, particularly in response to political pressures. “The recent elections also feel like a cultural tipping point toward, once again, prioritizing speech,” he said.
He criticized governments and legacy media for what he described as a growing push for censorship.
Why This Change Matters
Meta’s decision is also an acknowledgment of past challenges with its fact-checking system, launched in 2016.
The program worked with over 90 organizations worldwide, verifying content in more than 60 languages. Despite these efforts, the company faced criticism for perceived bias and overreach.
This move signals a pivot toward decentralized moderation and a focus on transparency, with users playing an active role in combating misinformation.
While the transition starts in the U.S., Zuckerberg confirmed it is part of a global strategy. “We’ll roll out community notes to other markets over time,” he added, underscoring Meta’s intention to implement these changes globally.
Meta’s Content Moderation Challenges in Kenya
As Meta pushes forward with these changes, the company is grappling with significant legal and operational challenges in Kenya.
In 2023, a group of Kenyan content moderators sued Meta and its then-contractor Sama, alleging wrongful termination after attempting to unionize.
The moderators had been responsible for filtering harmful content on Facebook, but they claimed they were dismissed unfairly and blacklisted from similar roles after Meta switched to a new contractor, Majorel.
Kenya’s courts allowed the lawsuit to proceed, rejecting Meta’s claim that it cannot be sued locally. This decision set a precedent for holding tech companies accountable for the actions of their subcontractors.
One of the plaintiffs said, “We were exposed to violent and disturbing content without proper support, and then dismissed when we tried to advocate for better working conditions.”
The moderators also highlighted how viewing traumatic content led to widespread psychological issues, including post-traumatic stress disorder (PTSD).
Mark Zuckerberg did not address these lawsuits directly, but critics argue that the content moderation outsourcing model sacrifices worker well-being in favor of operational efficiency.
President William Ruto of Kenya has intervened in this controversy, proposing changes to labor laws to limit tech companies’ liability when outsourcing to local contractors.
This proposed regulation, however, has faced backlash, with workers’ rights advocates calling it an attack on labor protections.
Additionally, other lawsuits highlight Meta’s struggles with content moderation and its impact on vulnerable populations, particularly in Africa.
In one recent case, content moderators accused Meta of turning a blind eye to threats and abuses tied to its platform.