Author:
Michelle Riedlinger
(MENAFN- The Conversation)
Meta has announced it will abandon its fact-checking program, starting in the United States. It was aimed at preventing the spread of online lies among more than 3 billion people who use Meta's social media platforms, including Facebook, Instagram and Threads.
In a video, the company's chief, Mark Zuckerberg, said fact checking had led to “too much censorship”.
He added it was time for Meta “to get back to our roots around free expression”, especially following the recent presidential election in the US. Zuckerberg characterised it as a “cultural tipping point, towards once again prioritising speech”.
Instead of relying on professional fact checkers to moderate content, the tech giant will now adopt a “community notes” model, similar to the one used by X.
This model relies on other social media users to add context or caveats to a post. X's version of the feature is currently under investigation by the European Union over its effectiveness.
This dramatic shift by Meta does not bode well for the fight against the spread of misinformation and disinformation online.
Independent assessment
Meta launched its independent, third-party, fact-checking program in 2016.
It did so during a period of heightened concern about information integrity coinciding with the election of Donald Trump as US president and furore about the role of social media platforms in spreading misinformation and disinformation.
As part of the program, Meta funded fact-checking partners – such as Reuters Fact Check, Australian Associated Press, Agence France-Presse and PolitiFact – to independently assess the validity of problematic content posted on its platforms.
Warning labels were then attached to any content deemed to be inaccurate or misleading. This helped users to be better informed about the content they were seeing online.
A backbone to global efforts to fight misinformation
Zuckerberg claimed Meta's fact-checking program did not successfully address misinformation on the company's platforms, stifled free speech and led to widespread censorship.
But the head of the International Fact-Checking Network, Angie Drobnic Holan, disputed this in a statement reacting to Meta's decision.
A large body of evidence supports Holan's position.
In 2023 in Australia alone, Meta displayed warnings on over 9.2 million distinct pieces of content on Facebook (posts, images and videos), and over 510,000 posts on Instagram, including reshares. These warnings were based on articles written by Meta's third-party, fact-checking partners.
An example of a warning added to a Facebook post. Image: Meta
Numerous studies have demonstrated that these kinds of warnings effectively slow the spread of misinformation.
Meta's fact-checking policies also required the partner fact-checking organisations to avoid debunking content and opinions from political actors and celebrities, and to avoid debunking political advertising.
Fact checkers could still verify claims from political actors and publish the results on their own websites and social media accounts. However, this fact-checked content was not subject to reduced circulation or censorship on Meta's platforms.
The COVID pandemic demonstrated the usefulness of independent fact checking on Facebook. Fact checkers helped curb much harmful misinformation and disinformation about the virus and the effectiveness of vaccines.
Importantly, Meta's fact-checking program also served as a backbone to global efforts to fight misinformation on other social media platforms. It facilitated financial support to up to 90 accredited fact-checking organisations around the world.
What impact will Meta's changes have on misinformation online?
Replacing independent, third-party fact checking with a“community notes” model of content moderation is likely to hamper the fight against misinformation and disinformation online.
Last year, for example, reports from The Washington Post and the Center for Countering Digital Hate in the US found that X's community notes feature was failing to stem the flow of lies on the platform.
Meta's turn away from fact checking will also create major financial problems for third-party, independent fact checkers.
The tech giant has long been a dominant source of funding for many fact checkers. And it has often incentivised fact checkers to verify certain kinds of claims.
Meta's announcement will likely force these independent fact checkers to seek funding beyond strings-attached arrangements with private companies as they pursue their mission of improving public discourse by addressing online claims.
Yet, without Meta's funding, they will likely be hampered in their efforts to counter attempts to weaponise fact checking by other actors. For example, Russian President Vladimir Putin recently announced the establishment of a state fact-checking network following “Russian values”, in stark contrast to the International Fact-Checking Network's code of principles.
This makes independent, third-party fact checking even more necessary. But clearly, Meta doesn't agree.