Today, anyone with an internet connection can create and share deceptive content — from outright false information to more underhanded distortions of fact.
Given the vastness of the internet and the paramount importance of protecting freedom of speech, online misinformation is an immense challenge to legislate away.
This has been an especially large problem during the ongoing COVID-19 pandemic, in which false and unsubstantiated claims about the nature of the virus have spread rapidly and widely.
Global misinformation campaigns targeting the safety and efficacy of vaccines have fuelled growing anti-vaccine rhetoric, stalling vaccination rates and causing preventable deaths.
This has illustrated the imminent threat misinformation poses to the functioning of our institutions, and to society as a whole.
Unprecedented reach
While misinformation itself is not new, the internet provides unprecedented reach for spreading it. Between 2014 and 2017, Russian content creators produced tens of thousands of pieces of false and modified content, reaching over 126 million American Facebook users alone.
Twitter has similarly dealt with a record amount of “fake news” in recent years, resulting in thousands of legal demands from governments worldwide for content removal.
Efforts to remove and label misinformation on social media platforms may help, but a more pertinent question is whether these efforts can match the rapid pace at which misinformation spreads.
In response to the growing pressure to control misinformation on their platforms, social media giants have generally relied on reactively removing or labelling false content after it's widely noticed and reported.
In the time this process takes, research has shown that fake news spreads farther and faster than truthful content. Misinformation is often perceived as “novel” and “sensational,” which helps it grab attention and get viewed and shared more readily.
This is currently happening with the conflict in Ukraine, where several shocking images falsely claiming to show current events quickly reached hundreds of thousands of views on Twitter before being removed.
Labelling or removing content from these platforms forces companies to navigate the blurry lines of what constitutes false or misleading content, and what should be protected as freedom of expression.
This challenge recently surfaced when hundreds of physicians and scientists, along with musicians Neil Young and Joni Mitchell, called on Spotify to take action against medical misinformation promoted on its most popular podcast, the Joe Rogan Experience.
Inoculation theory
Even if social media giants manage to meaningfully crack down on misinformation on their platforms, what about the rest of the internet?
Rumble recently offered to host Joe Rogan on its video-streaming platform “with no censorship.” Failing that, what's to stop content creators and everyday users from wandering to websites hosted outside the jurisdiction of democratic lawmakers?
Read more: Meet Rumble, Canada's new 'free speech' platform — and its impact on the fight against online misinformation
If encountering online misinformation is inevitable, our best approach to mitigate its effects may be to improve users' abilities to recognize and dismiss it. One of the most promising ways to do this uses experiential learning, stemming from something called inoculation theory.
Inoculation theory is based on an analogy between resistance to persuasion and resistance to contagious disease. Just as exposing individuals to a weakened pathogen helps to protect them against severe illness (vaccination), tactfully exposing individuals to weakened forms of misinformation can improve their ability to recognize and resist it.
Researchers at Cambridge University recently produced a game that simulates creating online misinformation. (getbadnews.com/Cambridge University)
Experiential learning
Studies in the areas of anti-vaccine and climate change misinformation have shown that exposing individuals to real-world examples of misinformation, supplemented with explanations of why it is flawed or incorrect, can improve users' resistance to misinformation more effectively than pro-science messaging.
Researchers at Cambridge University recently produced a game that simulates creating online misinformation, in which players learn to understand the motives and capabilities of those spreading fake content.
While developing digital and critical literacy skills in learners is important, people may still fall prey to misinformation's persuasive effects without direct experience of it.
As of March 2, 2022, the word “misinformation” appears only once in Ontario's K-12 curriculum (unrelated to digital spaces) and doesn't appear at all in Québec's 38-page Digital Competency Development Continuum.
Educators and policy-makers must act to ensure future generations are adequately prepared to handle misinformation through experiential learning. They must be ready for what we know exists today, as well as for the digital landscape they will face going forward.
Controlled exposure
Advances in photo-editing software and artificial intelligence promise to make judging the authenticity of online content even more difficult in the future. Deepfake videos mimicking public figures saying and doing things that never happened have already appeared across popular social media platforms, and have proven difficult for the average user to distinguish from reality.
Social bot accounts, using computer algorithms to impersonate real users, are also becoming increasingly sophisticated and hard to detect.
Thousands of these bots may be controlled by a single individual, powerfully shifting public discourse by falsely conveying widespread support for certain viewpoints.
To combat this, researchers at Clemson University created a quiz where participants can practise distinguishing between legitimate social media users and falsified misinformation-spreading accounts, learning important cues to differentiate them along the way.
Empowering current and future generations to detect and dismiss false and misleading information will be pivotal in developing a democracy resilient to the threat of online misinformation. Controlled exposure to modified versions of its most insidious forms may be our best hope of doing so.