The federal government's proposed mis- and disinformation laws need clearer definitions and must include AI


Author: Lorraine Finlay

Dealing with misinformation and disinformation, particularly in political debate, is something that has perplexed governments around the world. How do we make sure people are not being misled – deliberately or otherwise – while safeguarding freedom of speech?

There can be no doubt about the urgency of the issue. The World Economic Forum's Global Risks Report 2024 identified misinformation and disinformation as the top global risk over the next two years.

The emergence of artificial intelligence (AI), with its ability to generate mass-fabricated content, has made the issue all the more difficult.

In mid-2023, the Albanese government put forward its own solution: a proposed Disinformation Bill. If passed, this legislation would empower the Australian Communications and Media Authority (ACMA) to decide when digital platforms need to crack down on misinformation and disinformation.

The government received 2,418 public submissions during the consultation period. A submission from the Australian Human Rights Commission concluded the bill failed to strike an appropriate balance between addressing misinformation and disinformation and protecting freedom of expression.

As a result, the draft law is being revised. The communications minister has indicated a new bill will be introduced later this year.

What are misinformation and disinformation?

“Misinformation” and “disinformation” are words that are often used but can mean very different things, depending on who is using them. It is essential to be precise about these definitions to ensure any law does not end up improperly restricting access to diverse perspectives or censoring different views.

Internationally, misinformation is generally understood as the spread of false information. Disinformation is the spread of false information in the knowledge that it is false.

There is no single legal definition that has been universally accepted. The only broad consensus is that both misinformation and disinformation contain some element of mistruth. But even that raises issues, as pinning down what is true can be harder than it looks.

Commentators in the communication science field say information is rarely verifiable. Other experts say the recent focus on fact-checking and accuracy, prompted by concern over misinformation and disinformation, can ignore the scientific, sociopolitical and distributional uncertainties behind a statement. Information can also be true at one point in time, but false at another.

The draft bill proposed in Australia cast a very wide net, with a low harm threshold and a number of controversial exemptions. It defined misinformation as false, misleading or deceptive information provided on a digital service to end-users in Australia that is reasonably likely to cause or contribute to serious harm by being disseminated. Disinformation was defined as information that is additionally disseminated with the intention to deceive.

As there is no defined threshold for the minimum contribution required, the legislation could capture communications that add one inadvertent teaspoon to an ocean of harm. Platforms like X (formerly Twitter) are bursting with users sharing their perspectives on complex issues within set character limits. Inadvertently omitting key facts or nuance, or over-simplifying a complex topic, could then become an easy justification for removing posts characterised as misinformation or disinformation.

Unlike in other countries, the original bill excluded government-authorised content. Commentators have raised serious concerns about this, noting that elected officials may themselves spread mis- or disinformation.

This concern was also raised frequently in public submissions on the draft legislation. It suggests any proposed law will struggle to attract widespread support unless it applies equally to government communications.

Preserving freedom of expression

The right to freedom of expression has been described as “the foundation stone for every free and democratic society”. While the International Covenant on Civil and Political Rights provides that this right may be subject to certain restrictions, it also provides that any restrictions may only be imposed where necessary. This includes “for respect of the rights or reputations of others”, or “for the protection of national security or of public order, or of public health or morals”.

The draft bill reflected this by requiring the communication to cause one of several pre-determined types of serious harm before it could be called mis- or disinformation. These included:

  • “disruption of public order or society in Australia”
  • “harm to the integrity of Australian democratic processes or of Commonwealth, State or Territory or local government institutions”
  • “harm to the health of Australians”.

But it then went one step further by including additional categories. One example was the inclusion of economic or financial harm. This could prevent alternative viewpoints on the cost of living, interest rates, superannuation and other key economic issues from being freely discussed.

By equating economic harm with the likes of democratic integrity, the bill risked going further than the limited restrictions envisaged by the International Covenant on Civil and Political Rights. It appeared to prioritise financial interests over our intrinsic right to speak freely.

The practical effect of this could be the censorship of Australians for expressing opinions that unfavourably affect market trends or corporate reputations. This could include, for example, criticising a major company's environmental or human rights record or policies.

The impact of AI

Generative AI is fast becoming a key player in creating and spreading misinformation and disinformation. In the process, it makes our usual ways of defining, identifying and addressing misinformation and disinformation less relevant. Any laws attempting to regulate mis- and disinformation will need to address the impact of AI.

Leading technology companies already use AI to detect and filter misinformation and disinformation. These systems are trained to detect misleading styles of writing, compare statements against external evidence, and weigh metadata such as the sharer's profile and attributes.
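For readers curious about the mechanics, the sketch below shows, in heavily simplified form, how the text of a statement and metadata about its sharer can be combined in a single classifier. It is a minimal illustration in Python using scikit-learn; the dataset, feature names and labels are all hypothetical, and real systems are vastly larger and also check claims against external evidence.

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical toy dataset: statement text plus sharer metadata.
# label = 1 means the post was flagged as misleading, 0 means it was not.
data = pd.DataFrame({
    "text": [
        "Miracle cure banned by doctors!",
        "The central bank raised rates by 0.25%.",
        "Secret plot revealed, share before it is deleted!",
        "The election results were certified today.",
    ],
    "account_age_days": [3, 2100, 10, 1500],
    "follower_count": [12, 5400, 40, 9000],
    "label": [1, 0, 1, 0],
})

# Combine TF-IDF text features (a crude proxy for "misleading styles")
# with scaled numeric metadata about the sharer.
features = ColumnTransformer([
    ("text", TfidfVectorizer(), "text"),
    ("meta", StandardScaler(), ["account_age_days", "follower_count"]),
])

model = Pipeline([("features", features), ("clf", LogisticRegression())])
model.fit(data.drop(columns="label"), data["label"])

# Score a new post: the output is a probability that it resembles
# previously flagged content, not a verdict on whether it is true.
new_post = pd.DataFrame({
    "text": ["Shocking truth they don't want you to know!"],
    "account_age_days": [5],
    "follower_count": [20],
})
print(model.predict_proba(new_post)[0][1])

Even in this toy form, the model learns stylistic and behavioural patterns rather than truth, which matters for what follows.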

These technologies are not easily tweaked to match local, legislated definitions of misinformation and disinformation. Ensuring they are subject to meaningful accountability and transparency measures is a further challenge.

Challenges ahead

Combatting mis- and disinformation requires a clear and consistent global understanding of what they actually are.

The original bill, while aiming to address what is clearly a pressing public policy concern, attempted to unilaterally redefine misinformation and disinformation in a way that, in some key respects, diverged from previous definitions used in other jurisdictions.

Until there is broader agreement across jurisdictions on the meaning and harms of misinformation and disinformation, there are risks that well-intentioned regulation will lead to improper censorship. Australia's attempts to date provide a clear illustration of this challenge.

This article was written in collaboration with Žemyna Kuliukas, Associate at Wotton + Kearney.


The Conversation
