Tuesday, 02 January 2024 12:17 GMT

How AI's Language Barrier Limits Climate Disaster Responses


Author: Ifeoluwa Wuraola
(MENAFN- The Conversation) A message appears online during heavy flooding: "This rain no be small o, everywhere don red." Someone unfamiliar with the phrasing might hesitate. But for people in Nigeria, this message is immediate and clear: the flooding is severe and worsening.

Moments like this happen all the time on digital platforms. People don't write in perfect, standard English sentences. They share warnings and reactions on platforms like X, WhatsApp and Facebook using the language of everyday life. This means sometimes mixing English with local expressions, slang and expressive language shaped by their communities.

Artificial intelligence systems can understand language and tackle a wide range of problems. Governments and organisations are increasingly using AI to scan social media, summarise public conversations, and even respond to environmental and climate issues.

But many of these tools struggle to make sense of the way people actually communicate. Local expressions and slang can confuse AI, so important messages are sometimes misunderstood or missed entirely.

When people talk about language barriers, they often mean translation between different languages. But the problem is more subtle. Around the world, people mix languages and local expressions online, a phenomenon that linguists call "code-switching".

Climate journalism has increasingly moved online, but there are fewer climate reporters in the developing world. This limits the depth and availability of information for a huge proportion of the global population, and shapes how climate issues are discussed and understood across different regions.

For instance, a UK social media post might raise an environmental concern using expressions like: "Are roads flooding already? Chuffed to know the council taking the piss." Most AI tools can pick up the sarcasm and frustration aimed at local authorities.

In a country such as Nigeria, people may describe unfolding concerns differently: "Abeg is it October wey rain dey fall like this, but you say the climate no change?" or "River don near our house o! Abeg help, e fit spoil everything!"

Here, slang and Pidgin express immediate danger and an urgent call for help. Yet AI models often reduce this to casual commentary, entirely missing the urgency and emotion being conveyed.

This matters because most AI systems are trained on large, western-centric text collections, mainly from North America and Europe. ChatGPT, for example, is trained on huge amounts of internet text. It doesn't have beliefs, feelings or awareness. Instead, it generates responses based on patterns it has seen online.

AI reflects the dominant culture in its training data, so it carries a "cultural fingerprint". It imitates the typical ways of expressing ideas in the societies that produced the texts it learned from. AI models trained on predominantly English-language texts show a hidden bias that favours western cultural values, particularly when prompted in English.

One major reason AI can produce biased outcomes is that it reflects the societal inequalities, including differences in race, gender and region, that show up in the data it learns from. As a result, underrepresented voices from communities in developing countries that use non-Anglocentric varieties of English are often diminished or ignored.

This bias can have real consequences. In climate crises like floods, heatwaves or other extreme weather, misinterpreted messages could put property and lives at risk.

AI systems that rely on past patterns perform well when language fits expected standards, but posts that deviate from those standards, through local slang or unfamiliar urgency cues, can be misinterpreted.

Improving climate disaster responses

Solving this problem involves designing systems that actually reflect the way people communicate. AI systems need to be trained to understand regional expressions and recognise that meaning often depends on cultural context, not just literal words.

AI should be tested on real online posts, not formal western-centric English, to capture urgency and local references. Automated systems can process huge volumes of information, but human judgment must remain in the loop – especially when people's safety is at stake.
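The gap can be illustrated with a deliberately simplified sketch (a toy script, not any deployed system): an urgency filter built only from standard-English keywords flags the conventional post but misses the Pidgin one, because none of its cue words appear.

```python
# Toy urgency filter: cue words drawn only from standard English.
# Real systems are far more sophisticated, but exhibit an analogous gap
# when their training data lacks regional varieties of English.
URGENT_CUES = {"flood", "flooding", "help", "danger", "emergency", "evacuate"}

def flags_urgent(post: str) -> bool:
    """Return True if the post contains any known urgency cue word."""
    words = {w.strip(".,!?'").lower() for w in post.split()}
    return bool(words & URGENT_CUES)

standard = "Flooding is severe, please send help immediately!"
pidgin = "River don near our house o! Abeg help dey needed, e fit spoil everything!"
pidgin_missed = "River don near our house o! Abeg, e fit spoil everything!"

print(flags_urgent(standard))       # flagged: "flooding" and "help" match
print(flags_urgent(pidgin_missed))  # missed: "Abeg" (please) and "don near"
                                    # (has come close) carry the urgency,
                                    # but none match the cue list
```

The second Pidgin post is just as urgent as the standard one, yet the filter sees nothing to match, which is the failure mode the article describes at scale.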

AI tools can help communities respond to floods, heatwaves and other climate emergencies – but only once trained to interpret the nuance of everyday language, so that warnings and calls for help get through.


The Conversation



Institution: Loughborough University


