GenAI Needs To Gain Cultural Fluency Like Humans: Amazon CTO


Las Vegas, Dec 1 (IANS) In the coming years, culture will play a crucial role in how generative AI is designed, deployed and consumed, Amazon CTO Dr Werner Vogels has said.

Large language models (LLMs) trained on culturally diverse data will gain a more nuanced understanding of human experience and complex societal challenges in 2024, he predicted.

"This cultural fluency promises to make generative AI more accessible to users worldwide,” Vogels said during the AWS 're: Invent' conference here.

For LLM-based systems to reach a worldwide audience, they need to achieve the type of cultural fluency that comes instinctively to humans.

In a paper published earlier this year, researchers from the Georgia Institute of Technology demonstrated that even when an LLM was given a prompt in Arabic that explicitly mentioned Islamic prayer, it generated responses recommending grabbing an alcoholic beverage with friends, which is not culturally appropriate.

According to Vogels, a lot of this has to do with the training data that's available.

“Common Crawl, which has been used to train many LLMs, is roughly 46 per cent English, and an even greater percentage of the content available - regardless of language - is culturally Western (skewing significantly towards the United States),” he noted.

In the past few months, non-Western LLMs have started to emerge: Jais, trained on Arabic and English data; Yi-34B, a bilingual Chinese/English model; and Japanese-large-lm, trained on an extensive Japanese web corpus.

“These are signs that culturally accurate non-Western models will open up generative AI to hundreds of millions of people with impacts ranging far and wide, from education to medical care,” the Amazon CTO emphasised.

Just as humans learn from discussion, debate, and the exchange of ideas, LLMs need similar opportunities to expand their perspectives and understand culture.

According to him, two areas of research will play a pivotal role in this cultural exchange.

One is reinforcement learning from AI feedback (RLAIF), in which a model incorporates feedback from another model.

In this scenario, different models can interact with each other and update their own understandings of different cultural concepts based on these interactions.
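In rough terms, RLAIF swaps the human rater in the familiar preference-learning loop for a second model. The Python sketch below is purely illustrative of that description; the `generate` and `score` functions are hypothetical placeholders for real model calls, not any particular API.

```python
# Minimal RLAIF-style sketch: a feedback model, rather than a human
# annotator, decides which of two candidate responses is preferred.
from dataclasses import dataclass


@dataclass
class PreferencePair:
    prompt: str
    preferred: str
    rejected: str


def generate(prompt: str, variant: int) -> str:
    # Placeholder for the model being trained producing a candidate response.
    return f"candidate {variant} for: {prompt}"


def score(prompt: str, response: str) -> float:
    # Placeholder for the feedback model rating a response (e.g. for
    # cultural appropriateness) on a 0-1 scale.
    return 0.5


def collect_preferences(prompts: list[str]) -> list[PreferencePair]:
    """Build preference pairs from AI feedback instead of human labels."""
    pairs = []
    for prompt in prompts:
        a, b = generate(prompt, 1), generate(prompt, 2)
        # The feedback model, not a person, picks the winner.
        if score(prompt, a) >= score(prompt, b):
            pairs.append(PreferencePair(prompt, preferred=a, rejected=b))
        else:
            pairs.append(PreferencePair(prompt, preferred=b, rejected=a))
    return pairs
```

The resulting preference pairs would then feed a reward model or a preference-optimisation step, much as in RLHF but without human raters.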

“Second is collaboration through multi-agent debate, in which multiple instances of a model generate responses, debate the validity of each response and the reasoning behind it, and finally come to an agreed upon answer through this debate process. Both areas of research reduce the human cost it takes to train and fine-tune models,” Vogels noted.
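A multi-agent debate loop can be sketched in the same spirit. Here the `answer` function is a stand-in for prompting one model instance with the question and the other agents' previous responses, and agreement is approximated by a simple majority vote; a real system would have the model weigh the reasoning behind each answer.

```python
# Minimal multi-agent debate sketch: several model instances answer,
# see each other's responses over a few rounds, and converge on a result.
from collections import Counter


def answer(agent_id: int, question: str, transcript: list[str]) -> str:
    # Placeholder: a real implementation would prompt an LLM with the
    # question plus the other agents' prior answers and reasoning.
    return f"agent {agent_id} answer"


def debate(question: str, n_agents: int = 3, rounds: int = 2) -> str:
    transcript: list[str] = []
    answers: list[str] = []
    for _ in range(rounds):
        # Each round, every agent responds after seeing the shared transcript.
        answers = [answer(i, question, transcript) for i in range(n_agents)]
        transcript.extend(answers)
    # "Agreement" here is the answer most agents end up giving.
    return Counter(answers).most_common(1)[0][0]
```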

As LLMs interact and learn from each other, they will gain more nuanced understandings of complex societal challenges informed by diverse cultural lenses, he added.

--IANS

MENAFN provides the information “as is” without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the provider above.