
'Digital Brains' That 'Think' And 'Feel': Why Do We Personify AI Models, And Are These Metaphors Actually Helpful?


Author: Xosé López-García
(MENAFN- The Conversation) The press has always used metaphors and examples to simplify complex issues and make them easier to understand. With the rise of chatbots powered by artificial intelligence (AI) , the tendency to humanise technology has intensified, whether through comparisons to medicine, well-known similes, or dystopian scenarios.

Although what lies behind AI is nothing more than code and circuits, the media often portrays algorithms as having human qualities. So what do we lose, and what do we gain, when AI ceases to be a mere device and becomes, linguistically speaking, a human alter ego, an entity that “thinks”, “feels” and even “cares”?

Read more: ChatGPT's artificial empathy is a language trick. Here's how it works

The digital brain

An article in Spanish newspaper El País presented the Chinese AI model DeepSeek as a “digital brain” that “seems to quite clearly understand the geopolitical context of its birth”.

This way of writing replaces technical jargon – foundation models, parameters, GPUs, and so on – with an organ that we all recognise as the core of human intelligence. This has two effects. It allows people to understand the magnitude and nature of the task (“thinking”) performed by the machine. However, it also suggests that AI has a “mind” capable of making judgements and remembering contexts – something that is currently far from the technical reality.

This metaphor fits into the classic conceptual metaphor theory of George Lakoff and Mark Johnson, which argues that concepts serve to help humans understand reality, and enable them to think and act. In talking about AI, this means we turn difficult, abstract abilities (“statistical computation”) into familiar ones (“thinking”).

While potentially helpful, this tendency runs the risk of obscuring the difference between statistical correlation and semantic understanding. It reinforces the illusion that computer systems can truly “know” something.

Machines with feelings

In February 2025, ABC published a report on “emotional AI” that asked: “will there come a day when they are capable of feeling?” The text recounted the progress made by a Spanish team attempting to equip conversational AI systems with a “digital limbic system”.

Here, the metaphor becomes even bolder. The algorithm no longer just thinks, but can also suffer or feel joy. This comparison dramatises innovation and brings it closer to the reader, but it carries conceptual errors: by definition, feelings are linked to bodily existence and self-awareness, which software cannot have. Presenting AI as an “emotional subject” makes it easier to demand empathy from it or blame it for cruelty. It therefore shifts the moral focus from the people who design and programme the machine to the machine itself.

A similar article reflected that “if artificial intelligence seems human, has feelings like a human and lives like a human... what does it matter if it is a machine?”

Read more: Humanising AI could lead us to dehumanise ourselves

Robots that care

Humanoid robots are often presented in these terms. A report in El País on China's push for elder-care androids described them as machines that “take care of their elders”. By saying “take care of”, the article evokes the family's duty to look after its elders, and the robot is presented as a relative who will provide the emotional companionship and physical assistance previously provided by family or nursing staff.

This caregiver metaphor is not all bad. It legitimises innovation in a context of demographic crisis, while also calming technological fears by presenting the robot as essential support in the face of staff shortages, as opposed to a threat to jobs.

However, it could be seen as obscuring the ethical issues surrounding responsibility when caregiving work is done by a machine managed by private companies – not to mention the already precarious nature of this kind of work.

The doctor's assistant

In another report by El País, large language models were presented as a doctor's assistant or “extension”, capable of reviewing medical records and suggesting diagnoses. The metaphor of the “smart scalpel” or “tireless resident” positions AI within the healthcare system as a trusted collaborator rather than a substitute.

This hybrid framework – neither an inert device nor an autonomous colleague – fosters public acceptance, as it respects medical authority while promising efficiency. However, it also opens up discussions about accountability: if the “extension” makes a mistake, does the blame lie with the human professional, the software, or the company that markets it?

Why does the press rely on metaphor?

More than a decorative flourish, these metaphors serve at least three purposes. First and foremost, they facilitate understanding. Explaining deep neural networks requires time and technical jargon, but talking about “brains” is easier for readers to digest.

Secondly, they create narrative drama. Journalism thrives on stories with protagonists, conflicts, and outcomes. Humanising AI supplies all of these, along with heroes and villains, mentors and apprentices.

Thirdly, metaphors serve to formulate moral judgements. Only if the algorithm resembles a subject can it be held accountable, or given credit.

However, these same metaphors can hinder public deliberation. If AI “feels”, then it stands to reason that it should be regulated as citizens are. Equally, if it is seen as having superior intelligence to our own, it seems only natural that we should accept its authority.

Read more: We need to stop pretending AI is intelligent – here's how

How to talk about AI

Doing away with these metaphors would be impossible, and it is not something we should strive for either. Figurative language is how human beings make sense of the unknown; the important thing is to use it critically. To this end, we offer some recommendations for writers and editors:

  • First, it is important to add technical counterweights. This means, after introducing the metaphor, briefly but clearly explaining what the system in question does and does not do.

  • It is also important to avoid giving AI absolute, human-like agency. This means phrases like “AI decides” should be qualified: does the system “recommend”? Does the algorithm “classify”?

  • Another key is to mention accountable, human sources. Naming developers and regulators reminds us that technology does not emerge from a vacuum.

  • Likewise, we should diversify metaphors and explore less anthropomorphic images – for example, “microscope” or “statistical engine” – which can enrich the conversation.

While “humanising” artificial intelligence in the press helps readers become familiar with complex technology, the more AI resembles us, the easier it is to project fears, hopes, and responsibilities onto servers and lines of code.

As this technology develops further, the task facing journalists – as well as their readers – will be to find a delicate balance between metaphor's evocative power and the conceptual precision we need if we are to keep having informed debates about the future.

This article was originally published in Spanish.


The Conversation
