Government Needs To Show Its AI Plan Can Be Trusted To Deal With Serious Risks To Health Data


Author: Jonathan R Goodman

(MENAFN – The Conversation) The UK government's new plan to foster innovation through artificial intelligence (AI) is ambitious. Its goals rely on better use of public data, including renewed efforts to maximise the value of health data held by the NHS. Yet this could involve using real data from NHS patients – something that has been highly controversial in the past, with previous attempts to use this health data coming at times close to disaster.

Patient data would be anonymised, but concerns remain about potential threats to this anonymity. The use of health data has also been accompanied by worries about access to that data for commercial gain. The care.data programme, which collapsed in 2014, had a similar underlying idea: sharing health data from across the country with both publicly funded research bodies and private companies.

Poor communication about the more controversial elements of this project and a failure to listen to concerns led to the programme being shelved. More recently, the involvement of the US tech company Palantir in the new NHS data platform raised questions about who can and should access data.

The new effort to use health data to train (or improve) AI models similarly relies on public support for success. Yet, perhaps unsurprisingly, within hours of the announcement, media outlets and social media users attacked the plan as a way of monetising health data. “Ministers mull allowing private firms to make profit from NHS data in AI push,” one published headline read.

These responses, and those to care.data and Palantir, reflect just how important public trust is in the design of policy. This is true no matter how complicated technology becomes – and, crucially, trust becomes more important as societies grow in scale and we become less able to see or understand every part of the system. It can, though, be difficult, if not impossible, to judge where we should place trust, and how to do that well. This holds true whether we are talking about governments, companies, or even just acquaintances – to trust (or not) is a decision each of us must make every day.

The challenge of trust motivates what we call the “trustworthiness recognition problem”, which stems from the origins of human social behaviour. The problem comes down to a simple issue: anyone can claim to be trustworthy, and we often lack reliable ways to tell whether they genuinely are.

If someone moves into a new home and sees ads online for different internet providers, there is no sure way to tell which will be cheaper or more reliable. Presentation need not – and often does not – reflect anything about a person or group's underlying qualities. Carrying a designer handbag or wearing an expensive watch doesn't guarantee the wearer is wealthy.

Luckily, work in anthropology, psychology and economics shows how people – and, by extension, institutions such as political bodies – can overcome this problem. This work is known as signalling theory, and it explains how and why communication – the passing of information from a signaller to a receiver – evolves even when the individuals communicating are in conflict.

For example, people moving between groups may have reasons to lie about their identities. They might want to hide something unpleasant about their own past. Or they might claim to be a relative of someone wealthy or powerful in a community. Zadie Smith's recent novel, The Fraud, is a fictionalised version of this popular theme, exploring aristocratic life in Victorian England.

Yet some qualities are simply not possible to fake. A fraud can claim to be an aristocrat, a doctor or an AI expert. The signals these frauds unintentionally give off will, however, give them away over time. A false aristocrat will probably not be able to fake his demeanour or accent convincingly enough (accents, among other signals, are difficult to fake to those familiar with them).

The structure of society is obviously different from that of two centuries ago, but the problem, at its core, is the same – as, we think, is the solution. Just as there are ways for a truly wealthy person to prove their wealth, a trustworthy person or group must be able to show they are worth trusting. How this is done will undoubtedly vary from context to context, but we believe that political bodies such as governments must demonstrate a willingness to listen and respond to the public's concerns.

The care.data project was criticised because it was publicised via leaflets dropped at people's doors that did not include an opt-out. This failed to signal to the public a real desire to allay people's concerns that information about them would be misused or sold for profit.

The current plan to use data to develop AI algorithms needs to be different. Our political and scientific institutions have a duty to signal their commitment to the public by listening to them, and through doing so to develop cohesive policies that minimise the risks to individuals while maximising the potential benefits for all.

The key is to commit sufficient funding and effort to signal – to demonstrate – an honest motivation to engage with the public about their concerns. The government and scientific bodies have a duty to listen to the public, and further to explain how they will protect people's data. Saying “trust me” is never enough: you have to show you are worth it.


The Conversation
