There's the horrific possibility of mutually assured destruction, which we have lived with for almost 80 years, and there's more than 60 years of little girls looking at that blonde, blue-eyed doll with the hourglass figure and wondering how they can ever look like that. Like it or not, these two inventions are here to stay. And now, in the first quarter of the 21st century, we are debating a new, and some say potentially existential, invention: artificial intelligence (AI).
In this week's Inside Geneva podcast, we take a look at the possibilities of artificial intelligence – good, bad, and downright terrifying – with expert views from the UN, industry, and academia.
When ChatGPT first hit the headlines last year, I was intrigued, but pretty ignorant. I hadn't given AI much thought up until that point. Testing ChatGPT was amusing, but weird. An instruction to “write a radio news story in the style of Imogen Foulkes” came up with something superficially convincing – 200 words on the UN coping with a humanitarian crisis in a conflict zone. But the crisis and the conflict were spread across widely different countries in a highly improbable scenario bearing no relation to any reality. No chance of AI taking my job quite yet then, I thought.
Pandora's box?
But in March this year, some of the leading scientists working on AI wrote an open letter calling for a six-month halt to new developments, arguing that the best minds in AI no longer understood exactly how it worked, and that continuing could open a highly dangerous Pandora's box.
Rumours swirled: a military AI “robot” had attacked its developer (a claim since refuted). Some even suggested AI could become far more intelligent than us and simply decide to use us as material for more useful things, leading, eventually, to humanity's extinction.
Into this debate, this month, stepped the UN's International Telecommunication Union (ITU), with a long-awaited and pandemic-delayed “AI for Good” summit. The publicity around the summit was unfailingly upbeat: AI could be harnessed to benefit all humanity. It could help us reach those elusive sustainable development goals – ending poverty, education for all, and so on.
Once the summit actually opened, the mood was more muted. “The technology by itself has a huge potential to help us resolve a lot of challenges of today, from climate change, to helping education, to helping in the health sector,” the ITU's Deputy Secretary-General Tomas Lamanauskas told Inside Geneva. But, he warned, “as with every technology, this technology has risks”.
Can the UN regulate?
So how to mitigate, or even prevent, those risks? The UN says AI needs “guardrails”, but it is unclear what exactly they should be, and whether they should be voluntary or mandatory.
The ITU, uniquely among UN organisations, includes industry and leading academics as well as member states, and they were at the summit too. Lila Ibrahim, chief operating officer of Google's leading AI arm, DeepMind, took the time to reassure Inside Geneva that “from the very start of DeepMind, since 2010, we've been working on AI and thinking about ‘how do we build this responsibly’. It's not something we just tag on at the end of all the research we've been doing.”
But she too was vague about whether industry would accept UN or other outside regulation. Since then, several big tech companies, including Google, have signed up to voluntary “principles” on good governance – in what cynics might argue is a clever ploy to show that outside regulation isn't necessary.
Real concerns
Given the real concerns about AI, it was surprising, as my SWI swissinfo.ch colleague Dorian Burkhalter points out, that there were no human rights NGOs at the summit, and very few discussions on risks and possible regulation. Instead, the summit focused on the beneficial potential of AI, and it allowed developers to showcase their wares.
This, as you will hear on Inside Geneva, was a somewhat surreal experience. Entering the conference centre, we found robot dogs (use unclear) trotting around. Cuddly seals that squeaked, barked affectionately, and batted their big moist eyes were, we were told, designed to comfort patients who have dementia, have suffered a stroke, or have cancer.
And then there was Nadine, a humanoid robot whom (which?) Burkhalter had the pleasure to interview. Her purpose in life, she told him, “is to help people by providing them with companionship, assistance and support.”
Nadine's developer, Professor Nadia Thalmann of the University of Geneva, has been testing Nadine in homes for the elderly in Singapore, where, she told us, the reaction was “very positive”. Nadine can play bingo, sing the favourite songs of 50 or 60 years ago, and listen, apparently sympathetically, to the stories of lonely old people.
Thalmann believes Nadine could be a welcome addition to the often isolated lives of the frail and elderly. But she is also careful to stress that Nadine has been programmed never to suggest she is an actual human, and always to explain that she is a robot. And, she adds, “her eyes are a bit cold.”
Conclusions?
In the end, I wonder if our focus on the apocalyptic, “Oppenheimer” risks of AI has made us forget the more immediate “Barbie” ones. Should we really be using AI in social care? Or should we be taking our own societal responsibilities more seriously, and investing in social care so that it becomes a valued, respected, and properly paid profession that humans want to do?
What's more, as Peggy Hicks of UN Human Rights points out, while half of the world's population remains unconnected to the internet, who will AI benefit? Don't forget the wonderful advances of antiretrovirals to treat HIV/Aids, and the Covid-19 vaccines – remember who got them first and who had to wait? Cutting-edge technology tends to benefit the rich and neglect the poor.
And, as Hicks tells Inside Geneva, since AI works on data, its “decisions” will be based on the data of people who are online – that is, disproportionately wealthy, and disproportionately white. “There are real problems with its ability to accelerate disinformation, and enhance bias.”
These are immediate risks which, some warn, we could see play out in elections due next year in both the United States and the United Kingdom. Will there be any “guardrails” before then? It seems unlikely: while everyone seems to agree that regulations for AI are needed, there is no clear plan yet for what they should be or how they should be enforced. Meanwhile, the development of AI continues apace.