Author:
Simon Rogerson
(The Conversation)
The progress of artificial intelligence (AI) has been relentless. With OpenAI's latest model, o3, recently breaking records yet again, urgent questions arise about safety, as well as the future of humanity.
One place we can turn for help is to great thinkers from the past. They explored beyond the obvious in their worlds and often looked into the future, foreseeing a time when machines would have AI-like capabilities.
The 19th-century English mathematician and writer Ada Lovelace is sometimes recognised as the first computer programmer for her work with the polymath Charles Babbage on his “analytical engine”. This was a general-purpose mechanical computer which was never completed, but its design mirrored that of computers built a century later.
[Image: Charles Babbage's analytical engine. Wikimedia, CC BY]
Her 1843 notes to Babbage, exploring the potential of his proposed device, foresaw something akin to AI in the future. “It might act upon other things besides number,” she said, suggesting that such a machine could one day express relationships between pitched sounds in order to “compose elaborate and scientific pieces of music of any degree of complexity or extent”.
This requires pattern recognition across a vast array of sound and music data – exactly what today's generative AI models do when composing music from text prompts.
All the same, Lovelace was sceptical about the machine's thinking capabilities, arguing it would still depend on humans to originate whatever it came up with. Indeed, AI models today are still not really thinking, so much as assembling sentences from mathematical probabilities learned by training on trillions of human words from the internet.
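The idea that text can be generated from learned probabilities, with no understanding involved, can be made concrete with a toy sketch. This is purely illustrative and bears no relation to any real model's code: a minimal “bigram” generator, with an invented ten-word corpus, that picks each next word according to how often it followed the previous word in its training text. Real models do something statistically far richer, but the principle is the same.

```python
import random

# A tiny invented "training corpus". Real language models learn from
# trillions of words; this example hand-counts word pairs instead.
corpus = "the machine can think and the machine can compose music".split()

# Count which words follow which: a crude stand-in for learned probabilities.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def next_word(word, rng):
    """Pick a next word in proportion to how often it followed `word`."""
    return rng.choice(follows[word])

def generate(start, length, seed=0):
    """Chain probable next words together -- no thinking required."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        out.append(next_word(out[-1], rng))
    return " ".join(out)

print(generate("the", 4))
```

Starting from “the”, the sketch can only ever wander along word sequences its corpus makes probable, which is the spirit of Lovelace's objection: the machine reproduces patterns it was given rather than originating anything.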
Lovelace pointed to such limitations to “guard against the possibility of exaggerated ideas that might arise as to the powers of the analytical engine”. However, she also emphasised the “collateral influences” this machine could have beyond its bare output. Her example was that it could shed new light on science, but the wider implication is that such devices must never be underestimated.
The Turing test
Lovelace's argument also raised another implicit question. What happens if and when the machines do become the originators, once sentience is no longer science fiction? This inspired another English mathematician and thinker a century later, Alan Turing.
Turing's 1950 “imitation game”, later known as the Turing test, sought to determine whether a computer could think in a way comparable to a human. It remained a key test of AI until many considered it surpassed by OpenAI's ChatGPT in 2022.
Turing actually thought this would happen sooner: in the same famous 1950 paper, he predicted that within about 50 years, computers would play the imitation game well enough that an average interrogator would have no more than a 70% chance of correctly telling machine from human after five minutes of questioning.
[Image: Alan Turing (1912-54). Wikimedia, CC BY]
He wasn't especially pessimistic about what crossing this Rubicon would mean, arguing in the same paper in favour of trying to create a machine that simulated a child's mind rather than an adult's. He thought this could be “easily programmed”, implying we had little to fear from such endeavours.
Equally, he wasn't blind to the potential for humans to end up subordinated by thinking machines. In a public lecture in 1951, he remarked: “If a machine can think, it might think more intelligently than we do, and then where should we be?”
The Turing scholar Christof Teuscher described him as an “Orwell of science”. It's interesting to contrast his views with those of George Orwell himself, who, despite never pondering AI, did write about the dangers of machines more generally in The Road to Wigan Pier (1937).
If you are prepared to swap out the references to “machines” for “AI”, that book offers interesting possibilities about what Orwell might have made of today's technological arms race.
Norbert Wiener's ethics
[Image: Norbert Wiener (1894-1964). Granger - Historical Picture Archive]
This brings us to the American scientist and mathematician Norbert Wiener. Recognised as the founder of computer ethics, Wiener's seminal work is The Human Use of Human Beings (1950), which aimed to “warn against the dangers” of exploiting machines' potential.
Wiener foresaw a time when machines would talk to one another, and improve over time by keeping track of their past performances. He compared this power to the old folk tale of a person who finds a djinnee (genie) in a bottle and knows it is better left there.
Decades later, the English physicist Stephen Hawking had similar concerns, writing in 2016 that AI could be “either the best, or the worst thing, ever to happen to humanity”. He continued to issue such warnings in his final months.
These five giants of the past prompt us to think very carefully about AI. Lovelace described a human tendency first to overrate the potential of a new technology, only to later over-correct by underestimating the reality. Wiener warned against the “selfish exploitation” of untested technological potential, which has surely contributed to numerous catastrophic IT failures over the years.
Clearly the same thing could now happen with a much more powerful technology. It's likely that these writers would have looked at recent developments and seen fools rushing in where angels fear to tread.