
CHATGPT CAN WRITE COMPLEX CODE BUT CAN’T WIN A GAME OF TIC-TAC-TOE


(MENAFN - Qatar Foundation) Doha, Qatar, February 6, 2023: Most, if not all, of us have tried ChatGPT by now. Within weeks of its launch, it became the internet’s best-known language-processing artificial intelligence model. Seemingly something out of science fiction, the chatbot is capable of much more than just holding conversations: it can write complex code, has passed a law exam, and more. Yet despite how “smart” it seems at first sight, it repeatedly loses a simple game of tic-tac-toe and doesn’t even realize it until told. How, and why?

“Chatbots like ChatGPT (Generative Pre-trained Transformer, a type of artificial intelligence model) are more knowledgeable than they are smart, for now,” said Dr. Sanjay Chawla, Research Director at Qatar Computing Research Institute’s Data Analytics Department, which is part of Qatar Foundation’s Hamad Bin Khalifa University.

Explaining how such chatbots work, he said: “They are powered by large amounts of data and computing techniques to make predictions about stringing words together in a meaningful way. In addition to having encyclopedic knowledge, they understand words in context, which is why they are able to respond in natural-sounding language that can easily be mistaken for a human’s. But at the end of the day, what they largely do is regurgitate existing information, and they aren’t particularly skilled when it comes to deductive reasoning and critical thinking.”
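To make the idea of “stringing words together” concrete, the sketch below uses the small, openly available GPT-2 model (an early predecessor of the models behind ChatGPT) to repeatedly pick the single most likely next token. The model, prompt, and ten-step loop are illustrative assumptions, not a description of ChatGPT’s own internals.

```python
# A minimal sketch of next-token prediction, the mechanism described above.
# Assumes the Hugging Face `transformers` library and PyTorch are installed;
# GPT-2 is used purely as a small, publicly available stand-in model.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Tic-tac-toe is a game in which"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    for _ in range(10):
        logits = model(**inputs).logits        # a score for every vocabulary token, at every position
        next_id = logits[0, -1].argmax()       # greedily take the single most likely next token
        inputs["input_ids"] = torch.cat(
            [inputs["input_ids"], next_id.view(1, 1)], dim=1
        )
        inputs["attention_mask"] = torch.ones_like(inputs["input_ids"])

print(tokenizer.decode(inputs["input_ids"][0]))
```

Each pass through the loop only predicts a likely continuation based on patterns in the training data; nothing in it plans ahead or checks a board state, which is the kind of limitation Dr. Chawla points to.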

And despite how confidently it responds to questions, it does make mistakes. OpenAI, the developer of ChatGPT, recognizes this and notes on its own website that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”

Despite the skepticism, Dr. Chawla is quick to acknowledge that ChatGPT is “a step up from previous chatbots”, “an amazing technological feat”, and “a game changer in many ways”. Commenting on one of its most discussed strengths, content generation, he said: “It can definitely increase people’s productivity, but I wouldn’t go as far as saying it will replace content creators, simply because the text it generates can be quite sterile and lacks the human touch. So yes, it can provide the framework and save users a considerable amount of time, but it lacks individuality, and that’s where humans come in.”

Another challenge he identifies is that ChatGPT draws on multiple sources when generating content, which makes it very hard to attribute that content to its source and raises questions of credibility and accuracy.

He agreed that ChatGPT is a wake-up call in many ways. For the last decade, we have been hearing about how AI will change the world as we know it; now, many of us are starting to see how for the first time.

“It is naïve to deny the changes AI will inevitably bring. What we need to do instead is show agility and adapt across every sector of society. This includes training professionals on how to best use AI to increase their productivity, upskilling single-skilled workers whose jobs are most at risk of being replaced by AI, and, very importantly, incorporating AI into the country’s existing educational model,” said Dr. Chawla.

When asked whether he thinks AI is a friend or a foe when it comes to education, Dr. Chawla said: “Ultimately, it’s a friend, but clearly the global education system needs to be rethought. Testing students on their capacity to regurgitate will have to become a thing of the past, and the focus needs to shift to testing methods that require critical thinking and deductive reasoning. It’s also important to teach students how to use tools like ChatGPT, how to word their prompts, and how to work with the data, so that when they graduate and enter the workforce they are not faced with a skills gap.”

One concern Dr. Chawla did express about AI is that most of it is developed and trained by a select few big tech companies. “So far, the development of chatbots has been concentrated in a few big tech companies, meaning that what the models are being taught, and by whom, is quite restrictive. This can, even unintentionally, result in bias, and hence there is a need for legislation on data access and data sharing when it comes to such models.”

Speaking specifically about how AI impacts a country with a small population like Qatar, Dr. Chawla said: “If done right, it can be tremendously beneficial. A smaller population means a smaller workforce that needs training and upskilling, and potentially less time for the country to transform into an AI-augmented one, whether in the workplace or in schools.

“An AI-augmented workforce will mean that mundane and repetitive tasks no longer need to be done by humans, leaving them time for more complex tasks and higher productivity, which can help drive innovation and power a knowledge-based economy.”
