GPT-5: Has AI Just Plateaued?
According to OpenAI's own definition, AGI would be "a highly autonomous system that outperforms humans at most economically valuable work." Setting aside whether this is something humanity should be striving for, OpenAI CEO Sam Altman's arguments for GPT-5 being a "significant step" in this direction sound remarkably unspectacular.
He claims GPT-5 is better at writing computer code than its predecessors. It is said to "hallucinate" a bit less, and is a bit better at following instructions – especially when they require following multiple steps and using other software. The model is also apparently safer and less "sycophantic", because it will not deceive the user or provide potentially harmful information just to please them.
Altman does say that "GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert." Yet it still doesn't have a clue about whether anything it says is accurate, as you can see from its attempt below to draw a map of North America.
It also cannot learn from its own experience, or achieve more than 42% accuracy on a challenging benchmark like "Humanity's Last Exam", which contains hard questions on all kinds of scientific (and other) subject matter. This is slightly below the 44% that Grok 4, the model recently released by Elon Musk's xAI, is said to have achieved.