New study says AI poses no existential threat to humanity

A new study has found that the large language models (LLMs) currently forming the basis of artificial intelligence do not pose an existential threat to humans.


As LLMs have grown exponentially more powerful in recent years, the possibility of them acquiring ‘emergent’ capabilities such as reasoning or planning has been an ongoing concern. Indeed, the largely ‘black box’ way in which LLMs operate has fed into this idea, with even those working on AI often unable to explain some of the capabilities that LLMs such as ChatGPT have been able to display. 


However, the new study claims to have put these fears to rest, demonstrating that no reasoning takes place within LLMs. According to the research, what sometimes appears to us as reasoning is in fact simply in-context learning (ICL), whereby LLMs can 'learn' new information from a set of examples provided to them. Over the course of thousands of experiments, the research team showed that the ability to follow instructions (ICL), combined with memory and linguistic proficiency, accounts for both the capabilities and limitations exhibited by today's LLMs.
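To illustrate what in-context learning looks like in practice, the sketch below builds a simple few-shot prompt, where worked examples are placed directly in the model's input and the model is asked to continue the pattern. The examples, labels and helper function here are hypothetical, chosen purely for illustration; they are not drawn from the study itself.

```python
def build_icl_prompt(examples, query):
    """Assemble a few-shot prompt: each example pairs an input with its label,
    and the final line leaves the label blank for the model to fill in."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# Hypothetical demonstration pairs the model 'learns' from at inference time
examples = [("cat", "animal"), ("rose", "plant")]
prompt = build_icl_prompt(examples, "oak")
print(prompt)
```

No weights are updated anywhere in this process: the 'learning' consists entirely of the model conditioning on the examples in its context window, which is why the researchers distinguish it from genuine reasoning.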
