One aspect of artificial intelligence that has excited experts is so-called ‘emergent abilities’: cases where large AI models suddenly gain skills they were not trained to have.
These skills range from multiplication to common-sense reasoning to guessing a movie title from a series of emojis.
None of these are things the AI has been designed to do, and some experts have suggested this could lead to the emergence of greater levels of ability in AI systems - or even the arrival of an AGI (an Artificial General Intelligence, capable of human-like reasoning).
Microsoft researchers suggested that OpenAI’s GPT-4 showed ‘sparks of artificial general intelligence’ and claimed that the AI could ‘solve novel and difficult tasks…without needing any special prompting’.
Emergent abilities appear only in large models (such as GPT-4, the model behind the paid version of ChatGPT) and have fascinated researchers.
But do these abilities mean that an artificial general intelligence is around the corner, with all that might mean for the human race?
What are emergent abilities?
Yahoo News spoke to AI expert Caroline Carruthers, Chief Executive at Carruthers and Jackson and a former Chief Data Officer for Network Rail.
Carruthers said: "An emergent capability is something we didn't expect from one of the AI models.
"We might have trained a particular model to solve a particular problem, and it can suddenly and unexpectedly solve a different problem that we didn't expect it to be able to solve."
Why are people interested?
People are fascinated by emergent abilities because we still know relatively little about how models like ChatGPT arrive at the answers they do, Carruthers says.
"Our understanding is still pretty imperfect," she says. "Depending on how complex your question is, it can be a little bit like a ‘black box’ where we don’t understand how it got to the answer.
"The more complex the question, the harder it is to figure out. That’s why I don’t recommend using Generative AI for anything you have to explain."
Is this a sign of human-like intelligence?
But the emergent abilities of AI are not a sign that the software has started to think or evolve under its own steam, Carruthers says.
"Personally, I think it’s a little like human instinct," Carruthers says. "In the same way as my instincts can take over when I have done something similar before, it’s just a pattern that is recognised.
"I think emergent AI is similar: somewhere in there, we have given it a capability; there’s something familiar enough for it to develop that skill. At the moment, we still don’t understand how it’s doing everything."
Research by Stanford experts earlier this year suggested that ‘emergent’ abilities were a ‘mirage’ and that their appearance depended on how performance was measured.
But Carruthers says that the more we understand about how these models arrive at their conclusions, the more we will be able to get out of them.
"At the moment, we are being limited by our own imagination," she says. "It is pushing our understanding as much as we are pushing it. It is an incredible time for our development, as well as the development of AI."