Three researchers, two of them Canadian, have won the world's top award in computer science for developing the ability of computers to learn like humans, by imitating the human brain and how it functions using networks of "neurons."
That allows computers to acquire new skills by looking at lots of examples and finding and recognizing patterns, as humans do.
Machine learning — based on "deep learning" and "neural networks" — has led to the development of artificial intelligence that now powers everyday web and smartphone applications from voice, image and facial recognition to language translation. It's increasingly being used in more complicated tasks like generating art, creating text and diagnosing cancer from images.
The Turing Award, handed out annually by the Association for Computing Machinery, is often described as the "Nobel Prize of Computing" and comes with a $1-million US prize. The association announced Wednesday that the 2018 award goes to:
Yoshua Bengio, professor at the Université de Montréal and scientific director of Mila, Quebec's Artificial Intelligence Institute.
Geoffrey Hinton, emeritus professor at the University of Toronto, vice-president and engineering fellow at Google, and chief scientific advisor at the Vector Institute.
Yann LeCun, professor at New York University and vice-president and chief AI scientist for Facebook, who did his postdoctoral work at Hinton's University of Toronto lab and then worked with Bengio at Bell Labs.
The Turing Award is named after British mathematician, computer scientist and Second World War codebreaker Alan Turing. It has been sponsored by Google since 2014, but the company said it's not involved with the selection committee, which honours "lasting contributions to the field of computer science."
'It's not actually flaky'
"Artificial intelligence is now one of the fastest-growing areas in all of science and one of the most talked-about topics in society," said ACM President Cherri M. Pancake. "The growth of and interest in AI is due, in no small part, to the recent advances in deep learning for which Bengio, Hinton and LeCun laid the foundation."
Bengio said he was amazed to hear he had won.
"I think I'm still carried by the energy of that news. Of course, it's great for me, but it's great for Canada, it's great for the area of research which is getting this recognition," he said, adding that the award will help attract students, investments and research funding in the areas of neural networks and deep learning.
Hinton said he was very pleased to hear he had won the award "because for a long time, neural networks were regarded as flaky in computer science. This was an acceptance that it's not actually flaky."
While the first computers in the 1940s were designed with human neurons in mind, researchers in the 1960s showed that simple neural networks had very significant limitations, Hinton said. Meanwhile, the slow computers at that time could do "more interesting things" if you programmed them to do exactly as they were told instead of asking them to learn on their own.
Despite that, he, Bengio and LeCun worked on neural networks in the 1980s and 1990s and had some early successes.
In the 1990s, Bengio and LeCun got AI to recognize handwritten numbers on cheques.
Training vs. programming
But it wasn't until computer scientists started putting together larger datasets to train computers, and processing speeds got significantly faster, that the technology really started to take off.
In 2012, Hinton and his students used a deep learning system to beat out hand-programmed computers in an image recognition competition called ImageNet.
"It was so much better than other systems that initially people who were evaluating our system thought must have made a mistake," Hinton recalled. "The trick with our system was everything was learned."
Hinton said that since around 2010, it has actually been better to train computers by using the approach you would use with humans — by giving them examples.
"And they just sort of get it after awhile. You can tell them the rules, but it tends to work better to just show them examples," he said. "Now if you want a computer to do something complicated, such as telling whether some blob in an x-ray might be cancer, you're much better to train it to do it than to program it to do it."
Hinton said the research on machine learning also suggests that the way humans learn isn't through logic and reasoning: "What we're learning is far more like massive analogy making."
Bengio describes the type of learning neural networks display as "a form of intuition."
AI based on deep learning has led to computers that do more than just identify things. One of the areas that Bengio has focused on is Generative Adversarial Networks, where computers show they have learned something by coming up with a completely new example.
"People thought, 'Oh, AI will never have creativity. It's a uniquely human thing,'" Bengio said. "But actually, we are seeing that it is possible to make progress towards creativity."
So far, computers are not as creative as humans, because their understanding of the world is not even as good as a two-year-old's, he added.
Nevertheless, a painting created with this kind of technology last fall sold for $432,500 US in a Christie's auction, 45 times higher than anticipated.
Concerns about misuse
And it's already raising concerns.
Earlier this year, OpenAI, the creator of an AI text generator called GPT-2, said it would not release the full tool because it could write authentic-sounding prose that could fool humans and could be used for the mass production of disinformation.
When asked about the incident and whether he shared such concerns, Bengio said: "Eventually, but I'm not sure we have reached that stage. So maybe there's a little bit of marketing going on here."
However, he acknowledged that with AI being deployed and applied in so many places, scientists and companies now have to be more careful about how it's going to be used; people and governments are paying attention to whether applications are socially acceptable and ethical.
In the meantime, even the researchers themselves are astonished by the speed at which AI based on deep learning is advancing and being applied.
Both Hinton and Bengio say they are especially surprised by how much translation between different languages has improved in the past two years. They thought it would take many more years for computers to reach this level of competence at translation.
Bengio said the rapid spread and adoption of deep learning is partly due to the creation of free, publicly available "software libraries" for others to use.
"It means that people with very little training and very little understanding of what is under the hood can use these libraries and apply them in many many ways," he said. "It's really democratizing access to that science."
He said he believes he, Hinton, LeCun and other academics in the field "have influenced the industrial world with some of the cultural traits of academia" such as greater openness and sharing.
He added, "That is helping the progress of science."