TIME100 AI 2024: Yoshua Bengio
Yoshua Bengio, one of the most-cited researchers in AI, is deeply concerned about the dangers that future artificial intelligence systems may pose. A longtime professor at the University of Montreal and founding director of Mila - Quebec AI Institute, Bengio—often referred to as one of the “godfathers” of AI—has been dividing his time between raising awareness of the risks of advanced AI and furthering his research on how to curb them.
Bengio is a sober voice on extreme risks. According to his research, these include “large-scale social harms, malicious uses, and an irreversible loss of human control over autonomous AI systems.” In November 2023, he was appointed to chair the International Scientific Report on the Safety of Advanced AI, which convenes 75 AI experts, including an advisory panel nominated by 30 nations, the E.U., and the U.N. The interim report, delivered at the Seoul AI Summit in May 2024, comprehensively synthesized existing scientific research on the capabilities and risks of advanced AI, covering issues from biases and disinformation to national security and concerns about society losing control of the technology.
TIME spoke with Bengio in mid-July about the debates around AI risk, the role of policymakers, and the ethics of improving advanced AI models.
This interview has been condensed and edited for clarity.
What has it been like transitioning from focusing on “pure science” to filling more of a policy advocacy role?
I'm continuing to do science, except that it's shifted, and a lot of what I'm spending my brain cycles on is machine learning: ways to address safety, particularly how to design AI systems that eventually might be smarter than us, but that will behave well and not harm people and so on.
But yes, I'm also spending a lot of time on, let's say, the political questions, regulation, talking to the media, giving talks about why we need to pay attention to these questions, being involved in international organizations, talking to governments around the world and all that.
I've heard you cite geo-engineering as an example of a time when we decided not to develop or use a technology in the face of high uncertainty and possible harm. And I've heard people make similar arguments about human cloning.
And bioweapons! I mean, there's a reason why we don't see humans—like countries or bad actor states—using bioweapons: because they're afraid it turns against them. You know, it could mutate and be transmitted to their own people and so on.
Given that we hit the brakes on going down those paths, why do you think the same isn't true in AI?
I wish I knew. I'd like to write a blog post on the psychological reasons why we collectively behave the way we do. I think humans are very good at cognitive dissonance—harboring inconsistent beliefs and views and plans when it allows them to achieve goals, not lose face, maintain their ego, fit into groups, compete with their opponents. There are lots of reasons that make us not perfect reasoners.
I notice that the AI ethics community seems to worry about things like misinformation and disinformation, bias, copyright issues, labor impacts, and maybe cybersecurity; whereas the AI safety community is more concerned about existential risks. Why do you think there's so much tension between these groups?
First, there shouldn't be, because at the end of the day, we're saying we need the requirement of protecting the public to enter into how we make those decisions about AI. That means governments, or even the international community, need to have a say in all that. That's the most important thing we must do to avoid both the short-term and the longer-term problems.
Why is it that there is this sort of antagonism? I don't know because I've been an advocate of the human rights issues with AI for much longer than I've been talking about the existential risk. I don't see them as in opposition. On the contrary, it's like climate change adaptation versus mitigation: one is shorter-term, the other is longer-term. And yes, of course, we need to do both.
[There’s an] idea that there's a competition in the minds of people and governments between these goals. That isn't a very rational argument, because there are so many issues that we need to deal with. And we're not saying, “Okay, you're not supposed to talk about that one, because it's going to distract our attention from this one.” No! The world is full of problems.
But I think, from a psychological perspective, if you've been spending many years of your life fighting for a cause, and then somebody starts talking about something different, you might feel personally like “oh, wait, wait, we should be talking about my thing.” So I don't think it's rational, but I think it's understandable.
It strikes me that governments sometimes have limited attention spans. Does this mean there is a crowding-out effect, where there is limited space for what they can pay attention to?
Actually, I think it's the opposite that is happening. Look at the E.U. AI Act, or the Executive Order from Biden on AI. Both clearly cover all the risks. I don't think you'll see a political solution that doesn't. In addition, if you care about the most extreme, catastrophic risk, you also have to care about the human rights issues. And there's a reason for this—not just because strategically, we have objectives in common in terms of getting governments involved, but also because if we lose democracy, we're cooked! Not just at the level of human rights and democratic values, but a dictator is probably not going to be very good at dealing with existential risk. They're not going to surround themselves with a group of people who are honestly saying what can go wrong. It's obvious: you can look at what happened with Putin. If you just have yes-men around you, you're going to make the wrong decisions, and those wrong decisions could be catastrophic for humanity. So you have to work together.
Given the risks involved, do you think it's unethical to work on advancing capabilities [of the most advanced models] at a frontier AI lab?
Wow. It's a tricky question. In my opinion, in order to advance AI safety, you need some “purely safety” things, like thinking about what can go wrong and how to mitigate it. But in order to actually mitigate it, you're going to need machine learning advances. So, one way to think about this is: in order to make sure the AI understands what we want and also understands the consequences of its actions, it needs to be really good. But, of course, that's not a sufficient condition; it's a necessary condition. So I think working on capability is okay if you are also matching the development in safety. And that isn't what is happening, in my opinion, at the leading companies.