Identifying Risks and Regulation Pathways of AI: Yoshua Bengio / Scientific Director, Mila – Quebec AI Institute

Rapid development in AI technology has triggered concerns about its risks, such as deep fakes and rogue AI. Landmark AI rules to address these issues have recently been adopted in the EU. However, the debate on regulation is still ongoing. What are the emerging AI risks? And how can we regulate them to avert potential crises? Yoshua Bengio, founder and scientific director of Mila – Quebec Artificial Intelligence Institute, explains.

Del Irani
DEEPER LOOK Host

Del Irani (left), Yoshua Bengio (right)

Transcript

00:13

Today we're in Montreal, Canada.

00:16

We're here to visit one of the global centers for AI research and development.

00:20

Hello and welcome to DEEPER LOOK.

00:22

I'm Del Irani. It's great to have your company.

00:25

The release of ChatGPT marked a groundbreaking milestone in the development of AI.

00:31

Simultaneously it's triggered concerns about its risks and sparked discussions on prevention measures.

00:37

The world's first AI rules to address these risks have recently been adopted in the EU.

00:43

But the regulation debate is still ongoing.

00:46

So what are the emerging realities of AI risks?

00:50

And how should we regulate it to avert potential crises?

00:56

Joining us to answer these questions is a man often referred to as the godfather of AI, Yoshua Bengio.

01:02

He's a laureate of the 2018 ACM A.M. Turing Award, known as the Nobel Prize of Computing,

01:08

which he received alongside Geoffrey Hinton and Yann LeCun.

01:12

He's also the founder and scientific director of Mila - Quebec Artificial Intelligence Institute.

01:19

Welcome to the program, Professor Bengio. Great to have you with us.

01:21

It's good to be with you.

01:23

So, about a year ago, you first signed an open letter along with prominent experts, expressing your concerns over AI.

01:30

Recently, in February of this year, you signed another open letter, again,

01:34

restating your concerns about regulation and the risk to society.

01:39

How have things changed over the last year?

01:42

Incredibly.

01:43

I don't think that the people who signed the first letter a year ago

01:48

expected that there would be so much global discussion about AI risks.

01:54

Now, I think we still have a long way to go for more people and governments to understand those risks.

02:00

But the amount of discussion and, you know, the number of organizations in the world, international organizations, governments,

02:09

that are starting to realize that AI is a topic that requires governmental attention, is much higher than what we expected.

02:22

So in this letter in February, one of the things that you and your contemporaries were all talking about is deep fakes.

02:29

Tell us a little bit about deep fakes. How do they work?

02:32

And how does this fit into the topic of misuse as a risk?

02:36

So, deep fakes already exist.

02:38

You can use these generative AI systems to produce content, images, sounds,

02:47

including, you know, people speaking and imitating the voice of someone, and videos.

02:53

And they're becoming more and more realistic and difficult to distinguish from the real thing.

02:58

So, we're already seeing, you know, 10 times or 100 times more of these used for bad purposes:

03:10

fraud, child pornography, and growing but still not, you know, at the same kind of level,

03:20

the use of these for influencing people in terms of their political opinion.

03:25

And that is the scariest in a way, because it could destabilize democracies.

03:30

Now, that is scary. But there is...

03:35

this may be just the tip of the iceberg, because we - we, I mean, humans now have access to another tool.

03:42

In addition to voices and images of real people doing and saying things they wouldn't otherwise,

03:50

it's the ability to have a personalized interaction with a particular person

03:56

and use persuasion, psychological manipulation to make them change their mind on something.

04:04

But really the danger is, we may not be ready individually to resist that power of influence.

04:14

And those systems could learn from millions or hundreds of millions of interactions

04:19

and potentially even get better than humans at influencing other humans.

04:24

So, deep fake is one area of misuse.

04:27

But more broadly, what are some of the other areas of misuse?

04:31

I mean, why? Why is this a big risk?

04:33

Okay, so one of the other areas that national security agencies around the world are worried about is cybersecurity.

04:42

To understand this, you have to realize that there's a rapid increase in the programming abilities of AI systems.

04:54

Right now, they're not as good as the best programmers, but we can see the trend.

04:59

And it might be, you know, a year, or two years, five years from now, they'll be better than the best programmers.

05:05

But the problem is that same tool is dual-use; it could be used by terrorists, by,

05:13

you know, maybe some rogue governments to attack democracies, to attack some targets through cyber attacks.

05:24

And if those AI systems are better than humans at finding,

05:30

you know, how that could be done, which is what a cyber attack is, they could defeat our current cyber defenses.

05:36

Then, there's also a lot of concern, maybe not with the current versions, but again,

05:42

looking ahead, anticipating that a few years from now, these AI systems could help bad actors design new weapons.

05:51

So, bioweapons, chemical weapons.

05:56

There are already a number of experiments suggesting that we are going in that direction.

06:01

One other risk that I really want to touch on that you've identified as significant

06:07

is AI getting out of control, otherwise known as rogue AI.

06:11

Can you tell us about the scenario? What does it look like? What is rogue AI?

06:18

The way that we're training our AI systems, so that they learn to do things, to be agents,

06:25

is by rewarding them positively when they do what we want, and negatively when they don't.

06:31

That's how ChatGPT is trained to be gentle and helpful.
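To make that reward idea concrete for readers of the transcript, here is a minimal, hypothetical Python sketch of a reward-driven update loop. The toy "policy", the stand-in feedback function, and the learning rate are illustrative assumptions; this is not the actual procedure used to train ChatGPT, only the shape of the idea Bengio describes.

```python
import random

# Toy "policy": probabilities of producing two kinds of replies.
# (An illustrative stand-in for a large model's behaviour.)
policy = {"helpful": 0.5, "hostile": 0.5}

def human_feedback(reply: str) -> float:
    """Stand-in for human raters: +1 for behaviour we want, -1 otherwise."""
    return 1.0 if reply == "helpful" else -1.0

LEARNING_RATE = 0.05  # assumed value, purely for illustration

for step in range(200):
    # Sample a reply according to the current policy.
    reply = random.choices(list(policy), weights=list(policy.values()))[0]
    reward = human_feedback(reply)

    # Nudge the probability of the sampled reply up when rewarded,
    # down when punished, then renormalise.
    policy[reply] = max(1e-3, policy[reply] + LEARNING_RATE * reward)
    total = sum(policy.values())
    policy = {k: v / total for k, v in policy.items()}

print(policy)  # the rewarded "helpful" behaviour ends up dominating
```

The point of the sketch is that the behaviour is shaped entirely by whoever controls the reward signal, which is what the grizzly-bear analogy that follows turns on.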

06:37

But that same procedure can become dangerous when the AI systems become smarter than us.

06:44

So think about how we use the system of rewards to train like a grizzly bear.

06:50

What companies are trying to do in a way is to build a cage, make sure that the most powerful AI that will come, you know, one day,

06:59

cannot turn against us, cannot escape that cage.

07:03

To go back to the grizzly bear, when we think about why would it, you know, turn against us?

07:07

Because we give the rewards.

07:10

So it's like in the case of the bear, maybe like we give it fish through the bars of the cage.

07:15

And you know, with the cage, we're good, and it will learn to do the things we want.

07:19

And it's gonna get better and better at doing the things we want

07:23

until it gets smart enough that it finds a plan to escape the cage and just grab the fish directly from our hands.

07:31

In other words, take control of the reward mechanism, and then it doesn't need us.

07:36

And, you know, that's a scenario that's just a natural consequence of how we're building AI right now.

07:43

So, there's urgency to find other ways of building powerful AI systems that would be safe.

07:50

And we really need to either slow down the advances of AI capabilities,

07:58

or find a way to build AI that will be safe.

08:02

Yeah. Professor Bengio, what you described, you know, about the caged animal that breaks free, or even the words "rogue AI"...

08:10

It sounds like something out of a sci-fi movie.

08:12

How do you make us believe that this is something we should be addressing right now?

08:18

Well, you have to step back a little bit to look at the trend, the trajectory of where we are going with AI.

08:26

If you had asked me a few years ago, if we would have AI systems that can manipulate language as well as humans,

08:33

I would have said, "Oh, not anytime soon. You know, this is going to be science fiction. You know, maybe when I'm very old."

08:40

But it happened very quickly.

08:44

And we know that we still have a gap to human intelligence.

08:48

There are things that currently AI systems are weak at compared to humans.

08:54

But we don't know how much time it's going to take to get there.

08:57

And companies are investing hundreds of billions of dollars to bridge that gap.

09:04

It's their mission to build AGI - Artificial General Intelligence.

09:10

So, there's a good chance we will have machines that are as smart as us,

09:15

or smarter because you know, human intelligence, there's no reason to think it is the pinnacle of intelligence.

09:21

And what are the consequences? The economic consequences, the positive consequences, the negative consequences?

09:26

What are the risks?

09:27

Because we don't know how to build entities that would be smarter than us and that would do our bidding.

09:37

In the history of this planet, we have never seen a case of a species controlling another species that was smarter.

09:49

Usually, it's the other way around.

09:57

It feels like we're playing a game of catch-up.

09:59

That, essentially, what's happening is the AI is developing first.

10:02

And then we're saying, "Oh, dear! We need to regulate this type of AI."

10:06

Whether that's deep fakes or other things, you know, other types of AI developments.

10:10

Do you agree with that assessment? I mean, if that's the case, how do we get ahead of this?

10:15

Is there a way of getting ahead of it?

10:17

I think there is.

10:19

And instead of waiting for governments to pass the right laws and regulations and figure out, you know, what is a safe AI and what is not,

10:29

we need to put the burden of the required innovations on the same companies that are building AI.

10:37

So how do we do that?

10:38

We basically tell them, in order to be able to continue working with your AI systems, to develop these systems, to deploy them,

10:46

you're going to need to first register with the government so that governments know that you're doing these things, but also obtain a license.

10:53

And to obtain that license, you need to demonstrate to the government,

10:58

maybe to the international community, to the scientific community, that what you're proposing is going to be safe.

11:04

That, you know, it can't be used by terrorists or, you know, bad governments to launch all kinds of attacks.

11:11

And if governments tell them, well, before you release something, you have to show that it's safe.

11:17

Then they will have a very strong incentive to do it.

11:20

And they have the expertise, and they have the money. So I think they should be doing it.

11:26

But right now, they don't have that incentive, because there's very little or no regulation, depending on the country.

11:33

Let's talk a little bit about what different countries are doing. We'll start with the EU.

11:37

Lawmakers there have approved a landmark law governing artificial intelligence.

11:41

Can you tell us a bit about this law? What are your thoughts about it?

11:44

So, there is something really good and important in that law.

11:48

It's trying to strike a balance between the benefits of innovation and protecting the public.

11:54

And one of the important ideas in that law is proportionality.

12:00

So, how much oversight should different AI systems, you know, be required to have?

12:06

It depends on the potentially negative impact that they could have.

12:10

So, most of the AI we have today, you know, there isn't a lot of risk.

12:16

And so the regulator has put in thresholds,

12:19

so, in particular, the amount of computation needed to train those systems is one example of a threshold like this,

12:27

so that the smaller companies, the startups, which in any case don't have the capital to train these really large systems.

12:35

Like, these systems currently cost from $100 million to a billion dollars.

12:40

They don't have the money to build these things anyway, and they're gonna get less, you know, burden in terms of administrative duties.
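As a rough, hedged illustration of how such a compute threshold works in practice: the sketch below uses the common approximation that training compute is about 6 × parameters × training tokens, and compares it against the roughly 10^25 floating-point-operation threshold the EU AI Act uses for general-purpose models with systemic risk. The example model sizes are made up for illustration.

```python
# Back-of-the-envelope check of whether a training run crosses a compute threshold.
# Approximation: training FLOPs ~= 6 * parameters * training tokens.
# 1e25 is roughly the EU AI Act's systemic-risk threshold for general-purpose models;
# the example models below are hypothetical.

EU_SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6 * parameters * tokens

examples = {
    "startup-scale model": (7e9, 2e12),     # 7B parameters, 2T tokens
    "frontier-scale model": (1e12, 15e12),  # 1T parameters, 15T tokens
}

for name, (params, tokens) in examples.items():
    flops = training_flops(params, tokens)
    side = "above" if flops > EU_SYSTEMIC_RISK_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs, {side} the threshold")
```

That is the proportionality idea in miniature: smaller players stay under the threshold and face lighter obligations, while the handful of companies spending hundreds of millions of dollars on a single training run fall above it.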

12:51

My final question to you. How do we regulate AI and yet promote its development at the same time?

12:57

And crucially, how do you get global corporations in countries like the US, China, and Canada all working together to help, you know, regulate this issue?

13:07

Well, we need to have more global conversations on AI safety and AI regulation.

13:16

Obviously, it's going to be better for all our companies

13:20

if there is sufficient harmonization of different rules about AI across the world.

13:27

Otherwise, it's gonna be very expensive to cater to all the different aspects.

13:32

So that's, that's one thing. The other thing is we need diversity of views in coming up with solutions,

13:41

and for that, we need international collaboration on AI safety.

13:47

There are two sides to this. One is how do we build safer AI?

13:51

So that's a research problem where we can benefit from working together globally to accelerate that process.

13:58

And the other is regulation.

14:00

And you know, how do we agree on the principles to make sure that we don't leave holes?

14:11

We want to make sure every country at the end of the day forces their companies to follow these safer protocols.

14:19

Professor Bengio, thank you so much for your time and insights.

14:22

My pleasure. Thanks for having me.

14:25

Even as the debate about AI's risks and regulations continues,

14:30

rapid developments in this field hold the potential to change the way we live, work, learn and play.

14:37

Join us next time as I ask Yoshua Bengio about the new frontier of AI and what lies ahead in terms of its impact on society.

14:47

I'm Del Irani, thanks for your company. I'll see you next time on DEEPER LOOK.