How to Ensure AI Benefits All Society: Yoshua Bengio / Scientific Director, Mila – Quebec AI Institute

The development of AI systems that can learn and reason like humans is advancing. However, as big tech companies increase their influence, concerns over the concentration of power are growing. How will AI transform our future, and how important is global cooperation with countries like China in ensuring that AI benefits all societies? Yoshua Bengio, founder and scientific director of the Mila – Quebec Artificial Intelligence Institute, offers his insights.

Del Irani
DEEPER LOOK Host

Yoshua Bengio
Scientific Director, Mila – Quebec AI Institute

Transcript

00:12

Hello and welcome to DEEPER LOOK coming to you once again from Montreal, Canada.

00:17

I'm Del Irani, it's great to have your company.

00:20

As AI integrates further into our daily lives, the development of AI systems that think and learn like humans is rapidly accelerating.

00:29

However, as big tech companies increase their influence in this area, concerns over the concentration of power are rising.

00:37

So, what groundbreaking discoveries will drive the next era of AI?

00:42

And how important is global cooperation with countries like China, to ensure that AI benefits society as a whole?

00:51

Joining us once again to answer these questions is Yoshua Bengio,

00:55

the founder and scientific director of a leading AI research institute, the Mila – Quebec Artificial Intelligence Institute.

01:02

He is often referred to as one of the godfathers of AI, and as a laureate of the 2018 A.M. Turing Award,

01:08

known as the Nobel Prize of Computing, which he received alongside Geoffrey Hinton and Yann LeCun.

01:14

Welcome back to the program, Professor Bengio, great to have you with us.

01:17

Thanks for having me again.

01:18

You've previously mentioned that the rapid development of language in AI took even you by surprise.

01:24

And you know, you've also said that experts in this field may not be able to fully understand or explain exactly what's happening in AI.

01:32

If not you, then who? I mean, is there a chance that we may never fully understand how AI works?

01:38

It is possible. One angle that helps explain why it's difficult is that we don't understand what is going on in your brain.

01:49

It has 80 billion neurons.

01:52

And even if we understood the principles by which those neurons do their computation and adapt, which is what allows you to learn,

02:02

we may still not understand how they come up with their answers.

02:07

So of course, researchers are trying to study not just human brains,

02:13

but also what's going on in these really large neural nets, artificial neural nets.

02:19

But maybe there is no answer there.

02:22

But how do we trust AI, if we don't quite know how it works?

02:27

Well, that's a really important question.

02:30

I'm trying to look at this question from a mathematical perspective.

02:37

Is there anything we could say, even though we don't understand the details of what is going on,

02:43

that would allow us to prove, with very high confidence, that the AI isn't going to do something bad?

02:53

Think about many other areas of technology, like how we build a bridge or a nuclear plant.

03:00

Engineers and scientists are working out models that allow us to estimate the probability that something bad will happen.

03:08

And then, of course, make sure that that probability is very, very, very small.

03:12

So... this is something to explore.
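To make this concrete: in safety engineering, one standard way to bound a failure probability is to run many independent trials and compute a statistical upper bound on how often the system could fail. Below is a minimal sketch of that arithmetic in Python, purely illustrative of the general engineering practice Bengio describes rather than any specific AI method. It uses the classic "rule of three": if a system passes n independent trials with zero failures, its true failure rate is below roughly 3/n with 95% confidence.

```python
def failure_rate_upper_bound(trials: int, confidence: float = 0.95) -> float:
    """Upper bound on the failure probability after observing zero failures.

    Exact (Clopper-Pearson) bound for the zero-failure case:
        p <= 1 - (1 - confidence) ** (1 / trials),
    which is roughly 3 / trials at 95% confidence (the "rule of three").
    A sketch of standard reliability arithmetic, not an AI-specific method.
    """
    return 1.0 - (1.0 - confidence) ** (1.0 / trials)

# Example: 10,000 independent test runs, no failures observed.
n = 10_000
print(f"95% upper bound on failure rate: {failure_rate_upper_bound(n):.2e}")  # ~3.00e-04
print(f"rule-of-three approximation:     {3 / n:.2e}")                        # 3.00e-04
```

The catch, which is arguably Bengio's point, is that this arithmetic assumes independent, representative trials, an assumption that is very hard to justify for a general-purpose AI system; hence his interest in stronger mathematical guarantees.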

03:16

In your opinion, how significant are these developments, that AI is now being combined with robotics;

03:22

that AI is being taken out of the digital world and put into a physical body, given, you know, arms and legs.

03:30

I mean, how big of a deal is this?

03:32

Currently, advances in robotics lag quite a bit behind the advances we've had with language,

03:42

you know, these large language models and bots, chatbots, and so on.

03:47

There's an issue with robotics, which is that we don't have the kind of huge datasets that are... you know, in a uniform language.

03:58

I mean, human languages are different, but underlying all of them there's, you know, human thought.

04:04

And so, we can... we've been able to gather huge quantities of data to train these large language model neural nets.

04:11

But we don't have the equivalent for robotics; we don't have trillions of data points that are all of the same type.

04:19

But it might change; for example, with autonomous vehicles... you know, there are more and more cars that are recording everything you're doing.

04:29

And that's going to help to train the AI systems that control the car; I mean, a car is like a special kind of robot.

04:36

So things could change quickly.

04:39

It's hard to say, but the implications of having AI systems that can control many robots in the world are that

04:47

the mistakes that could be made and the potential for loss of control could have much more concrete and dangerous consequences.

05:00

So it's not just something in computers, you know, maybe turning off our communication infrastructure;

05:07

it's, you know, something that could threaten us physically, if it's not behaving properly.

05:13

At the same time, it kind of goes back to those risks we were talking about earlier,

05:17

the loss of control, where they could learn more and become smarter than us.

05:21

Yeah, and maybe to, like, increase the scare level,

05:23

even if there were a rogue AI system, say a year from now, before robotics really achieves its goals,

05:32

it couldn't completely get rid of humans, because it would still need humans to provide the energy, the electricity, the parts for the computers.

05:40

But if it has robots that can do everything that humans can do, then, you know, the AI will not need humans anymore.

05:46

And that's a bad place to be.

05:48

That is a bad thing! So...

05:50

let's talk about the fact that big tech is leading the way when it comes to these developments in these technologies.

05:57

What are some of the benefits and downsides of that?

06:01

Well, one benefit is that we now know that with a lot of money and a lot of computational resources,

06:08

we can build systems that are much smarter than we otherwise thought was possible,

06:15

using basically the same old algorithms we had a few years ago,

06:18

but just scaling up the computation and the amount of data, which is a matter of engineering and money.

06:25

We're getting these systems that are really very strong in many ways.
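The scaling he describes has a well-known empirical shape. As an illustration only (the interview doesn't cite it), the "Chinchilla" scaling law of Hoffmann et al. (2022) models a language model's training loss as a function of parameter count N and training tokens D; the constants below are the approximate published fits and should be treated as illustrative:

```python
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Approximate Chinchilla scaling law: L(N, D) = E + A / N**alpha + B / D**beta.

    Constants are rough fits reported by Hoffmann et al. (2022); illustrative only.
    """
    E, A, B = 1.69, 406.4, 410.7  # irreducible loss and fitted coefficients
    alpha, beta = 0.34, 0.28      # fitted exponents for parameters and data
    return E + A / n_params**alpha + B / n_tokens**beta

# Same algorithm throughout; only the parameter count and token count grow.
for n, d in [(1e9, 2e10), (7e10, 1.4e12), (5e11, 1e13)]:
    print(f"N={n:.0e} params, D={d:.0e} tokens -> predicted loss ~ {chinchilla_loss(n, d):.2f}")
```

The formula captures his point: holding the algorithm fixed, loss falls predictably as parameters and data grow, so the compute budget, in other words money, became the limiting factor.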

06:31

So, I think if it had been just academia, we never would have had the money to do these experiments.

06:37

But of course, there are also concerns; there are very few companies...

06:41

I mean, because it's so expensive, very few companies can really afford to build those larger systems.

06:46

So, there are really issues of, as these systems become more powerful,

06:52

who decides what they're going to be doing, for what purposes, and, you know, in whose interests?

06:57

So, given what you said, the fact that the bigger companies, the ones with the money, are the ones that are going to drive this agenda;

07:05

they're deciding what's developed, and if they do develop the powerful stuff, essentially, they'll be in control of it.

07:11

How do we prevent the concentration of power in the hands of a few?

07:14

And what impact could it have, if we're not able to do that?

07:19

So, what I've been thinking about is what political scientists call "democratic governance."

07:25

So where we should go, if there is no choice but to have concentration in a few organizations,

07:33

is to make sure that the decisions taken by these organizations are not just motivated by competition between companies and maximizing profit,

07:42

but are actually driven mostly by the common public interest.

07:48

So, think about the boards of these organizations having civil society,

07:54

you know, of course, regulators, independent experts, even representatives of the international community, who are able to say:

08:05

"No, we don't do this.

08:06

We're going to try to make sure the benefits of AI are shared across the planet, across everyone.

08:13

We want to make sure that the AI we are developing doesn't fall into bad hands,

08:18

we want to make sure that AI is used in ways that are beneficial for society."

08:23

It's not always the case that what is profitable is beneficial for society, and so we need to arbitrate that somehow.

08:36

You were recently in Beijing, in March of this year.

08:39

And you were involved in the international dialogue on AI safety with your Chinese counterparts.

08:43

How did they receive you? How open were they to conversations about AI safety?

08:49

So, they received me very well.

08:51

We were in this beautiful ancient imperial palace, the Aman Summer Palace.

08:59

And the conversations were incredibly frank, and warm, and collaborative.

09:09

Because if we mess up with AI, with misuse or the loss of control, it's a global problem.

09:18

It's... everyone would suffer if we don't find a way to coordinate to minimize those risks.

09:24

So it's like we're all in the same boat here.

09:28

It's a little bit like what happened after the Second World War,

09:31

when the Russians and the Americans understood that, well... Armageddon would be bad for everyone.

09:39

It doesn't matter what political system you believe in.

09:41

If we all die, or you know, if civilization crumbles... no one wants that.

09:46

Yeah? So that... I think that is actually the silver lining, in some ways.

09:52

Those risks, if governments understand them, will force us to work together to find solutions.

10:03

And there's no solution, except together.

10:06

Do you... I mean, a couple of things:

10:08

first of all, how big a player is China, when it comes to AI development?

10:13

And how important is it to get their cooperation on safety and regulations?

10:18

Well, it's obviously the number two player.

10:20

In my scientific field of machine learning, I look at how many papers come from China,

10:27

or, you know, how many Chinese students are working on these things.

10:31

So, China has a huge footprint in AI research.

10:36

So, clearly we can't find a solution to the AI risks without a discussion involving China.

10:44

But of course, it's going to have to be more than that.

10:47

It's going to have to be multilateral for many good reasons.

10:51

And it's going to be difficult because, if you think about it, it's similar to the... like, the nuclear proliferation conversation,

10:59

every country is going to have to let go of some sovereignty, so that we are all safer.

11:06

It's the same for climate change negotiations.

11:08

And for that, it is really important to increase the understanding and the awareness,

11:14

and that trip to China, I had the impression, really helped to clarify the risks.

11:20

And we focused on what we call "red lines," that is, things that we really, really don't want any AI to cross.

11:30

Governments should make sure their regulation is going to require companies to, you know, have AIs that do not cross those "red lines."

11:42

If we strike the right balance between the development of AI and cooperation among the players,

11:48

what can be achieved in terms of the applications, particularly in the field of health and science? What's possible?

11:55

So, first of all, right now, health and science aren't where most of the AI investment is going.

12:01

Most of the investment is going into what companies call "productivity."

12:06

In other words, replacing human jobs with machines doing the work, you know, cheaper and faster, and potentially even better.

12:12

And, you know, that has economic value.

12:15

But I think what would have much greater value for society and our well-being

12:20

are advances in science driven with the help of AI, and this is something that has started in the last few years.

12:31

Drug discovery is something that is increasingly fueled by AI;

12:34

all the major pharmaceutical companies are investing in AI, and we've seen the first few drugs that have been discovered by AI.

12:42

So the impact on health is something that could really transform our societies.

12:48

Another example that, you know, many people care about is the environment.

12:53

We don't understand the processes of climate change well enough, and the calculations that are needed are very expensive.

12:59

And we're seeing now more and more AI systems that help to accelerate those calculations,

13:05

even starting to potentially replace traditional weather forecasting with AI.

13:09

So that you can get finer and, you know, better predictions that help...

13:15

for example, with solar power plants and renewables in general.

13:22

So there are so many ways in which science could be accelerated using AI.

13:28

And I think, in the longer term, it would have much more of a beneficial effect on society

13:34

than the shorter-term... oh, let's get rid of people working in jobs and, you know, make it cheaper.

13:41

Which could also have value, but we have to be very careful here.

13:45

Okay, how optimistic are you that AI will be used for good and not evil?

13:52

I always answer that question in the same way:

13:55

It all depends on us.

13:58

There are many possible futures.

14:00

That's why we're having this discussion.

14:03

And the real question isn't, is it going to be good, or is it going to be bad? Am I optimistic or pessimistic?

14:08

The real question is, what can each of us do?

14:11

Even a little bit...

14:14

And by the way, if more people cared about this question, governments would move a lot faster, and we would all win.

14:22

Professor Yoshua Bengio, so great to talk to you! Thank you again for your time.

14:26

Oh, thank you for all the great questions.

14:29

Whether we like it or not, AI has already begun to permeate many aspects of our daily lives.

14:36

When it comes to safeguarding our future against this powerful technology,

14:40

we all have a role to play by learning not just about its risks, but also about its endless possibilities.

14:47

I'm Del Irani. Thanks for your company.

14:49

I'll see you next time on DEEPER LOOK.