Securing the Future with AI: Eric Schmidt / Chair of the Special Competitive Studies Project, Former CEO and Chairman of Google

While advances in AI have created many societal benefits, some experts warn that AI's unchecked power could threaten human existence. And with the costs of AI innovation being so high, there are concerns that control of this technology will be in the hands of a few. What guardrails do we need to secure the future of AI, and how will it shape geopolitical competition? Former CEO and Chairman of Google, Eric Schmidt, weighs in on the discussion.

Del Irani
DEEPER LOOK Host

Del Irani (left), Eric Schmidt (right)

Transcript

00:12

Hello and welcome to DEEPER LOOK.

00:14

I'm Del Irani, it's great to have your company.

00:17

In March this year, a German artist won a prestigious international photography competition for this photograph -

00:25

but he rejected the award - because this photo had been generated using Artificial Intelligence.

00:32

The photo has sparked debate about AI's place in the art world...

00:36

But also raised other questions about AI's ability to transform our world.

00:42

So, how do we develop guardrails for this fast-moving technology

00:46

and create a sensible balance between regulation and innovation?

00:52

Joining me once again to talk more about this is Eric Schmidt, the former CEO and chairman of Google.

00:57

Dr. Schmidt, welcome back to the program.

00:59

Great to have you with us.

01:00

Thank you very much.

01:02

So, Dr. Schmidt, there are genuine concerns that AI could eventually outsmart us,

01:09

that it could outwit us. Now, these concerns have been put forward by Geoffrey Hinton,

01:13

a former Google executive, known as the "godfather of AI."

01:17

How concerned should we be about these comments?

01:22

Well, Professor Hinton and his company were acquired by Google years ago,

01:28

and is a truly brilliant and historic figure in the field,

01:31

because he did a lot of the original math that was used to make these systems happen.

01:38

The reason I disagree with him is that he omitted the incredible benefits of his discovery,

01:45

including the ability to make literally everybody on the planet better educated, healthier, and smarter.

01:52

There's every reason to think that the platforms that he helped create

01:56

will accelerate technological progress, which I view as a generally good thing.

02:01

Of course, there are issues, there always are.

02:04

But that's not a reason not to do it.

02:08

I mean, he said that AI could pose a threat to human existence.

02:12

That's a pretty big statement.

02:14

The scenarios that he's talking about are called extinction events.

02:18

And last year, there was a survey of computer scientists, who are not necessarily good at this kind of forecasting,

02:22

but when asked, a majority of the computer scientists, more than 50%,

02:27

said that there was a 10% or greater chance of an extinction event.

02:31

In that scenario, you can have what is called recursive self-improvement,

02:36

where it gets smarter and smarter and smarter.

02:39

And because it could evolve, and this is "could," faster than humans,

02:43

you have scenarios that sure look like the Overlord scenario.

02:48

But I'm assuming that we don't give autonomy to these systems.

02:53

As they become more capable of autonomous thinking, we're going to put guardrails on them.

02:59

We're going to say, go here, and don't go there.

03:04

Now, you could imagine in a future movie, that the system discovered how to get rid of its own guardrails,

03:09

and then it went off and, you know, did something, right?

03:12

That's a great movie.

03:13

But we're not that stupid as a society.

03:17

We're not trying to harm other people.

03:19

And so, one way to say this is: there is always a danger of someone evil, like Osama bin Laden,

03:27

taking this without guardrails and doing real damage.

03:31

That's what we have to worry about.

03:34

The rest of it, this race to intelligence, is so enormously powerful

03:40

that I support it, and I want it to go even faster.

03:44

Let's start by talking about the US.

03:45

The White House has announced a series of measures outlining its plans for AI regulation.

03:53

What do you make of these measures?

03:55

So, the White House is getting geared up.

03:59

And, I and others have told them that it's really important to have this done

04:05

in conjunction with the scientists who understand what the technology can do.

04:09

Many people arrive at this with preconceptions, guesses about things which it can't do,

04:14

but it can, or things that it can do.

04:16

They don't really understand.

04:17

But the scientists can tell you, this is where we think the direction is going,

04:21

these are the things to worry about.

04:23

I can give you an example.

04:25

I've been named to a bioterror taskforce because I'm worried

04:28

that these technologies will be used to create bad pathogens.

04:32

Now, it's more than just the computer: you actually have to synthesize the pathogen,

04:36

and it actually has to be effective. But that's a really bad outcome.

04:39

So that's a good example where the government is getting ahead of the potential threat.

04:44

And I think that's the way to do it.

04:45

If you look at Europe, just to make it clear,

04:48

Europe is trying to regulate a technology which they don't understand,

04:51

and they don't have companies that are doing it.

04:54

And that's a guarantee of slowing everything down and not helping.

04:59

In China, they have a rule that's quite broad that says that anything

05:04

that is seditious is not allowed, but they don't define seditious.

05:08

And in their system, which is obviously not a democracy,

05:12

it's sort of whatever they decide is evil, is evil.

05:15

That's not good either.

05:16

So, the ideal scenario is this, and I hope Japan joins America in this collectively,

05:23

along with the other leading democratic technology countries.

05:27

I hope that we can build a consensus which will allow this technology to advance very quickly, while people are watching.

05:34

There are proposals; I'll give you some examples that have not been approved in any way.

05:39

We should know who's on the platforms, we should know who's doing the training,

05:44

we should agree on some reasonably obvious, shared safety measures.

05:51

The cost of developing AI is quite high right now.

05:56

Do you think that's going to mean that power will be concentrated in the hands of a few? And,

06:02

again, is there regulation that could perhaps help overcome this concentration of power?

06:09

My instinct a year ago, was that the winners would be the big companies,

06:14

because the training costs were so large.

06:17

I was quite sure I was right.

06:19

Today, small companies are taking the knowledge that's been trained by the big companies,

06:26

and putting it into models which cost 10,000 times less, 100,000 times less,

06:33

and getting to 90% of the effectiveness of the big ones.

06:37

So, you tell me, who's going to be the winner?

06:39

Is it going to be that last 10% that costs 10,000 times more money?

06:44

Or will it be the "good enough" that costs so much less? We honestly don't know.

06:51

So, if somebody comes up to you, and I encounter this all the time,

06:54

people saying, "Do you know that Microsoft and Google and all these other companies

06:58

are just going to get bigger and stronger?" I'm not sure.

07:02

I think they're going to do well.

07:04

But I also think that this diffusion of knowledge,

07:09

from these large groups to these smaller groups is happening so fast,

07:15

that it's going to create huge new players.

07:18

And who knows, maybe they'll topple all the big companies as the big companies toppled their predecessors.

07:24

That's why it's so interesting.

07:25

When these waves come along, it completely reorganizes the landscape.

07:31

It really does!

07:40

How will technological innovation in AI

07:45

define the future of geopolitical competition?

07:48

So, you know, the US is currently ahead. Could China catch up?

07:52

I mean, how do you see it affecting the global balance that we have right now?

07:56

So, the sanctions against China,

07:59

especially with respect to the high-performance chips, have really slowed them down.

08:04

That was the intent of the Trump and Biden policies.

08:08

How long will it take for China to catch back up?

08:11

I estimate at least a few years, but maybe much longer.

08:15

And while they're behind, the front runner gets better.

08:20

If the front runner stops innovating, then they can catch up.

08:24

But if the US companies, with their partners in Japan, Taiwan, South Korea, and the Netherlands,

08:30

keep moving forward, that gap may persist.

08:34

I think that's a reasonable read of this.

08:36

Because of the horrific war that Russia started in Ukraine,

08:41

it's hard to see how Russia is going to be a significant player,

08:45

especially because of the sanctions which cover all of these technologies.

08:48

So, the game is likely to be the democratic T10, T12

08:54

countries, as they're called, which include Japan.

08:58

It's clear to me that it will include India, in some form.

09:02

India is, in my view, coming, but doesn't realize it yet.

09:06

It's on its way.

09:08

But it doesn't understand that what's ahead of it is so interesting and so powerful.

09:14

But India has the scale and the money and the languages and the history to do this, if it chooses to.

09:21

The countries that innovate ahead of everyone else are the natural winners.

09:27

US, some European countries, Japan, Taiwan, South Korea, no question, maybe Australia.

09:34

The losers are probably the countries that don't have a workforce and an educational system

09:42

that is capable of educating enough people to participate in and use this technology.

09:49

So, I'll give you an example.

09:52

Africa is incredibly important to humanity.

09:54

It's an incredibly important continent.

09:56

It lacks the universities, the tradition and so forth, for whatever reason.

10:02

And that means it will be hard for them to keep up.

10:05

Unless they rapidly invest in the tools and technologies to do this.

10:11

There's plenty of talent.

10:12

It's not a talent problem.

10:14

It's a focus problem.

10:16

To what extent are you concerned, or is it a legitimate concern,

10:21

that these chatbots and generative AI could be used to spread disinformation?

10:26

I'm extremely worried about this.

10:29

In 2016, we saw Russian interference in the US elections.

10:33

In other countries, we've seen Russian interference in their democratic processes.

10:37

Obviously, Russia is not in favor of democracy and is trying to make it less workable.

10:42

The countries that are democracies are going to have

10:45

to confront the fact that there are evil actors who are creating false information,

10:51

and flooding the zone, flooding the market.

10:55

Each country will come up with ways to regulate that.

10:58

Sometimes the regulation will work.

11:00

I have a concern. And I'll just say it very bluntly.

11:03

One of the things that I learned when running YouTube way back when,

11:06

was that when you see a video that you're told is false and you watch it, it still affects you.

11:18

What you have seen cannot be unseen.

11:22

From birth, you were taught to believe what your ears and your eyes told you.

11:29

Well, now I'm telling you don't believe it.

11:31

That's a hard thing for adults, that's very hard.

11:35

And we're going to struggle with this, I think.

11:37

It's very important, now, before this gets too bad,

11:42

not just to warn people about it, but to figure out how to address it.

11:45

I'll give you an example.

11:46

I was a very good friend of Steve Jobs.

11:48

And there was an audio recording of Steve Jobs's voice in conversation with a person

11:54

named Joe Rogan here in America, made in the present day, 10 years after his death.

12:01

And I was listening to Steve talk about things today and it just sent a shiver down my spine.

12:08

It was just one of those emotional reactions.

12:12

Now I'm not suggesting banning them, but we're going to have to label them.

12:16

We're going to have to know where the information came from.

12:19

And that will help us understand how to deal with the worst effects.

12:23

What I learned about misinformation when I was at Google,

12:26

was that there are plenty of groups that have lots of time,

12:30

and lots of money to spend to misinform you.

12:33

Your opponents, the Russians, for example, had lots of time and lots of money to misinform us.

12:38

But so do corporations.

12:40

Corporations that are trying to sell something might misinform you. So might your political opponents.

12:45

And then there are people who are just, as I said, nihilists.

12:49

They're actually just, they're just having fun harming people.

12:53

You've written about the vast unknown powers and capabilities of AI.

12:58

Can you explain why you think we just really don't know how powerful this technology is,

13:05

and what this could mean for the future?

13:07

I think we need to be humble about everything we say about this technology.

13:13

This is a new technology that's just arrived at scale in the last year,

13:18

it appears to have human-like capabilities, but it's not human.

13:23

It's not conscious.

13:25

It appears to have polymathic capabilities of integrating knowledge across industries.

13:30

We don't know how that scales, we don't know where that ends.

13:34

Eventually, we will exhaust what it can do.

13:37

But we're not anywhere near that.

13:39

A case in point: today, we're seeing scientific discoveries where the system invents something,

13:46

and then we figure out what it invented.

13:49

That's not how science is supposed to work.

13:51

We're supposed to do it step by step.

13:54

These are game changers.

13:56

They are accelerators of human progress.

13:59

But we don't fully understand the arc of it.

14:03

We don't know when it begins to go stale.

14:06

We don't know when it begins to be less relevant.

14:09

We just don't know.

14:10

It's an incredibly exciting time to be doing this.

14:14

And we're just at the beginning of it.

14:18

Thank you so much for joining us on the program.

14:21

No, thank you for spending so much time on this, and I look forward to seeing you soon.

14:25

There are still real differences in how nations are approaching AI regulation.

14:30

But one thing is clear:

14:32

the decisions made by governments and other influential actors in the coming years

14:37

will be crucial in determining how generative AI is used.

14:41

The hope is that we can harness the full potential of this powerful technology while minimizing its threats.

14:49

I'm Del Irani, thanks for your company.

14:51

I'll see you next time.