The Future Of AI, According To Former Google CEO Eric Schmidt
The key thing that's going on now is we're moving very quickly through the capability ladder steps,
and I think there are roughly three things going on now that are going to profoundly change the world very quickly.
And when I say very quickly, the cycle is roughly a new model every year to 18 months.
The first is basically this question of context window.
And for non-technical people, the context window is the prompt that you ask.
So, you know, "Study John F. Kennedy."
But, in fact, that context window can have a million words in it.
And this year, people are inventing a context window that is infinitely long.
And this is very important because it means that you can take the answer from the system
and feed it in and ask it another question.
So, I want a recipe, let's say I want a recipe to make a drug or something, so you say, what's the first step?
And it says, buy these materials, so then you say, okay, I bought these materials, now what's my next step?
And then it says, buy a mixing pan, and then the next step is, how long do I mix it for?
You see it's a recipe.
That's called chain-of-thought reasoning,
and it generalizes.
We should be able,
in five years,
for example,
to produce thousand-step recipes to solve really important problems in science,
in medicine, in climate change, that sort of thing.
That's the first one.
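As a rough illustration of the loop he's describing, here's a minimal sketch in which each answer is fed back into the context window to get the next step. The `ask` helper is hypothetical, standing in for a call to any large language model; a real system would use an actual API and a better stopping rule.

```python
# Hypothetical sketch: chain the model's answers back into its context.

def ask(context: str) -> str:
    # Placeholder for an LLM call; canned replies so the sketch runs.
    if "I did that" in context:
        return "Done."
    return "Buy these materials."

def chain_of_steps(goal: str, max_steps: int = 10) -> list[str]:
    context = f"Goal: {goal}\nWhat's the first step?"
    steps = []
    for _ in range(max_steps):
        answer = ask(context)              # model proposes the next step
        steps.append(answer)
        if answer.strip().lower().startswith("done"):
            break
        # Feed the answer back in and ask for the step after it.
        context += f"\n{answer}\nOkay, I did that. What's the next step?"
    return steps

print(chain_of_steps("make a cake"))  # ['Buy these materials.', 'Done.']
```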
The second one is agents, and an agent can be understood as a large language model that knows something new or has learned something.
So an example would be read all of chemistry,
learn something about chemistry, have a bunch of hypotheses about chemistry, run some tests in a lab about chemistry, and add that to your agent.
These agents are going to be really powerful,
and it's reasonable to expect that not only will there be a lot of them, and I mean millions, but there'll be the equivalent of GitHub for agents.
There'll be lots and lots of agents running around and available to you.
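A hedged sketch of what such an agent might look like in code: a loop that proposes a hypothesis, tests it, and folds the result back into the agent's knowledge. Every name here (`ask`, `run_experiment`, `Agent`) is a hypothetical stand-in, not any particular framework.

```python
# Hypothetical sketch of an agent: an LLM plus accumulated knowledge.

def ask(prompt: str) -> str:
    return f"hypothesis based on: {prompt[:40]}..."  # stand-in for an LLM call

def run_experiment(hypothesis: str) -> str:
    return f"result of testing: {hypothesis}"        # stand-in for a lab test

class Agent:
    def __init__(self, domain: str):
        self.domain = domain
        self.knowledge: list[str] = []               # what the agent has learned

    def learn_once(self) -> None:
        prompt = f"Known about {self.domain}: {self.knowledge}. Propose a hypothesis."
        hypothesis = ask(prompt)                     # read, then hypothesize
        result = run_experiment(hypothesis)          # run a test
        self.knowledge.append(result)                # add it to the agent

chem = Agent("chemistry")
chem.learn_once()
print(chem.knowledge)
```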
And the third one, which to me is the most profound, and which is already beginning to happen, is text-to-action.
And what that is: write me a piece of software to do something, right?
You just say it.
Now, can you imagine having programmers that actually do what you say you want, and do it 24 hours a day? And, strangely, these systems are good at writing code in languages like Python.
And you've got infinite context window, the ability for agents, and then the ability to do this programming.
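Here's a minimal, hypothetical sketch of that text-to-action pattern: a request goes in, model-written Python comes back, and it's executed. The `ask` stub stands in for a real model; actual systems sandbox generated code rather than calling `exec` directly.

```python
# Hypothetical sketch: text in, model-written Python out, then executed.

def ask(prompt: str) -> str:
    # Placeholder: a real call would return code written by the model.
    return "result = sum(range(1, 101))"

def text_to_action(request: str):
    code = ask(f"Write Python that does the following: {request}")
    namespace: dict = {}
    exec(code, namespace)   # caution: real systems sandbox untrusted code
    return namespace.get("result")

print(text_to_action("add the numbers from 1 to 100"))  # 5050
```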
Now, this is very interesting.
What then happens?
There's a lot of questions here, and now we get into the questions of science fiction.
I'm sure the three things I've named are happening because that work is happening now.
But at some point, these systems will get powerful enough that you'll be able to take the agents and they'll start to work together, right?
So your agent and my agent and her agent and his agent will all combine to solve a new problem.
At some point, people believe that these agents will develop their own language, and that's the point when we don't understand what we're doing.
You know what we should do?
Pull the plug. Literally unplug the computer.
It's really a problem when agents start to communicate and do things in ways that we as humans do not understand.
That's the limit in my view.
Well, there have been many, many predictions.
Clearly, agents and these things will occur in the next few years.
And it won't occur in,
like, there won't be one day where everybody says, oh my God. It's more a question of capabilities every month, every six months, and so forth.
My expectation is we'll be in this new world within five years, not ten.
And the reason is there's so much money.
And there are also so many ways in which people are trying to accomplish this.
You have the big guys,
the three large so-called frontier models,
but you also have a very large number of players who are programming at one level lower,
at much lower cost, and who are iterating very quickly.
Plus, you have a great deal of research.
I think there's every reason to think that some version of what I'm saying will occur within five years.
Well, now, so you say pull the plug.
So two questions.
So how do you pull the plug?
But before you pull the plug,
if you know you're already in chain-of-thought reasoning and you're headed to what you fear, don't you need to regulate at some point so it doesn't get there, or is that beyond the scope of regulation?
Well a group of us have been working very closely with the governments in the West,
and we started talking to the Chinese, which of course is complicated and takes time, about these issues.
And at the moment the governments, with the exception of Europe, which is always kind of slightly complicated,
have been doing the right thing, which is they've set up trust and safety institutes.
They're beginning to learn how to measure things and check things.
And the right approach is for the governments to watch us and make sure we don't get confused on what the goal is, right?
So as long as the companies are well-run Western companies, with shareholders and lawsuits and all that, we'll be fine.
There's a great deal of concern in these Western companies about liability, about doing bad things.
Nobody wants to hurt people.
They don't wake up in the morning saying, let's hurt somebody.
Now of course there's the proliferation problem.
But in terms of the core research, the researchers are trying to be honest.
Okay, so that's the West.
By saying the West, you're implying that the proliferation outside the West is where the danger is.
The bad guys are out there somewhere.
Well, one of the things that we know,
and it's always useful to remind the techno-optimists in my world:
there are evil people, and they will use your tools to hurt people.
My favorite example is that face recognition was not invented to constrain the Uyghurs.
They didn't say, we're going to invent face recognition in order to constrain the minority in China called the Uyghurs.
Right.
But all technology is dual-use.
All of these inventions can be misused, and it's important for the inventors to be honest about that.
So, consider open source, which, for those of you who don't know, is where the source code and, in models, the weights, that is, the numbers that have been calculated, are released to the public.
Those go throughout the world, and who do they go to?
They go to China, of course.
They go to Russia.
They go to Iran.
They go to Belarus.
They go to North Korea.
When I was most recently in China,
essentially all of the work I saw started with open-source models from the West, which were then amplified.
So it sure looks to me like these leading firms,
the ones I'm talking about, the ones that are putting a billion, $10 billion, eventually into this, will be tightly regulated.
I worry that the rest will not.
You can see, I'll give you another example.
Look at this problem of misinformation.
I think it's largely unsolvable, and the reason is that the code to generate misinformation is essentially free.
Right?
Any person, right?
A good person, a bad person, has access to it.
It doesn't cost anything, and it produces very, very good images.
There are regulatory solutions to that.
But the important point is that that cat is out of the bag or whatever metaphor you want.
It's important that these more powerful systems, especially as they get closer to general intelligence,
have some limits on proliferation, and that problem's not yet solved.
To follow up on your point about the funding:
Fei-Fei Li at Stanford argues that's the biggest problem, that there's so much money going into the private sector.
And who is their competition, to look at what the red lines are or whatever? It's the universities, which don't have a lot of money.
So do you really trust these companies to be transparent enough to be regulated by a government that doesn't really know what they're talking about?
The correct answer is always trust but verify,
and the truth is you should trust and verify. And at least in the West, the best way to verify is to use private companies that are
set up as verifiers, because they can employ the right people and so forth.
So in all of our industry conversations it's pretty clear that the way it will really work is you'll end up with AI checking AI.
It's too hard.
Think about it.
You build a new model, it's been trained on new data, you've worked really hard on it.
How do you know what it knows?
Now, you can ask it all the previous questions, but what if it has discovered something completely new and you don't know to ask?
And the systems can't regurgitate everything they know.
You have to ask them chunk by chunk by chunk.
So it makes perfect sense that an AI would be the only way to police that.
People are working on that.
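A sketch of what that "AI checking AI" loop could look like, under obvious assumptions: one model generates probe questions, the new model answers, and a checker grades each answer. All three functions here are hypothetical stand-ins, not a real evaluation harness.

```python
# Hypothetical sketch: audit a new model chunk by chunk with other models.

def probe_model(topic: str) -> str:
    return f"What do you know about {topic}?"     # generates test questions

def new_model(question: str) -> str:
    return f"Answer to: {question}"               # the model being audited

def checker_model(question: str, answer: str) -> bool:
    return "synthesize" not in answer.lower()     # flags worrying answers

def audit(topics: list[str]) -> list[str]:
    flagged = []
    for topic in topics:                          # ask chunk by chunk by chunk
        question = probe_model(topic)
        answer = new_model(question)
        if not checker_model(question, answer):
            flagged.append(topic)                 # escalate to human review
    return flagged

print(audit(["chemistry", "biology"]))            # [] in this toy version
```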
With Fei-Fei's argument, she's completely correct.
We have the rich private industry companies, and we have the poor universities who have incredible talent.
It should be a major national priority in all of the Western countries to get research funding for the hardware.
As a physicist 50 years ago, you had to move to where the cyclotrons were, because they were really hard to build and expensive.
And by the way, they still are really hard and expensive.
You need to be near a cyclotron to do your work as a physicist.
We never had that in software.
Our stuff was capital cheap, not capital expensive.
The arrival of heavy-duty training in our industry is a huge change.
And what's happening is that companies are figuring this out and the really rich companies,
I'm thinking of Microsoft and Google as an example,
are planning to spend billions of dollars because they have the cash, they have big businesses, the money's coming in.
That's good.
But where does the innovation come from?
The universities don't have that kind of hardware, and yet they need access to it.
On Kissinger's last trip to China, you went with him, and he had a discussion with Xi Jinping on exactly this set of issues.
Your idea was to set up a high-level group to discuss the potential and catastrophic possibilities of AI.
Where do the Chinese fit in on this?
On the one hand,
I've heard you say,
and not only you,
that we need to go all out to compete with the Chinese,
for some of the reasons you just said,
because there could be bad players or bad intentions, but where is it appropriate to cooperate and why?
Well, first place, the Chinese should be pretty worried about generative AI.
And the reason is that they don't have free speech.
And so what do you do when the system generates something that's not permitted in their country?
Who do you jail?
Right.
Right: the computer, the user, the developer, the training data? It's not at all obvious.
And the Chinese regulators so far have been relatively intelligent about this,
but it's clear that the spread of these things will be highly restricted in China, because it fundamentally threatens their information monopoly.
That makes sense.
So in our conversations with China,
both when Dr. Kissinger and I were together,
and, since he unfortunately passed away,
in the subsequent meetings that were set up as a result of his inspiration, everyone agrees that there's a problem.
But we're, at the moment with China, we're speaking in generalities.
There is not a proposal in front of either side that's actionable, and that's okay, because it's complicated.
And a lot of this is because it's actually good to take your time to explain what you view as the problem.
So many Western computer scientists are visiting with their Chinese counterparts and trying to say,
if you allow this stuff to proliferate, you could end up with a terrorist act, right?
The misuse of these for biological weapons,
the misuse of these for cyber attacks.
The long-term worry is much more existential,
The long-term worry is much more existential,
but at the moment, I think the Chinese conversations are largely very constrained by concerns about bio-threats and cyber-threats.
The long-term threat goes something like this.
When I talk about AI today, I talk about it as directed by human beings.
So, you or I give it, at least in theory, a command, and it may be a very long
command, and it may be recursive in a sense, but it starts with a human judgment.
There is something technically called recursive self-improvement,
where the model actually runs on its own and just learns and gets smarter and smarter.
When that occurs, or when heterogeneous agent-to-agent interaction occurs, we have a very different set of threats,
which we're not ready to talk to anybody about, because we don't understand them.
But they're coming.
Do you see, I guess, I'm trying to think about what a kind of dialogue with the Chinese could mean.
Would it be something like nuclear proliferation?
I mean, where, if they understand the existential threat, you start at that level, maybe an IAEA type of thing for proliferation?
Do you think that's...
It's going to be very difficult to get any actual treaties with China.
What I'm engaged with is called a track two dialogue, which means that it's informal.
It's educational, it's interesting, and it's very hard to predict, by the time we get to real negotiations between the US and China,
what the political situation will be, what the threat situation will be.
A simple requirement would be that if you're going to do training for something that's completely new,
you have to tell the other side that you're doing it so that you don't surprise them.
So it's like the open skies during the Cold War.
So an example would be a no surprise rule.
When a missile is launched, anywhere in the world, all the countries acknowledge that they know it's coming.
That way they don't jump to a conclusion and think it's targeted at them.
That strikes me as a big deal.
Furthermore, that if you're doing powerful training, there needs to be some agreements around safety.
In biology, there's a broadly accepted set of levels, BSL-1 to BSL-4, for biosafety containment, which makes perfect sense, because these things are dangerous.
Eventually, there will be a small number of extremely powerful computers, and I want you to think about them this way.
They'll be on an army base,
and they'll be powered by some nuclear power source on the army base,
and they'll be surrounded by even more barbed wire and machine guns, because their capability for invention,
for power, and so forth exceeds what we want,
as a nation, to give to our own citizens without permission, or to our competitors.
It makes sense to me that there will be a few of those and there'll be a lot of other systems that are more broadly available.
But you're saying that you would notify the Chinese that those systems exist.
Again, it's possible that that would be an answer.
And vice versa.
And vice versa.
All of these things are mutual.
But we want to avoid a situation where a runaway agent in China ultimately gets access
to a weapon and launches it foolishly, thinking that that's what it's supposed to do. Because remember, these are not humans; they don't necessarily understand the consequences.
These are all based on a simple principle of predicting the next word.
So we're not talking about high intelligence here.
We're surely not talking about the kind of emotional understanding of history that humans have, or human values.
We're dealing with a non-human intelligence that does not have the benefit of human experience.
What bounds do you put on it?
And maybe we can come to some agreements on what those are.
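To make that "predicting the next word" principle concrete, here's a toy sketch of the autoregressive loop underneath these systems. The `next_token_probs` function is a hypothetical stand-in for a trained model's output distribution.

```python
# Toy sketch of next-word prediction: repeatedly pick the likeliest token.

def next_token_probs(tokens: list[str]) -> dict[str, float]:
    # Stand-in: a real model scores every word in its vocabulary,
    # conditioned on everything generated so far.
    if tokens[-1] == "the":
        return {"the": 0.2, "end": 0.5, ".": 0.3}
    return {"the": 0.6, "end": 0.1, ".": 0.3}

def generate(prompt: list[str], n: int = 4) -> list[str]:
    tokens = list(prompt)
    for _ in range(n):
        probs = next_token_probs(tokens)
        tokens.append(max(probs, key=probs.get))  # greedy: take the argmax
    return tokens

print(generate(["in"]))  # ['in', 'the', 'end', 'the', 'end']
```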
Are they moving as exponentially as we are in the West, with the billions going into generative AI?
Is China trying to have the commensurate billions coming in from government or companies?
It's not at the same level in China, for reasons I don't fully understand.
My estimate, having now reviewed it at some length, is that they're about two years behind.
Two years is not very much, by the way, but they're definitely behind.
There are at least four companies that are attempting to do large-scale model training similar to what I've been talking about,
and they're the obvious big tech companies in China.
They're hobbled because they don't have access to the very best hardware,
which is restricted from export by the Trump and now Biden administrations.
Those restrictions are likely to get tougher, not easier.
And as the NVIDIA chips and their competitors' chips go up in value, China will be struggling to stay relevant, right?
Because their stuff won't move at the same pace.
The Chinese have...
Well, the chips are important because they enable this kind of learning.
It's always possible to do it with slower chips, you just need more of them.
And it's effectively a cost tax for Chinese development.
That's the way to think about it.
And is it ultimately dispositive?
Does it mean that China can't get there?
No, but it makes it harder and it means that it takes them longer to do so.
And we should do that as the West?
Well, the West has agreed to do it.
I think it's fine.
It's a fine strategy.
I'm much more worried about the proliferation of open source.
And I'm sure the Chinese would have the same concern.
So again,
one of the kinds of things that we'll be talking to them about is: do you understand that these things can be misused against your government
as well as ours?
So the scenario is this: with open source, the builders basically add something called guardrails, and they fine-tune, using a technology called RLHF, to eliminate some of the bad answers.
There's plenty of evidence that, if I gave you all of the weights,
all of that stuff, and so forth, it would be relatively easy for you to back them out
and see the raw power of the model, and that's a great concern.
That problem's not been solved yet.
Yeah, reverse engineering. And that's not been solved yet.