A lot of people are really scared about AI.
I think they're scared about the wrong things.
Most people who talk about AI risk are worried about AI taking over the universe.
There's an example from Nick Bostrom that sticks in a lot of people's heads.
This is an AI system that is supposed to be rewarded for making paperclips.
And this is all well and good for a little while, and then it runs out of the metals that it needs.
Eventually it starts turning people into paperclips, because there's a little bit of metal in people and there's no more metal to get.
So this is our kind of sorcerer's apprentice terror that I think a lot of people are living in.
I don't think it's a realistic terror, for a couple of reasons.
First of all, it's certainly not realistic right now.
We don't have machines that are resourceful enough to know how to make paperclips unless you carefully show them every detail of the process.
They're not innovators right now.
So this is a long way away, if ever.
It also assumes that the machines that could do this are so dumb that they don't understand anything else.
But that doesn't actually make sense.
Like, if you were smart enough not only to want to collect metal from human beings but to chase the human beings down, then you actually have a lot of common sense, a lot of understanding of the world.
If you had some common sense and a basic law that says don't do harm to humans, which Asimov thought of in the '40s, then I think that you could actually preclude these kinds of things.
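The idea of pairing reward maximization with an Asimov-style hard constraint can be sketched in a few lines. This is a toy illustration, not a real safety mechanism: the action names, the reward values, and the `harms_humans` predicate are all made up for the example.

```python
def choose_action(actions, reward, harms_humans):
    """Pick the highest-reward action, but only from the actions that
    pass a hard safety constraint (an Asimov-style 'do no harm' rule).
    `reward` and `harms_humans` are hypothetical callables."""
    safe = [a for a in actions if not harms_humans(a)]
    if not safe:
        return None  # refuse to act rather than violate the constraint
    return max(safe, key=reward)

# Toy paperclip-maximizer: mining scrap is safe; "harvesting" people is
# the highest-reward action but is filtered out before maximization.
actions = ["mine_scrap", "recycle_cans", "harvest_humans"]
reward = {"mine_scrap": 5, "recycle_cans": 3, "harvest_humans": 10}.get
best = choose_action(actions, reward, lambda a: a == "harvest_humans")
# best is "mine_scrap", even though "harvest_humans" scores higher
```

The point of the sketch is that the constraint is applied before optimization, not traded off against reward, which is what makes it a "basic law" rather than just another term in the objective.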
So, we need a little bit of legislation.
We need a lot of common sense in the machines and some basic values in the machines.
But once we do that, I think we'll be OK.
And I don't think we're going to get to machines that are so resourceful that they could even contemplate these kinds of scenarios until we have all that stuff built in.
So I don't think that's really going to happen.
And the other side of this is that machines have never shown any interest in doing anything like that.
You think about the game of Go; that's a game of taking territory.
In 1970, no machine could play Go at all.
Now machines can play Go better than the best human.
So they're really good at taking territory on the board.
And in that time, the increase in their desire to take actual territory on the actual planet is zero.
That hasn't changed at all. They're just not interested in us.
And so I think these things are just science fiction fantasies.
On the other hand, I think there's something to be worried about, which is that current AI is lousy.
And thinking about people in the White House, the issue is not how bright somebody is, it's how much power they have.
So you could be extremely bright and use your power wisely, or not so bright but have a lot of power and not use it wisely.
Right now we have a lot of AI that's increasingly playing an important role in our lives, but it's not necessarily doing the careful multi-step reasoning that we want it to do.
That's a problem.
So it means, for example, that the systems we have now are very subject to bias.
You just put statistics in, and if you're not careful about the statistics, you get all kinds of garbage.
You do Google searches for, like, "grandmother and child," and you get mostly examples of white people, because there's no system there monitoring the searches trying to make things representative of the world's population.
They're just taking what we call a "convenience sample."
And it turns out there are more labeled pictures of grandmothers and grandchildren among white people, because more white people use the software, or something like that.
I'm slightly making up the example, but I think you'll find examples like that.
These systems have no awareness of the general properties of the world.
They just use statistics.
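How a convenience sample's skew passes straight through a purely statistical system can be shown with a tiny simulation. All the numbers and group names here are invented for illustration, much as the talk's own example is admittedly made up.

```python
import random

random.seed(0)

# Hypothetical proportions: who exists in the world versus who shows up
# in the labeled photos the system happened to be trained on.
world_share  = {"group_a": 0.3, "group_b": 0.7}
upload_share = {"group_a": 0.9, "group_b": 0.1}  # the convenience sample

def search_results(n=1000):
    """A purely statistical 'search engine': it returns results in
    proportion to the data it was fed, with no monitoring step to make
    the output representative of the world's population."""
    groups = list(upload_share)
    weights = [upload_share[g] for g in groups]
    return random.choices(groups, weights=weights, k=n)

results = search_results()
share_a = results.count("group_a") / len(results)
# share_a tracks the 0.9 upload rate, not the 0.3 world share
```

The fix the talk is pointing at would be an extra step between the statistics and the user, something that knows the world's actual proportions and reweights accordingly; raw statistics alone can't supply that.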
And yet they're in a position, for example, to do job interviews.
Amazon tried this for like four years and finally gave up and decided they couldn't do it well.
But people are more and more saying, well, let's get the data.
Let's get deep learning. Let's get machine learning.
And we'll have it solve all our problems.
Well, the systems we have now are not sophisticated enough to do that.
And so trusting a system that's basically a glorified calculator to make decisions about who should go to jail, or who should get a job, things like that, is at best risky and probably foolish.