
Episode 511: Should we fear chatbots?

Source: 可可英语 (Kekenet)  Editor: Leon

Hello. This is 6 Minute English from BBC Learning English. I’m Neil. And I’m Rob.

Now, I’m sure most of us have interacted with a chatbot.

These are bits of computer technology that respond to text with text or respond to your voice.

You ask it a question and it usually comes up with an answer!

Yes, it’s almost like talking to another human, but of course it’s not – it’s just a clever piece of technology.

They are becoming more sophisticated – more advanced and complex – but could they replace real human interaction altogether?

We’ll discuss that more in a moment and find out if chatbots really think for themselves.

But first I have a question for you, Rob.

The first computer program that allowed some kind of plausible conversation between humans and machines was invented in 1966, but what was it called?

Was it: a) ALEXA, b) ELIZA, or c) PARRY?

It’s not Alexa – that’s too new – so I’ll guess c) PARRY.

I’ll reveal the answer at the end of the programme.

Now, the old chatbots of the 1960s and 70s were quite basic, but more recently, the technology is able to predict the next word that is likely to be used in a sentence, and it learns words and sentence structures.

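The next-word trick Neil describes can be sketched with a toy bigram model in Python. This is a deliberately simplified illustration – nothing like the large neural networks behind modern chatbots – but it shows the core "predict the likeliest next word" idea:

```python
from collections import defaultdict

# Toy illustration of next-word prediction: count which word follows
# which in some training text, then predict the most frequent follower.
corpus = ("the cat sat on the mat . the cat ate the fish . "
          "the dog sat on the rug .").split()

follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    candidates = follows[word]
    return max(candidates, key=candidates.get) if candidates else None

print(predict_next("the"))  # → cat ("cat" follows "the" twice, beating every rival)
print(predict_next("sat"))  # → on
```

Real chatbots condition on all the preceding text rather than a single word, and score candidates with a trained neural network, but the loop is the same: pick a likely next word, append it, repeat.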
It’s clever stuff.

I’ve experienced using them when talking to my bank - or when I have problems trying to book a ticket on a website.

I no longer phone a human but I speak to a ‘virtual assistant’ instead.

Probably the most well-known chatbot at the moment is ChatGPT.

It is. The claim is it’s able to answer anything you ask it.

This includes writing students’ essays.

This is something that was discussed on the BBC Radio 4 programme, Word of Mouth.

Emily M Bender, Professor of Computational Linguistics at the University of Washington, explained why it’s dangerous to always trust what a chatbot is telling us…

We tend to react to grammatically fluent, coherent-seeming text as authoritative and reliable and valuable – and we need to be on guard against that, because what's coming out of ChatGPT is none of that.

So, Professor Bender says that well written text that is coherent – that means it’s clear, carefully considered and sensible – makes us think what we are reading is reliable and authoritative.

So it is respected, accurate and important sounding.

Yes, chatbots might appear to write in this way, but really, they are just predicting one word after another, based on what they have learnt.

We should, therefore, be on guard – be careful and alert about the accuracy of what we are being told.

One concern is that chatbots – a form of artificial intelligence – work a bit like a human brain in the way they can learn and process information.

They are able to learn from experience - something called deep learning.

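As a minimal sketch of "learning from experience", here is a single artificial neuron (a perceptron) – far simpler than the deep, many-layered networks the programme refers to – trained on the logical AND function, with the example data chosen purely for illustration:

```python
# One artificial neuron "learning from experience": each time it answers an
# example wrongly, it nudges its weights toward the correct answer.
# (Deep learning stacks many layers of such units; this shows only the core idea.)

def train_neuron(examples, epochs=20, lr=0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out          # 0 when right; +1/-1 when wrong
            w1 += lr * err * x1         # adjust only after a mistake
            w2 += lr * err * x2
            b  += lr * err
    return w1, w2, b

# The "experience": the truth table of logical AND
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_neuron(examples)

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in examples])  # → [0, 0, 0, 1]
```

The point is that nothing about AND is programmed in: the behaviour emerges from repeated exposure to examples and small corrections, which is the sense in which such systems "learn".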
A cognitive psychologist and computer scientist called Geoffrey Hinton recently said he feared that chatbots could soon overtake the level of information that a human brain holds.

That’s a bit scary, isn’t it?

For now, chatbots can be useful for practical information, but sometimes we start to believe they are human, and we interact with them in a human-like way.

This can make us believe them even more.

Professor Emily Bender, speaking on the BBC’s Word of Mouth programme, explains why we might feel like that…

I think what's going on there is the kinds of answers you get depend on the questions you put in, because it's doing likely next word, likely next word, and so if as the human interacting with the machine you start asking it questions about

‘how do you feel, you know, Chatbot?’

‘What do you think of this?’

And ‘what are your goals?’

You can provoke it to say things that sound like what a sentient entity would say...

We are really primed to imagine a mind behind language whenever we encounter language.

And so, we really have to account for that when we're making decisions about these.

So, although a chatbot might sound human, we really just ask it things to get a reaction – we provoke it – and it answers only with words it’s learned to use before, not because it has come up with a clever answer.

But it does sound like a sentient entity – sentient describes a living thing that experiences feelings.

As Professor Bender says, we imagine that when something speaks there is a mind behind it.

But sorry, Neil, they are not your friend, they are just machines!

It’s strange then that we sometimes give chatbots names.

Alexa, Siri… and earlier I asked you what the name was for the first ever chatbot.

And I guessed it was PARRY. Was I right?

You guessed wrong, I’m afraid.

PARRY was an early form of chatbot from 1972, but the correct answer was ELIZA.

It was considered to be the first ‘chatterbot’ – as it was called then – and was developed by Joseph Weizenbaum at the Massachusetts Institute of Technology.

Fascinating stuff.

OK, now let’s recap some of the vocabulary we highlighted in this programme.

Starting with sophisticated which can describe technology that is advanced and complex.

Something that is coherent is clear, carefully considered and sensible.

Authoritative – so it is respected, accurate and important-sounding.

When you are on guard you must be careful and alert about something – it could be accuracy of what you see or hear, or just being aware of the dangers around you.

To provoke means to do something that causes a reaction from someone.

Sentient describes something that experiences feelings – so it’s something that is living.

Once again, our six minutes are up. Goodbye. Bye for now.

Key vocabulary

overtake [ˌəuvə'teik]  v. to catch up with; to come upon suddenly; to overwhelm
artificial [ˌɑ:ti'fiʃəl]  adj. man-made, artificial; insincere; arbitrary
accuracy ['ækjurəsi]  n. accuracy, precision
fluent ['flu:ənt]  adj. fluent, flowing
experienced [iks'piəriənst]  adj. experienced
minutes ['minits]  n. minutes (the record of a meeting); plural of minute
entity ['entiti]  n. entity, existence
alert [ə'lə:t]  adj. alert, vigilant; n. alarm, warning
coherent [kəu'hiərənt]  adj. coherent, consistent, clear
sentient ['senʃənt]  adj. sentient, conscious
