
AI Learns to Talk Back to Bigots

  
This is Scientific American's 60-second Science. I'm Christopher Intagliata.
Social media platforms like Facebook use a combination of artificial intelligence and human moderators to scout out and eliminate hate speech. But now researchers have developed a new AI tool that wouldn't just scrub hate speech but would actually craft responses to it, like this: "The language used is highly offensive. All ethnicities and social groups deserve tolerance."
"And this type of intervention response can hopefully short-circuit the hate cycles that we often get in these types of forums."
Anna Bethke, a data scientist at Intel. The idea, she says, is to fight hate speech with more speech—an approach advocated by the ACLU and the U.N. High Commissioner for Human Rights.
So with her colleagues at U.C. Santa Barbara, Bethke got access to more than 5,000 conversations from the site Reddit and nearly 12,000 more from Gab—a social media site where many users banned by Twitter tend to resurface.
The researchers had real people craft sample responses to the hate speech in those Reddit and Gab conversations. Then they let natural-language-processing algorithms learn from the real human responses and craft their own, such as: "I don't think using words that are sexist in nature contribute to a productive conversation."
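The segment doesn't say what kind of model the team trained, but as a rough illustration, here is a minimal sketch of that learn-from-human-responses step, assuming an off-the-shelf sequence-to-sequence model (t5-small is only a stand-in) and toy data in place of the annotated Reddit and Gab pairs; none of these choices come from the study itself.

```python
# Illustrative sketch only: the model (t5-small), the toy data, and the
# hyperparameters are assumptions, not details from the study.
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Stand-ins for the (hateful post, human-written intervention) pairs
# that annotators produced for the Reddit and Gab conversations.
pairs = [
    ("<hateful post goes here>",
     "The language used is highly offensive. "
     "All ethnicities and social groups deserve tolerance."),
]

def collate(batch):
    """Tokenize posts as inputs and human responses as target labels."""
    posts, responses = zip(*batch)
    enc = tokenizer(list(posts), padding=True, truncation=True,
                    return_tensors="pt")
    labels = tokenizer(list(responses), padding=True, truncation=True,
                       return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
    enc["labels"] = labels
    return enc

loader = DataLoader(pairs, batch_size=2, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

model.train()
for epoch in range(3):
    for batch in loader:
        loss = model(**batch).loss  # standard seq2seq cross-entropy
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# After fine-tuning, the model drafts an intervention for a new post.
model.eval()
prompt = tokenizer("<a new hateful post>", return_tensors="pt")
reply = model.generate(**prompt, max_new_tokens=60)
print(tokenizer.decode(reply[0], skip_special_tokens=True))
```

Fine-tuned on enough human-written interventions, a model along these lines produces replies of the kind quoted above.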
Which sounds pretty good. But the machines also spit out slightly head-scratching responses like this one: "This is not allowed and un time to treat people by their skin color."
And when the scientists asked human reviewers to blindly choose between human responses and machine responses—well, most of the time, the humans won. The team published the results on the site arXiv and will present them next month in Hong Kong at the Conference on Empirical Methods in Natural Language Processing.
Ultimately, Bethke says, the idea is to spark more conversation.
"And not just to have this discussion between a person and a bot but to start to elicit the conversations within the communities themselves—between the people that might be being harmful and those they're potentially harming."
In other words, to bring back good ol' civil discourse?
"Oh! I don't know if I'd go that far. But it sort of sounds like that's what I just proposed, huh?"
Thanks for listening for Scientific American's 60-second Science. I'm Christopher Intagliata.




