Artificial intelligence (AI) is adding to the threat of election disinformation worldwide.
The technology makes it easy for anyone with a smartphone and an imagination to create fake – but convincing – content aimed at fooling voters.
Just a few years ago, fake photos, videos or audio required teams of people with time, skill and money.
Now, free and low-cost generative artificial intelligence services from companies like Google and OpenAI permit people to create high-quality "deepfakes" with just a simple text entry.
For months, a wave of AI deepfakes tied to elections in Europe and Asia has been spreading on social media.
It has served as a warning for the more than 50 countries holding elections this year.
Some recent examples of AI deepfakes include:
— A video of Moldova's pro-Western president throwing her support behind a political party friendly to Russia.
— Audio of Slovakia's liberal party leader discussing changing ballots and raising the price of beer.
— A video of an opposition lawmaker in Bangladesh — a conservative Muslim majority nation — wearing a bikini.
The question is no longer whether AI deepfakes could affect elections, but how influential they will be, said Henry Ajder, who runs a business advisory company called Latent Space Advisory in Britain.
"You don't need to look far to see some people ... being clearly confused as to whether something is real or not," Ajder said.
As the U.S. presidential race draws closer, Christopher Wray, the director of the Federal Bureau of Investigation, has issued a warning about the growing threat of generative AI.
He said the technology makes it easy for foreign groups to attempt to have a bad influence on elections.
With AI deepfakes, a candidate's image can be made much worse or much better.
Voters can be moved toward or away from candidates — or even to avoid the polls altogether.
But perhaps the greatest threat to democracy, experts say, is that the growth of AI deepfakes could hurt the public's trust in what they see and hear.
The complexity of the technology makes it hard to find out who is behind AI deepfakes.
Experts say governments and companies are not yet capable of stopping the problem.
The world's biggest tech companies recently — and voluntarily — signed an agreement to prevent AI tools from disrupting elections.
For example, the company that owns Instagram and Facebook has said it will start labeling deepfakes that appear on its services.
But deepfakes are harder to limit on apps like Telegram, which did not sign the voluntary agreement.
Telegram uses encrypted messages that can be difficult to uncover.
Some experts worry that efforts to limit AI deepfakes could lead to unplanned results.
Tim Harper is an expert at the Center for Democracy and Technology in Washington, DC.
He said sometimes well-meaning governments or companies might trample the "very thin" line between political commentary and an "illegitimate attempt to smear a candidate."
Major generative AI services have rules to limit political disinformation.
But experts say it is too easy to defeat the restrictions or use other services.
AI software is not the only threat.
Candidates themselves could try to fool voters by claiming that real events showing them in bad situations were manufactured by AI.
Lisa Reppell is a researcher at the International Foundation for Electoral Systems in Arlington, Virginia.
She said, "A world in which everything is suspect — and so everyone gets to choose what they believe — is also a world that's really challenging for…democracy."
I'm John Russell.