Yet it's also the imbrication of AI into existing systems that is cause for concern.
As Damien P Williams, a professor at the University of North Carolina at Charlotte, pointed out to me, AI models in training absorb masses of data based on what is and what has been.
It's thus hard for them to avoid existing biases, of the past and the present.
Williams points to how, if asked to reproduce, say, a doctor yelling at a nurse, AI will make the doctor a man and the nurse a woman.
Last year, when Google hastily released Gemini, its answer to rival AI chatbots, it produced images of “diverse” Nazis and the US's founding fathers.
These odd mistakes were a ham-fisted attempt to pre-empt the problem of bias in the training data.
AI relies on what has been, and trying to account for the myriad ways we encounter and respond to the prejudice of the past appears to simply be beyond its ken.
The structural problem with bias has existed for some time.
Algorithms were already used for things like credit scoring, and AI used in hiring is already replicating biases.
In both cases, pre-existing racial bias emerged in digital systems.
That's not to say that AI won't also kill us.
More recently, it was revealed that Israel was using an AI system called Lavender to help it attack targets in Gaza.
The system is meant to identify members of Hamas and Palestinian Islamic Jihad and then provide their locations as potential targets for airstrikes – including their homes.
According to the Israeli-Palestinian publication +972 Magazine, many of these attacks killed civilians.
As such, the threat of AI isn't really that of a machine or system which offhandedly kills humanity.
It's the assumption that AI is in fact intelligent that causes us to outsource crucial social and political functions to computer software – it's not just the tech itself which becomes integrated into day-to-day life but also the particular logic and ethos of tech and its libertarian-capitalist ideology.
The question, then, is to what ends AI is deployed, in what context, and with what boundaries.
“Can AI be used to make cars drive themselves?” is an interesting question.
But whether we should allow self-driving cars on the road, under what conditions, embedded in what systems – or indeed, whether we should deprioritise the car altogether – are the more important questions, and they are ones that an AI system cannot answer for us.