Global Viewpoint: "The Godfather of A.I." Leaves Google and Warns of Danger Ahead

bilibili   2023-05-03 01:56:23

Original article: https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html

Translated by: 小野母喵

For study purposes only; will be removed upon request.



For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.

Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry's biggest companies believe is a key to their future.

On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.

Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life's work.

"I console myself with the normal excuse: If I hadn't done it, somebody else would have," Dr. Hinton said during an interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough.

Dr. Hinton's journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.

But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech's biggest worriers say, it could be a risk to humanity.

"It is hard to see how you can prevent the bad actors from using it for bad things," Dr. Hinton said.

After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose "profound risks to society and humanity."

Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI's technology across a wide range of products, including its Bing search engine.

Dr. Hinton, often called "the Godfather of A.I.," did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning, and on Thursday, he talked by phone with Sundar Pichai, the chief executive of Google's parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.

Google's chief scientist, Jeff Dean, said in a statement: "We remain committed to a responsible approach to A.I. We're continually learning to understand emerging risks while also innovating boldly."

Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network — a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life's work.

In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but he left the university for Canada because he said he was reluctant to take Pentagon funding. At the time, most A.I. research in the United States was funded by the Defense Department. Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls "robot soldiers."

In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.

Google spent $44 million to acquire a company started by Dr. Hinton and his two students. Their system led to the creation of increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators, Yann LeCun and Yoshua Bengio, received the Turing Award, often called "the Nobel Prize of computing," for their work on neural networks.

Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but that it was inferior to the way humans handled language.

Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways, but he thought they were eclipsing human intelligence in others. "Maybe what is going on in these systems," he said, "is actually a lot better than what is going on in the brain."

As companies improve their A.I. systems, he believes, they become increasingly dangerous. "Look at how it was five years ago and how it is now," he said of A.I. technology. "Take the difference and propagate it forwards. That's scary."

Until last year, he said, Google acted as a "proper steward" for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google's core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.

His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will "not be able to know what is true anymore."

He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. "It takes away the drudge work," he said. "It might take away more than that."

Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.

"The idea that this stuff could actually get smarter than people — a few people believed that," he said. "But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google, Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.

But that may be impossible, he said. Unlike with nuclear weapons, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world's leading scientists to collaborate on ways of controlling the technology. "I don't think they should scale this up more until they have understood whether they can control it," he said.

Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: "When you see something that is technically sweet, you go ahead and do it."

He does not say that anymore.

