The more they warn, the less we’ll listen
“Technology has a momentum all its own. It has a tendency to take us places before we consider whether they are places we need to or ought to go,” I wrote here in 2014.
Following up on Danielle Allen’s warnings about artificial general intelligence, A.I. pioneer Dr. Geoffrey Hinton gets space in the New York Times to express his concerns:
Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
Hinton, “the Godfather of A.I.,” worries about what his creation may do when loosed “into the wild,” as the Times’ Cade Metz puts it.
Allen signed onto a March open letter with technologists, academics, and others calling for a six-month pause in “the training of AI systems more powerful than GPT-4.” Days later, “19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I.”
Hinton signed neither, reluctant to go public with his concerns until he resigned from Google. Now he has.
Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but left the university for Canada because he said he was reluctant to take Pentagon funding. At the time, most A.I. research in the United States was funded by the Defense Department. Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”
But in Canada in 2012, Hinton and two of his graduate students constructed a neural network that could identify objects in photographs: flowers, dogs, cars, etc. Google came calling, checkbook in hand. More research, more improvements followed at Google and elsewhere.
Hinton now shares Allen’s concerns about the disruptive nature of A.I. “Disruption” is beneficial, to hear Silicon Valley tech bros tell it. But it is really “shorthand for something closer to techno-darwinism,” Nitasha Tiku warned at Wired in 2010. Sounds fine so long as you are not the one being selected for extinction. By them. She observed, “The tech visionaries’ predictions did not usher us into the future, but rather a future where they are kings.”
Uncertainty over where this technology goes next gnaws at Hinton:
Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.
“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Metz adds, “Many other experts, including many of his students and colleagues, say this threat is hypothetical.” Threats always are until they’re not. Fierce competitors Microsoft and Google will not stop without global regulation. If that’s even possible.
Technology wants what it wants. The Market wants what it wants. The Corporation as well. All are human inventions so ubiquitous as to be invisible. Mary Shelley warned us.
“I don’t think they should scale this up more until they have understood whether they can control it,” Hinton said.
Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
He does not say that anymore.
Hell, my plate is full just trying to stop Republicans.