The Industrial Revolution and globalization were child’s play
Danielle Allen was still at the Institute for Advanced Study in Princeton in 2008 when she raised red flags about anonymous viral emails attacking then-presidential candidate Sen. Barack Obama.
“I started thinking, ‘How does one stop it?’ ” Allen told the Washington Post:
Allen set her sights on dissecting the modern version of a whisper campaign, even though experts told her it would be impossible to trace the chain e-mail to its origin. Along the way, even as her hunt grew cold, she gained valuable insight into the way political information circulates, mutates and sometimes devastates in the digital age.
Now at Harvard, Allen is still warning about digital mayhem. Only now, her concern extends to “generative artificial intelligence, a tool that will help bad actors further accelerate the spread of misinformation.”
She’s signed on to an open letter with technologists, academics, and others calling for a six-month pause in “the training of AI systems more powerful than GPT-4.”
Email spam was bad enough. AI-generated disinformation could be worse, with “all kinds of unpredictable emergent properties and powers” spawned by the technology. Microsoft co-founder Bill Gates disagrees that we’re there yet, but Allen points to Microsoft Research findings that the latest machine-learning models already show “sparks” of artificial general intelligence (Washington Post):
But regardless of which side of the debate one comes down on, and whether the time has indeed come (as I think it has) to figure out how to regulate an intelligence that functions in ways we cannot predict, it is also the case that the near-term benefits and potential harms of this breakthrough are already clear, and attention must be paid. Numerous human activities — including many white-collar jobs — can now be automated. We used to worry about the impacts of AI on truck drivers; now it’s also the effects on lawyers, coders and anyone who depends on intellectual property for their livelihood. This advance will increase productivity but also supercharge dislocation.
The Industrial Revolution was just a foretaste. People relocated en masse from farms to factory jobs in cities. We are still living with the social, economic, and political fallout from automation, globalization, and vaporware promises that, in the end, there would be more winners than losers. MAGA, anyone?
Allen offers a breathless selection of potential misuses that OpenAI itself recognizes and prohibits:
- Illegal activity.
- Child sexual-abuse material.
- Generation of hateful, harassing or violent content.
- Generation of malware.
- Activity that has high risk of physical harm, including: weapons development; military and warfare; management or operation of critical infrastructure in energy, transportation and water; content that promotes, encourages or depicts acts of self-harm.
- Activity that has a high risk of economic harm, including: multilevel marketing, gambling, payday lending, automated determinations of eligibility for credit, employment, educational institutions or public assistance services.
- Fraudulent or deceptive activity, including: scams, coordinated inauthentic behavior, plagiarism, astroturfing, disinformation, pseudo-pharmaceuticals.
- Adult content.
- Political campaigning or lobbying by generating high volumes of campaign materials.
- Activities that violate privacy.
- Unauthorized practice of law or medicine or provision of financial advice.
Allen and other signatories on the letter are not Luddites. But they pose a classic question underlying gothic horror and science fiction, one dating back to Mary Shelley: just because we can invent a new technology does not mean we should. At the very least, not without first planning to head off the fallout. They write:
Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.
“What’s the hurry?” Allen asks in the Post:
We are simply ill-prepared for the impact of yet another massive social transformation. We should avoid rushing into all of this with only a few engineers at a small number of labs setting the direction for all of humanity. We need a breather for some collective learning about what humanity has created, how to govern it, and how to ensure that there will be accountability for the creation and use of new tools.
There are already many things we can and should do. We should be making scaled-up public-sector investments into third-party auditing, so we can actually know what models are capable of and what data they’re ingesting. We need to accelerate a standards-setting process that builds on work by the National Institute of Standards and Technology. We must investigate and pursue “compute governance,” which means regulation of the use of the massive amounts of energy necessary for the computing power that drives the new models. This would be akin to regulating access to uranium for the production of nuclear technologies.
More than that, we need to strengthen the tools of democracy itself. A pause in further training of generative AI could give our democracy the chance both to govern technology and to experiment with using some of these new tools to improve governance. The Commerce Department recently solicited input on potential regulation for the new AI models; what if we used some of the tools the AI field is generating to make that public comment process even more robust and meaningful?
I’ve watched this movie before, time and time again. As have you.
I traded notes with Allen in 2008 about my growing collection of right-wing spam. I speculated at the time that there might be a boiler room somewhere generating chain emails. After social media spread disinformation during the 2016 presidential campaign, we found out there was one: in St. Petersburg, Russia.
Allen’s take on 2008’s e-rumors was this:
“What I’ve come to realize is, the labor of generating an e-mail smear is divided and distributed amongst parties whose identities are secret even to each other,” she says. A first group of people published articles that created the basis for the attack. A second group recirculated the claims from those articles without ever having been asked to do so. “No one coordinates the roles,” Allen said. Instead the participants swim toward their goal like a school of fish — moving on their own, but also in unison.
Now we face the prospect of AI destabilizing society, swimming in unison like Allen’s school of fish, untouched by human hands.
With luck, maybe climate change will take out humanity first.