A tricky bit of magic
by Tom Sullivan
Like a difficult potion brewed at Hogwarts, synthesizing a conscience is tricky. A con man might have more luck playing at being president than a computer programmed by scientists would have imitating a saint.
Even deciding what values to prioritize in a machine conscience, if one could model them, would speak volumes about our own. Honor, self-sacrifice, charity, wisdom? What values would a mass-market consumer product prioritize anyway?
Assuming all the external sensors function properly, driverless cars will have no problem with reaction time. What they do with it in a life-or-death emergency requiring a snap judgment is something else. But where there is consumer will, there will be a way. In 2016, MIT's Media Lab began collecting data via a website game called Moral Machine. You're the driver. An emergency arises. Pedestrians are at risk. Do you select for killing the fewest pedestrians? Which ones? Do you kill yourself and your passengers instead? Do you do nothing?
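To make the shape of the game concrete: every scenario boils down to choosing between a handful of outcomes, each with its own body count. The sketch below is purely illustrative, with invented names and numbers; it is not how Media Lab built Moral Machine, just the simplest "policy" a manufacturer might reach for, one that counts bodies and nothing else.

```python
# Illustrative only: a toy model of a Moral Machine-style dilemma.
# The class, fields, and numbers are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Outcome:
    label: str               # e.g. "do nothing" or "swerve"
    pedestrians_killed: int
    passengers_killed: int

    @property
    def total_killed(self) -> int:
        return self.pedestrians_killed + self.passengers_killed

def fewest_casualties(options: list[Outcome]) -> Outcome:
    """One possible policy: minimize total deaths, ignoring who dies."""
    return min(options, key=lambda o: o.total_killed)

# A single dilemma: stay the course and hit three pedestrians,
# or swerve and kill the driver and passenger.
dilemma = [
    Outcome("do nothing", pedestrians_killed=3, passengers_killed=0),
    Outcome("swerve", pedestrians_killed=0, passengers_killed=2),
]
print(fewest_casualties(dilemma).label)   # -> "swerve"
```

Notice how much those few lines quietly decide: they treat the passengers' lives as interchangeable with the pedestrians', which is exactly the kind of judgment the game was built to surface.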
The New Yorker examines the quest not for a car with a brain, but for one with a moral conscience:
The paper on the project was published in Nature, in October, 2018, and the results offer an unlikely window into people’s values around the globe. On the whole, players showed little preference between action and inaction, which the scientists found surprising. “From the philosophical . . . and legal perspective . . . this question is very important,” Shariff explained. But the players showed strong preferences for what kinds of people they hit. Those preferences were determined, in part, by where the players were from. Edmond Awad, a research fellow, and Sohan Dsouza, a graduate student working with Rahwan, noticed that the responses could be grouped into three large geographic “clusters”: the Western cluster, including North America and Western Europe; the Eastern cluster, which was a mix of East Asian and Islamic countries; and the Southern cluster, which was composed of Latin-American countries and a smattering of Francophone countries.
We should be wary of drawing broad conclusions from the geographical differences, particularly because about seventy per cent of the respondents were male college graduates. Still, the cultural differences were stark. Players in Eastern-cluster countries were more likely than those in the Western and Southern countries to kill a young person and spare an old person (represented, in the game, by a stooped figure holding a cane). Players in Southern countries were more likely to kill a fat person (a figure with a large stomach) and spare an athletic person (a figure that appeared mid-jog, wearing shorts and a sweatband). Players in countries with high economic inequality (for example, in Venezuela and Colombia) were more likely to spare a business executive (a figure walking briskly, holding a briefcase) than a homeless person (a hunched figure with a hat, a beard, and patches on his clothes). In countries where the rule of law is particularly strong—like Japan or Germany—people were more likely to kill jaywalkers than lawful pedestrians. But, even with these differences, universal patterns revealed themselves. Most players sacrificed individuals to save larger groups. Most players spared women over men. Dog-lovers will be happy to learn that dogs were more likely to be spared than cats. Human-lovers will be disturbed to learn that dogs were more likely to be spared than criminals.
We cannot even decide as a culture whether or not it is moral to torture prisoners or cage migrant babies snatched from the arms of their mothers or let people die because they cannot afford health insurance. This should be no problem at all.
Should scientists program their vehicles to be culturally sensitive or risk “a form of moral colonialism”? Should manufacturers allow autocratic leaders to tweak the code? A German commission on driverless vehicles insists, “In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited.” Would other countries with different histories make that choice? But in a world where people make snap judgments about others based on fleeting impressions and subconscious biases, how does a computer sort that out, exactly? How do technicians code to avoid biases of which they themselves are unaware?
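The German rule, taken literally, is at least easy to state in code: whatever the car weighs, personal features must never enter the calculation. Here is a minimal sketch of that reading, with invented types and field names rather than any manufacturer's actual logic:

```python
# Illustrative only: one reading of the German commission's rule.
# No manufacturer has published its real decision code.
from dataclasses import dataclass

@dataclass
class Pedestrian:
    age: int          # personal features the rule says must be ignored
    gender: str
    jaywalking: bool

def choose_maneuver(options: dict[str, list[Pedestrian]]) -> str:
    """Each candidate maneuver maps to the people who would be struck by it."""
    # Under the commission's rule, only the head count may enter the
    # decision; age, gender, and the rest are never read.
    return min(options, key=lambda maneuver: len(options[maneuver]))

paths = {
    "brake in lane": [Pedestrian(8, "f", False), Pedestrian(70, "m", True)],
    "swerve right":  [Pedestrian(35, "m", False)],
}
print(choose_maneuver(paths))   # -> "swerve right"
```

The catch is the second question above: excluding age and gender from the decision function does nothing about biases baked in upstream, say, in the perception system that decides who counts as a pedestrian in the first place.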
The Nature article acknowledges, “Asimov’s laws were not designed to solve the problem of universal machine ethics, and they were not even designed to let machines distribute harm between humans … And yet, we do not have the luxury of giving up on creating moral machines.” Says who?
Because Asimov's laws were never set up to decide whether a thing that can be done should be done, either.
The Nature authors continue: “Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision. We are going to cross that bridge any time now, and it will not happen in a distant theatre of military operations; it will happen in that most mundane aspect of our lives, everyday transportation. Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them.”
We’ve been here before and failed to do just that. Nearly 30 years ago (IIRC), Paul Harvey ran a noontime program in which he told of a millionaire couple who had perished when their private plane crashed. The childless couple had been trying to get pregnant through in vitro fertilization. The “heirs” to their fortune were frozen at the clinic. As Harvey told the tale, women began coming forward to volunteer to carry the little dears to term for a piece of the action. Technology chronically outruns our ethics.
Invariably, the moral algorithms of commercial products will reflect commercial imperatives. Consider that.