What you count is what you get

MIT Technology Review offers a cautionary tale not only for software developers but for designers, engineers, scientists, and lawmakers of every kind. Our creations have a way of getting out of control, as Dr. Frankenstein discovered and as hundreds of movies have cemented in our collective subconscious. Yet we repeatedly fail to let little details like that get in the way of making a buck.

“How Facebook got addicted to spreading misinformation” chronicles the struggle of Joaquin Quiñonero Candela, a director of AI at Facebook, to rein in a technology that works best when it is not reined in. That is not merely the design; it is the foundational intention:

Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.

In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.

“When you’re in the business of maximizing engagement, you’re not interested in truth. You’re not interested in harm, divisiveness, conspiracy. In fact, those are your friends,” says Hany Farid, a professor at the University of California, Berkeley, who collaborates with Facebook to understand image- and video-based misinformation on the platform.

“They always do just enough to be able to put the press release out. But with a few exceptions, I don’t think it’s actually translated into better policies. They’re never really dealing with the fundamental problems.”

Because the fundamental problem is not the design of the software but the design of the public corporation. Growth is its raison d’être. Growing engagement is Facebook’s DNA. It is financial markets’ DNA. Corporate America’s DNA. There is no algorithm for fixing that. It’s baked into the business model, a piece of technology so ubiquitous as to be invisible.

If the business model results in “full-blown genocide” against Myanmar’s Rohingya Muslim minority, well, whoops.

It’s not that Quiñonero Candela and his Responsible AI team don’t want to solve the problem of online radicalization and make the world a better place. It’s that everything they try conflicts with Zuckerberg’s prime directive and the company’s DNA, and gets shut down.

“It seems like the ‘responsible AI’ framing is completely subjective to what a company decides it wants to care about. It’s like, ‘We’ll make up the terms and then we’ll follow them,’” says Ellery Roberts Biddle, the editorial director of Ranking Digital Rights, a nonprofit that studies the impact of tech companies on human rights. “I don’t even understand what they mean when they talk about fairness. Do they think it’s fair to recommend that people join extremist groups, like the ones that stormed the Capitol? If everyone gets the recommendation, does that mean it was fair?”

“We’re at a place where there’s one genocide [Myanmar] that the UN has, with a lot of evidence, been able to specifically point to Facebook and to the way that the platform promotes content,” Biddle adds. “How much higher can the stakes get?”

To further illustrate, there is a one-page science-fiction short story from 1954 that is hard to forget: “Answer” by Fredric Brown.

Dwar Ev has just completed the final assembly of a galaxy-sized computer, linking “the monster computing machines of all the populated planets in the universe — ninety-six billion planets” into a “supercircuit that would connect them all into one supercalculator, one cybernetics machine that would combine all the knowledge of all the galaxies.”

An associate has the honor of asking the monster machine its first question:

He turned to face the machine. “Is there a God?”

The mighty voice answered without hesitation, without the clicking of a single relay.

“Yes, now there is a God.”

Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch.

A bolt of lightning from the cloudless sky struck him down and fused the switch shut.

Wonder how many likes it would get today? Because that’s all we count.
