The persistence of misinformation

“The Persistence of Memory” by Salvador Dalí.

People do not vote their interests, they vote their identities, cognitive scientist George Lakoff wrote in “Moral Politics.” Despite that, lefty friends insist on deriding conservative voters for “voting against their best interests.” As better-informed, savvy consumers of policy, we progressives know what their best interests are and, for our part, always vote what’s best for Number One (rather than what’s best for the country). Right?

Brendan Nyhan, a Dartmouth College political scientist, finds that the persistence of misinformation in our age stems from cognitive limitations and social factors, among them the need “to defend or support some group identity or existing belief.” Identity plays a role in what information people accept and retain.

Max Fisher reports on Nyhan’s findings at the New York Times:

Put more simply, people become more prone to misinformation when three things happen. First, and perhaps most important, is when conditions in society make people feel a greater need for what social scientists call ingrouping — a belief that their social identity is a source of strength and superiority, and that other groups can be blamed for their problems.

As much as we like to think of ourselves as rational beings who put truth-seeking above all else, we are social animals wired for survival. In times of perceived conflict or social change, we seek security in groups. And that makes us eager to consume information, true or not, that lets us see the world as a conflict putting our righteous ingroup against a nefarious outgroup.

This need can emerge especially out of a sense of social destabilization. As a result, misinformation is often prevalent among communities that feel destabilized by unwanted change or, in the case of some minorities, powerless in the face of dominant forces.

We may not be lab rats, but that does not mean as social animals we do not respond to social cues and positive reinforcement. In a time of high social tensions, we shift into “identity-based conflict” mode and seek out information that affirms our sense of us vs. them. The appearance of “high-profile political figures” who encourage followers to accept “identity-affirming misinformation” is another factor in misinformation’s persistence. Social media’s feedback system of likes and shares provides tasty pellets “for posting inflammatory or false statements.”

In the case of QAnon, the game that plays people, game designer Reed Berkowitz believes the online community provides “a hit of dopamine, the brain’s pleasure drug, as a reward” for “players” deciphering “Q drops” or for producing “research” that reinforces the group’s narrative. Truth and accuracy take a back seat to getting that hit.

Sociologist Zeynep Tufekci wrote that in a period of high social tension, fact-checking does not always correct misinformation. “Belonging is stronger than facts.” Identity again.

Nyhan finds, however, that the widely publicized “backfire effect” in fact-checking is overstated. Corrective information does not reinforce false beliefs in every case; indeed, subsequent research finds backfire effects are rare. Much depends on how the accurate information is presented. Nevertheless, “accuracy-increasing effects of corrective information like fact checks … frequently seem to decay or be overwhelmed by cues from elites and the media promoting more congenial but less accurate claims.” Not to mention that the people who most need corrective information are the least likely to seek it out.

News coverage, then, should avoid partisan cues when addressing false claims. Fact checkers, journalists, and science communicators, Nyhan writes, should also work through intermediaries to stanch the flow of misinformation from influencers who spread it. They should target political elites with heavier sanctions for spreading false information:

One field experiment found that state legislators who were sent reminders of the reputational threat posed by fact checkers in their state were less likely to make claims that were fact checked or whose accuracy was questioned publicly (87). There are many potential ways of accomplishing this goal. For example, providing fact-check statistics showing that a politician has repeatedly made false statements is more damaging to their standing with the public than a fact check of a single false claim (88).

Put their identities on the line when they spread disinformation or misinformation. Thus:

Providing corrective information is generally worthwhile and can often improve belief accuracy on the margin, but durably reducing misperceptions will often require changing the cues that people receive from the sources that they most trust. Doing so will in turn require journalists and science communicators to focus less on communicating directly to the public and more on the intermediaries that are most credible to people who hold or are vulnerable to false beliefs.

FYI.
