Why humans underestimate the effects of misinformation

Florent Joly
5 min read · Feb 17, 2021


What does misinformation have in common with climate change, Covid-19 or car accidents? They are all very real risks that humans consistently underestimate.

In a recent episode of NPR’s Hidden Brain titled ‘Afraid of the wrong things’, host Shankar Vedantam talked with psychologist Paul Slovic to explore the root causes of this disconnect. Although the episode never mentions misinformation or social media, it is easy to see how similar lessons apply to our field.

Limitation 1: Humans are bad at assessing probability when feelings are involved

In an experiment that Slovic describes, participants were willing to pay $1 to avoid a small chance of losing $20, but they were willing to pay much more (close to $19) when the likelihood of losing $20 was close to 90%. So far, so rational. However, when the experiment added the possibility that participants would receive an electric shock in addition to paying $20, the amount they were willing to pay to avoid the cost and the shock was similar in both the low-probability and the high-probability scenarios. This indicates that once feelings (or in this case, pain) are involved, humans struggle to differentiate between a low-probability and a high-probability scenario.
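A back-of-the-envelope sketch shows why the first half of that behavior is broadly rational: willingness to pay roughly tracks the expected loss. The 1% figure for the low-probability case below is an assumption for illustration, as the episode only describes it as ‘low’.

```python
# Compare expected loss with reported willingness to pay at two probabilities.
# The 1% "low probability" is an assumed figure for illustration; the $1 and
# ~$19 willingness-to-pay amounts come from the experiment as described above.
loss = 20.0

for probability, willingness_to_pay in [(0.01, 1.0), (0.90, 19.0)]:
    expected_loss = probability * loss
    print(f"p = {probability:.0%}: expected loss ${expected_loss:.2f}, "
          f"willing to pay about ${willingness_to_pay:.2f}")
```

Once the electric shock enters the picture, this proportionality breaks down: payments no longer scale with probability at all.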

The parallel with misinformation? Humans may struggle to differentiate between a highly likely story and a highly unlikely one when the story appeals to their feelings.

Limitation 2: Humans estimate prevalence based on recency or availability

Most people overestimate the number of shark attacks but vastly underestimate the number of people who die from serious illness. In other words, because shark attacks are more sensational, they have a higher perceived prevalence than deaths from serious illness.

The parallel with misinformation: the risks that misinformation poses are long term and hardly as sensational as shootings or shark attacks. This means people may fail to take adequate precautions.

Limitation 3: Humans have an optimism bias

People underestimate a risk when it applies to them but overestimate it when it applies to other people. How many smokers argue that they can smoke ‘the right way’ and that it is other, heavier smokers who are at risk? How many drivers argue that they can drive safely and that it is the more reckless drivers who put themselves and others in danger?

The parallel with misinformation: Social media users typically consider themselves to be good at differentiating real from fake and pass the blame to other people.

Limitation 4: Humans are bad at assessing risk that compounds over time

Climate change, Covid-19 and car safety are all risks that compound slowly over time. A single trip without wearing a seatbelt has a low likelihood of resulting in injury, but that risk becomes significant after 50,000 or more trips without a belt.
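A quick sketch of how that compounding works; the per-trip injury probability below is an assumed order of magnitude for illustration, not a figure from the episode:

```python
# Probability of at least one injury over many independent unbelted trips,
# using an assumed per-trip risk (order-of-magnitude illustration only).
per_trip_risk = 1 / 100_000  # assumed chance of injury on a single trip

for trips in (1, 1_000, 50_000):
    cumulative_risk = 1 - (1 - per_trip_risk) ** trips
    print(f"{trips:>6} trips -> {cumulative_risk:.1%} chance of at least one injury")
```

With these assumptions, a risk that is invisible on any single trip grows to roughly a 40% chance of at least one injury over 50,000 trips.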

The parallel with misinformation: each like, comment or share on a post that contains misinformation may not feel like a big risk to society, but over weeks and months this risk compounds and can cause real-world harm.

Limitation 5: Humans are bad at calculating exponential growth

How many times would you have to fold an infinitely large sheet of paper onto itself in order to have a stack so tall you could reach the moon? If you answered somewhere in the thousands or millions, you’re not alone. Most people envision linear growth when asked to project a future value or risk. The accurate answer is around 42 (assuming a standard sheet roughly 0.1 mm thick), as the thickness of the stack doubles with every fold.
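A quick sketch of that calculation, assuming a standard sheet about 0.1 mm thick and an average Earth-Moon distance of roughly 384,400 km:

```python
# Count how many doublings of a 0.1 mm sheet are needed to span the
# Earth-Moon distance. Each fold doubles the thickness of the stack.
sheet_thickness_m = 0.0001        # standard office paper, ~0.1 mm
moon_distance_m = 384_400_000     # average Earth-Moon distance, ~384,400 km

folds = 0
thickness = sheet_thickness_m
while thickness < moon_distance_m:
    thickness *= 2
    folds += 1

print(folds)  # -> 42
```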

The parallel with misinformation: misinformation spreads exponentially every time a user shares it. A typical user will underestimate how many downstream views a single re-share or re-tweet can generate.
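A toy branching model makes the point; both parameters below are assumptions for illustration, not platform data:

```python
# Toy model of reach: each share is seen by some followers, and a small
# fraction of those viewers share it onward. Both parameters are assumptions.
followers_per_user = 200   # assumed average audience of one share
reshare_rate = 0.02        # assumed 2% of viewers share it again

shares, total_views = 1, 0
for generation in range(6):          # follow the cascade for six generations
    total_views += shares * followers_per_user
    shares = round(shares * followers_per_user * reshare_rate)

print(f"{total_views:,} views")      # roughly 273,000 views from one post
```

Under these assumptions, a single post reaches hundreds of thousands of views within a few generations of re-shares, far more than most users would guess.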

The misinformation crisis we face today is partly the result of psychological patterns that combine into a perfect storm. Misinformation, like a virus, feeds on our psychological limitations. What do these limitations tell us about potential solutions in the space?

Solution 1: User empowerment

The idea that we will “solve” misinformation by giving users more control over their social media experience (from fact-checking labels and widgets to controls that help them tune out suspicious sources) is hard to defend in the face of the psychological limitations outlined above. The lack of understanding of the risks that misinformation poses currently stands in the way of users taking full advantage of any tools made available to them.

Tools aimed at limiting the spread of misinformation may also have the opposite effect, as people accept a higher level of risk when they feel in control. To cite the metaphor Paul Slovic uses: people are willing to put their finger closer to the knife if they are the ones holding it.

Solution 2: Transparency & education

More user transparency and more education about the risks of misinformation make for a tempting solution for platforms and governments alike. If only people could ‘visualize’ the speed at which misinformation travels, they would fact-check more of what they see. If only they could ‘learn’ to detach their feelings from the way they interact with content, they would think twice about liking a post.

This line of thinking not only runs the risk of being paternalistic; history shows it may not solve the problem by itself. Take safety belts as an example. In the 1970s, the US government ran massive public awareness campaigns to educate drivers about the risks of driving without a belt on. These campaigns mostly failed and seat belt usage remained low until laws were passed that made not wearing a belt a punishable offense.

Solution 3: Top-down enforcement

Dangers like misinformation, climate change or pandemics are systemic risks that exploit our psychological deficiencies. Tackling them at scale requires systems thinking and the invention of strict new norms and rules that address the causes of these issues.

In the case of climate change, the most sustainable cities impose recycling or pass regulations that prevent the sale of single-use plastic bags (to cite just one example). In the case of car safety, laws were introduced to regulate both driver behavior and car manufacturing standards.

By admitting that humans are inherently ill-equipped to deal with exponential risks on their own, platforms and governments can envision solutions at the system level that protect users against themselves. User-level interventions like controls, transparency and education should be explored. However, if psychology and recent history are any indication, they should not distract us from exploring larger top-down reform.

Florent Joly

Exploring the intersection of technology and democracy.