How Conspiracy Talk Helps People Make Sense of the World

From “Pizzagate” to claims that the moon landing was faked and that the 9/11 attacks were an inside job, conspiracy theories have long swirled around key global events and individuals. In the early days of Covid-19, online platforms became inundated with explanations for the pandemic that diverged from the science-backed narrative espoused by the World Health Organization and government officials.

Among the popular – and downright bizarre – conspiracy theories were claims that the virus was a bioweapon spread intentionally by Bill Gates, Anthony Fauci or the Chinese government, that infection statistics were being artificially inflated by fake testing, and that malicious actors were using 5G towers to infect people.

My co-authors* and I initiated our study in response to the proliferation of such conspiracy theories through online social networks. Amid countless deaths and ongoing danger to public health, these dubious narratives were spreading misinformation that, if believed, could discourage people from taking the necessary precautions to protect themselves and those around them.

Indeed, while conspiracy narratives about historical events – such as who “really” assassinated President John F. Kennedy – are one thing, those that spread falsehoods about how democracy functions today or an ongoing global pandemic are far more worrisome and problematic. Through our research, we hoped to gain a better understanding of the discursive and social structure of online conspiracy groups, how conspiracy theories are propagated, and how to combat them.

How conspiracy theories emerge

We focused our investigation on Twitter, where many Covid-19 conspiracy theories were being spread in the early days of the pandemic. We harnessed Twitter’s public application programming interface to collect approximately 700,000 Covid-19 tweets from 8,000 users dating from the start of the pandemic until July 2020. We also distinguished between human and bot accounts using Botometer, a widely used Twitter bot classification algorithm.
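To give a sense of the mechanics, the sketch below shows what a collection and bot-screening pipeline of this general kind could look like in Python, assuming the tweepy and botometer packages and Twitter v1.1 credentials. The credentials, search terms and bot-score threshold are illustrative placeholders, not the study's actual configuration.

```python
# Illustrative sketch of tweet collection and bot screening.
# Credentials, search terms and the bot threshold are placeholders.
import tweepy
import botometer

TWITTER_APP_AUTH = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
    "access_token": "YOUR_ACCESS_TOKEN",
    "access_token_secret": "YOUR_ACCESS_TOKEN_SECRET",
}
RAPIDAPI_KEY = "YOUR_RAPIDAPI_KEY"  # Botometer is served via RapidAPI

# Authenticate against the Twitter v1.1 API
auth = tweepy.OAuthHandler(TWITTER_APP_AUTH["consumer_key"],
                           TWITTER_APP_AUTH["consumer_secret"])
auth.set_access_token(TWITTER_APP_AUTH["access_token"],
                      TWITTER_APP_AUTH["access_token_secret"])
api = tweepy.API(auth, wait_on_rate_limit=True)

def collect_covid_tweets(screen_name, n=200):
    """Pull a user's recent tweets that mention Covid-19-related terms."""
    tweets = []
    for status in tweepy.Cursor(api.user_timeline,
                                screen_name=screen_name,
                                tweet_mode="extended").items(n):
        text = status.full_text.lower()
        if any(term in text for term in ("covid", "coronavirus", "pandemic")):
            tweets.append(status.full_text)
    return tweets

# Classify accounts as human or bot with Botometer
bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key=RAPIDAPI_KEY,
                          **TWITTER_APP_AUTH)

def is_likely_bot(screen_name, threshold=0.5):
    """Flag accounts whose complete-automation probability exceeds a threshold."""
    result = bom.check_account("@" + screen_name)
    return result["cap"]["english"] > threshold  # 0.5 is an arbitrary cut-off
```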

We identified conspiracy theories in users’ tweets using unsupervised topic models and uncovered 13 distinct topics. The conspiracy theories aggregated into two categories: talk claiming the virus was a hoax or an exaggerated threat (e.g., that testing resulted in false positives or hospitals were secretly empty) and talk describing it as a bioweapon being spread intentionally (e.g., by Bill Gates, the Chinese or a world-controlling cabal).
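For readers curious about the topic-modelling step, here is a generic sketch using scikit-learn's latent Dirichlet allocation. Only the 13-topic count reflects the finding above; the preprocessing choices and hyperparameters are illustrative assumptions rather than the study's exact setup.

```python
# Generic unsupervised topic modelling over tweet text with scikit-learn.
# Hyperparameters and preprocessing are illustrative, not the study's own.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def fit_topics(tweets, n_topics=13, n_top_words=10):
    """Fit an LDA topic model and return the top words for each topic."""
    vectorizer = CountVectorizer(stop_words="english",
                                 max_df=0.95,  # drop near-ubiquitous terms
                                 min_df=5)     # drop very rare terms
    doc_term = vectorizer.fit_transform(tweets)

    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(doc_term)

    vocab = vectorizer.get_feature_names_out()
    topics = []
    for component in lda.components_:
        top = component.argsort()[::-1][:n_top_words]
        topics.append([vocab[i] for i in top])
    return lda, vectorizer, topics

# Usage: inspect which topic a new tweet most resembles
# lda, vec, topics = fit_topics(corpus_of_tweets)
# probs = lda.transform(vec.transform(["hospitals are empty, the numbers are fake"]))
```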

We found that users first peddled gateway conspiracy theories before progressing to more extreme ones. Gateway conspiracy theories are relatively minor, less extreme and more plausible narratives, such as those that acknowledge Covid-19 as real and dangerous but postulate that the threat is exaggerated (e.g., false-positive testing and miscounted deaths). Based on our data, a user’s first conspiracy theory was a gateway narrative 40 percent of the time. Once a user tweeted their first conspiracy theory, the likelihood that they would tweet additional ones increased.

Another finding was that users did not stick to a single conspiracy theory. Instead, they increased the number and diversified the type of conspiracy narratives in response to receiving significant attention in the form of retweets and engagement from others, and as the threat (in this case, Covid-19 infections) increased in severity. This was the case even if the conspiracy theories being propagated were logically incompatible – some users even spread multiple contradictory conspiracy theories in a single tweet.

Conspiracy theories and sensemaking

The pandemic has been a period of immense uncertainty and fear. Humans generally don’t like the idea of being hunted and killed – especially by a tiny, mindless virus that is invisible to the naked eye. As a result, people rely on their world views to defend themselves, cope with vulnerability and insulate themselves from threats. The appeal of conspiracy theories is that they provide congruent knowledge about the world and can help individuals resolve perceived gaps in official accounts about the virus.

We suggest that certain individuals adopted Covid-19 conspiracy theories as part of a sensemaking process to deal with this fear. For example, claims that the virus was manufactured and released by Bill Gates or the Chinese government not only offer a more concrete explanation, but also eliminate the notion of a mindless hunter in the form of the virus, thereby providing relief from the threat.

Our results reveal that users adopted new conspiracy theories when the danger intensified from a rising Covid-19 case rate, suggesting that these narratives helped them cope with the virus threat. Individuals also often spread incompatible conspiracy theories, which suggests that they were not revising their existing belief set. Instead, such contradictions occurred as they tried to make sense of their reality.

How and why does this happen? An individual relying on a single conspiracy theory as a shield against reality risks having friends or acquaintances challenge it, exposing them to social resistance and even to having to accept unpleasant facts. Multiple conspiracy theories can be deployed flexibly, allowing users to protect their version of reality. There is also an important community element: people tweeted conspiracy theories more frequently after receiving attention and peer approval in the form of retweets.

Ultimately, users’ firmest belief isn’t in any one of the conspiracy theories – it is in the denial of the scientific world. For reality denial, conspiracy theories are superior to factual explanations, and many conspiracy theories are better than one. The conspiracy narratives do not need to be compatible with each other because they are used selectively depending on circumstances, and hence will not be tested for compatibility in any given interaction.

Stemming the spread

Our results challenge traditional notions of conspiracy talk as being located within specific types of individuals or in small and closed social circles, such as cults or echo chambers. Today, it plays out in the open forums of Twitter and similar online social platforms. We suggest that those engaging in these conspiracy narratives are often ordinary individuals seeking to create and sustain meaning, and that regular people can be susceptible or vulnerable to conspiracy theories.

What this means is that conventional ways of stemming disinformation and fighting falsehoods are ineffective against conspiracy theories as they exist today. Supplying facts does not work when the whole point is reality denial. Gateway narratives consisting of seemingly benign ideas provide a low barrier to entry and lure individuals in, leading them to gradually pick up more extreme beliefs. Furthermore, since there is no single charismatic central figure peddling these narratives – as there is in a cult – there is no organisational core to discredit. On Twitter, that charismatic centre changes from week to week, depending on which user receives the most retweets.

As a start, social media companies can train their algorithms to ensure that conspiracy theories aren’t boosted or recommended and to filter out misinformation and fake news. Many platforms, such as YouTube, are already doing this, with varying degrees of success. Attacking conspiracy theories with facts will not be especially effective, which means that fact-checking systems such as Twitter’s Community Notes may help the average user, but not those engaged in reality denial.

The ideal long-term remedy is education, the whole point of which is to make people think seriously about facts. It is not difficult to train individuals to recognise conspiracy talk: the stories are typically elaborate, the logic is often flawed, and removing any piece of the narrative can cause the whole thing to collapse. Additionally, corrective messaging that targets gateway conspiracy theories offers a promising pathway for dealing with misinformation, as people at this stage have yet to fully detach from reality and may be more receptive to intervention than those already in too deep.

*Hayagreeva Rao, the Atholl McBean Professor of Organisational Behaviour and Human Resources at Stanford Graduate School of Business; and Paul Vicinanza and Echo Yan Zhou, PhD students at Stanford Graduate School of Business.
