In the 2016 campaign leading to Brexit, the following statement was placed on the side of a bus:

“We send the EU £350 million a week. Let’s fund our NHS instead.”

This claim was refuted by the UK Statistics Authority, but a survey conducted two years later found that about 67% of referendum voters had heard the misleading claim and that, of those, 42% still believed it to be true [1].

The political use of misleading and even outright false information is nothing new, but the central concern, now as in the past, is that false claims may influence people’s choices and make them act in ways that go against their own interests. Returning to the Brexit example, where “Leave” won by only 1.8%, the misleading claim may well have made a difference, but it may just as easily have not. There are several alternative hypotheses: perhaps the effect would have been the same had the campaign used the official estimate of £136 million; perhaps neither claim made any difference; perhaps the people who believed the misleading claim were going to vote “Leave” anyway, for other reasons. This sort of limitation is common when examining specific historical cases. What, then, do scientific studies tell us about why people believe false information? Research has focused on three main individual factors: lack of knowledge, lack of cognitive reflection, and motivation.

Lack of knowledge is an intuitive first culprit for believing and sharing false information. For example, someone with a background in health sciences who also understands the principles of homeopathy can easily judge as false the claim that homeopathy could be an effective substitute for oxygen therapy in severe COVID-19 cases [2]. But for those who do not know much about COVID-19 or homeopathy, the door is open to falsehood and fraud. Most experimental studies exploring the relationship between knowledge and belief in false information have focused on short, simple interventions, such as presenting fact-checking articles. These studies show, for example, that fact-checking articles produce a small but significant reduction in belief in political false information, although people’s preexisting knowledge, beliefs, and ideology attenuate the effect [3]. It is therefore important to keep verifying claims and sharing accurate information, particularly when the stakes are high, as in politics or health.

A second factor in belief in false information is lack of cognitive reflection. If someone tells you, “I paid 1100€ for a computer and a chair. The computer cost 1000€ more than the chair, so the chair cost me 100€”, you may at first find nothing wrong with the statement. But if you think about it for a while, you will probably realize the person is mistaken: the chair actually cost 50€, since the computer must then have cost 1050€ for the total to add up to 1100€. Answering this problem correctly typically requires reflection, and studies have found that this sort of cognitive reflection is important when it comes to dealing with false information. These studies, from a small group of researchers, indicate that people with a greater tendency to reflect might be better at distinguishing fake news from true news [4] and tend to share news from higher-quality sources on Twitter [5]. Even people who follow sites that usually produce fake news can improve the quality of what they share if asked to reflect on the accuracy of the information they see [6]. It is, thus, important to find ways to stimulate this type of reflection.
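For readers who want to double-check the puzzle, the arithmetic behind the correct answer can be verified in a few lines (a minimal sketch; the variable names are mine, not part of the original puzzle):

```python
# The chair costs c; the computer costs c + 1000.
# Total paid: c + (c + 1000) = 1100  =>  2c = 100  =>  c = 50.
chair = (1100 - 1000) / 2
computer = chair + 1000

assert computer + chair == 1100   # the two prices sum to the total paid
assert computer - chair == 1000   # the computer costs exactly 1000€ more
print(chair)      # 50.0 — not the intuitive 100€
print(computer)   # 1050.0
```

If the chair really cost 100€, the computer would have to cost 1100€ and the total would be 1200€, which contradicts the stated total; that quick consistency check is exactly the kind of reflection the problem is designed to probe.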

The third factor influencing belief in false information is motivation. Despite what we may want to believe about ourselves, we are not always motivated to identify the truth. Sometimes we want to reach a conclusion that is favourable to us, to groups we belong to, or to ideas we love. Experimental studies on how different types of motivation influence belief in fake news are scarce, but the available correlational studies show that people are more likely to believe fake news that is favourable to the political party they identify with [7]. As ideology also seems to affect the effectiveness of fact-checking [3], motivation not only shapes belief in fake news but may also play a part in maintaining those false beliefs. Still, one could argue that it is not ideology that leads people to have different beliefs about the world, but different beliefs about the world that lead to different ideologies (or both); further research is needed on this point. Regardless, next time you find yourself in a social media discussion, it does not hurt to refocus on truth by asking yourself: “If I were wrong, how could I know it?” This is especially important when sharing information with which we agree.

Another element that might play a role in this process is overconfidence: people who think very highly of themselves might be less likely to admit ignorance and doubt their first instincts or motivations. This is the main subject of my research project, and I hope to write more about it soon.

It is also important to note that this discussion, focused on people’s characteristics, largely ignores aspects of social media platforms that may increase the spread and impact of fake news belief. For example, it is very easy to find information online, which might lead to miscalibrated perceptions of knowledge. In addition, platforms have design features that might reduce cognitive reflection, or that might be more likely to serve fake news to people who did not search for it but share similar interests or demographics with people who did. Moreover, online opinions are very public, and this raises important questions regarding motivation, as the desire to belong to a particular group is known to affect both online and offline behaviours.

Certainly, reducing belief in false information requires a multipronged approach that deals with both the platforms and the people that produce and consume such information. Our responsibility as citizens is to pressure platforms and politicians to deal with the problem, while doing our best to scrutinise our “blind spots” and the information we share and believe in.