Abstract

Disinformation on social media—commonly called “fake news”—has become a major concern around the world, and many fact-checking initiatives have been launched to mitigate the problem. The way fact-checking results are presented to social media users matters, because if the presentation format is not persuasive, fact-checking may not be effective. For instance, Facebook tested the idea of flagging dubious articles in 2017 but concluded that it was ineffective and subsequently removed the feature. We conducted three experiments with social media users to investigate two different approaches to implementing a fake news flag: one designed to have its primary effect when processed by automatic cognition (System 1) and the other designed to have its primary effect when processed by deliberate cognition (System 2). We found that both interventions were effective, and an intervention that combined both approaches was about twice as effective. We also found that awareness training on the meaning of the flags increased the effectiveness of the System 2 intervention, but not the System 1 intervention, exactly as theory predicts. Believability, in turn, influenced the extent to which users would engage with the article (e.g., read, like, comment, and share). Our results suggest that both theoretical routes can be used—separately or together—in the presentation of fact-checking results in order to reduce the influence of fake news on users.
