Struggling to curb misinformation, social media platforms are experimenting with design interventions to enhance the consumption of credible news on their platforms. Some of these interventions, such as warning messages, are examples of nudges: a choice-preserving technique for steering behavior. Despite their application, we do not know whether nudges can steer people into making conscious news credibility judgments online and, if so, under what constraints. To answer these questions, we combine nudge techniques with heuristic-based information processing to design NudgeCred, a browser extension for Twitter. NudgeCred directs users' attention to two design cues, the authority of a source and other users' collective opinion of a report, by activating three design nudges (Reliable, Questionable, and Unreliable), each denoting a particular level of credibility for news tweets. In a controlled experiment, we found that NudgeCred significantly helped users (n=430) distinguish the credibility of news tweets, irrespective of three behavioral confounds: political ideology, political cynicism, and media skepticism. A five-day field deployment with twelve participants revealed that NudgeCred improved their recognition of news items and their attention toward all of our nudges, particularly Questionable. Among other considerations, participants proposed that designers should incorporate heuristics that users would trust. Our work informs nudge-based system design approaches for online media.
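The nudge-assignment logic described above, mapping source authority and collective opinion onto one of three nudges, could be sketched as follows. This is a hypothetical illustration: the function name, the way the two cues are encoded, and the 0.5 threshold are our assumptions, not details taken from NudgeCred itself.

```python
# Hypothetical sketch of a heuristic nudge assignment in the spirit of
# NudgeCred. Cue encodings and the threshold are illustrative assumptions.

def assign_nudge(from_authoritative_source: bool, endorsement_ratio: float) -> str:
    """Map two credibility cues to one of three design nudges.

    from_authoritative_source: whether the tweet cites an authority/mainstream outlet
    endorsement_ratio: fraction of other users' reactions that endorse the report
    """
    if from_authoritative_source:
        return "Reliable"
    # For non-authority sources, fall back on collective opinion.
    if endorsement_ratio >= 0.5:
        return "Questionable"
    return "Unreliable"
```

A tweet from an authority source would be marked Reliable outright, while a tweet from an unknown source would be marked Questionable or Unreliable depending on how other users have reacted to it.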
FeedReflect: A Tool for Nudging Users to Assess News Credibility on Twitter
In recent years, the emergence of fake news outlets has underscored the importance of news literacy. This is particularly critical on social media, where the flood of information makes it difficult for people to spot false stories from such deceitful sources. As a result, people often fail to look skeptically at these stories. We explore a way to circumvent this problem by nudging users into making conscious assessments of what online content is credible. For this purpose, we developed FeedReflect, a browser extension. The extension nudges users to pay more attention to news content and uses reflective questions to engage them in news credibility assessment on Twitter. We recruited a small number of university students to use this tool on Twitter. Both qualitative and quantitative analyses of the study suggest the extension helped people accurately assess the credibility of news. This implies FeedReflect could be used with a broader audience to improve online news literacy.
- PAR ID: 10082960
- Journal Name: CSCW '18 Companion of the 2018 ACM Conference on Computer Supported Cooperative Work and Social Computing
- Page Range / eLocation ID: 205 to 208
- Sponsoring Org: National Science Foundation
More Like this
Misinformation on social media has become a serious concern. Marking news stories with credibility indicators, possibly generated by an AI model, is one way to help people combat misinformation. In this paper, we report the results of two randomized experiments that aim to understand the effects of AI-based credibility indicators on people's perceptions of and engagement with news when people are under social influence, that is, when their judgment of the news is influenced by other people. We find that the presence of AI-based credibility indicators nudges people into aligning their belief in the veracity of news with the AI model's prediction regardless of its correctness, thereby changing people's accuracy in detecting misinformation. However, AI-based credibility indicators have limited impact on people's engagement with either real news or fake news when social influence exists. Finally, when these indicators are provided before people form their own judgments about the news, their effects on the detection and spread of misinformation are larger in the presence of social influence than in its absence. We conclude by providing implications for better utilizing AI to fight misinformation.
In an increasingly information-dense web, how do we ensure that we do not fall for unreliable information? To design better web literacy practices for assessing online information, we need to understand how people perceive the credibility of unfamiliar websites under time constraints. Would they be able to rate real news websites as more credible and fake news websites as less credible? We investigated this research question through an experimental study with 42 participants (mean age = 28.3) who were asked to rate the credibility of various "real news" (n = 14) and "fake news" (n = 14) websites under different time conditions (6s, 12s, 20s) and with a different advertising treatment (with or without ads). Participants did not visit the websites to make their credibility assessments; instead, they interacted with images of website screen captures, which were modified to remove any mention of website names, to avoid the effect of name recognition. Participants rated the credibility of each website on a scale from 1 to 7 and, in follow-up interviews, provided justifications for their credibility scores. Through hypothesis testing, we find that participants, despite limited time exposure to each website (between 6 and 20 seconds), are quite good at distinguishing between real and fake news websites, with real news websites being rated as more credible overall than fake news websites. Our results agree with the well-known theory of "first impressions" from psychology, which has established the human ability to infer character traits from faces. That is, participants can quickly infer meaningful visual and content cues from a website that help them make the right credibility evaluation decision.
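The core comparison in a study like this, whether mean credibility ratings (on the 1 to 7 scale) for real-news sites exceed those for fake-news sites, can be sketched with Welch's t statistic. The ratings below are fabricated for illustration only; they are not the study's data, and the paper does not specify which test the authors used.

```python
# Illustrative sketch (not the authors' code): compare mean credibility
# ratings of real-news vs. fake-news websites with Welch's t statistic.
from statistics import mean, variance

# Made-up per-site mean ratings on a 1-7 scale, one per website (n = 14 each).
real_ratings = [5, 6, 5, 4, 6, 5, 7, 5, 6, 4, 5, 6, 5, 6]
fake_ratings = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3, 2, 4, 3, 2]

def welch_t(a, b):
    """Welch's t statistic for two samples with unequal variances."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

t = welch_t(real_ratings, fake_ratings)
print(f"mean real = {mean(real_ratings):.2f}, "
      f"mean fake = {mean(fake_ratings):.2f}, t = {t:.2f}")
```

A large positive t would support the finding that real-news sites are rated as more credible; in practice one would also compute a p-value (e.g., via `scipy.stats.ttest_ind` with `equal_var=False`).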
Cryptographic tools for authenticating the provenance of web-based information are a promising approach to increasing trust in online news and information. However, making these tools' technical assurances sufficiently usable for news consumers is essential to realizing their potential. We conduct an online study with 160 participants to investigate how the presentation (visual vs. textual) and location (on a news article page or a third-party site) of provenance information affect news consumers' perception of the content's credibility and trustworthiness, as well as the usability of the tool itself. We find that although the visual presentation of provenance information is more challenging to adopt than its text-based counterpart, this approach leads its users to put more faith in the credibility and trustworthiness of digital news, especially when the information is situated within the news article itself.
Sparked by a collaboration between academic researchers and science media professionals, this study sought to test three commonly used headline formats that vary based on whether (and, if so, how) important information is left out of a headline to encourage participants to read the corresponding article; these formats are traditionally formatted headlines, forward-referencing headlines, and question-based headlines. Although headline format did not influence story selection or engagement, it did influence participants' evaluations of both the headline's and the story's credibility (question-based headlines were viewed as the least credible). Moreover, individuals' science curiosity and political views predicted their engagement with environmental stories as well as their views about the credibility of the headline and story. Thus, headline formats appear to play a significant role in audiences' perceptions of online news stories, and science news professionals ought to consider the effects different formats have on readers.