Title: Reluctant to Share: How Third Person Perceptions of Fake News Discourage News Readers From Sharing “Real News” on Social Media
Rampant fake news on social media has drawn significant attention. Yet much remains unknown about how imbalanced evaluations of fake news' influence on oneself versus on others shape social media users' perceptions and their subsequent attitudes and behavioral intentions regarding social media news. An online survey (N = 335) examined the third-person effect (TPE) of fake news on social media and found that users perceived a greater influence of fake news on others than on themselves. However, although users evaluated fake news as socially undesirable, they remained unsupportive of government censorship as a remedy. In addition, the perceived prevalence of fake news led audiences to report significantly less willingness to share any news on social media, whether encountered online or offline.
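The third-person effect described above is conventionally measured as the gap between a respondent's rating of fake news' influence on others and on themselves. A minimal sketch of that differential, with made-up illustrative ratings (the function name and the 5-point scale are assumptions, not the survey's instrument):

```python
# Hypothetical sketch of the third-person-effect (TPE) differential:
# perceived influence on others minus perceived influence on self.
def tpe_score(self_rating: float, others_rating: float) -> float:
    """Positive values indicate a third-person perception."""
    return others_rating - self_rating

# Illustrative (self, others) ratings on a 5-point scale, not real data.
ratings = [(2.0, 4.5), (3.0, 3.5), (1.5, 4.0)]
scores = [tpe_score(s, o) for s, o in ratings]
mean_tpe = sum(scores) / len(scores)  # sample-level TPE; > 0 suggests the effect
```

A positive sample mean, as in the survey's finding, indicates respondents on average saw others as more influenced by fake news than themselves.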
Award ID(s): 2128642
NSF-PAR ID: 10274045
Author(s) / Creator(s): ;
Date Published:
Journal Name: Social Media + Society
Volume: 6
Issue: 3
ISSN: 2056-3051
Page Range / eLocation ID: 205630512095517
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like This
  1. In recent years, the emergence of fake news outlets has underscored the importance of news literacy. This is particularly critical on social media, where the flood of information makes it difficult for people to assess the veracity of stories from deceitful sources, and people therefore often fail to view these stories skeptically. We explore a way to circumvent this problem by nudging users into making conscious assessments of which online contents are credible. For this purpose, we developed FeedReflect, a browser extension that nudges users to pay more attention and uses reflective questions to engage them in news credibility assessment on Twitter. We recruited a small number of university students to use the tool on Twitter. Both qualitative and quantitative analyses of the study suggest the extension helped people accurately assess the credibility of news. This implies FeedReflect can be used by a broader audience to improve online news literacy.
  2.
    Today social media has become the primary source for news. Via social media platforms, fake news travels at unprecedented speed, reaches global audiences, and puts users and communities at great risk. Therefore, it is extremely important to detect fake news as early as possible. Recently, deep learning based approaches have shown improved performance in fake news detection. However, training such models requires a large amount of labeled data, and manual annotation is time-consuming and expensive. Moreover, due to the dynamic nature of news, annotated samples may become outdated quickly and cannot represent news articles on newly emerged events. Therefore, how to obtain fresh, high-quality labeled samples is the major challenge in employing deep learning models for fake news detection. To tackle this challenge, we propose a reinforced weakly-supervised fake news detection framework, WeFEND, which leverages users' reports as weak supervision to enlarge the amount of training data for fake news detection. The proposed framework consists of three main components: the annotator, the reinforced selector, and the fake news detector. The annotator automatically assigns weak labels to unlabeled news based on users' reports. The reinforced selector uses reinforcement learning techniques to choose high-quality samples from the weakly labeled data and filter out low-quality ones that may degrade the detector's prediction performance. The fake news detector identifies fake news based on the news content. We tested the proposed framework on a large collection of news articles published via WeChat official accounts and their associated user reports. Extensive experiments on this dataset show that the proposed WeFEND model achieves the best performance compared with state-of-the-art methods.
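The annotator-then-selector pipeline can be sketched in a few lines. This is an illustrative toy, not WeFEND's code: the report threshold and confidence cutoff are invented stand-ins (WeFEND learns its selection policy with reinforcement learning rather than using a fixed cutoff):

```python
# Toy sketch of weak supervision from user reports (illustrative only).

def weak_label(report_count: int, threshold: int = 3) -> int:
    """Annotator: weakly label an article as fake (1) once enough
    users have reported it; otherwise label it real (0)."""
    return 1 if report_count >= threshold else 0

def select_high_quality(samples, min_confidence=0.8):
    """Selector: keep weakly labeled samples whose confidence passes a
    fixed cutoff. (WeFEND instead learns this policy via RL; the cutoff
    is a placeholder for that learned policy.)"""
    return [s for s in samples if s["confidence"] >= min_confidence]

samples = [
    {"text": "article a", "reports": 5, "confidence": 0.9},
    {"text": "article b", "reports": 0, "confidence": 0.4},
]
for s in samples:
    s["label"] = weak_label(s["reports"])  # annotator pass
train_set = select_high_quality(samples)   # selector pass
```

The detector would then be trained only on `train_set`, the filtered weakly labeled pool.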
  3. Social media has been transforming political communication dynamics for over a decade. Here, using nearly a billion tweets, we analyse the change in Twitter's news media landscape between the 2016 and 2020 US presidential elections. Using political bias and fact-checking tools, we measure the volume of politically biased content and the number of users propagating such information. We then identify influencers—users with the greatest ability to spread news in the Twitter network. We observe that the fraction of fake and extremely biased content declined between 2016 and 2020. However, the results show increasing echo chamber behaviours and latent ideological polarization across the two elections at the user and influencer levels.
  4. Obradovic, Zoran (Ed.)
    An efficient fake news detector becomes essential as the accessibility of social media platforms increases rapidly. Previous studies mainly designed models on individual data sets and may therefore suffer degraded performance elsewhere. Developing a robust model for a combined data set with diverse knowledge thus becomes crucial. However, training a model on a combined data set requires extensive training time and a sequential workload to reach optimal performance without prior knowledge of the model's parameters. The present study helps solve these issues by introducing a unified training strategy that derives a base classifier structure and all hyperparameters from the individual models, using a pretrained transformer model. The performance of the proposed model is evaluated on three publicly available data sets, namely ISOT and others from the Kaggle website. The results indicate that the proposed unified training strategy surpassed existing models, such as random forests, convolutional neural networks, and long short-term memory networks, reaching 97% accuracy and an F1 score of 0.97. Furthermore, training time was reduced by a factor of roughly 1.5 to 1.8 by removing words shorter than three letters from the input samples. We also performed an extensive performance analysis by varying the number of encoder blocks to build compact models trained on the combined data set; the results show that reducing the number of encoder blocks lowered performance.
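The preprocessing step credited with the training-time reduction is simple to state: drop every token shorter than three letters before tokenization. A minimal sketch (the function name and whitespace tokenization are assumptions; the paper's exact pipeline is not specified in the abstract):

```python
# Illustrative sketch of the input-shortening step: remove words with
# fewer than three letters, shrinking sequences before model training.
def drop_short_words(text: str, min_len: int = 3) -> str:
    return " ".join(w for w in text.split() if len(w) >= min_len)

cleaned = drop_short_words("this is a fake news sample")
```

Shorter input sequences mean fewer tokens per example, which is where the reported 1.5 to 1.8x training-time saving would come from.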
  5. Struggling to curb misinformation, social media platforms are experimenting with design interventions to enhance consumption of credible news. Some of these interventions, such as warning messages, are examples of nudges---a choice-preserving technique to steer behavior. Despite their application, we do not know whether nudges can steer people into making conscious news credibility judgments online and, if so, under what constraints. To answer these questions, we combine nudge techniques with heuristic-based information processing to design NudgeCred, a browser extension for Twitter. NudgeCred directs users' attention to two design cues: the authority of a source and other users' collective opinion on a report, by activating three design nudges---Reliable, Questionable, and Unreliable, each denoting a particular level of credibility for news tweets. In a controlled experiment, we found that NudgeCred significantly helped users (n=430) distinguish news tweets' credibility, unaffected by three behavioral confounds---political ideology, political cynicism, and media skepticism. A five-day field deployment with twelve participants revealed that NudgeCred improved their recognition of news items and their attention towards all of our nudges, particularly towards Questionable. Among other considerations, participants proposed that designers should incorporate heuristics that users would trust. Our work informs nudge-based system design approaches for online media.
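The two cues and three nudge levels suggest a simple decision rule. The mapping below is purely hypothetical, for illustration only; the abstract does not specify how NudgeCred combines the cues:

```python
# Hypothetical mapping from NudgeCred's two cues (source authority,
# collective opinion) to its three nudge levels. Illustrative only;
# the actual system's rule is not given in the abstract.
def nudge_level(authoritative_source: bool, endorsed_by_crowd: bool) -> str:
    if authoritative_source:
        return "Reliable"
    return "Questionable" if endorsed_by_crowd else "Unreliable"
```

Any rule of this shape yields one of the three labels per tweet, which the extension would then render as a visual cue.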