Title: Memory for tweets versus headlines: Does message consistency matter?
Abstract

People routinely use news outlets and social media platforms to keep up with recent events. While information from these common sources often aligns in the messages conveyed, news headlines and microblogs on social media also frequently provide contradictory messages. In this study, we examined how people recall and recognize tweets and news headlines when these sources provide inconsistent messaging. We tested this question in person (Experiment 1) and online (Experiment 2). Participants studied news headlines and tweets that provided either consistent or inconsistent messaging, then completed free recall and recognition memory tasks in sequence and provided confidence ratings for their recognition judgments. Findings were similar across memory tasks and experiments: participants had better memory for tweets than for news headlines regardless of message consistency. We discuss the implications of these findings for understanding memory in the digital age, where social media use is widespread and messaging across sources is often inconsistent.

 
PAR ID:
10430961
Author(s) / Creator(s):
 ;  ;  ;  
Publisher / Repository:
Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name:
Applied Cognitive Psychology
Volume:
37
Issue:
4
ISSN:
0888-4080
Page Range / eLocation ID:
p. 768-784
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Struggling to curb misinformation, social media platforms are experimenting with design interventions to enhance consumption of credible news on their platforms. Some of these interventions, such as the use of warning messages, are examples of nudges, a choice-preserving technique to steer behavior. Despite their application, we do not know whether nudges can steer people into making conscious news credibility judgments online and, if they do, under what constraints. To answer this, we combine nudge techniques with heuristic-based information processing to design NudgeCred, a browser extension for Twitter. NudgeCred directs users' attention to two design cues, the authority of a source and other users' collective opinion on a report, by activating three design nudges (Reliable, Questionable, and Unreliable), each denoting a particular level of credibility for news tweets. In a controlled experiment, we found that NudgeCred significantly helped users (n=430) distinguish the credibility of news tweets, an effect that held regardless of three behavioral confounds: political ideology, political cynicism, and media skepticism. A five-day field deployment with twelve participants revealed that NudgeCred improved their recognition of news items and their attention to all of our nudges, particularly Questionable. Among other considerations, participants proposed that designers should incorporate heuristics that users would trust. Our work informs nudge-based system design approaches for online media.
  2.
    Countering misinformation can reduce belief in the moment, but corrective messages quickly fade from memory. We tested whether the longer-term impact of fact-checks depends on when people receive them. In two experiments (total N = 2,683), participants read true and false headlines taken from social media. In the treatment conditions, “true” and “false” tags appeared before, during, or after participants read each headline. Participants in a control condition received no information about veracity. One week later, participants in all conditions rated the same headlines’ accuracy. Providing fact-checks after headlines (debunking) improved subsequent truth discernment more than providing the same information during (labeling) or before (prebunking) exposure. This finding informs the cognitive science of belief revision and has practical implications for social media platform designers.
  3. Abstract

    The efficacy of fake news corrections in improving memory and belief accuracy may depend on how often adults see false information before it is corrected. Two experiments tested the competing predictions that repeating fake news before corrections will either impair or improve memory and belief accuracy. These experiments also examined whether fake news exposure effects would differ for younger and older adults due to age-related differences in the recollection of contextual details. Younger and older adults read real and fake news headlines that appeared once or three times. Next, they identified fake news corrections among real news headlines. Later, recognition and cued recall tests assessed memory for real news, fake news, and whether corrections had occurred, as well as beliefs in the retrieved details. Repeating fake news increased detection and remembering of corrections, correct real news retrieval, and erroneous fake news retrieval. No age differences emerged in the detection of corrections, but younger adults remembered corrections better than older adults did. At test, correct fake news retrieval for earlier-detected corrections was associated with better real news retrieval. This benefit did not differ between age groups in recognition but was greater for younger than for older adults in cued recall. When detected corrections were not remembered at test, repeated fake news increased memory errors. Overall, both age groups believed correctly retrieved real news more than erroneously retrieved fake news, to a similar degree. These findings suggest that the effects of fake news repetition on subsequent memory accuracy depended on age differences in recollection-based retrieval of fake news and of the fact that it had been corrected.

     
  4. BACKGROUND

    Effective communication is crucial during health crises, and social media has become a prominent platform for public health experts to inform and engage with the public. At the same time, social media also gives a platform to pseudo-experts who may promote contrarian views. Despite the significance of social media, key elements of communication, such as the use of moral or emotional language and messaging strategy, have not been explored in this context, particularly during the COVID-19 pandemic.

    OBJECTIVE

    This study aims to analyze how notable public health experts (PHEs) and pseudo-experts communicated with the public during the COVID-19 pandemic. Our focus is the emotional and moral language they used in their messages across a range of pandemic issues. We also study their engagement with political elites and how the public engaged with PHEs to better understand the impact of these health experts on the public discourse.

    METHODS

    We gathered a dataset of original tweets from 489 PHEs and 356 pseudo-experts on Twitter (now X) from January 2020 to January 2021, as well as replies to the original tweets from the PHEs. We identified the key issues that PHEs and pseudo-experts prioritized. We also determined the emotional and moral language in both the original tweets and the replies. This approach enabled us to characterize key priorities for PHEs and pseudo-experts, as well as differences in messaging strategy between these two groups. We also evaluated the influence of PHE language and strategy on the public response.

    RESULTS

    Our analyses revealed that PHEs focused on masking, healthcare, education, and vaccines, whereas pseudo-experts discussed therapeutics and lockdowns more frequently. PHEs typically used positive emotional language across all issues, expressing optimism and joy. Pseudo-experts often used the negative emotions of pessimism and disgust, while limiting positive emotional language to origins and therapeutics. Along the dimensions of moral language, PHEs and pseudo-experts differed on care versus harm and on authority versus subversion across different issues. Negative emotional and moral language tended to boost engagement in COVID-19 discussions across all issues. However, the use of positive language by PHEs increased the use of positive language in the public responses. PHEs acted as liberal partisans: they expressed more positive affect in posts directed at liberals and more negative affect in posts directed at conservative elites. In contrast, pseudo-experts acted as conservative partisans. These results provide nuanced insights into the elements that have polarized the COVID-19 discourse.

    CONCLUSIONS

    Understanding the nature of the public response to PHEs' messages on social media is essential for refining communication strategies during health crises. Our findings emphasize the need for experts to consider the strategic use of moral and emotional language in their messages to reduce polarization and enhance public trust.

     
  5. The growth of social media use in recent years has enabled data collection from users with widely varying backgrounds. Mass media has been a rich source of information and can be used for countless purposes, from business and personal to political ends. Because more people tend to express their opinions through social media platforms, researchers can collect these data and use them as a free survey of what the public thinks about a particular issue. However, fake news on social networks can have detrimental effects: many irresponsible users generate and promote fake news to influence public belief on a specific issue. The U.S. presidential election is a significant and popular event, and both parties invest considerable effort to win the general election. Undoubtedly, spreading and promoting fake news through social media is one way that negligent individuals or groups sway societies toward their goals. This project examined the impact of removing fake tweets on predicting the electoral outcome of the 2020 general election. Eliminating fake tweets improved the agreement of the model's predictions with the electoral outcome from 74.51 percent to 86.27 percent. Finally, we compared classification model performances, with the best model achieving an accuracy of 99.74634 percent, a precision of 99.99881 percent, a recall of 99.49430 percent, and an F1 score of 99.74592 percent. The study concludes that removing fake tweets improves the agreement of the model's predictions with the electoral outcomes of the U.S. election.
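    The abstract above reports only aggregate metrics, not the underlying confusion-matrix counts, so as a quick consistency check here is a minimal sketch (not the authors' code; variable names are ours) showing that the reported F1 score is the harmonic mean of the reported precision and recall.

    ```python
    # Minimal sketch (not the authors' pipeline): check that the reported F1
    # score is the harmonic mean of the reported precision and recall.
    # The values below are taken directly from the abstract (given as percentages).

    precision = 99.99881 / 100
    recall = 99.49430 / 100

    # F1 = 2 * P * R / (P + R)
    f1 = 2 * precision * recall / (precision + recall)

    print(f"F1 = {f1 * 100:.5f} percent")  # ~99.74592, matching the reported F1
    ```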