

Title: Designing misinformation interventions for all: Perspectives from AAPI, Black, Latino, and Native American community leaders on misinformation educational efforts
This paper examines strategies for making misinformation interventions responsive to four communities of color. Using qualitative focus groups with members of four non-profit organizations, we worked with community leaders to identify misinformation narratives, sources of exposure, and effective intervention strategies in the Asian American Pacific Islander (AAPI), Black, Latino, and Native American communities. Analyzing the findings from those focus groups, we identified several pathways through which misinformation prevention efforts can be more equitable and effective. Building from our findings, we propose steps practitioners, academics, and policymakers can take to better address the misinformation crisis within communities of color. We illustrate how these recommendations can be put into practice through examples from workshops co-designed with a non-profit working on disinformation and media literacy.
Award ID(s):
2120098
PAR ID:
10438308
Author(s) / Creator(s):
Date Published:
Journal Name:
Harvard Kennedy School Misinformation Review
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Understanding the mechanisms by which information and misinformation spread through groups of individual actors is essential to the prediction of phenomena ranging from coordinated group behaviors to misinformation epidemics. Transmission of information through groups depends on the rules that individuals use to transform the perceived actions of others into their own behaviors. Because it is often not possible to directly infer decision-making strategies in situ, most studies of behavioral spread assume that individuals make decisions by pooling or averaging the actions or behavioral states of neighbors. However, whether individuals may instead adopt more sophisticated strategies that exploit socially transmitted information, while remaining robust to misinformation, is unknown. Here, we study the relationship between individual decision-making and misinformation spread in groups of wild coral reef fish, where misinformation occurs in the form of false alarms that can spread contagiously through groups. Using automated visual field reconstruction of wild animals, we infer the precise sequences of socially transmitted visual stimuli perceived by individuals during decision-making. Our analysis reveals a feature of decision-making essential for controlling misinformation spread: dynamic adjustments in sensitivity to socially transmitted cues. This form of dynamic gain control can be achieved by a simple and biologically widespread decision-making circuit, and it renders individual behavior robust to natural fluctuations in misinformation exposure. 
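The dynamic gain control described in the abstract above can be illustrated with a toy model (our own sketch with assumed parameters and function names, not the study's fitted decision-making circuit): an individual weights socially transmitted alarm cues by a gain that drops after false alarms and recovers after true ones, so behavior stays robust when misinformation exposure fluctuates.

```python
# Toy sketch of dynamic gain control over social alarm cues.
# All parameters (lr, g_min, g_max, threshold) are illustrative assumptions.

def update_gain(gain, cue_was_true, lr=0.2, g_min=0.1, g_max=1.0):
    """Move the gain toward g_max after a true alarm, toward g_min after a false one."""
    target = g_max if cue_was_true else g_min
    return gain + lr * (target - gain)

def respond(gain, cue_strength, threshold=0.5):
    """Flee if the gain-weighted social cue exceeds a fixed response threshold."""
    return gain * cue_strength >= threshold

# After repeated false alarms, the same cue strength no longer triggers flight.
gain = 1.0
responses = []
for cue_was_true in [False, False, False, False]:
    responses.append(respond(gain, 0.8))
    gain = update_gain(gain, cue_was_true)
```

In this run the individual responds to the first few alarms, then down-weights the cue as false alarms accumulate; a single true alarm would start restoring sensitivity.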
  2. Past work has explored various ways for online platforms to leverage crowd wisdom for misinformation detection and moderation. Yet, platforms often relegate governance to their communities, and limited research has been done from the perspective of these communities and their moderators. How is misinformation currently moderated in online communities that are heavily self-governed? What role does the crowd play in this process, and how can this process be improved? In this study, we answer these questions through semi-structured interviews with Reddit moderators. We focus on a case study of COVID-19 misinformation. First, our analysis identifies a general moderation workflow model encompassing various processes participants use for handling COVID-19 misinformation. Further, we show that the moderation workflow revolves around three elements: content facticity, user intent, and perceived harm. Next, our interviews reveal that Reddit moderators rely on two types of crowd wisdom for misinformation detection. Almost all participants are heavily reliant on reports from crowds of ordinary users to identify potential misinformation. A second crowd (participants' own moderation teams and expert moderators of other communities) provides support when participants encounter difficult, ambiguous cases. Finally, we use design probes to better understand how different types of crowd signals, from ordinary users and moderators, readily available on Reddit can assist moderators with identifying misinformation. We observe that nearly half of all participants preferred these cues over labels from expert fact-checkers because these cues can help them discern user intent. Additionally, a quarter of the participants distrust professional fact-checkers, raising important concerns about misinformation moderation.
  3.
    Many visible public debates over scientific issues are clouded in accusations of falsehood, which place increasing demands on citizens to distinguish fact from fiction. Yet, constraints on our ability to detect misinformation coupled with our inadvertent motivations to believe false science result in a high likelihood that we will form misperceptions. As science falsehoods are often presented with emotional appeals, we focus our perspective on the roles of emotion and humor in the formation of science attitudes, perceptions, and behaviors. Recent research sheds light on how funny science and emotions can help explain and potentially overcome our inability or lack of motivation to recognize and challenge misinformation. We identify some lessons learned from these related and growing areas of research and conclude with a brief discussion of the ethical considerations of using persuasive strategies, calling for more dialogue among members of the science communication community. 
  4. Communities of color have been historically excluded and marginalized in the ongoing conversations about climate preparedness and resilience at local, national, and global levels. Using focus groups composed of Boston communities of color (Asian American, Black, Latino, and Native American), this study aimed to understand their perspectives on climate change, providing in-depth knowledge of its impact and their views on preparedness and resilience. Research shows that these communities have long been concerned about climate change and emphasize the urgent need to improve climate preparedness. A multi-pronged approach is crucial: listening to communities of color to leverage local knowledge and leadership, engaging in community organizing, advocating for policy change, redirecting attention to institutional resources, and addressing systemic inequalities that exacerbate vulnerabilities. The findings of this study highlight the need for policy changes driven by collaboration and collective action, which can benefit those most negatively impacted by climate change and the lack of preparedness and resilience in Boston and beyond. 
  5.
During COVID-19, misinformation on social media affected the adoption of appropriate prevention behaviors, making it urgent to suppress misinformation to prevent negative public health consequences. Although an array of studies has proposed misinformation suppression strategies, few have investigated the role of predominant credible information during crises, and none has examined its effect quantitatively using longitudinal social media data. Therefore, this research investigates the temporal correlations between credible information and misinformation, and whether predominant credible information can suppress misinformation, for two prevention measures (topics): wearing masks and social distancing, using tweets collected from February 15 to June 30, 2020. We trained Support Vector Machine classifiers to retrieve relevant tweets and to classify tweets containing credible information and misinformation for each topic. Based on cross-correlation analyses of the credible-information and misinformation time series for both topics, we find that previously predominant credible information can lead to a decrease in misinformation (i.e. suppression) with a time lag. The research findings provide empirical evidence for suppressing misinformation with credible information in complex online environments and suggest practical strategies for future information management during crises and emergencies.
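The lagged cross-correlation step described above can be sketched as follows (an illustrative reconstruction, not the authors' code: the daily-count series, lag window, and function names are our own assumptions). Given daily counts of credible and misinformation tweets on one topic, we look for the positive lag at which the two series are most negatively correlated, i.e. where a rise in credible information precedes a drop in misinformation.

```python
# Sketch of a lagged cross-correlation analysis between a credible-information
# time series and a misinformation time series (daily tweet counts).
import numpy as np

def cross_correlation_at_lag(credible, misinfo, lag):
    """Pearson correlation between credible[t] and misinfo[t + lag]."""
    if lag > 0:
        a, b = credible[:-lag], misinfo[lag:]
    elif lag < 0:
        a, b = credible[-lag:], misinfo[:lag]
    else:
        a, b = credible, misinfo
    return float(np.corrcoef(a, b)[0, 1])

def best_suppression_lag(credible, misinfo, max_lag=7):
    """Positive lag (in days) with the strongest negative correlation,
    consistent with credible information leading a decline in misinformation."""
    lags = range(1, max_lag + 1)  # positive lags only: credible leads misinfo
    return min(lags, key=lambda k: cross_correlation_at_lag(credible, misinfo, k))
```

On synthetic series where misinformation mirrors credible information with a three-day delay, `best_suppression_lag` recovers that delay; on real data the classified daily counts from the SVM step would be the inputs.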