Politics and science have become increasingly intertwined. Salient scientific issues, such as climate change, evolution, and stem-cell research, become politicized, pitting partisans against one another. This raises the challenge of how to communicate effectively on such issues. Recent work emphasizes the need for messages tailored to specific groups. Here, we focus on whether generalized messages can also matter. We do so in the context of a highly polarized issue: extreme COVID-19 vaccine resistance. The results show that science-based, moral frame, and social norm messages move behavioral intentions, and do so by the same amount across the population (that is, homogeneous effects). Counter to common portrayals, the politicization of science does not preclude using broad messages that resonate with the entire population.
- PAR ID: 10482956
- Publisher / Repository: British Journal of Political Science
- Journal Name: British Journal of Political Science
- Volume: 53
- Issue: 2
- ISSN: 0007-1234
- Page Range / eLocation ID: 698 to 706
- Sponsoring Org: National Science Foundation
More Like this
-
Young children and science are a perfect fit, and now more than ever they need a strong foundation of science literacy. Early childhood science educator Cindy Hoisington makes a case for increasing quality science instruction in the early grades so that our next generation will become curious observers ready to tackle complex science issues and challenges.
-
Generative Artificial Intelligence (AI) tools, including language models that can create novel content, have shown promise for science communication at scale. However, these tools may show inaccuracies and biases, particularly against marginalized populations. In this work, we examined the potential of GPT-4, a generative AI model, in creating content for climate science communication. We analyzed 100 messages generated by GPT-4 that included descriptors of intersecting identities, science communication mediums, locations, and climate justice issues. Our analyses revealed that community awareness and actions emerged as prominent themes in the generated messages, while systemic critiques of climate justice issues were not as present. We discuss how intersectional lenses can help researchers to examine the underlying assumptions of emerging technology before its integration into learning contexts.
-
BACKGROUND Effective communication is crucial during health crises, and social media has become a prominent platform for public health experts to inform and to engage with the public. At the same time, social media also gives a platform to pseudo-experts who may promote contrarian views. Despite the significance of social media, key elements of communication, such as the use of moral or emotional language and messaging strategy, have not been explored, particularly during the COVID-19 pandemic.
OBJECTIVE This study aims to analyze how notable public health experts (PHEs) and pseudo-experts communicated with the public during the COVID-19 pandemic. Our focus is the emotional and moral language they used in their messages across a range of pandemic issues. We also study their engagement with political elites and how the public engaged with PHEs to better understand the impact of these health experts on the public discourse.
METHODS We gathered a dataset of original tweets from 489 PHEs and 356 pseudo-experts on Twitter (now X) from January 2020 to January 2021, as well as replies to the original tweets from the PHEs. We identified the key issues that PHEs and pseudo-experts prioritized. We also determined the emotional and moral language in both the original tweets and the replies. This approach enabled us to characterize key priorities for PHEs and pseudo-experts, as well as differences in messaging strategy between these two groups. We also evaluated the influence of PHE language and strategy on the public response.
RESULTS Our analyses revealed that PHEs focus on masking, healthcare, education, and vaccines, whereas pseudo-experts discuss therapeutics and lockdowns more frequently. PHEs typically used positive emotional language across all issues, expressing optimism and joy. Pseudo-experts often utilized negative emotions of pessimism and disgust, while limiting positive emotional language to origins and therapeutics. Along the dimensions of moral language, PHEs and pseudo-experts differ on care versus harm, and authority versus subversion, across different issues. Negative emotional and moral language tends to boost engagement in COVID-19 discussions, across all issues. However, the use of positive language by PHEs increases the use of positive language in the public responses. PHEs act as liberal partisans: they express more positive affect in their posts directed at liberals and more negative affect directed at conservative elites. In contrast, pseudo-experts act as conservative partisans. These results provide nuanced insights into the elements that have polarized the COVID-19 discourse.
CONCLUSIONS Understanding the nature of the public response to PHEs' messages on social media is essential for refining communication strategies during health crises. Our findings emphasize the need for experts to consider the strategic use of moral and emotional language in their messages to reduce polarization and enhance public trust.
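The emotion coding described in the methods can be illustrated with a deliberately simplified lexicon-matching sketch. The mini-lexicons and function name below are hypothetical stand-ins; studies of this kind typically rely on validated resources such as the NRC emotion lexicon or classifier models rather than hand-picked word lists.

```python
import re
from collections import Counter

# Hypothetical mini-lexicons for illustration only; not the study's resources.
EMOTION_LEXICON = {
    "optimism": {"hope", "hopeful", "progress", "recover", "improve"},
    "joy": {"glad", "happy", "celebrate", "relief", "great"},
    "pessimism": {"doomed", "hopeless", "fail", "worse", "collapse"},
    "disgust": {"disgusting", "shameful", "corrupt", "vile", "gross"},
}

def emotion_scores(text: str) -> dict:
    """Score a message: lexicon hits per emotion, normalized by token count."""
    tokens = re.findall(r"[a-z']+", text.lower())
    n = max(len(tokens), 1)
    counts = Counter()
    for tok in tokens:
        for emotion, words in EMOTION_LEXICON.items():
            if tok in words:
                counts[emotion] += 1
    return {emotion: counts[emotion] / n for emotion in EMOTION_LEXICON}

scores = emotion_scores("There is hope: vaccines improve outcomes and we will recover.")
```

Aggregating such per-message scores by author group and issue is one simple way to compare the emotional profiles of the two expert populations.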
-
Abstract A diffuse and interdisciplinary field, risk communication research is founded on how we understand the process and purpose of communication more generally. To that end, this article outlines two fundamental functions of risk communication: (1) a pragmatic function, in which senders direct messages at audiences (and vice versa), with various intended (and sometimes unintended) effects; and (2) a constitutive function, in which messages (re)create what we mean by "risk" in a given social context, including how we can, and/or should, relate to it. Although representing distinct epistemological and theoretical social scientific traditions, these functions necessarily coexist in a broader understanding of risk communication, including its so-called "effectiveness." The article concludes by considering how we might enact this fuller understanding of risk communication's dual functions through engagement in collaborative, sustainability science-oriented research.
-
Cremonini, Marco (Ed.)Understanding the spread of false or dangerous beliefs—often called misinformation or disinformation—through a population has never seemed so urgent. Network science researchers have often taken a page from epidemiologists and modeled the spread of false beliefs as similar to how a disease spreads through a social network. However, those disease-inspired models lack an internal model of an individual's set of current beliefs, even though cognitive science has increasingly documented that the interaction between mental models and incoming messages is crucially important for their adoption or rejection. Some computational social science modelers analyze agent-based models in which individuals do have simulated cognition, but these often lack the strengths of network science, namely empirically driven network structures. We introduce a cognitive cascade model: a public opinion diffusion (POD) model that combines a network science belief cascade approach with an internal cognitive model of the individual agents, as in opinion diffusion models, and adds media institutions as agents that initiate opinion cascades. We show that the model, even with a very simplistic belief function capturing cognitive effects cited in disinformation studies (dissonance and exposure), adds expressive power over existing cascade models. We analyze the cognitive cascade model with our simple cognitive function across various graph topologies and institutional messaging patterns. We argue from our results that population-level aggregate outcomes of the model qualitatively match what has been reported in COVID-related public opinion polls, and that the model dynamics lend insights into how to address the spread of problematic beliefs.
The overall model sets up a framework with which social science misinformation researchers and computational opinion diffusion modelers can join forces to understand, and hopefully learn how to best counter, the spread of disinformation and "alternative facts."
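The cascade mechanism described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: beliefs are scalars in [0, 1], a message is adopted only when it lies close enough to the agent's current belief (a crude stand-in for dissonance-based rejection), adoption shifts the belief toward the message, and adopters reshare to their neighbors, which drives the cascade. All names and thresholds are illustrative.

```python
def cognitive_cascade(graph, beliefs, seed_nodes, message, threshold=0.3, shift=0.5):
    """Propagate `message` (a belief value in [0, 1]) from seed nodes.

    graph: dict mapping node -> list of neighbor nodes
    beliefs: dict mapping node -> current belief in [0, 1] (mutated in place)
    Returns the set of nodes that adopted (shifted toward) the message.
    """
    frontier = list(seed_nodes)
    adopted = set()
    while frontier:
        node = frontier.pop()
        if node in adopted:
            continue
        # Dissonance: messages too far from the current belief are rejected;
        # otherwise the agent shifts toward the message (exposure effect).
        if abs(beliefs[node] - message) <= threshold:
            beliefs[node] += shift * (message - beliefs[node])
            adopted.add(node)
            frontier.extend(graph[node])  # resharing continues the cascade
    return adopted

# Toy line network whose beliefs grow more distant from the message.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
beliefs = {0: 0.9, 1: 0.8, 2: 0.5, 3: 0.1}
reached = cognitive_cascade(graph, beliefs, seed_nodes=[0], message=1.0)
```

In this toy run the cascade stalls at node 2, whose belief is too far from the message to adopt, so node 3 is never exposed; a media institution would simply be a node that injects messages into many such cascades.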