Title: Platform Effects on Alternative Influencer Content: Understanding How Audiences and Channels Shape Misinformation Online
People are increasingly exposed to science and political information from social media. One consequence is that these sites play host to “alternative influencers,” who spread misinformation. However, content posted by alternative influencers on different social media platforms is unlikely to be homogeneous. Our study uses computational methods to investigate how dimensions we refer to as audience and channel of social media platforms influence emotion and topics in content posted by “alternative influencers” on different platforms. Using COVID-19 as an example, we find that alternative influencers’ content contained more anger and fear words on Facebook and Twitter compared to YouTube. We also found that these actors discussed substantively different topics in their COVID-19 content on YouTube compared to Twitter and Facebook. With these findings, we discuss how the audience and channel of different social media platforms affect alternative influencers’ ability to spread misinformation online.
Award ID(s):
2027375
NSF-PAR ID:
10302789
Journal Name:
Frontiers in Political Science
Volume:
3
ISSN:
2673-3145
Sponsoring Org:
National Science Foundation
More Like this
  1.
    The global spread of the novel coronavirus is affected by the spread of related misinformation—the so-called COVID-19 Infodemic—that makes populations more vulnerable to the disease through resistance to mitigation efforts. Here, we analyze the prevalence and diffusion of links to low-credibility content about the pandemic across two major social media platforms, Twitter and Facebook. We characterize cross-platform similarities and differences in popular sources, diffusion patterns, influencers, coordination, and automation. Comparing the two platforms, we find divergence among the prevalence of popular low-credibility sources and suspicious videos. A minority of accounts and pages exert a strong influence on each platform. These misinformation “superspreaders” are often associated with the low-credibility sources and tend to be verified by the platforms. On both platforms, there is evidence of coordinated sharing of Infodemic content. The overt nature of this manipulation points to the need for societal-level solutions in addition to mitigation strategies within the platforms. However, we highlight limits imposed by inconsistent data-access policies on our capability to study harmful manipulations of information ecosystems. 
  2. Abstract

    We show that malicious COVID-19 content, including racism, disinformation, and misinformation, exploits the multiverse of online hate to spread quickly beyond the control of any individual social media platform. We provide a first mapping of the online hate network across six major social media platforms. We demonstrate how malicious content can travel across this network in ways that subvert platform moderation efforts. Machine learning topic analysis shows quantitatively how online hate communities are sharpening COVID-19 as a weapon, with topics evolving rapidly and content becoming increasingly coherent. Based on mathematical modeling, we provide predictions of how changes to content moderation policies can slow the spread of malicious content.

  3. Abstract

    Perceived experts (i.e. medical professionals and biomedical scientists) are trusted sources of medical information who are especially effective at encouraging vaccine uptake. The role of perceived experts acting as potential antivaccine influencers has not been characterized systematically. We describe the prevalence and importance of antivaccine perceived experts by constructing a coengagement network of 7,720 accounts based on a Twitter data set containing over 4.2 million posts from April 2021. The coengagement network primarily broke into two large communities that differed in their stance toward COVID-19 vaccines, and misinformation was predominantly shared by the antivaccine community. Perceived experts had a sizable presence across the coengagement network, including within the antivaccine community where they were 9.8% of individual, English-language users. Perceived experts within the antivaccine community shared low-quality (misinformation) sources at similar rates and academic sources at higher rates compared to perceived nonexperts in that community. Perceived experts occupied important network positions as central antivaccine users and bridges between the antivaccine and provaccine communities. Using propensity score matching, we found that perceived expertise brought an influence boost, as perceived experts were significantly more likely to receive likes and retweets in both the antivaccine and provaccine communities. There was no significant difference in the magnitude of the influence boost for perceived experts between the two communities. Social media platforms, scientific communications, and biomedical organizations may focus on more systemic interventions to reduce the impact of perceived experts in spreading antivaccine misinformation.

  4. Redbird, Beth ; Harbridge-Yong, Laurel ; Mersey, Rachel Davis (Ed.)
    In our analysis, we examine whether the labelling of social media posts as misinformation affects the subsequent sharing of those posts by social media users. Conventional understandings of the presentation-of-self and work in cognitive psychology offer competing predictions about whether labelling misinformation in social media posts will reduce sharing behavior. Part of the problem with understanding whether interventions will work hinges on how closely social media interactions mirror other interpersonal interactions with friends and associates in the off-line world. Our analysis looks at rates of misinformation labelling during the height of the COVID-19 pandemic on Facebook and Twitter, and then assesses whether sharing behavior is deterred by misinformation labels applied to social media posts. Our results suggest that labelling is relatively successful at lowering sharing behavior, and we discuss how our results contribute to a larger understanding of the role of existing inequalities and government responses to crises like the COVID-19 pandemic.
  5. Misinformation runs rampant on social media and has been tied to adverse health behaviors such as vaccine hesitancy. Crowdsourcing can be a means to detect and impede the spread of misinformation online. However, past studies have not deeply examined the individual characteristics, such as cognitive factors and biases, that predict crowdworker accuracy at identifying misinformation. In our study (n = 265), Amazon Mechanical Turk (MTurk) workers and university students assessed the truthfulness and sentiment of COVID-19-related tweets and answered several surveys on personal characteristics. Results support the viability of crowdsourcing for assessing misinformation and content stance (i.e., sentiment) related to ongoing and politically charged topics like the COVID-19 pandemic; however, alignment with experts depends on who is in the crowd. Specifically, we find that respondents with high Cognitive Reflection Test (CRT) scores, conscientiousness, and trust in medical scientists are more aligned with experts, while respondents with high Need for Cognitive Closure (NFCC) and those who lean politically conservative are less aligned with experts. We see differences between recruitment platforms as well: our data shows university students are on average more aligned with experts than MTurk workers, most likely due to overall differences in participant characteristics on each platform. Results offer transparency into how crowd composition affects misinformation and stance assessment and have implications for future crowd recruitment and filtering practices.