Social and political bots play a small but strategic role in Venezuelan political conversations. These automated scripts generate content on social media platforms and then interact with people. In this preliminary study of political bot use in Venezuela, we analyze the tweeting, following, and retweeting patterns of accounts belonging to prominent Venezuelan politicians and prominent Venezuelan bots. We find that bots generate a very small proportion of all traffic about political life in Venezuela. Bots are used to retweet content from Venezuelan politicians, but the effect is subtle: less than 10 percent of all retweets come from bot-related platforms. Nonetheless, we find that the most active bots are those used by Venezuela's radical opposition. Bots impersonate political leaders, government agencies, and political parties more often than ordinary citizens. Finally, bots promote innocuous political events more than they attack opponents or spread misinformation.
                            Suspensions of prominent accounts minimally impact platform engagement
                        
                    
    
            Health-related misinformation online poses threats to individual well-being and undermines public health efforts. In response, many social media platforms have temporarily or permanently suspended accounts that spread misinformation, at the risk of losing traffic vital to platform revenue. Here we examine the impact on platform engagement following removal of six prominent accounts during the COVID-19 pandemic. Focused on those who engaged with the removed accounts, we find that suspension did not meaningfully reduce activity on the platform. Moreover, we find that removal of the prominent accounts minimally impacted the diversity of information sources consumed. 
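The engagement analysis described above can be illustrated with a simple before/after comparison of activity among users who had engaged with the removed accounts. The sketch below is our own minimal illustration, not the authors' pipeline; the file name, column names, window length, and suspension date are all hypothetical placeholders.

```python
import pandas as pd

# Minimal sketch of a pre/post engagement comparison around an account
# removal. The file, schema, and date below are hypothetical placeholders,
# not the study's actual data or methods.
posts = pd.read_csv("posts.csv", parse_dates=["timestamp"])  # one row per post
# Restrict to users who had engaged with the removed accounts beforehand
# (assumed here to be flagged in an `engaged_with_removed` column).
posts = posts[posts["engaged_with_removed"]]

SUSPENSION_DATE = pd.Timestamp("2021-01-08")  # placeholder removal date
WINDOW = pd.Timedelta(days=30)

before = posts[(posts["timestamp"] >= SUSPENSION_DATE - WINDOW)
               & (posts["timestamp"] < SUSPENSION_DATE)]
after = posts[(posts["timestamp"] >= SUSPENSION_DATE)
              & (posts["timestamp"] < SUSPENSION_DATE + WINDOW)]

# Average posts per user per day in each window; a per-user change near
# zero is consistent with suspension not meaningfully reducing activity.
per_user = pd.DataFrame({
    "before": before.groupby("user_id").size() / WINDOW.days,
    "after": after.groupby("user_id").size() / WINDOW.days,
}).fillna(0)
print((per_user["after"] - per_user["before"]).describe())
```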
- Award ID(s): 2120496
- PAR ID: 10524489
- Publisher / Repository: SocArxiv
- Date Published:
- Format(s): Medium: X
- Institution: University of Washington
- Sponsoring Org: National Science Foundation
More Like this
- The global spread of the novel coronavirus is affected by the spread of related misinformation—the so-called COVID-19 Infodemic—that makes populations more vulnerable to the disease through resistance to mitigation efforts. Here, we analyze the prevalence and diffusion of links to low-credibility content about the pandemic across two major social media platforms, Twitter and Facebook. We characterize cross-platform similarities and differences in popular sources, diffusion patterns, influencers, coordination, and automation. Comparing the two platforms, we find divergence in the prevalence of popular low-credibility sources and suspicious videos. A minority of accounts and pages exert a strong influence on each platform. These misinformation "superspreaders" are often associated with low-credibility sources and tend to be verified by the platforms. On both platforms, there is evidence of coordinated sharing of Infodemic content. The overt nature of this manipulation points to the need for societal-level solutions in addition to mitigation strategies within the platforms. However, we highlight limits imposed by inconsistent data-access policies on our capability to study harmful manipulation of information ecosystems.
- Misinformation online poses a range of threats, from subverting democratic processes to undermining public health measures. Proposed solutions range from encouraging more selective sharing by individuals to removing false content and the accounts that create or promote it. Here we provide a framework to evaluate interventions aimed at reducing viral misinformation online, both in isolation and in combination. We begin by deriving a generative model of viral misinformation spread, inspired by research on infectious disease. Applying this model to a large corpus (10.5 million tweets) of misinformation events that occurred during the 2020 US election, we reveal that commonly proposed interventions are unlikely to be effective in isolation. However, our framework demonstrates that a combined approach can achieve a substantial reduction in the prevalence of misinformation. Our results highlight a practical path forward as misinformation online continues to threaten vaccination efforts, equity, and democratic processes around the globe. (A toy simulation illustrating this intervention framework appears after this list.)
- Who's in the Crowd Matters: Cognitive Factors and Beliefs Predict Misinformation Assessment Accuracy. Misinformation runs rampant on social media and has been tied to adverse health behaviors such as vaccine hesitancy. Crowdsourcing can be a means to detect and impede the spread of misinformation online. However, past studies have not deeply examined the individual characteristics, such as cognitive factors and biases, that predict crowdworker accuracy at identifying misinformation. In our study (n = 265), Amazon Mechanical Turk (MTurk) workers and university students assessed the truthfulness and sentiment of COVID-19-related tweets and answered several surveys on personal characteristics. Results support the viability of crowdsourcing for assessing misinformation and content stance (i.e., sentiment) on ongoing and politically charged topics like the COVID-19 pandemic; however, alignment with experts depends on who is in the crowd. Specifically, we find that respondents with high Cognitive Reflection Test (CRT) scores, conscientiousness, and trust in medical scientists are more aligned with experts, while respondents with high Need for Cognitive Closure (NFCC) and those who lean politically conservative are less aligned with experts. We also see differences between recruitment platforms: our data show that university students are on average more aligned with experts than MTurk workers, most likely due to overall differences in participant characteristics on each platform. Results offer transparency into how crowd composition affects misinformation and stance assessment and have implications for future crowd recruitment and filtering practices.
- This paper introduces and presents a first analysis of a uniquely curated dataset of misinformation, disinformation, and rumors spreading on Twitter about the 2020 U.S. election. Previous research on misinformation—an umbrella term for false and misleading content—has largely focused either on broad categories, using a finite set of keywords to cover a complex topic, or on a few focused case studies, with increased precision but limited scope. Our approach, by comparison, leverages real-time reports collected from September through November 2020 to develop a comprehensive dataset of tweets connected to 456 distinct misinformation stories from the 2020 U.S. election (our ElectionMisinfo2020 dataset), 307 of which sowed doubt in the legitimacy of the election. By relying on real-time incidents and streaming data, we generate a curated dataset that not only provides more granularity than a large collection based on a finite number of search terms, but also offers an improved opportunity for generalization compared to a small set of case studies. Though the emphasis is on misleading content, not all of the tweets linked to a misinformation story are false: some are questions, opinions, corrections, or factual content that nonetheless contributes to misperceptions. Along with a detailed description of the data, this paper provides an analysis of a critical subset of election-delegitimizing misinformation in terms of size, content, temporal diffusion, and partisanship. We label key ideological clusters of accounts within interaction networks, describe common misinformation narratives, and identify the accounts that repeatedly spread misinformation. We document the asymmetry of misinformation spread: accounts associated with support for President Biden shared stories in ElectionMisinfo2020 far less than accounts supporting his opponent. That asymmetry remained among the accounts that were repeatedly influential in the spread of misleading content sowing doubt in the election: all but two of the top 100 'repeat spreader' accounts were supporters of then-President Trump. These findings support the implementation and enforcement of 'strike rules' on social media platforms, directly addressing the outsized role of repeat spreaders.
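The second item above describes a generative model of misinformation spread inspired by infectious-disease research, and finds that interventions which fail in isolation can succeed in combination. The toy branching-process sketch below illustrates that intuition; the Poisson offspring assumption and every parameter value are our own illustrative choices, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(42)

def cascade_size(r0, removal_prob=0.0, sharing_reduction=0.0, max_size=10_000):
    """Simulate one misinformation cascade as a simple branching process.

    r0: mean reshares per surviving post absent any intervention.
    removal_prob: chance a post is taken down before it spreads
        (content moderation).
    sharing_reduction: fractional drop in resharing (e.g., nudges toward
        more selective sharing).
    All values are illustrative, not the paper's fitted parameters.
    """
    r_eff = r0 * (1.0 - sharing_reduction)
    size, frontier = 1, 1
    while frontier > 0 and size < max_size:
        # Posts surviving moderation each spawn a Poisson number of reshares.
        survivors = rng.binomial(frontier, 1.0 - removal_prob)
        frontier = int(rng.poisson(r_eff, size=survivors).sum())
        size += frontier
    return size

# With r0 = 1.5, a 20% intervention alone leaves the process supercritical
# (r_eff = 1.2 > 1), while combining both drives it subcritical
# (r_eff = 1.5 * 0.8 * 0.8 = 0.96 < 1), collapsing cascade sizes.
settings = {
    "no intervention":        {},
    "moderation only":        {"removal_prob": 0.2},
    "selective sharing only": {"sharing_reduction": 0.2},
    "combined":               {"removal_prob": 0.2, "sharing_reduction": 0.2},
}
for name, kwargs in settings.items():
    sizes = [cascade_size(r0=1.5, **kwargs) for _ in range(1000)]
    print(f"{name:>22}: mean cascade size = {np.mean(sizes):8.1f}")
```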