Information manipulation is widespread in today's media environment. Online networks have disrupted the gatekeeping role of traditional media by allowing various actors to influence the public agenda; they have also allowed automated accounts (or bots) to blend with human activity in the flow of information. Here, we assess the impact that bots had on the dissemination of content during two contentious political events that evolved in real time on social media. We focus on events of heightened political tension because they are particularly susceptible to information campaigns designed to mislead or exacerbate conflict. We compare the visibility of bots with that of human accounts, verified accounts, and mainstream news outlets. Our analyses combine millions of posts from a popular microblogging platform with web-tracking data collected from two different countries and timeframes. We employ tools from network science, natural language processing, and machine learning to analyze the diffusion structure, the content of the messages diffused, and the actors behind those messages as the political events unfolded. We show that verified accounts are significantly more visible than unverified bots in the coverage of the events, but also that bots attract more attention than human accounts. Our findings highlight that social media and the web are very different news ecosystems in terms of prevalent news sources and that both humans and bots contribute to generating discrepancies in news visibility through their activity.
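No code accompanies these abstracts; purely as a minimal sketch of the kind of visibility comparison described above, the snippet below computes mean reshares per post for each account class. The pandas layout, column names, and numbers are illustrative assumptions, not the study's data.

```python
# Hypothetical sketch: compare average visibility (reshares per post)
# across account classes, as in the bot-vs-human visibility analysis.
# Column names and values are illustrative assumptions.
import pandas as pd

posts = pd.DataFrame({
    "account_class": ["bot", "human", "verified", "news_outlet",
                      "bot", "human", "verified", "news_outlet"],
    "reshares":      [12, 3, 45, 80, 7, 1, 30, 64],
})

# Mean reshares per post is one simple proxy for visibility.
visibility = (posts.groupby("account_class")["reshares"]
                   .mean()
                   .sort_values(ascending=False))
print(visibility)
```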
Understanding the Effects of Large Language Model (LLM)-driven Adversarial Social Influences in Online Information Spread
Misinformation on social media poses significant societal challenges, particularly with the rise of large language models (LLMs) that can amplify its realism and reach. This study examines how adversarial social influence generated by LLM-powered bots affects people's online information processing. Via a pre-registered, randomized human-subject experiment, we examined the effects of two types of LLM-driven adversarial influence: bots posting comments contrary to the news veracity and bots replying adversarially to human comments. Results show that both forms of influence significantly reduce participants' ability to detect misinformation and discern true news from false. Additionally, adversarial comments were more effective than replies in discouraging the sharing of real news. The impact of these influences was moderated by political alignment, with participants more susceptible when the news conflicted with their political leanings. Guided by these findings, we conclude by discussing targeted interventions to combat misinformation spread by adversarial social influence.
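The abstract does not include the analysis itself; as a hedged illustration of how condition effects like these might be estimated, the sketch below fits a logistic regression of detection accuracy on a randomized condition and its interaction with political (in)congruence. The variable names and simulated data are assumptions for illustration, not the study's materials.

```python
# Hypothetical sketch: estimate how adversarial-influence conditions
# affect misinformation detection, mirroring the experiment's design.
# The simulated data and variable names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    # randomized condition: control, adversarial comments, adversarial replies
    "condition": rng.choice(["control", "comment", "reply"], size=n),
    # whether the news item conflicts with the participant's politics
    "incongruent": rng.integers(0, 2, size=n),
})
# simulated outcome: correctly judged the item's veracity (1) or not (0)
df["correct"] = rng.integers(0, 2, size=n)

# Logistic regression of detection accuracy on condition and its
# interaction with political (in)congruence.
model = smf.logit("correct ~ C(condition) * incongruent", data=df).fit()
print(model.summary())
```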
- Award ID(s): 2229876
- PAR ID: 10663424
- Publisher / Repository: ACM
- Date Published:
- Page Range / eLocation ID: 1 to 7
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Although significant efforts, such as removing false claims and promoting reliable sources, have been made to combat the COVID-19 misinfodemic, it remains an unsolved societal challenge without a proper understanding of susceptible online users, i.e., those who are likely to be attracted by, believe, and spread misinformation. This study attempts to answer who constitutes the population vulnerable to online misinformation in the pandemic, and what robust features and short-term behavior signals distinguish susceptible users from others. Using a 6-month longitudinal user panel on Twitter collected from a geopolitically diverse, network-stratified sample in the US, we distinguish different types of users, ranging from social bots to humans with various levels of engagement with COVID-related misinformation. We then identify users' online features and situational predictors that correlate with their susceptibility to COVID-19 misinformation. This work brings unique contributions. First, contrary to prior studies on bot influence, our analysis shows that social bots' contribution to misinformation sharing was surprisingly low, and human-like users' misinformation behaviors exhibit heterogeneity and temporal variability. While the sharing of misinformation was highly concentrated, the risk of occasionally sharing misinformation for average users remained alarmingly high. Second, our findings highlight the political sensitivity, activeness, and responsiveness to emotionally charged content among susceptible users. Third, we demonstrate a feasible solution to efficiently predict users' transient susceptibility solely based on their short-term news consumption and exposure from their networks (a minimal sketch of such a predictor appears after this list). Our work has implications for designing effective intervention mechanisms to mitigate misinformation dissemination.
- Social and political bots have a small but strategic role in Venezuelan political conversations. These automated scripts generate content through social media platforms and then interact with people. In this preliminary study on the use of political bots in Venezuela, we analyze the tweeting, following, and retweeting patterns for the accounts of prominent Venezuelan politicians and prominent Venezuelan bots. We find that bots generate a very small proportion of all the traffic about political life in Venezuela. Bots are used to retweet content from Venezuelan politicians, but the effect is subtle: less than 10 percent of all retweets come from bot-related platforms. Nonetheless, we find that the most active bots are those used by Venezuela's radical opposition. Bots pretend to be political leaders, government agencies, and political parties more often than citizens. Finally, bots promote innocuous political events more than they attack opponents or spread misinformation.
- In an era increasingly affected by natural and human-caused disasters, the role of social media in disaster communication has become ever more critical. Despite substantial research on social media use during crises, a significant gap remains in detecting crisis-related misinformation. Detecting deviations in information is fundamental for identifying and curbing the spread of misinformation. This study introduces a novel Information Switching Pattern Model to identify dynamic shifts in perspectives among users who mention each other in crisis-related narratives on social media (a rough sketch of this switching idea appears after this list). These shifts serve as evidence of crisis misinformation affecting user-mention network interactions. The study utilizes advanced natural language processing, network science, and census data to analyze geotagged tweets related to compound disaster events in Oklahoma in 2022. The impact of misinformation is revealed by distinct engagement patterns among various user types, such as bots, private organizations, non-profits, government agencies, and news media, throughout different disaster stages. These patterns show how different disasters influence public sentiment, highlight the heightened vulnerability of mobile home communities, and underscore the importance of education and transportation access in crisis response. Understanding these engagement patterns is crucial for detecting misinformation and leveraging social media as an effective tool for risk communication during disasters.
- Twitter bot detection is vital in combating misinformation and safeguarding the integrity of social media discourse. While malicious bots are becoming more and more sophisticated and personalized, standard bot detection approaches remain agnostic to the social environments (henceforth, communities) the bots operate in. In this work, we introduce community-specific bot detection, estimating the percentage of bots given the context of a community. Our method, BotPercent, is an amalgamation of Twitter bot detection datasets and feature-, text-, and graph-based models, adjusted to a particular community on Twitter. We introduce an approach that performs confidence calibration across bot detection models (sketched below), which addresses generalization issues in existing community-agnostic models targeting individual bots and leads to more accurate community-level bot estimations. Experiments demonstrate that BotPercent achieves state-of-the-art performance in community-level Twitter bot detection across both balanced and imbalanced class distribution settings, presenting a less biased estimator of Twitter bot populations within the communities we analyze. We then analyze bot rates in several Twitter groups, including users who engage with partisan news media, political communities in different countries, and more. Our results reveal that the presence of Twitter bots is not homogeneous but exhibits a spatiotemporal distribution with considerable heterogeneity, which should be taken into account for content moderation and social media policy making.
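For the short-term susceptibility prediction mentioned in the first related item above, here is a minimal sketch, assuming per-user feature vectors built from a recent window of news consumption and network exposure, with simulated labels standing in for observed sharing behavior:

```python
# Hypothetical sketch: predict transient susceptibility to misinformation
# from short-term news consumption and network-exposure features.
# Feature definitions, labels, and data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.poisson(2, n),   # low-credibility articles consumed this week
    rng.poisson(5, n),   # misinformation posts seen from followees
    rng.random(n),       # share of emotionally charged content consumed
])
# toy labels: susceptibility loosely driven by consumption and exposure
y = (X[:, 0] + X[:, 1] + rng.normal(0, 2, n) > 7).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```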
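The Information Switching Pattern Model in the disaster-communication item is not specified in its abstract; as a rough stand-in for the underlying idea, this sketch flags posts where a user's sentiment polarity flips within a mention exchange. The VADER scorer and threshold are assumptions, not the paper's method:

```python
# Hypothetical sketch: flag "switching" points where a user's sentiment
# polarity flips across consecutive posts in a mention exchange.
# Requires: nltk.download("vader_lexicon")
from nltk.sentiment import SentimentIntensityAnalyzer

def switch_points(posts, threshold=0.3):
    """Return indices where sentiment polarity flips between posts."""
    sia = SentimentIntensityAnalyzer()
    scores = [sia.polarity_scores(p)["compound"] for p in posts]
    return [i for i in range(1, len(scores))
            if scores[i - 1] * scores[i] < 0              # sign flip
            and abs(scores[i] - scores[i - 1]) > threshold]

posts = ["The shelters are safe and well stocked.",
         "Actually I heard the shelters are dangerous and overcrowded."]
print(switch_points(posts))  # -> [1] if a polarity flip is detected
```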
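Finally, for the BotPercent item, the sketch below shows only the aggregation step its abstract describes: temperature-calibrating per-account bot probabilities from feature-, text-, and graph-based detectors, then averaging them into a community-level bot share. The probabilities and temperatures are placeholders; in practice the temperatures would be fit on held-out data:

```python
# Hypothetical sketch of community-level bot estimation in the spirit of
# BotPercent: calibrate each detector's probabilities, average across
# detectors, then report the expected bot share of a community.
import numpy as np

def calibrate(p, temperature):
    """Temperature-scale a probability via its logit (one common
    confidence-calibration technique)."""
    logit = np.log(p / (1 - p))
    return 1 / (1 + np.exp(-logit / temperature))

# Placeholder per-account bot probabilities from three detectors.
feature_p = np.array([0.9, 0.2, 0.6, 0.1])
text_p    = np.array([0.8, 0.3, 0.7, 0.2])
graph_p   = np.array([0.7, 0.1, 0.8, 0.1])

# Calibrate each model with an illustrative temperature, then average
# across models (per account) and across accounts (per community).
ensemble = np.mean([calibrate(feature_p, 1.5),
                    calibrate(text_p, 0.8),
                    calibrate(graph_p, 1.2)], axis=0)
print(f"Estimated bot share: {ensemble.mean():.1%}")
```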

