Abstract
We analyze social media activity during one of the largest protest mobilizations in US history to examine ideological asymmetries in the posting of news content. Using an unprecedented combination of four datasets (tracking offline protests, social media activity, web browsing, and the reliability of news sources), we show that there is no evidence of unreliable sources having any prominent visibility during the protest period, but we do identify asymmetries in the ideological slant of the sources shared on social media, with a clear bias towards right-leaning domains. These results support the “amplification of the right” thesis, which points to the structural conditions (social and technological) that lead to higher visibility of content with a partisan bent towards the right. Our findings provide evidence that right-leaning sources gain more visibility on social media and reveal that ideological asymmetries manifest themselves even in the context of movements with progressive goals.
Whose Advantage? Measuring Attention Dynamics across YouTube and Twitter on Controversial Topics
Ideological asymmetries have recently been observed in contested online spaces, where conservative voices seem relatively more pronounced even though liberals are known to have the population advantage on digital platforms. Most prior research, however, focused on a single platform or a single political topic. Whether one ideological group garners more attention across platforms and/or topics, and how the attention dynamics evolve over time, has not been explored. In this work, we present a quantitative study that links collective attention across two social platforms -- YouTube and Twitter -- centered on online activities surrounding popular videos on three controversial political topics: Abortion, Gun Control, and Black Lives Matter, over 16 months. We propose several sets of video-centric metrics to characterize how online attention accumulates for different ideological groups. We find that neither side is on a winning streak: left-leaning videos are overall more viewed and more engaging, but less tweeted, than right-leaning videos. The attention time series unfold more quickly for left-leaning videos but span a longer time for right-leaning videos. Network analysis of the early adopters and tweet cascades shows that information diffusion for left-leaning videos tends to involve centralized actors, while that for right-leaning videos starts earlier in the attention lifecycle. In sum, our findings go beyond the static picture of ideological asymmetries in digital spaces and provide a set of methods to quantify attention dynamics across different social platforms.
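The paper's exact metric definitions are not reproduced here; as a rough illustration of the kind of video-centric attention metrics described above, one might compute per-video engagement, tweets per view, time-to-peak, and attention span from daily counts. All function names and data below are hypothetical, a minimal sketch rather than the authors' implementation:

```python
import numpy as np

def attention_metrics(daily_views, daily_tweets, likes, comments):
    """Toy video-centric attention metrics (illustrative only)."""
    views = np.asarray(daily_views, dtype=float)
    tweets = np.asarray(daily_tweets, dtype=float)
    total_views = views.sum()
    # Engagement rate: reactions (likes + comments) per view.
    engagement = (likes + comments) / total_views
    # Tweets per view: how much cross-platform sharing the video attracts.
    tweets_per_view = tweets.sum() / total_views
    # Time-to-peak: day index of maximum views (how quickly attention unfolds).
    time_to_peak = int(views.argmax())
    # Attention span: number of days with non-negligible activity
    # (here, above 1% of the peak day -- an arbitrary threshold).
    span = int((views > 0.01 * views.max()).sum())
    return {"engagement": engagement,
            "tweets_per_view": tweets_per_view,
            "time_to_peak": time_to_peak,
            "span_days": span}

# Hypothetical video with fast, front-loaded attention.
m = attention_metrics([900, 400, 100, 20, 5], [30, 10, 2, 1, 0],
                      likes=120, comments=40)
```

Under metrics like these, the paper's finding would read as left-leaning videos scoring higher on views and engagement but lower on tweets per view, with smaller time-to-peak and smaller span than right-leaning videos.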
- Award ID(s):
- 2027713
- PAR ID:
- 10334379
- Date Published:
- Journal Name:
- Proceedings of the International AAAI Conference on Weblogs and Social Media
- ISSN:
- 2334-0770
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- Budak, Ceren; Cha, Meeyoung; Quercia, Daniele; Xie, Lexing (Ed.) We present the first large-scale measurement study of cross-partisan discussions between liberals and conservatives on YouTube, based on a dataset of 274,241 political videos from 973 channels of US partisan media and 134M comments from 9.3M users over eight months in 2020. Contrary to a simple narrative of echo chambers, we find a surprising amount of cross-talk: most users with at least 10 comments posted at least once on both left-leaning and right-leaning YouTube channels. Cross-talk, however, was not symmetric. First, based on the user leaning predicted by a hierarchical attention model, we find that conservatives were much more likely to comment on left-leaning videos than liberals on right-leaning videos. Second, YouTube's comment sorting algorithm made cross-partisan comments modestly less visible; for example, comments from conservatives made up 26.3% of all comments on left-leaning videos but just over 20% of the comments in the top 20 positions. Lastly, using Perspective API's toxicity score as a measure of quality, we find that conservatives were not significantly more toxic than liberals when users commented directly on the content of videos. However, when users replied to comments from other users, cross-partisan replies were more toxic than co-partisan replies on both left-leaning and right-leaning videos, with cross-partisan replies being especially toxic on the replier's home turf.
- Political news is often slanted toward its publisher’s ideology and seeks to influence readers by focusing on selected aspects of contentious social and political issues. We investigate political slants in news and their influence on readers by analyzing election-related news and reader reactions to it on Twitter. To this end, we collected election-related news from six major US news publishers who covered the 2020 US presidential elections. We computed each publisher’s political slant based on the favorability of its news toward the two major parties’ presidential candidates. We found that election-related news coverage shows signs of political slant both in news headlines and on Twitter. The difference in coverage of the two candidates between left-leaning (LEFT) and right-leaning (RIGHT) news publishers is statistically significant; the effect size is larger for news on Twitter than for headlines, and news on Twitter expresses stronger sentiments than the headlines. We identified moral foundations in reader reactions to the news on Twitter based on Moral Foundations Theory. Moral foundations in readers’ reactions to LEFT and RIGHT differ to a statistically significant degree, though the effects are small. Further, these shifts in moral foundations differ across social and political issues. User engagement on Twitter is higher for RIGHT than for LEFT. We posit that an improved understanding of slant and influence can enable better ways to combat online political polarization.
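The abstract does not give the slant formula, so here is one minimal way a favorability-based slant score could be sketched: net favorability toward each candidate from counts of favorable and unfavorable items, combined into a single signed score. The function and the counts are hypothetical, not the paper's method:

```python
def slant_score(fav_dem, unfav_dem, fav_rep, unfav_rep):
    """Toy slant score in [-1, 1]: negative leans toward the Democratic
    candidate, positive toward the Republican (illustrative formula only)."""
    # Net favorability per candidate, normalized by item count
    # (max(..., 1) guards against division by zero).
    net_dem = (fav_dem - unfav_dem) / max(fav_dem + unfav_dem, 1)
    net_rep = (fav_rep - unfav_rep) / max(fav_rep + unfav_rep, 1)
    # Positive when coverage is relatively kinder to the Republican candidate.
    return (net_rep - net_dem) / 2

# Hypothetical publisher whose coverage favors the Democratic candidate.
s = slant_score(fav_dem=80, unfav_dem=20, fav_rep=30, unfav_rep=70)
```

A score like this could be computed separately for headlines and for the publisher's tweets; the paper's finding that the effect is larger on Twitter would then appear as a wider LEFT-RIGHT gap in the Twitter-based scores.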
- Disinformation activities that aim to manipulate public opinion pose serious challenges to managing online platforms. One of the most widely used disinformation techniques is bot-assisted fake social engagement, which falsely and quickly amplifies the salience of information at scale. Based on agenda-setting theory, we hypothesize that bot-assisted fake social engagement boosts public attention in the manner intended by the manipulator. Leveraging a proven case of a bot-assisted fake social engagement operation on a highly trafficked news portal, this study examines the impact of fake social engagement on the digital public’s news consumption, search activities, and political sentiment. For that purpose, we used ground-truth labels of the manipulator’s bot accounts, as well as real-time clickstream logs generated by ordinary public users. Results show that bot-assisted fake social engagement operations disproportionately increase the digital public’s attention not only to the topical domain of the manipulator’s interest (i.e., political news) but also to specific attributes of the topic (i.e., political keywords and sentiment) that align with the manipulator’s intention. We discuss managerial and policy implications for increasingly cluttered online platforms.
- The Internet is home to thousands of communities, each with its own unique worldview and associated ideological differences. With new communities constantly emerging and serving as ideological birthplaces, battlegrounds, and bunkers, it is critical to develop a framework for understanding worldviews and ideological distinction. Most existing work, however, takes a predetermined view based on political polarization: the “right vs. left” dichotomy of U.S. politics. In reality, both political polarization -- and worldviews more broadly -- transcend one-dimensional difference and deserve a more complete analysis. Extending the ability of word embedding models to capture the semantic and cultural characteristics of their training corpora, we propose a novel method for discovering the multifaceted ideological and worldview characteristics of communities. Using over 1B comments collected from the largest communities on Reddit.com, representing ~40% of Reddit activity, we demonstrate the efficacy of this approach in uncovering complex ideological differences across multiple axes of polarization.
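One common way embedding-based community analysis of this kind works is to define a semantic axis between two seed-word poles and project community vectors onto it; whether this matches the paper's method is not stated in the abstract, so the sketch below is a generic illustration with a toy two-dimensional embedding table (real work would train embeddings on each community's comment corpus):

```python
import numpy as np

# Toy embedding table; vectors and vocabulary are purely illustrative.
emb = {
    "freedom":    np.array([0.9, 0.1]),
    "regulation": np.array([0.1, 0.9]),
    "taxes":      np.array([0.7, 0.3]),
    "welfare":    np.array([0.2, 0.8]),
}

def axis(pole_a, pole_b):
    """Unit vector for a semantic axis between two seed-word poles."""
    v = emb[pole_a] - emb[pole_b]
    return v / np.linalg.norm(v)

def community_position(words, ax):
    """Project a community's characteristic words onto an ideological axis.
    Positive values lean toward pole_a, negative toward pole_b."""
    vecs = np.stack([emb[w] for w in words])
    return float(vecs.mean(axis=0) @ ax)

ax = axis("freedom", "regulation")
pos = community_position(["taxes"], ax)
```

Repeating the projection over several independently chosen axes (economic, social, and so on) is what yields the multi-axis picture of polarization described above, rather than a single left-right score.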