Abstract: Social media has been transforming political communication dynamics for over a decade. Here, using nearly a billion tweets, we analyse the change in Twitter’s news media landscape between the 2016 and 2020 US presidential elections. Using political bias and fact-checking tools, we measure the volume of politically biased content and the number of users propagating such information. We then identify influencers, the users with the greatest ability to spread news in the Twitter network. We observe that the fraction of fake and extremely biased content declined between 2016 and 2020. However, results show increasing echo chamber behaviours and latent ideological polarization across the two elections at the user and influencer levels.
                    
                            
                            Curating Quality? How Twitter’s Timeline Algorithm Treats Different Types of News
                        
                    
    
            This article explores how Twitter’s algorithmic timeline influences exposure to different types of external media. We use an agent-based testing method to compare chronological timelines and algorithmic timelines for a group of Twitter agents that emulated real-world archetypal users. We first find that algorithmic timelines exposed agents to external links at roughly half the rate of chronological timelines. Despite the reduced exposure, the proportional makeup of external links remained fairly stable in terms of source categories (major news brands, local news, new media, etc.). Notably, however, algorithmic timelines slightly increased the proportion of “junk news” websites in the external link exposures. While our descriptive evidence does not fully exonerate Twitter’s algorithm, it does characterize the algorithm as playing a fairly minor, supporting role in shifting media exposure for end users, especially considering upstream factors that create the algorithm’s input—factors such as human behavior, platform incentives, and content moderation. We conclude by contextualizing the algorithm within a complex system consisting of many factors that deserve future research attention. 
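To make the kind of measurement described above concrete, here is a minimal sketch, assuming hypothetical agent-collected timeline data (the Tweet fields, categories, and example tweets below are invented and are not the study's code or dataset). It computes the external-link exposure rate and the source-category makeup of those links for a chronological versus an algorithmic timeline.

```python
# Minimal sketch with invented data; not the study's code or dataset.
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tweet:
    external_url: Optional[str]      # None if the tweet contains no external link
    source_category: Optional[str]   # e.g. "major news", "local news", "junk news"

def exposure_stats(timeline):
    """Return (share of tweets carrying an external link, counts per source category)."""
    links = [t for t in timeline if t.external_url]
    rate = len(links) / len(timeline) if timeline else 0.0
    return rate, Counter(t.source_category for t in links)

# Invented example timelines collected by one agent under each condition.
chronological = [Tweet("https://example.com/a", "major news"), Tweet(None, None),
                 Tweet("https://example.com/b", "junk news"), Tweet(None, None)]
algorithmic = [Tweet(None, None), Tweet(None, None),
               Tweet("https://example.com/c", "junk news"), Tweet(None, None)]

for name, timeline in (("chronological", chronological), ("algorithmic", algorithmic)):
    rate, categories = exposure_stats(timeline)
    print(f"{name}: link exposure rate = {rate:.2f}, categories = {dict(categories)}")
```

In the study's terms, the first number corresponds to how often a timeline exposes the agent to external links, and the category counts correspond to the proportional makeup of those links by source type.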
- Award ID(s): 1717330
- PAR ID: 10547366
- Publisher / Repository: SAGE Publications
- Date Published:
- Journal Name: Social Media + Society
- Volume: 7
- Issue: 3
- ISSN: 2056-3051
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- A recent article in Science by Guess et al. estimated the effect of Facebook's news feed algorithm on exposure to misinformation and political information among Facebook users. However, its reporting and conclusions did not account for a series of temporary emergency changes to Facebook's news feed algorithm in the wake of the 2020 U.S. presidential election that were designed to diminish the spread of voter-fraud misinformation. Here, we demonstrate that these emergency measures systematically reduced the amount of misinformation in the control group of the study, which was using the news feed algorithm. This issue may have led readers to misinterpret the results of the study and to conclude that the Facebook news feed algorithm used outside of the study period mitigates political misinformation compared to a reverse-chronological feed.
- This paper presents an algorithm audit of the Google Top Stories box, a prominent component of search engine results and a powerful driver of traffic to news publishers. As such, it is important in shaping user attention towards news outlets and topics. By analyzing the number of appearances of news article links, we contribute a series of novel analyses that provide an in-depth characterization of news source diversity and its implications for attention via Google search. We present results indicating a considerable degree of source concentration (with variation among search terms), a slight exaggeration in the ideological skew of news in comparison to a baseline, and a quantification of how the presentation of items translates into traffic and attention for publishers. We contribute insights that underscore the power that Google wields in exposing users to diverse news information, and we raise important questions and opportunities for future work on algorithmic news curation.
- How social media platforms can fairly conduct content moderation is gaining attention from society at large. Researchers from HCI and CSCW have investigated whether certain factors affect how users perceive moderation decisions as fair or unfair. However, little attention has been paid to unpacking how users' perceptions of (un)fairness form from their moderation experiences, especially for users who monetize their content. By interviewing 21 for-profit YouTubers (i.e., video content creators), we found three primary ways through which participants assess moderation fairness: equality across their peers, consistency across moderation decisions and policies, and their voice in algorithmic visibility decision-making processes. Building upon these findings, we discuss how our participants' fairness perceptions demonstrate a multi-dimensional notion of moderation fairness and how YouTube implements an algorithmic assemblage to moderate YouTubers. We derive translatable design considerations for a fairer moderation system on platforms affording creator monetization.
- Algorithmic personalization of news and social media content aims to improve user experience; however, there is evidence that this filtering can have the unintended side effect of creating homogeneous "filter bubbles," in which users are over-exposed to ideas that conform with their preexisting perceptions and beliefs. In this paper, we investigate this phenomenon in the context of political news recommendation algorithms, which have important implications for civil discourse. We first collect and curate over 900K news articles from 41 sources, annotated by topic and partisan lean. We then conduct simulation studies to investigate how different algorithmic strategies affect filter bubble formation. Drawing on Pew studies of political typologies, we identify heterogeneous effects based on the user's pre-existing preferences. For example, we find that i) users with more extreme preferences are shown less diverse content but have higher click-through rates than users with less extreme preferences, ii) content-based and collaborative-filtering recommenders result in markedly different filter bubbles, and iii) when users have divergent views on different topics, recommenders tend to have a homogenization effect.
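As a rough, hypothetical illustration of the kind of simulation described in the last item above (not the authors' code; the articles, lean scores, and recommender below are invented), this sketch assigns each article a partisan-lean score, applies a toy content-based recommender that favors articles closest to a simulated user's own lean, and reports the lean spread of each user's recommendations as a crude diversity proxy.

```python
# Minimal sketch of a toy filter-bubble simulation; all data and parameters are invented
# and this is not the paper's simulation code.
import random
import statistics

random.seed(0)
# Invented corpus: each article gets a partisan-lean score in [-1, 1].
articles = [{"id": i, "lean": random.uniform(-1.0, 1.0)} for i in range(500)]

def content_based_recommend(user_lean, k=10):
    """Toy content-based recommender: return the k articles closest to the user's lean."""
    return sorted(articles, key=lambda a: abs(a["lean"] - user_lean))[:k]

for user_lean in (-0.9, -0.3, 0.0, 0.3, 0.9):
    recs = content_based_recommend(user_lean)
    spread = statistics.pstdev([a["lean"] for a in recs])  # crude diversity proxy
    print(f"user lean {user_lean:+.1f}: recommended-lean spread = {spread:.3f}")
```

A fuller simulation along the lines the abstract describes would also model click-through behavior and include a collaborative-filtering baseline for comparison.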