


Title: Search as News Curator: The Role of Google in Shaping Attention to News Information
This paper presents an algorithm audit of the Google Top Stories box, a prominent component of search engine results and a powerful driver of traffic to news publishers. As such, it is important in shaping user attention toward news outlets and topics. By analyzing the number of appearances of news article links, we contribute a series of novel analyses that provide an in-depth characterization of news source diversity and its implications for attention via Google search. We present results indicating a considerable degree of source concentration (with variation among search terms), a slight exaggeration in the ideological skew of news in comparison to a baseline, and a quantification of how the presentation of items translates into traffic and attention for publishers. We contribute insights that underscore the power Google wields in exposing users to diverse news information, and raise important questions and opportunities for future work on algorithmic news curation.
Award ID(s):
1717330
NSF-PAR ID:
10096341
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
Page Range / eLocation ID:
1 to 15
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. When one searches for political candidates on Google, a panel composed of recent news stories, known as Top stories, is commonly shown at the top of the search results page. These stories are selected by an algorithm that chooses from hundreds of thousands of articles published by thousands of news publishers. In our previous work, we identified 56 news sources that contributed two-thirds of all Top stories for 30 political candidates running in the primaries of the 2020 US Presidential Election. In this paper, we survey US voters to elicit their familiarity with and trust in these 56 news outlets. We find that some of the most frequent outlets are not familiar to all voters (e.g., The Hill or Politico), or particularly trusted by voters of any political stripe (e.g., Washington Examiner or The Daily Beast). Why, then, are such sources shown so frequently in Top stories? We theorize that Google is sampling news articles from sources with different political leanings to offer balanced coverage. This is reminiscent of the so-called “fairness doctrine” (1949-1987) policy in the United States, which required broadcasters (radio or TV stations) to air contrasting views about controversial matters. Because there are fewer right-leaning publications than center or left-leaning ones, in order to maintain this “fair” balance, hyper-partisan far-right news sources of low trust receive more visibility than some news sources that are more familiar to and trusted by the public.
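The balanced-sampling theory above can be made concrete with a minimal sketch. This is purely illustrative: the bucket labels, pool sizes, and the `balanced_sample` function are hypothetical, and do not describe Google's actual selection algorithm:

```python
import random

# Hypothetical illustration of leaning-balanced sampling: pick an equal
# number of Top stories from each political-leaning bucket, regardless of
# how many candidate articles each bucket contains.
def balanced_sample(articles_by_leaning, per_bucket, seed=0):
    rng = random.Random(seed)
    selection = []
    for leaning, articles in sorted(articles_by_leaning.items()):
        k = min(per_bucket, len(articles))
        selection.extend(rng.sample(articles, k))
    return selection

pool = {
    "left":   [f"left-{i}" for i in range(50)],   # many left-leaning outlets
    "center": [f"center-{i}" for i in range(40)],
    "right":  [f"right-{i}" for i in range(5)],   # fewer right-leaning outlets
}

picks = balanced_sample(pool, per_bucket=3)
# Under such a scheme, each right-leaning article has a far higher chance
# of selection (3 of 5) than each left-leaning one (3 of 50), which would
# produce the visibility effect described above.
```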
  2. De Cristofaro, Emiliano; Nakov, Preslav (Eds.)
    Google’s reviewed claims feature was an early attempt to incorporate additional credibility signals from fact-checking onto the search results page. The feature, which appeared when users searched for the name of a subset of news publishers, was criticized by dozens of publishers for its errors and alleged anti-conservative bias. By conducting an audit of news publisher search results and focusing on the critiques of publishers, we find that there is a lack of consensus among fact-checking ecosystem stakeholders that may be important to address in future iterations of public-facing fact-checking tools. In particular, we find that a lack of transparency, coupled with a lack of consensus on what makes a fact-check relevant to a news article, led to the breakdown of reviewed claims.
  3.
    Headlines play an important role in both news audiences' attention decisions online and in news organizations’ efforts to attract that attention. A large body of research focuses on developing generally applicable heuristics for more effective headline writing. In this work, we measure the importance of a number of theoretically motivated textual features to headline performance. Using a corpus of hundreds of thousands of headline A/B tests run by hundreds of news publishers, we develop and evaluate a machine-learned model to predict headline testing outcomes. We find that the model exhibits modest performance above baseline and further estimate an empirical upper bound for such content-based prediction in this domain, indicating an important role for non-content-based factors in test outcomes. Together, these results suggest that any particular headline writing approach has only a marginal impact, and that understanding reader behavior and headline context are key to predicting news attention decisions. 
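As a purely hypothetical illustration of the kind of theoretically motivated textual features such a content-based model might consume (the feature set and names below are invented for demonstration, not taken from the paper):

```python
# Toy headline feature extractor: each headline is mapped to a small
# dictionary of content-based signals that a downstream machine-learned
# model could use to predict A/B test outcomes.
def headline_features(headline):
    words = headline.split()
    return {
        "n_words": len(words),
        "has_question": int("?" in headline),
        "has_number": int(any(w.strip(".,").isdigit() for w in words)),
        "starts_how_why": int(words[0].lower() in {"how", "why"}) if words else 0,
    }

feats = headline_features("Why 7 newsrooms rewrote their headlines")
```

The paper's finding that such a model only modestly beats baseline suggests features like these capture just a small part of what drives clicks; context and reader behavior carry much of the signal.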
  4.
    In this paper, we provide a large-scale analysis of the display ad ecosystem that supports low-credibility and traditional news sites, with a particular focus on the relationship between retailers and news producers. We study this relationship from both the retailer and news producer perspectives. First, focusing on the retailers, our work reveals high-profile retailers that are frequently advertised on low-credibility news sites, including those that are more likely to be advertised on low-credibility news sites than traditional news sites. Additionally, despite high-profile retailers having more resources and incentive to dissociate with low-credibility news publishers, we surprisingly do not observe a strong relationship between retailer popularity and advertising intensity on low-credibility news sites. We also do not observe a significant difference across different market sectors. Second, turning to the publishers, we characterize how different retailers are contributing to the ad revenue stream of low-credibility news sites. We observe that retailers who are among the top-10K websites on the Internet account for a quarter of all ad traffic on low-credibility news sites. Nevertheless, we show that low-credibility news sites are already becoming less reliant on popular retailers over time, highlighting the dynamic nature of the low-credibility news ad ecosystem. 
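The top-10K share statistic described above reduces to a simple fraction over observed ad impressions; the field names and example data below are hypothetical:

```python
# Hypothetical sketch: the fraction of ad impressions on low-credibility
# sites attributable to retailers ranked among the top-10K websites.
def top_retailer_share(impressions, top10k_retailers):
    total = len(impressions)
    from_top = sum(1 for imp in impressions if imp["retailer"] in top10k_retailers)
    return from_top / total if total else 0.0

imps = [{"retailer": r} for r in ["megashop", "nichestore", "megashop", "tinyshop"]]
share = top_retailer_share(imps, {"megashop"})  # → 0.5
```

Tracking this ratio over time is one way to quantify the declining reliance on popular retailers that the abstract reports.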
  5. This study, based on data collected from a representative sample of adults in the United States, explores the social cognitive variables that motivated Americans to validate rumours on social media about Hurricanes Harvey and Irma, both of which struck in August/September 2017. The results indicate that risk perception and negative emotions are positively related to systematic processing of relevant risk information, and that systematic processing is significantly related to rumour validation through search engines such as Google. In contrast, trust in information about the hurricane is significantly related to validation through official sources, such as FEMA (Federal Emergency Management Agency), and major news outlets such as The New York Times. Trust in information is also significantly related to systematic processing of risk information. The findings of this study suggest that ordinary citizens may be motivated to validate rumours on social media, which is an increasingly important issue in contemporary societies.
