

Title: Investigating the effect of selective exposure, audience fragmentation, and echo-chambers on polarization in dynamic media ecosystems
Abstract

The degree of polarization in many societies has become a pressing concern in media studies. Typically, it is argued that the internet and social media have created more media producers than ever before, allowing individual, biased media consumers to expose themselves only to what already confirms their beliefs, leading to polarized echo-chambers that further deepen polarization. This work introduces extensions to the recent Cognitive Cascades model of Rabb et al. to study this dynamic, allowing for simulation of information spread between media and networks of variably biased citizens. Our results partially confirm the above polarization logic, but also reveal several important enabling conditions for polarization to occur: (1) the distribution of media belief must be more polarized than the population; (2) the population must be at least somewhat persuadable to change their beliefs according to new messages they hear; and finally, (3) the media must statically continue to broadcast more polarized messages rather than, say, adjust to appeal more to the beliefs of their current subscribers. Moreover, and somewhat counter-intuitively, under these conditions we find that polarization is more likely to occur when media consumers are exposed to more diverse messages, and that it occurs most often when there are low levels of echo-chambers and fragmentation. These results suggest that polarization is not simply due to biased individuals responding to an influx of media sources in the digital age, but is also a consequence of polarized media conditions within an information ecosystem that supports more diverse exposure than is typically thought.

 
Award ID(s):
1934553
NSF-PAR ID:
10473454
Publisher / Repository:
Springer Science + Business Media
Journal Name:
Applied Network Science
Volume:
8
Issue:
1
ISSN:
2364-8228
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Although scientists agree that climate change is anthropogenic, differing interpretations of evidence in a highly polarized sociopolitical environment impact how individuals perceive climate change. While prior work suggests that individuals experience climate change through local conditions, there is a lack of consensus on how personal experience with extreme precipitation may alter public opinion on climate change. We combine high-resolution precipitation data at the zip-code level with nationally representative public opinion survey results (n = 4008) that examine beliefs in climate change and the perceived cause. Our findings support relationships of well-established value systems (i.e., partisanship, religion) and socioeconomic status with individual opinions of climate change, showing that these values are influential in opinion formation on climate issues. We also show that experiencing characteristics of atypical precipitation (e.g., more variability than normal, increasing or decreasing trends, or highly recurring extreme events) in a local area is associated with increased belief in anthropogenic climate change. This suggests that individuals in communities that experience greater atypical precipitation may be more accepting of messaging and policy strategies directly aimed at addressing climate change challenges. Thus, communication strategies that leverage individual perception of atypical precipitation at the local level may help tap into certain “experiential” processing methods, making climate change feel less distant. These strategies may help reduce polarization and motivate mitigation and adaptation actions.

    Significance Statement

    Public acceptance of anthropogenic climate change is hindered by how related issues are presented, diverse value systems, and information-processing biases. Personal experiences with extreme weather may act as a salient cue that impacts individuals’ perceptions of climate change. We couple a large, nationally representative public opinion dataset with station precipitation data at the zip-code level in the United States. Results are nuanced but suggest that anomalous and variable precipitation in a local area may be interpreted as evidence for anthropogenic climate change. So, relating atypical local precipitation conditions to climate change may help tap into individuals’ experiential processing, sidestep polarization, and tailor communications at the local level.

     
  2.
    The coronavirus disease 2019 (COVID-19) pandemic has created substantial challenges for public health officials who must communicate pandemic-related risks and recommendations to the public. Their efforts have been further hampered by the politicization of the pandemic, including media outlets that question the seriousness and necessity of protective actions. The availability of highly politicized news from online platforms has led to concerns about the notion of “echo chambers,” whereby users are exposed only to information that conforms to and reinforces their existing beliefs. Using a sample of 5,000 US residents, we explored their information-seeking tendencies, reliance on conservative and liberal online media, risk perceptions, and mitigation behaviors. The results of our study suggest that risk perceptions may vary across preferences for conservative or liberal bias; however, our results do not support differences in mitigation behavior across patterns of media use. Further, our findings do not support the notion of echo chambers, but rather suggest that people with lower information-seeking behavior may be more strongly influenced by politicized COVID-19 news. Risk estimates converge at higher levels of information seeking, suggesting that high information seekers consume news from sources across the political spectrum. These results are discussed in terms of their theoretical implications for the study of online echo chambers and their practical implications for public health officials and emergency managers.
  3. The abundance of media options is a central feature of today’s information environment. Many accounts, often based on analysis of desktop-only news use, suggest that this increased choice leads to audience fragmentation, ideological segregation, and echo chambers with no cross-cutting exposure. Contrary to many of those claims, this paper uses observational multiplatform data capturing both desktop and mobile use to demonstrate that coexposure to diverse news is on the rise, and that ideological self-selection does not explain most of that coexposure. We show that mainstream media outlets offer the common ground where ideologically diverse audiences converge online, though our analysis also reveals that more than half of the US online population consumes no online news, underlining the risk of increased information inequality driven by self-selection along lines of interest. For this study, we use an unprecedented combination of observed data from the United States comprising a 5-y time window and involving tens of thousands of panelists. Our dataset traces news consumption across different devices and unveils important differences in news diets when multiplatform or desktop-only access is used. We discuss the implications of our findings for how we think about the current communication environment, exposure to news, and ongoing attempts to limit the effects of misinformation. 
  4. Obeid, I.; Selesnik, I.; Picone, J. (Eds.)
    The Neuronix high-performance computing cluster allows us to conduct extensive machine learning experiments on big data [1]. This heterogeneous cluster uses innovative scheduling technology, Slurm [2], that manages a network of CPUs and graphics processing units (GPUs). The GPU farm consists of a variety of processors ranging from low-end consumer grade devices such as the Nvidia GTX 970 to higher-end devices such as the GeForce RTX 2080. These GPUs are essential to our research since they allow extremely compute-intensive deep learning tasks to be executed on massive data resources such as the TUH EEG Corpus [2]. We use TensorFlow [3] as the core machine learning library for our deep learning systems, and routinely employ multiple GPUs to accelerate the training process.

    Reproducible results are essential to machine learning research. Reproducibility in this context means the ability to replicate an existing experiment – performance metrics such as error rates should be identical and floating-point calculations should match closely. Three examples of ways we typically expect an experiment to be replicable are: (1) The same job run on the same processor should produce the same results each time it is run. (2) A job run on a CPU and GPU should produce identical results. (3) A job should produce comparable results if the data is presented in a different order. System optimization requires an ability to directly compare error rates for algorithms evaluated under comparable operating conditions. However, it is a difficult task to exactly reproduce the results for large, complex deep learning systems that often require more than a trillion calculations per experiment [5]. This is a fairly well-known issue and one we will explore in this poster.

    Researchers must be able to replicate results on a specific data set to establish the integrity of an implementation. They can then use that implementation as a baseline for comparison purposes. A lack of reproducibility makes it very difficult to debug algorithms and validate changes to the system. Equally important, since many results in deep learning research are dependent on the order in which the system is exposed to the data, the specific processors used, and even the order in which those processors are accessed, it becomes a challenging problem to compare two algorithms since each system must be individually optimized for a specific data set or processor. This is extremely time-consuming for algorithm research in which a single run often taxes a computing environment to its limits. Well-known techniques such as cross-validation [5,6] can be used to mitigate these effects, but this is also computationally expensive.

    These issues are further compounded by the fact that most deep learning algorithms are susceptible to the way computational noise propagates through the system. GPUs are particularly notorious for this because, in a clustered environment, it becomes more difficult to control which processors are used at various points in time. Another equally frustrating issue is that upgrades to the deep learning package, such as the transition from TensorFlow v1.9 to v1.13, can also result in large fluctuations in error rates when re-running the same experiment. Since TensorFlow is constantly updating functions to support GPU use, maintaining a historical archive of experimental results that can be used to calibrate algorithm research is quite a challenge. This makes it very difficult to optimize the system or select the best configurations.
    The overall impact of all of these issues described above is significant, as error rates can fluctuate by as much as 25% due to these types of computational issues. Cross-validation is one technique used to mitigate this, but that is expensive since you need to do multiple runs over the data, which further taxes a computing infrastructure already running at max capacity.

    GPUs are preferred when training a large network since these systems train at least two orders of magnitude faster than CPUs [7]. Large-scale experiments are simply not feasible without using GPUs. However, there is a tradeoff to gain this performance. Since all our GPUs use the NVIDIA CUDA® Deep Neural Network library (cuDNN) [8], a GPU-accelerated library of primitives for deep neural networks, this adds an element of randomness into the experiment. When a GPU is used to train a network in TensorFlow, it automatically searches for a cuDNN implementation. NVIDIA’s cuDNN implementation provides algorithms that increase the performance and help the model train quicker, but they are non-deterministic algorithms [9,10]. Since our networks have many complex layers, there is no easy way to avoid this randomness. Instead of comparing each epoch, we compare the average performance of the experiment because it gives us a hint of how our model is performing per experiment, and whether the changes we make are effective.

    In this poster, we will discuss a variety of issues related to reproducibility and introduce ways we mitigate these effects. For example, TensorFlow uses a random number generator (RNG) which is not seeded by default. TensorFlow determines the initialization point and how certain functions execute using the RNG. The solution for this is seeding all the necessary components before training the model. This forces TensorFlow to use the same initialization point and sets how certain layers work (e.g., dropout layers). However, seeding all the RNGs will not guarantee a controlled experiment. Other variables can affect the outcome of the experiment, such as training using GPUs, allowing multi-threading on CPUs, using certain layers, etc.

    To mitigate our problems with reproducibility, we first make sure that the data is processed in the same order during training. Therefore, we save the data from the last experiment to make sure the newer experiment follows the same order. If we allow the data to be shuffled, it can affect the performance due to how the model was exposed to the data. We also specify the float data type to be 32-bit since Python defaults to 64-bit. We try to avoid using 64-bit precision because the numbers produced by a GPU can vary significantly depending on the GPU architecture [11-13]. Controlling precision somewhat reduces differences due to computational noise even though technically it increases the amount of computational noise.

    We are currently developing more advanced techniques for preserving the efficiency of our training process while also maintaining the ability to reproduce models. In our poster presentation we will demonstrate these issues using some novel visualization tools, present several examples of the extent to which these issues influence research results on electroencephalography (EEG) and digital pathology experiments, and introduce new ways to manage such computational issues.
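    As a concrete illustration of the mitigations described above (seeding the RNGs, keeping the data order fixed, and pinning 32-bit precision), the following minimal Python sketch shows one way such a setup could look. It is not the authors' actual pipeline: the seed value and file names are hypothetical, and the TF 1.x equivalent of the seeding call is noted in a comment.

    ```python
    # Minimal illustrative sketch (not the authors' code): seed every RNG that
    # can influence initialization, shuffling, and dropout, keep the data order
    # fixed, and pin 32-bit floats to limit GPU-dependent numerical drift.
    import random

    import numpy as np
    import tensorflow as tf

    SEED = 1337  # hypothetical value; any fixed integer works

    random.seed(SEED)         # Python's built-in RNG
    np.random.seed(SEED)      # NumPy's global RNG
    tf.random.set_seed(SEED)  # TF global seed; tf.set_random_seed() on TF 1.x

    # Load pre-processed data (hypothetical file names) and force 32-bit types,
    # since Python/NumPy default to 64-bit floats.
    features = np.load("features.npy").astype(np.float32)
    labels = np.load("labels.npy").astype(np.int32)

    # Build the input pipeline without shuffling, so every run sees the samples
    # in exactly the same order.
    dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(32)
    ```

    Even with all of these settings fixed, the non-deterministic cuDNN kernels mentioned above can still introduce run-to-run variation on GPUs, which is why the abstract compares average performance across an experiment rather than per-epoch numbers.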
  5. Abstract

    Background

    Despite the diverse student population in the USA, the labor force in Science, Technology, Engineering, and Mathematics (STEM) does not reflect this reality. While restrictive messages about who belongs in STEM likely discourage students, particularly female and minoritized students, from entering these fields, extant research on this topic is typically focused on the negative impact of stereotypes regarding math ability, or the existence of stereotypes about the physical appearance of scientists. Instead, this study builds on the limited body of research that captures a more comprehensive picture of students’ views of scientists, including not only the type of work that they do but also the things that interest them. Specifically, utilizing a sample of approximately 1000 Black and Latinx adolescents, the study employs an intersectional lens to examine whether the prevalence of counter-stereotypical views of scientists, and the association such views have with subsequent intentions to pursue STEM college majors, varies among students from different gender and racial/ethnic groups (e.g., Black female students, Latinx male students).

    Results

    While about half of Black and Latinx students reported holding counter-stereotypical beliefs about scientists, this is significantly more common among female students of color, and among Black female students in particular. Results from logistic regression models indicate that, net of control variables, holding counter-stereotypical beliefs about scientists predicts both young men’s and women’s intentions to major in computer science and engineering, but not intentions to major in either physical science or mathematics. Additionally, among Black and Latinx male students, counter-stereotypical perceptions of scientists are related to a higher likelihood of intending to major in biological sciences.

    Conclusions

    The results support the use of an intersectional approach to consider how counter-stereotypical beliefs about scientists differ across gender and racial/ethnic groups. Importantly, the results also suggest that among Black and Latinx youth, for both female and male students, holding counter-stereotypical beliefs promotes intentions to enter particular STEM fields in which they are severely underrepresented. Implications of these findings and directions for future research, specifically focusing on minoritized students, who are often left out of this body of literature, are discussed.

     