- Journal Name: WWW '23: Proceedings of the ACM Web Conference 2023
- Page Range / eLocation ID: 1572 to 1583
- Medium: X
- Sponsoring Org: National Science Foundation
More Like this
This article explores how Twitter’s algorithmic timeline influences exposure to different types of external media. We use an agent-based testing method to compare chronological timelines and algorithmic timelines for a group of Twitter agents that emulated real-world archetypal users. We first find that algorithmic timelines exposed agents to external links at roughly half the rate of chronological timelines. Despite the reduced exposure, the proportional makeup of external links remained fairly stable in terms of source categories (major news brands, local news, new media, etc.). Notably, however, algorithmic timelines slightly increased the proportion of “junk news” websites in the external link exposures. While our descriptive evidence does not fully exonerate Twitter’s algorithm, it does characterize the algorithm as playing a fairly minor, supporting role in shifting media exposure for end users, especially considering upstream factors that create the algorithm’s input—factors such as human behavior, platform incentives, and content moderation. We conclude by contextualizing the algorithm within a complex system consisting of many factors that deserve future research attention.
Ideological divisions in the United States have become increasingly prominent in daily communication. Accordingly, there has been much research on political polarization, including many recent efforts that take a computational perspective. By detecting political biases in a text document, one can attempt to discern and describe its polarity. Intuitively, the named entities (i.e., the nouns and the phrases that act as nouns) and hashtags in text often carry information about political views. For example, people who use the term “pro-choice” are likely to be liberal and people who use the term “pro-life” are likely to be conservative. In this paper, we seek to reveal political polarities in social-media text data and to quantify these polarities by explicitly assigning a polarity score to entities and hashtags. Although this idea is straightforward, it is difficult to perform such inference in a trustworthy quantitative way. Key challenges include the small number of known labels, the continuous spectrum of political views, and the preservation of both a polarity score and a polarity-neutral semantic meaning in an embedding vector of words. To attempt to overcome these challenges, we propose the Polarity-aware Embedding Multi-task learning (PEM) model. This model consists of (1) a self-supervised context-preservation task, (2) an attention-based tweet-level polarity-inference task, and (3) an adversarial learning task that promotes independence between an embedding’s polarity component and its semantic component. Our experimental results demonstrate that our PEM model can successfully learn polarity-aware embeddings that perform well at tweet-level and account-level classification tasks. We examine a variety of applications—including a study of spatial and temporal distributions of polarities and a comparison between tweets from Twitter and posts from Parler—and we thereby demonstrate the effectiveness of our PEM model. We also discuss important limitations of our work and encourage caution when applying the PEM model to real-world scenarios.
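The attention-based polarity-inference task described above can be illustrated with a minimal sketch. Here we assume, purely for illustration, that each token embedding reserves its last coordinate as the polarity component and the remaining coordinates as the polarity-neutral semantic component; the paper's actual parameterization and attention mechanism may differ.

```python
import numpy as np

def tweet_polarity(token_embeddings, attn_vector):
    """Illustrative attention-based tweet-level polarity inference.

    Assumed split (hypothetical): the last coordinate of each token
    embedding is its polarity component; the rest is the semantic
    component. Attention weights are computed from the semantic
    components, and the tweet's polarity is the attention-weighted
    average of the token polarity components.
    """
    semantic = token_embeddings[:, :-1]   # polarity-neutral part
    polarity = token_embeddings[:, -1]    # per-token polarity score
    scores = semantic @ attn_vector       # attention logits per token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over tokens
    return float(weights @ polarity)
```

With a zero attention vector the weights are uniform, so the tweet polarity reduces to the mean of the token polarity components; a learned attention vector instead emphasizes tokens whose semantics signal political stance.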
Knowledge graph embeddings (KGE) have been extensively studied to embed large-scale relational data for many real-world applications. Existing methods have long ignored the fact that many KGs contain two fundamentally different views: high-level ontology-view concepts and fine-grained instance-view entities. They usually embed all nodes as vectors in one latent space. However, a single geometric representation fails to capture the structural differences between the two views and lacks probabilistic semantics for concepts’ granularity. We propose Concept2Box, a novel approach that jointly embeds the two views of a KG using dual geometric representations. We model concepts with box embeddings, which learn the hierarchy structure and complex relations such as overlap and disjointness among them. Box volumes can be interpreted as concepts’ granularity. Unlike concepts, we model entities as vectors. To bridge the gap between concept box embeddings and entity vector embeddings, we propose a novel vector-to-box distance metric and learn both embeddings jointly. Experiments on both the public DBpedia KG and a newly created industrial KG showed the effectiveness of Concept2Box.
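The idea of a vector-to-box distance can be sketched as follows. This is an illustrative formulation in the spirit of the abstract, not the paper's exact metric: a box is an axis-aligned hyper-rectangle given by a center and per-dimension offset, and the distance penalizes how far an entity vector lies outside the box, plus a down-weighted term for its displacement from the center once inside.

```python
import numpy as np

def vector_to_box_distance(entity_vec, box_center, box_offset, alpha=0.2):
    """Hypothetical vector-to-box distance for a concept box and entity vector.

    The box spans [center - offset, center + offset] per dimension.
    `alpha` < 1 down-weights the inside term so that entities inside
    the box are penalized less than entities outside it.
    """
    lower = box_center - box_offset
    upper = box_center + box_offset
    # Distance from the vector to the box surface (zero if inside).
    outside = np.linalg.norm(np.maximum(entity_vec - upper, 0.0)
                             + np.maximum(lower - entity_vec, 0.0))
    # Distance from the projection onto the box to the box center.
    inside = np.linalg.norm(box_center - np.clip(entity_vec, lower, upper))
    return outside + alpha * inside
```

Under this sketch, an entity vector well inside a large concept box incurs only a small penalty, which matches the intuition that box volume reflects a concept's granularity.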
Learning the dependency relations among entities, and the hierarchy formed by these relations, by mapping entities into an order embedding space can effectively enable several important applications, including knowledge base completion and prerequisite relation prediction. Nevertheless, it is very challenging to learn a good order embedding due to the existence of partial ordering and missing relations in the observed data. Moreover, most application scenarios do not provide non-trivial negative dependency relation instances. We therefore propose a framework that performs dependency relation prediction by exploring both rich semantic and hierarchical structure information in the data. In particular, we propose several negative sampling strategies based on graph-specific centrality properties, which supplement the positive dependency relations with appropriate negative samples to effectively learn order embeddings. This research not only addresses the need to automatically recover missing dependency relations, but also unravels dependencies among entities using several real-world datasets, such as a course hierarchy with prerequisite relations, job hierarchies in organizations, and a paper citation hierarchy. Extensive experiments are conducted on both synthetic and real-world datasets to demonstrate the prediction accuracy as well as to gain insights using the learned order embedding.
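A centrality-based negative sampling strategy of the kind described above can be sketched as follows. This is one illustrative choice (degree centrality with tail corruption); the framework explores several graph-specific centrality properties, and the exact strategies may differ.

```python
import random
from collections import defaultdict

def centrality_negative_samples(edges, num_neg=1, seed=0):
    """Degree-biased negative sampling for order-embedding training (sketch).

    `edges` is a list of observed (u, v) dependency pairs (u precedes v).
    For each positive pair we corrupt the tail, drawing replacement nodes
    in proportion to their degree so that negatives tend to be hard,
    well-connected nodes rather than trivially unrelated ones.
    """
    rng = random.Random(seed)
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    nodes = list(degree)
    weights = [degree[n] for n in nodes]
    positives = set(edges)
    samples = []
    for u, v in edges:
        negs = []
        while len(negs) < num_neg:
            w = rng.choices(nodes, weights=weights, k=1)[0]
            # Reject the original tail, self-pairs, and observed positives.
            if w != u and w != v and (u, w) not in positives:
                negs.append((u, w))
        samples.append(((u, v), negs))
    return samples
```

In practice such sampling addresses the stated problem that most application scenarios provide no non-trivial negative instances: the negatives are manufactured from the positive graph itself.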
The COVID‐19 disease pandemic is one of the most pressing global health issues of our time. Nevertheless, responses to the pandemic exhibit a stark ideological divide, with political conservatives (versus liberals/progressives) expressing less concern about the virus and less behavioral compliance with efforts to combat it. Drawing from decades of research on the psychological underpinnings of ideology, in four studies (total N = 4441) we examine the factors that contribute to the ideological gap in pandemic response—across domains including personality (e.g., empathic concern), attitudes (e.g., trust in science), information (e.g., COVID‐19 knowledge), vulnerability (e.g., preexisting medical conditions), demographics (e.g., education, income), and environment (e.g., local COVID‐19 infection rates). This work provides insight into the most proximal drivers of this ideological divide and also helps fill a long‐standing theoretical and empirical gap regarding how these various ideological differences shape responses to complex real‐world sociopolitical events. Among our key findings are the central role of attitude‐ and belief‐related factors (e.g., trust in science and trust in Trump)—and the relatively weaker influence of several domain‐general personality factors (empathic concern, disgust sensitivity, conspiratorial ideation). We conclude by considering possible explanations for these findings and their broader implications for our understanding of political ideology.
Highlights
Stark ideological differences exist across a wide range of attitudinal and behavioral indices of pandemic response, with more conservative individuals reliably exhibiting less concern about the virus. These findings illustrate the extent to which the pandemic has become politicized.
A range of factors contribute to this ideological gap in pandemic response, but some are substantially more important than others.
Several factors that have received attention in public and academic discourse about the pandemic appear to contribute little, if at all, to the ideological divide. These include news following, scientific literacy, perceived social norms, and knowledge about the virus.
The most critical factors appear to be trust in scientists and trust in Trump, which further highlights the politicization of COVID‐19 and, importantly, the antagonistic nature of these two beliefs. Efforts to change and, especially, disentangle these two attitudes have the potential to be effective interventions.