

Search for: All records

Award ID contains: 2227488

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. We propose a simple yet effective solution to the often-competing goals of fairness and utility in classification tasks. While fairness ensures that the model's predictions are unbiased and do not discriminate against any particular group or individual, utility focuses on maximizing the model's predictive performance. This work introduces the idea of leveraging aleatoric uncertainty (e.g., data ambiguity) to improve the fairness-utility trade-off. Our central hypothesis is that aleatoric uncertainty is a key factor in algorithmic fairness, and that samples with low aleatoric uncertainty are modeled more accurately and fairly than those with high aleatoric uncertainty. We then propose a principled model that improves fairness where aleatoric uncertainty is high and improves utility elsewhere. Our approach first intervenes in the data distribution to better decouple aleatoric and epistemic uncertainty. It then introduces a fairness-utility bi-objective loss defined in terms of the estimated aleatoric uncertainty. The approach is theoretically guaranteed to improve the fairness-utility trade-off. Experimental results on both tabular and image datasets show that the proposed approach outperforms state-of-the-art methods w.r.t. the fairness-utility trade-off and w.r.t. both group and individual fairness metrics. This work presents a fresh perspective on the trade-off between utility and algorithmic fairness and opens a promising avenue for using prediction uncertainty in fair machine learning.
    Free, publicly-accessible full text available October 21, 2024
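The uncertainty-weighted bi-objective loss described above can be sketched in a few lines. This is an illustrative reading, not the paper's implementation: `bi_objective_loss` and its `group_gap` fairness penalty are hypothetical, and binary predictive entropy stands in for the estimated aleatoric uncertainty.

```python
import math

def entropy(p):
    """Binary predictive entropy, a proxy for aleatoric uncertainty (in [0, 1])."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def bi_objective_loss(prob, label, group_gap, lam=1.0):
    """Blend a utility term (cross-entropy) with a fairness penalty (here a
    hypothetical per-sample group gap), weighted by estimated aleatoric
    uncertainty: ambiguous samples emphasize fairness, confident ones utility."""
    u = entropy(prob)                                             # uncertainty weight
    ce = -math.log(max(prob if label == 1 else 1 - prob, 1e-12))  # utility loss
    return (1 - u) * ce + lam * u * group_gap

# A confident, correct prediction is dominated by the (small) utility term:
print(bi_objective_loss(0.99, 1, group_gap=0.5))
```

Under this weighting, a maximally ambiguous sample (prob = 0.5) is penalized only through the fairness term, matching the hypothesis that fairness matters most where aleatoric uncertainty is high.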
  2. Cyberbullying has become a prominent risk for youth and an increasing concern for parents. To help parents reduce their child's cyberbullying risk, anti-bullying apps (ABAs)—mobile applications for identifying and preventing instances of cyberbullying—have been developed in recent years. Given that ABAs are an emerging technology, limited research has examined the factors that predict parents' intentions to use them. Drawing on three interdisciplinary theoretical frameworks, we administered an online survey to a sample of U.S. parents recruited through Amazon Mechanical Turk to assess their knowledge of, attitudes about, and intentions to use ABAs. Participants also rated the importance of a range of ABA functions and provided information about their child's social media use and bullying history. A series of path analyses revealed that the importance parents placed on an app's ability to provide information about their child's cyberbullying risk predicted more positive attitudes toward ABAs and greater perceived usefulness. Stronger intentions to use ABAs were predicted by greater cyberbullying concern, greater importance of social recommendations, greater perceived usefulness, more positive attitudes toward the apps, and lower ratings of the importance of ease of use. These findings shed light on the factors predicting parents' intentions to use ABAs and the app features they view as most important. Crucial directions for future research and implications for anti-bullying efforts are discussed.
    Free, publicly-accessible full text available September 1, 2024
  3. A recent surge of users migrating from Twitter to alternative platforms, such as Mastodon, has raised questions about what the migration patterns are, how different platforms impact user behaviors, and how migrated users settle in during the migration process. In this study, we investigate these questions by collecting data on over 10,000 users who migrated from Twitter to Mastodon within the first ten weeks following Elon Musk's acquisition of Twitter. Our research is structured in three primary steps. First, we develop algorithms to extract and analyze migration patterns. Second, leveraging behavioral analysis, we examine the distinct architectures of Twitter and Mastodon to learn how each platform shapes user behavior. Last, we determine how particular behavioral factors influence users to stay on Mastodon. We share our findings on user migration, along with insights and lessons learned from the user behavior study.
    Free, publicly-accessible full text available August 11, 2024
  5. Recent studies have documented increases in anti-Asian hate throughout the COVID-19 pandemic. Yet relatively little is known about how anti-Asian content on social media, as well as positive messages to combat the hate, have varied over time. In this study, we investigated temporal changes in the frequency of anti-Asian and counter-hate messages on Twitter during the first 16 months of the COVID-19 pandemic. Using the Twitter Data Collection Application Programming Interface, we queried all tweets from January 30, 2020 to April 30, 2021 that contained specific anti-Asian (e.g., #chinavirus, #kungflu) and counter-hate (e.g., #hateisavirus) keywords. From this initial data set, we extracted a random subset of 1,000 Twitter users who had used one or more anti-Asian or counter-hate keywords. For each of these users, we calculated the total number of anti-Asian and counter-hate keywords posted each month. Latent growth curve analysis revealed that the frequency of anti-Asian keywords fluctuated over time in a curvilinear pattern, increasing steadily in the early months and then decreasing in the later months of our data collection. In contrast, the frequency of counter-hate keywords remained low for several months and then increased in a linear manner. Significant between-user variability in both anti-Asian and counter-hate content was observed, highlighting individual differences in the generation of hate and counter-hate messages within our sample. Together, these findings begin to shed light on longitudinal patterns of hate and counter-hate on social media during the COVID-19 pandemic. 
    Free, publicly-accessible full text available July 1, 2024
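The per-user monthly keyword tallies described above can be approximated with a short aggregation routine. The keyword sets and tweet format below are hypothetical stand-ins for the study's query terms and API output:

```python
from collections import Counter, defaultdict
from datetime import datetime

# Hypothetical keyword lists standing in for the study's query terms.
ANTI_ASIAN = {"#chinavirus", "#kungflu"}
COUNTER_HATE = {"#hateisavirus"}

def monthly_keyword_counts(tweets):
    """Tally anti-Asian and counter-hate keyword uses per user per month.
    `tweets` is an iterable of (user_id, iso_timestamp, text) triples."""
    counts = defaultdict(Counter)  # (user, "YYYY-MM") -> Counter of keyword types
    for user, ts, text in tweets:
        month = datetime.fromisoformat(ts).strftime("%Y-%m")
        tokens = text.lower().split()
        counts[(user, month)]["anti_asian"] += sum(t in ANTI_ASIAN for t in tokens)
        counts[(user, month)]["counter_hate"] += sum(t in COUNTER_HATE for t in tokens)
    return counts

tweets = [
    ("u1", "2020-03-14T12:00:00", "stop calling it the #chinavirus"),
    ("u1", "2020-03-20T09:30:00", "#hateisavirus stand together"),
    ("u2", "2021-04-02T18:45:00", "#kungflu #chinavirus"),
]
counts = monthly_keyword_counts(tweets)
print(counts[("u1", "2020-03")])
```

Per-user monthly series of this shape are exactly what a latent growth curve model would then be fit to.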
  6. Cyberbullying has become increasingly prevalent, particularly on social media. There has also been a steady rise in cyberbullying research across a range of disciplines. Much of the empirical work from computer science has focused on developing machine learning models for cyberbullying detection. Whereas machine learning cyberbullying detection models can be improved by drawing on psychological theories and perspectives, there is also tremendous potential for machine learning models to contribute to a better understanding of psychological aspects of cyberbullying. In this paper, we discuss how machine learning models can yield novel insights about the nature and defining characteristics of cyberbullying and how machine learning approaches can be applied to help clinicians, families, and communities reduce cyberbullying. Specifically, we discuss the potential for machine learning models to shed light on the repetitive nature of cyberbullying, the imbalance of power between cyberbullies and their victims, and causal mechanisms that give rise to cyberbullying. We orient our discussion on emerging and future research directions, as well as the practical implications of machine learning cyberbullying detection models. 
  7. Increased social media use has contributed to the greater prevalence of abusive, rude, and offensive textual comments. Machine learning models have been developed to detect toxic comments online, yet these models tend to show biases against users with marginalized or minority identities (e.g., females and African Americans). Established research in debiasing toxicity classifiers often (1) takes a static or batch approach, assuming that all information is available and then making a one-time decision; and (2) uses a generic strategy to mitigate different biases (e.g., gender and racial biases) that assumes the biases are independent of one another. However, in real scenarios, the input typically arrives as a sequence of comments/words over time instead of all at once. Thus, decisions based on partial information must be made while additional input is arriving. Moreover, social bias is complex by nature. Each type of bias is defined within its unique context, which, consistent with intersectionality theory within the social sciences, might be correlated with the contexts of other forms of bias. In this work, we consider debiasing toxicity detection as a sequential decision-making process where different biases can be interdependent. In particular, we study debiasing toxicity detection with two aims: (1) to examine whether different biases tend to correlate with each other; and (2) to investigate how to jointly mitigate these correlated biases in an interactive manner to minimize the total amount of bias. At the core of our approach is a framework built upon theories of sequential Markov Decision Processes that seeks to maximize the prediction accuracy and minimize the bias measures tailored to individual biases. Evaluations on two benchmark datasets empirically validate the hypothesis that biases tend to be correlated and corroborate the effectiveness of the proposed sequential debiasing strategy. 
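The sequential decision-making framing above can be illustrated with a deliberately simplified greedy sketch (not the paper's MDP formulation): as comments arrive one at a time, a toxicity threshold is re-chosen to trade accuracy against demographic-parity-style gaps for one or more (possibly correlated) protected groups.

```python
def bias_gap(preds, groups, g):
    """Demographic-parity-style gap: |P(pred=1 | group g) - P(pred=1 | others)|."""
    in_g = [p for p, grp in zip(preds, groups) if grp == g]
    out_g = [p for p, grp in zip(preds, groups) if grp != g]
    rate = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return abs(rate(in_g) - rate(out_g))

def sequential_threshold(scores, labels, groups, bias_groups, lam=0.5):
    """As each comment arrives, greedily re-pick the toxicity threshold that
    maximizes accuracy-so-far minus the summed bias gaps (one per protected
    group; the gaps may be correlated with one another)."""
    best_t = 0.5
    for i in range(1, len(scores) + 1):
        s, y, g = scores[:i], labels[:i], groups[:i]
        def objective(t):
            preds = [int(x >= t) for x in s]
            acc = sum(int(p == yy) for p, yy in zip(preds, y)) / i
            return acc - lam * sum(bias_gap(preds, g, bg) for bg in bias_groups)
        best_t = max((k / 10 for k in range(1, 10)), key=objective)
    return best_t

scores = [0.9, 0.6, 0.8, 0.3]   # toxicity scores in arrival order
labels = [1, 0, 1, 0]           # ground-truth toxicity
groups = ["a", "a", "b", "b"]   # protected-group membership
print(sequential_threshold(scores, labels, groups, bias_groups=["a"]))
```

A full MDP treatment would learn a policy over such decisions rather than re-solving greedily at each step; this sketch only conveys the interactive, partial-information flavor of the problem.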
  8. Prejudice and hate directed toward Asian individuals has increased in prevalence and salience during the COVID-19 pandemic, with notable rises in physical violence. Concurrently, as many governments enacted stay-at-home mandates, the spread of anti-Asian content increased in online spaces, including social media. In the present study, we investigated temporal and geographical patterns in social media content relevant to anti-Asian prejudice during the COVID-19 pandemic. Using the Twitter Data Collection API, we queried over 13 million tweets posted between January 30, 2020, and April 30, 2021, for both negative (e.g., #kungflu) and positive (e.g., #stopAAPIhate) hashtags and keywords related to anti-Asian prejudice. In a series of descriptive analyses, we found differences in the frequency of negative and positive keywords based on geographic location. Using burst detection, we also identified distinct increases in negative and positive content in relation to key political tweets and events. These largely exploratory analyses shed light on the role of social media in the expression and proliferation of prejudice as well as positive responses online. 
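Burst detection can take many forms; the sketch below is a simplified threshold-based stand-in, not necessarily the method used in the study: an index is flagged when its count exceeds the trailing-window mean by more than z standard deviations.

```python
import statistics

def detect_bursts(counts, window=7, z=2.0):
    """Flag indices whose count exceeds the trailing-window mean by more than
    z standard deviations -- a simplified stand-in for formal burst-detection
    models over daily keyword counts."""
    bursts = []
    for i in range(window, len(counts)):
        hist = counts[i - window:i]
        mean = statistics.mean(hist)
        sd = statistics.pstdev(hist) or 1.0   # flat history -> avoid div-by-zero
        if (counts[i] - mean) / sd > z:
            bursts.append(i)
    return bursts

daily = [5, 6, 4, 5, 6, 5, 4, 40, 6, 5]   # hypothetical daily keyword counts
print(detect_bursts(daily))
```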
  9. Machine learning algorithms typically assume that the training and test samples come from the same distribution, i.e., are in-distribution. However, in open-world scenarios, streaming big data can be Out-Of-Distribution (OOD), rendering these algorithms ineffective. Prior solutions to the OOD challenge seek to identify invariant features across different training domains. The underlying assumption is that these invariant features should also work reasonably well in the unlabeled target domain. By contrast, this work is interested in domain-specific features, which include both invariant features and features unique to the target domain. We propose a simple yet effective approach that exploits correlations in general, regardless of whether the features are invariant. Our approach uses the most confidently predicted samples identified by an OOD base model (the teacher model) to train a new model (the student model) that effectively adapts to the target domain. Empirical evaluations on benchmark datasets show that performance improves over the state of the art (SOTA) by ∼10-20%.
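The teacher-student step described above reduces, at its core, to confidence-based pseudo-labeling. A minimal sketch, with hypothetical softmax outputs (the paper's actual selection rule and threshold may differ):

```python
def select_confident(teacher_probs, threshold=0.9):
    """Keep the indices the teacher predicts with high confidence, paired with
    the teacher's argmax as a pseudo-label; the student model is then trained
    on exactly these (sample index, pseudo-label) pairs."""
    pseudo = []
    for i, probs in enumerate(teacher_probs):
        conf = max(probs)
        if conf >= threshold:
            pseudo.append((i, probs.index(conf)))
    return pseudo

# Hypothetical teacher softmax outputs on unlabeled target-domain samples:
teacher_probs = [[0.97, 0.03], [0.55, 0.45], [0.08, 0.92]]
print(select_confident(teacher_probs))
```

The middle sample falls below the confidence threshold and is excluded, which is what lets the student train only on target-domain samples the teacher handles reliably.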
  10. As online communication continues to become more prevalent, instances of cyberbullying have also become more common, particularly on social media sites. Previous research in this area has studied cyberbullying outcomes, predictors of cyberbullying victimization/perpetration, and computational detection models that rely on labeled datasets to identify underlying patterns. However, there is a dearth of work examining the content of what is said when cyberbullying occurs, and most available datasets include only basic labels (cyberbullying or not). This paper presents an annotated Instagram dataset with detailed labels about key cyberbullying properties, such as the content type, purpose, directionality, and co-occurrence with other phenomena, as well as demographic information about the individuals who performed the annotations. Additionally, results of an exploratory logistic regression analysis are reported to illustrate how new insights about cyberbullying and its automatic detection can be gained from this labeled dataset.
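Detailed multi-property labels of the kind described above lend themselves to simple co-occurrence analyses. The label names below are hypothetical placeholders, not the dataset's actual annotation scheme:

```python
from collections import Counter
from itertools import combinations

# Hypothetical annotations: each post carries several property labels, loosely
# mirroring content-type / purpose / directionality axes (placeholder names).
annotations = [
    {"cyberbullying", "insult", "directed"},
    {"cyberbullying", "exclusion"},
    {"not_cyberbullying"},
    {"cyberbullying", "insult"},
]

def label_cooccurrence(rows):
    """Count how often each pair of labels is assigned to the same post."""
    pairs = Counter()
    for labels in rows:
        pairs.update(frozenset(p) for p in combinations(sorted(labels), 2))
    return pairs

co = label_cooccurrence(annotations)
print(co[frozenset({"cyberbullying", "insult"})])
```

Tables of this shape are also the natural input for the kind of exploratory logistic regression the abstract mentions, with each property serving as a binary predictor.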