Title: Not Just a Preference: Reducing Biased Decision-making on Dating Websites
As dating websites become an essential part of how people meet intimate and romantic partners, it is vital to design these systems so that they resist, or at least do not amplify, bias and discrimination. However, the results of our online experiment with a simulated dating website demonstrate that popular dating website design choices, such as the use of the swipe interface (swiping in one direction to indicate a like and in the other to express a dislike) and match scores, resulted in racially biased choices even when people explicitly claimed not to have considered race in their decision-making. This bias was significantly reduced when the order of information presentation was reversed so that people first saw substantive profile information related to their explicitly stated preferences before seeing the profile name and photo. These results indicate that currently popular design choices amplify people's implicit biases in their choices of potential romantic partners, but the effects of these implicit biases can be reduced by carefully redesigning dating website interfaces.
Award ID(s):
2107391
PAR ID:
10366303
Author(s) / Creator(s):
Date Published:
Journal Name:
CHI Conference on Human Factors in Computing Systems (CHI ’22)
Page Range / eLocation ID:
1 to 14
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Security design choices often fail to take into account users' social context. Our work is among the first to examine security behavior in romantic relationships. We surveyed 195 people on Amazon Mechanical Turk about their relationship status and account sharing behavior for a cross-section of popular websites and apps (e.g., Netflix, Amazon Prime). We examine differences in account sharing behavior at different stages in a relationship and for people in different age groups and income levels. We also present a taxonomy of sharing motivations and behaviors based on the iterative coding of open-ended responses. Based on this taxonomy, we present design recommendations to support end users in three relationship stages: when they start sharing access with romantic partners; when they are maintaining that sharing; and when they decide to stop. Our findings contribute to the field of usable privacy and security by enhancing our understanding of security and privacy behaviors and needs in intimate social relationships. 
  2. People benefit immensely when they have close relationship partners who are instrumental (i.e., helpful) to their goal pursuit. However, little is known about what motivates partners' continued instrumentality. Research on gratitude led us to examine whether, when, and why receiving expressions of gratitude for one's instrumentality would increase people's intentions to be instrumental to their romantic partner's goal(s) in the future (future instrumentality intentions [FIIs]). In a correlational study (Study 1) and two experiments in which we manipulated expressed gratitude (Studies 2 and 3), gratitude receipt positively predicted FIIs. This finding persisted regardless of whether partners achieved their goal (Study 3). We identify potential mechanisms and show that gratitude receipt is particularly important for boosting FIIs among people in lower (vs. higher) quality relationships. These findings serve as a foundation for research examining antecedents to instrumentality and considering long-term consequences of gratitude receipt for support processes in romantic relationships.
  3. Our daily observations tell us that the delivery of social sentiments and emotions differs between strangers and romantic partners. This work explores how relationship status influences our delivery and perception of social touches and emotions, by evaluating the physics of contact interactions. In a study with human participants, strangers and romantically involved touchers delivered emotional messages to receivers’ forearms. Physical contact interactions were measured using a customized 3D tracking system. The results indicate that strangers and romantic receivers recognize emotional messages with similar accuracy, but with higher levels of valence and arousal between romantic partners. Further investigation into the contact interactions which underlie the higher levels of valence and arousal reveals that a toucher tunes their strategy with their romantic partner. For example, when stroking, romantic touchers use velocities preferential to C-tactile afferents, and maintain contact for longer durations with larger contact areas. Notwithstanding, while we show that relationship intimacy influences the deployment of touch strategies, such impact is relatively subtle compared to distinctions between gestures, emotional messages, and individual preferences. 
  4. Mental health stigma manifests differently for different genders, often being more associated with women and overlooked with men. Prior work in NLP has shown that gendered mental health stigmas are captured in large language models (LLMs). However, in the last year, LLMs have changed drastically: newer, generative models not only require different methods for measuring bias, but they also have become widely popular in society, interacting with millions of users and increasing the stakes of perpetuating gendered mental health stereotypes. In this paper, we examine gendered mental health stigma in GPT3.5-Turbo, the model that powers OpenAI’s popular ChatGPT. Building off of prior work, we conduct both quantitative and qualitative analyses to measure GPT3.5-Turbo’s bias between binary genders, as well as to explore its behavior around non-binary genders, in conversations about mental health. We find that, though GPT3.5-Turbo refrains from explicitly assuming gender, it still contains implicit gender biases when asked to complete sentences about mental health, consistently preferring female names over male names. Additionally, though GPT3.5-Turbo shows awareness of the nuances of non-binary people’s experiences, it often over-fixates on non-binary gender identities in free-response prompts. Our preliminary results demonstrate that while modern generative LLMs contain safeguards against blatant gender biases and have progressed in their inclusiveness of non-binary identities, they still implicitly encode gendered mental health stigma, and thus risk perpetuating harmful stereotypes in mental health contexts. 
  5. AI systems have been known to amplify biases in real-world data. Explanations may help human-AI teams address these biases for fairer decision-making. Typically, explanations focus on salient input features. If a model is biased against some protected group, explanations may include features that demonstrate this bias, but when biases are realized through proxy features, the relationship between this proxy feature and the protected one may be less clear to a human. In this work, we study the effect of the presence of protected and proxy features on participants’ perception of model fairness and their ability to improve demographic parity over an AI alone. Further, we examine how different treatments—explanations, model bias disclosure and proxy correlation disclosure—affect fairness perception and parity. We find that explanations help people detect direct but not indirect biases. Additionally, regardless of bias type, explanations tend to increase agreement with model biases. Disclosures can help mitigate this effect for indirect biases, improving both unfairness recognition and decision-making fairness. We hope that our findings can help guide further research into advancing explanations in support of fair human-AI decision-making. 
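The demographic parity criterion mentioned above compares a model's positive-prediction rate across groups. A minimal sketch of that comparison, with hypothetical data and function names (not taken from the paper), looks like this:

```python
# Minimal sketch (not the paper's code): demographic parity asks whether a
# model predicts the positive class at equal rates across groups. All names
# and data below are hypothetical illustrations.

def positive_rate(preds, groups, group):
    """Fraction of positive predictions among members of one group."""
    members = [p for p, g in zip(preds, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    a, b = sorted(set(groups))
    return abs(positive_rate(preds, groups, a) - positive_rate(preds, groups, b))

# Toy example: group "x" receives positives 2/3 of the time, group "y" 1/3,
# so the parity gap is 1/3.
preds  = [1, 1, 0, 1, 0, 0]
groups = ["x", "x", "x", "y", "y", "y"]
print(demographic_parity_gap(preds, groups))  # ≈ 0.333
```

A gap near zero indicates parity; the study above measures whether human-AI teams can reduce this gap relative to the AI acting alone.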