

Search for: All records

Award ID contains: 2211896

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).
What is a DOI Number?

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Despite the absence of consent being a defining quality of computer-mediated sexual harm, no consent models explicitly prescribe how consent to sexual activity should be asked for, given, and denied when mediated by technology. HCI literature has advocated for the adoption of affirmative consent ("yes means yes"); however, this model was created in 1991 without consideration for computers and has been historically underutilized. Through a speculative study of VR dating with 16 women and LGBTQIA+ stakeholders, we contribute archetypes of four new computer-mediated consent models for sexual activity: 1) visual consent through AR/VR rather than verbal dialogue, 2) agent-mediated consent, where AI agents communicate consent on behalf of sexual partners, 3) a two-layer consent process called consent-to-stimulus, and 4) environmental consent, where virtual environments scaffold behaviors that can(not) be consented to. We conclude by reflecting on which models could supplant affirmative consent to better mitigate computer-mediated sexual violence and harassment. Content warning: This paper discusses forms of sexual violence, including rape.
    Free, publicly-accessible full text available October 18, 2026
  2. Free, publicly-accessible full text available April 25, 2026
  3. Free, publicly-accessible full text available April 25, 2026
  4. Social computing platforms facilitate interpersonal harms that manifest across online and physical realms, such as sexual violence between online daters and sexual grooming through social media. Risk detection AI has emerged as an approach to preventing such harms; however, a myopic focus on computational performance has been criticized in HCI literature for failing to consider how users should interact with risk detection AI to stay safe. In this paper, we report an interview study with woman-identifying online daters (n=20) about how they envision interacting with risk detection AI and how risk detection models can be designed to support such interactions. Toward this goal, we engaged women in exercises to build their own risk detection models. Findings show that women anticipate interacting with risk detection AI to augment, not replace, their personal risk assessment strategies. They likewise designed risk detection models to amplify their subjective and admittedly biased indicators of risk. Design implications involve the notion of personalizable risk detection models, but also ethical concerns around perpetuating problematic stereotypes associated with risk.
  5. The design of social matching and dating apps has changed continually through the years, marked notably by a shift to mobile devices, yet user safety has not historically been a driver of design despite mounting evidence of sexual and other harms. This paper presents a participatory design study with women (a demographic at disproportionate risk of harm through app use) about how mobile social matching apps could be designed to foreground their safety. Findings indicate that participants want social matching apps to augment women's abilities for self-protection, reflected in three new app roles: 1) the cloaking device, through which the app helps women dynamically manage their visibility to geographically nearby users, 2) the informant, through which the app helps women predict the risk of harm associated with a recommended social opportunity, and 3) the guardian, through which the app monitors a user's safety during face-to-face meetings and augments their response to risk.