Social computing platforms facilitate interpersonal harms that manifest across online and physical realms, such as sexual violence between online daters and sexual grooming through social media. Risk detection AI has emerged as an approach to preventing such harms; however, HCI literature has criticized a myopic focus on computational performance for failing to consider how users should interact with risk detection AI to stay safe. In this paper, we report an interview study with woman-identifying online daters (n=20) about how they envision interacting with risk detection AI and how risk detection models can be designed to support those interactions. To this end, we engaged the women in model-building exercises in which they constructed their own risk detection models. Findings show that women anticipate interacting with risk detection AI to augment, not replace, their personal risk assessment strategies. They likewise designed risk detection models to amplify their subjective and admittedly biased indicators of risk. Design implications involve the notion of personalizable risk detection models, but also ethical concerns around perpetuating problematic stereotypes associated with risk.

Free, publicly-accessible full text available November 7, 2025