Title: Generative AI and Perceptual Harms: Who's Suspected of Using LLMs?
Large language models (LLMs) are increasingly integrated into a variety of writing tasks. While these tools can help people generate ideas or produce higher-quality work, like many other AI tools they risk causing a variety of harms that disproportionately burden historically marginalized groups. In this work, we introduce and evaluate perceptual harm, a term for the harm caused to users when others perceive or suspect them of using AI. We examined perceptual harms in three online experiments, each of which asked human participants to evaluate the profiles of fictional freelance writers. We asked participants whether they suspected the freelancers of using AI, how they rated the quality of their writing, and whether the freelancers should be hired. We found some support for perceptual harms against certain demographic groups, but perceptions of AI use negatively impacted writing evaluations and hiring outcomes across the board.
Award ID(s):
1901151
PAR ID:
10562904
Author(s) / Creator(s):
; ;
Publisher / Repository:
arXiv
Date Published:
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper, Kirchenbauer et al. introduce a novel watermarking technique for the output of large language models (LLMs) such as ChatGPT, which typically takes the form of AI-generated text, in order to mitigate the harms associated with the increasing use of these technologies. They note some of the capabilities of LLMs, such as writing documents, creating executable code, and answering questions, often at a human-like level. They also list some of the harms: social engineering and election-manipulation campaigns that exploit automated bots on social media platforms, the creation of fake news and web content, and the use of AI systems to cheat on academic writing and coding assignments. For policymakers, this technology could serve as a means to regulate and oversee the use of LLMs wherever their AI-generated text output could pose a potential harm, such as in the settings the authors list. (Methods and Metrics, watermarking LLM output)
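The watermarking idea summarized above works roughly as follows: before each sampling step, the previous token seeds a pseudorandom partition of the vocabulary into a "green list" and a "red list", and the green tokens' logits are boosted so that watermarked text over-selects green tokens; anyone who knows the seeding rule can then flag a passage with a one-proportion z-test. The sketch below is a simplified toy illustration of that idea, not the authors' implementation; the function names, the toy vocabulary size, and the uniform logits are assumptions for demonstration.

```python
import random


def green_list(prev_token_id, vocab_size, fraction=0.5):
    """Pseudorandomly partition the vocabulary, seeded on the previous token."""
    rng = random.Random(prev_token_id)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(vocab_size * fraction)])


def watermark_logits(logits, prev_token_id, delta=2.0):
    """Boost the logits of green-list tokens by delta before sampling."""
    greens = green_list(prev_token_id, len(logits))
    return [x + delta if i in greens else x for i, x in enumerate(logits)]


def detection_z_score(token_ids, vocab_size, fraction=0.5):
    """z-test: how far does the observed green-token rate exceed chance?"""
    hits = sum(
        1
        for prev, tok in zip(token_ids, token_ids[1:])
        if tok in green_list(prev, vocab_size, fraction)
    )
    n = len(token_ids) - 1
    return (hits - fraction * n) / (fraction * (1 - fraction) * n) ** 0.5
```

Sampling from `watermark_logits` produces token sequences whose detection z-score grows with length, while text generated without knowledge of the green lists hovers near zero, which is what makes the watermark statistically detectable after the fact.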
  2. If large language models like GPT-3 preferentially produce a particular point of view, they may influence people’s opinions on an unknown scale. This study investigates whether a language-model-powered writing assistant that generates some opinions more often than others impacts what users write – and what they think. In an online experiment, we asked participants (N=1,506) to write a post discussing whether social media is good for society. Treatment group participants used a language-model-powered writing assistant configured to argue that social media is good or bad for society. Participants then completed a social media attitude survey, and independent judges (N=500) evaluated the opinions expressed in their writing. Using the opinionated language model affected the opinions expressed in participants’ writing and shifted their opinions in the subsequent attitude survey. We discuss the wider implications of our results and argue that the opinions built into AI language technologies need to be monitored and engineered more carefully.
  3. AI language technologies increasingly assist and expand human communication. While AI-mediated communication reduces human effort, its societal consequences are poorly understood. In this study, we investigate whether using an AI writing assistant in personal self-presentation changes how people talk about themselves. In an online experiment, we asked participants (N=200) to introduce themselves to others. An AI language assistant supported their writing by suggesting sentence completions. The language model generating suggestions was fine-tuned to preferentially suggest either interest, work, or hospitality topics. We evaluated how the topic preference of the language model affected users’ topic choices by analyzing the topics participants discussed in their self-presentations. Our results suggest that AI language technologies may change the topics their users talk about. We discuss the need for a careful debate and evaluation of the topic priors built into AI language technologies.
  4. Emotion AI, or AI that claims to infer emotional states from various data sources, is increasingly deployed in myriad contexts, including mental healthcare. While emotion AI is celebrated for its potential to improve care and diagnosis, we know little about the perceptions of data subjects most directly impacted by its integration into mental healthcare. In this paper, we qualitatively analyzed U.S. adults' open-ended survey responses (n = 395) to examine their perceptions of emotion AI use in mental healthcare and its potential impacts on them as data subjects. We identify various perceived impacts of emotion AI use in mental healthcare concerning 1) mental healthcare provisions; 2) data subjects' voices; 3) monitoring data subjects for potential harm; and 4) involved parties' understandings and uses of mental health inferences. Participants' remarks highlight ways emotion AI could address existing challenges data subjects may face by 1) improving mental healthcare assessments, diagnoses, and treatments; 2) facilitating data subjects' mental health information disclosures; 3) identifying potential data subject self-harm or harm posed to others; and 4) increasing involved parties' understanding of mental health. However, participants also described their perceptions of potential negative impacts of emotion AI use on data subjects such as 1) increasing inaccurate and biased assessments, diagnoses, and treatments; 2) reducing or removing data subjects' voices and interactions with providers in mental healthcare processes; 3) inaccurately identifying potential data subject self-harm or harm posed to others with negative implications for wellbeing; and 4) involved parties misusing emotion AI inferences with consequences to (quality) mental healthcare access and data subjects' privacy. 
We discuss how our findings suggest that emotion AI use in mental healthcare is an insufficient techno-solution that may exacerbate various mental healthcare challenges with implications for potential distributive, procedural, and interactional injustices and potentially disparate impacts on marginalized groups. 
  5. Nah, Fiona (Ed.)
    Misinformation about the coronavirus disease of 2019 (COVID-19) health crisis has been widespread on social media and has caused various types of harm in society. While some researchers have investigated how people perceive misinformation harm in crises, little research has systematically examined harms from health-related misinformation. To address this gap, we focus on non-comparative and comparative harm perceptions of the affected community in the COVID-19 pandemic context. We examine non-comparative harms (reflected by component harms and contextual harms) and comparative harms (reflected by counter-contextual harms) to understand harm perceptions. We also investigate how harm perception varies based on COVID-19 victimization experience. We used a professional survey company, Cint, to collect data via a scenario-based survey with 343 participants. Our findings show how contextual features shape perceived harms and reveal scenarios in which COVID-19 victims perceive higher contextual harms but lower counter-contextual harms. We also examine how social media platforms’ corrective actions shape harm perceptions.