

Title: Data Subjects' Perspectives on Emotion Artificial Intelligence Use in the Workplace: A Relational Ethics Lens
The workplace has experienced extensive digital transformation, in part due to artificial intelligence's commercial availability. Though still an emerging technology, emotion artificial intelligence (EAI) is increasingly incorporated into enterprise systems to augment and automate organizational decisions and to monitor and manage workers. EAI use is often celebrated for its potential to improve workers' wellbeing and performance and to address organizational problems such as bias and safety. Workers subject to EAI in the workplace are data subjects whose data make EAI possible and who are most impacted by it. However, we lack empirical knowledge about data subjects' perspectives on EAI, including in the workplace. To this end, using a relational ethics lens, we qualitatively analyzed the open-ended survey responses of 395 U.S. adults (a partly representative sample) regarding the benefits and risks they perceive in being subjected to EAI in the workplace. While participants acknowledged potential benefits of being subject to EAI (e.g., employers using EAI to aid their wellbeing, enhance their work environment, and reduce bias), a myriad of potential risks overshadowed these perceived benefits. Participants expressed concern that EAI use could harm their wellbeing, work environment, and employment status, and create and amplify bias and stigma against them, especially the most marginalized (e.g., along dimensions of race, gender, mental health status, and disability). Distrustful of EAI and its potential risks, participants anticipated conforming to (e.g., partaking in emotional labor) or refusing (e.g., quitting a job) EAI implementation in practice. We argue that EAI may magnify, rather than alleviate, existing challenges data subjects face in the workplace, and we suggest that some EAI-inflicted harms would persist even if concerns about EAI's accuracy and bias were addressed.
Award ID(s):
2020872
PAR ID:
10437774
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the ACM on Human-Computer Interaction
Volume:
7
Issue:
CSCW1
ISSN:
2573-0142
Page Range / eLocation ID:
1 to 38
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Workplace environments have a significant impact on worker performance, health, and well-being. With machine learning capabilities, artificial intelligence (AI) can be developed to automate individualized adjustments to work environments (e.g., lighting, temperature) and to facilitate healthier worker behaviors (e.g., posture). Worker perspectives on incorporating AI into office workspaces are largely unexplored. Thus, the purpose of this study was to explore office workers’ views on including AI in their office workspace. Six focus group interviews with a total of 45 participants were conducted. Interview questions were designed to generate discussion on benefits, challenges, and pragmatic considerations for incorporating AI into office settings. Sessions were audio-recorded, transcribed, and analyzed using an iterative approach. Two primary constructs emerged. First, participants shared perspectives related to preferences and concerns regarding communication and interactions with the technology. Second, numerous conversations highlighted the dualistic nature of a system that collects large amounts of data; that is, the potential benefits for behavior change to improve health and the pitfalls of trust and privacy. Across both constructs, there was an overarching discussion related to the intersections of AI with the complexity of work performance. Numerous thoughts were shared relative to future AI solutions that could enhance the office workplace. This study’s findings indicate that the acceptability of AI in the workplace is complex and dependent upon the benefits outweighing the potential detriments. Office worker needs are complex and diverse, and AI systems should aim to accommodate individual needs. 
  2. Workplaces are increasingly adopting emotion AI, promising benefits to organizations. However, little is known about the perceptions and experiences of workers subject to emotion AI in the workplace. Our interview study with US adult workers (n=15) addresses this gap, finding that (1) participants viewed emotion AI as a deep violation of the privacy of workers' sensitive emotional information; (2) emotion AI may function to enforce workers' compliance with emotional labor expectations, and workers may engage in emotional labor as a mechanism to preserve privacy over their emotions; and (3) workers may be exposed to a wide range of harms as a consequence of emotion AI in the workplace. Findings reveal the need to recognize and define an individual right to what we introduce as emotional privacy, and raise important research and policy questions on how to protect and preserve emotional privacy within and beyond the workplace.
  3. Automatic emotion recognition (ER)-enabled wellbeing interventions use ER algorithms to infer the emotions of a data subject (i.e., a person about whom data is collected or processed to enable ER) based on data generated from their online interactions, such as social media activity, and intervene accordingly. The potential commercial applications of this technology are widely acknowledged, particularly in the context of social media. Yet, little is known about data subjects' conceptualizations of and attitudes toward automatic ER-enabled wellbeing interventions. To address this gap, we interviewed 13 US adult social media data subjects regarding social media-based automatic ER-enabled wellbeing interventions. We found that participants' attitudes toward automatic ER-enabled wellbeing interventions were predominantly negative. Negative attitudes were largely shaped by how participants compared their conceptualizations of Artificial Intelligence (AI) to the humans that traditionally deliver wellbeing support. Comparisons between AI and human wellbeing interventions were based upon human attributes participants doubted AI could hold: 1) helpfulness and authentic care; 2) personal and professional expertise; 3) morality; and 4) benevolence through shared humanity. In some cases, participants' attitudes toward automatic ER-enabled wellbeing interventions shifted when participants conceptualized automatic ER-enabled wellbeing interventions' impact on others, rather than themselves. Though with reluctance, a minority of participants held more positive attitudes toward their conceptualizations of automatic ER-enabled wellbeing interventions, citing their potential to benefit others: 1) by supporting academic research; 2) by increasing access to wellbeing support; and 3) through egregious harm prevention. 
However, most participants anticipated harms from automatic ER-enabled wellbeing interventions for others, such as re-traumatization, the spread of inaccurate health information, inappropriate surveillance, and interventions informed by inaccurate predictions. Lastly, while participants had qualms about automatic ER-enabled wellbeing interventions, we identified three development and delivery qualities on which their attitudes toward such interventions depended: 1) accuracy; 2) contextual sensitivity; and 3) positive outcomes. Our study does not aim to make normative statements about whether or how automatic ER-enabled wellbeing interventions should exist, but rather to center the voices of the data subjects affected by this technology. We argue for the inclusion of data subjects in the development of requirements for ethical and trustworthy ER applications. To that end, we discuss ethical, social, and policy implications of our findings, suggesting that the automatic ER-enabled wellbeing interventions imagined by participants are incompatible with aims to promote trustworthy, socially aware, and responsible AI technologies in the current practical and regulatory landscape of the US.
  4.
    Shift workers, who are essential contributors to our society, face high risks of poor health and wellbeing. To help address these problems, we collected and analyzed physiological and behavioral wearable sensor data from shift-working nurses and doctors, as well as their behavioral questionnaire data and their self-reported daily health and wellbeing labels, including alertness, happiness, energy, health, and stress. We identified similarities and differences between the responses of nurses and doctors. Based on the differences in self-reported health and wellbeing labels between nurses and doctors, and the correlations among their labels, we proposed a job-role-based multitask and multilabel deep learning model, in which we modeled physiological and behavioral data for nurses and doctors simultaneously to predict participants' next-day multidimensional self-reported health and wellbeing status. Our model showed significantly better performance than baseline models and previous state-of-the-art models on binary/3-class classification and regression prediction tasks. We also found that features related to heart rate, sleep, and work shift contributed to shift workers' health and wellbeing.
  5. Emotion AI, or AI that claims to infer emotional states from various data sources, is increasingly deployed in myriad contexts, including mental healthcare. While emotion AI is celebrated for its potential to improve care and diagnosis, we know little about the perceptions of data subjects most directly impacted by its integration into mental healthcare. In this paper, we qualitatively analyzed U.S. adults' open-ended survey responses (n = 395) to examine their perceptions of emotion AI use in mental healthcare and its potential impacts on them as data subjects. We identify various perceived impacts of emotion AI use in mental healthcare concerning 1) mental healthcare provisions; 2) data subjects' voices; 3) monitoring data subjects for potential harm; and 4) involved parties' understandings and uses of mental health inferences. Participants' remarks highlight ways emotion AI could address existing challenges data subjects may face by 1) improving mental healthcare assessments, diagnoses, and treatments; 2) facilitating data subjects' mental health information disclosures; 3) identifying potential data subject self-harm or harm posed to others; and 4) increasing involved parties' understanding of mental health. However, participants also described their perceptions of potential negative impacts of emotion AI use on data subjects such as 1) increasing inaccurate and biased assessments, diagnoses, and treatments; 2) reducing or removing data subjects' voices and interactions with providers in mental healthcare processes; 3) inaccurately identifying potential data subject self-harm or harm posed to others with negative implications for wellbeing; and 4) involved parties misusing emotion AI inferences with consequences to (quality) mental healthcare access and data subjects' privacy. 
We discuss how our findings suggest that emotion AI use in mental healthcare is an insufficient techno-solution that may exacerbate various mental healthcare challenges with implications for potential distributive, procedural, and interactional injustices and potentially disparate impacts on marginalized groups. 