Title: What should we do with Emotion AI? Towards an Agenda for the Next 30 Years
What should we do with emotion AI? Should we regulate, ban, promote, or re-imagine it? Emotion AI, a class of affective computing technologies used in personal and social computing, comprises emergent and controversial techniques that aim to classify human emotion and other affective phenomena. Industry, policy, and scientific actors debate its potential benefits and harms, arguing for polarized futures ranging from panoptic expansion to complete bans. Emotion AI is proposed, deployed, and sometimes withdrawn in collaborative contexts such as education, hiring, healthcare, and service work. Proponents tout these technologies' benefits for well-being and security, while critics decry privacy harms, civil liberties risks, bias, shaky scientific foundations, and gaps between the technologies' capabilities and how they are marketed and legitimized. This panel brings diverse disciplinary perspectives into discussion about the history of emotions, as an example of 'intimate' data, in computing; how emotion AI is legitimized; people's experiences with and perceptions of emotion AI in social and collaborative settings; emotion AI's development practices; and using design research to re-imagine emotion AI. These issues are relevant to the CSCW community in designing, evaluating, and regulating algorithmic sensing technologies, including and beyond emotion sensing.
Award ID(s):
2335974
PAR ID:
10595801
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
ISBN:
9798400711145
Page Range / eLocation ID:
98 to 101
Subject(s) / Keyword(s):
Emotion recognition; Emotion Artificial Intelligence; Artificial Intelligence; Affective Computing
Format(s):
Medium: X
Location:
San Jose, Costa Rica
Sponsoring Org:
National Science Foundation
More Like this
  1. Patent applications provide insight into how inventors imagine and legitimize uses of their envisioned technologies; as part of this imagining, they envision social worlds and produce sociotechnical imaginaries. Examining sociotechnical imaginaries is important for emerging technologies in high-stakes contexts, such as the case of emotion AI for mental health care. We analyzed emotion AI patent applications (N=58) filed in the U.S. concerned with monitoring and detecting emotions and/or mental health. We examined the described technologies' imagined uses and the problems they were positioned to address. We found that inventors justified emotion AI inventions as solutions to issues surrounding data accuracy, care provision and experience, patient-provider communication, emotion regulation, and prevention of harms attributed to mental health causes. We then applied an ethical speculation lens to anticipate the potential implications of the promissory emotion AI-enabled futures described in the patent applications. We argue that such a future is one marked by the stigmatization of mental health conditions (or 'non-expected' emotions), the equation of mental health conditions with a propensity for crime, and a lack of agency for data subjects. By framing individuals with mental health conditions as unpredictable and incapable of exercising their own agency, emotion AI mental health patent applications propose solutions that intervene in this imagined future: intensive surveillance, an emphasis on individual responsibility over structural barriers, and decontextualized behavioral change interventions. Using ethical speculation, we articulate the consequences of these discourses, raising questions about the role of emotion AI as positive, inherent, or inevitable in health- and care-related contexts. We discuss our findings' implications for patent review processes and advocate for policy makers, researchers, and technologists to refer to patents (and patent applications) to access, evaluate, and (re)consider potentially harmful sociotechnical imaginaries before they become our reality.
  2. Gurney, Nikolos; Sukthankar, Gita (Eds.)
    Computational emotion is naturally predicated on an operating theory of emotion. This paper explores the prevalence of three different approaches in the literature: basic emotion, dimensional emotion, and constructed emotion. Basic emotion theory maintains that there exists a discrete set of primitive emotions evolved as responses to certain stimuli; dimensional emotion sees different emotions as systematically related by two or more dimensions (typically valence and arousal); and constructed emotion describes emotional experience as a function of the brain's general predictive faculties applied to learned social concepts of different emotions. To see how these approaches are represented in the affective computing literature, we conduct a systematic survey spanning the IEEE, ACM, ScienceDirect, and Engineering Village databases. Of 204 selected papers, 151 apply basic emotion theory, 48 apply dimensional emotion, and 5 apply constructed emotion. We find promising representation of constructed emotion theory in the affective computing literature and conclude that it provides a theoretical basis worth pursuing for affective-engagement applications in human-computer interaction (HCI).
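    To make the contrast concrete, the following is a minimal Python sketch of the dimensional view: affect as a point in a valence-arousal plane mapped to a coarse label, in the spirit of common circumplex-style readings. The quadrant labels and thresholds are illustrative assumptions, not drawn from any of the surveyed papers.

        # Minimal sketch of dimensional emotion: a (valence, arousal) point
        # mapped to a coarse affect label. Labels and thresholds here are
        # illustrative assumptions, not taken from any surveyed paper.

        def quadrant_label(valence: float, arousal: float) -> str:
            """Map a (valence, arousal) point in [-1, 1]^2 to a coarse label."""
            if valence >= 0 and arousal >= 0:
                return "excited/happy"   # positive valence, high arousal
            if valence >= 0:
                return "calm/content"    # positive valence, low arousal
            if arousal >= 0:
                return "angry/afraid"    # negative valence, high arousal
            return "sad/bored"           # negative valence, low arousal

        # Example: a mildly positive, low-arousal reading.
        print(quadrant_label(0.4, -0.3))  # -> calm/content

    Under basic emotion theory, by contrast, the output space would be a fixed discrete set of primitive emotions rather than a continuous plane.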
  3. Workplaces are increasingly adopting emotion AI, which promises benefits to organizations. However, little is known about the perceptions and experiences of workers subject to emotion AI in the workplace. Our interview study with US adult workers (n=15) addresses this gap, finding that (1) participants viewed emotion AI as a deep violation of workers' privacy over their sensitive emotional information; (2) emotion AI may function to enforce workers' compliance with emotional labor expectations, and workers may engage in emotional labor as a mechanism to preserve privacy over their emotions; and (3) workers may be exposed to a wide range of harms as a consequence of emotion AI in the workplace. The findings reveal the need to recognize and define an individual right to what we introduce as emotional privacy, and raise important research and policy questions about how to protect and preserve emotional privacy within and beyond the workplace.
  4. Social computing platforms facilitate interpersonal harms that manifest across online and physical realms, such as sexual violence between online daters and sexual grooming through social media. Risk detection AI has emerged as an approach to preventing such harms; however, a myopic focus on computational performance has been criticized in the HCI literature for failing to consider how users should interact with risk detection AI to stay safe. In this paper, we report an interview study with woman-identifying online daters (n=20) about how they envision interacting with risk detection AI and how risk detection models can be designed to support such interactions. Toward this goal, we engaged women in model-building exercises to construct their own risk detection models. Findings show that women anticipate interacting with risk detection AI to augment, not replace, their personal risk assessment strategies. They likewise designed risk detection models to amplify their subjective, and admittedly biased, indicators of risk. Design implications involve the notion of personalizable risk detection models, but also ethical concerns around perpetuating problematic stereotypes associated with risk.
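    As a thought experiment on that design implication, a "personalizable" risk detection model could be as simple as a weighted checklist in which the user, not the platform, chooses which risk indicators matter and how much. The indicator names and weights in this Python sketch are invented for illustration and do not come from the study's participants.

        # Hypothetical sketch of a user-personalizable risk model: a weighted
        # sum over indicators the user herself selects and weights. All
        # indicator names and weights below are invented for illustration.

        from typing import Dict

        def risk_score(indicators: Dict[str, bool], weights: Dict[str, float]) -> float:
            """Weighted sum of user-chosen indicators, normalized to [0, 1]."""
            total = sum(weights.values()) or 1.0
            hit = sum(w for name, w in weights.items() if indicators.get(name, False))
            return hit / total

        # A user weights indicators matching her own (subjective) heuristics.
        my_weights = {"asks_to_move_off_platform": 3.0,
                      "pushes_for_home_address": 5.0,
                      "hostile_to_boundaries": 4.0}

        observed = {"asks_to_move_off_platform": True,
                    "pushes_for_home_address": False,
                    "hostile_to_boundaries": True}

        # The score is an input to the user's own judgment, not a verdict.
        print(f"risk: {risk_score(observed, my_weights):.2f}")  # -> risk: 0.58

    Framing the output as an input to the user's own judgment, rather than a verdict, reflects participants' wish to augment rather than replace their risk assessment strategies.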
  5. Despite debates about emotion artificial intelligence's (EAI) validity, legality, and social consequences, EAI is increasingly present in the high-stakes context of hiring, with the potential to shape the future of work and the workforce. The values laden in a technology play a significant role in its societal impact. We conducted qualitative content analysis of the public-facing websites (N=229) of EAI hiring services. We identify the organizational problems that EAI hiring services claim to solve and reveal the values emerging in the desired EAI uses these services promote as solutions to those problems. Our findings show that EAI hiring services market their technologies as technosolutions to three purported organizational hiring problems: 1) hiring (in)accuracy, 2) hiring (mis)fit, and 3) hiring (in)authenticity. We unpack these problems to expose how these desired uses of EAI are legitimized by the corporate ideals of data-driven decision making, continuous improvement, precision, loyalty, and stability. We identify the unfair and deceptive mechanisms by which EAI hiring services claim to solve these purported problems, suggesting that they unfairly exclude and exploit job candidates through EAI's creation, extraction, and commodification of candidates' affective value via pseudoscientific approaches. Lastly, we interrogate EAI hiring services' claims to reveal the core values that underpin their stated desired use: techno-omnipresence, techno-omnipotence, and techno-omniscience. We show how EAI hiring services position the desired use of their technology as a moral imperative for hiring organizations, with supreme capabilities to solve organizational hiring problems, and then discuss implications for fairness, ethics, and policy in EAI-enabled hiring within the US policy landscape.