Emotion recognition technologies, while critiqued for bias, validity, and privacy invasion, continue to be developed and applied in a range of domains, including high-stakes settings like the workplace. We set out to examine emotion recognition technologies proposed for use in the workplace, describing the input data and training, outputs, and actions that these systems take or prompt. We use these design features to reflect on the technologies' implications through the lens of ethical speculation. We analyzed patent applications that developed emotion recognition technologies to be used in the workplace (N=86). We found that these technologies scope data collection broadly; claim to reveal not only targets' emotional expressions but also their internal states; and take or prompt a wide range of actions, many of which impact workers' employment and livelihoods. Technologies described in patent applications frequently violated existing guidelines for ethical automated emotion recognition technology. We demonstrate the utility of using patent applications for ethical speculation. In doing so, we suggest that 1) increasing the visibility of claimed emotional states has the potential to create additional emotional labor for workers (a burden disproportionately distributed to low-power and marginalized workers), to contribute to a larger pattern of blurring boundaries between the expectations of the workplace and a worker's autonomy, and, more broadly, to reinforce the data colonialism regime; and 2) emotion recognition technology's failures can be invisible, may inappropriately influence high-stakes workplace decisions, and can exacerbate inequity. We discuss the implications of making emotions and emotional data visible in the workplace and offer considerations for designers of emotion recognition technologies, employers who use them, and policymakers.
Patent Applications as Glimpses into the Sociotechnical Imaginary: Ethical Speculation on the Imagined Futures of Emotion AI for Mental Health Monitoring and Detection
Patent applications provide insight into how inventors imagine and legitimize uses of their imagined technologies; as part of this imagining, they envision social worlds and produce sociotechnical imaginaries. Examining sociotechnical imaginaries is important for emerging technologies in high-stakes contexts, such as the use of emotion AI to address mental health care. We analyzed emotion AI patent applications (N=58) filed in the U.S. concerned with monitoring and detecting emotions and/or mental health. We examined the described technologies' imagined uses and the problems they were positioned to address. We found that inventors justified emotion AI inventions as solutions to issues surrounding data accuracy, care provision and experience, patient-provider communication, emotion regulation, and the prevention of harms attributed to mental health causes. We then applied an ethical speculation lens to anticipate the potential implications of the promissory emotion AI-enabled futures described in patent applications. We argue that such a future is one in which mental health conditions (or 'non-expected' emotions) are stigmatized, mental health is equated with a propensity for crime, and data subjects lack agency. By framing individuals with mental health conditions as unpredictable and incapable of exercising their own agency, emotion AI mental health patent applications propose solutions that intervene in this imagined future: intensive surveillance, an emphasis on individual responsibility over structural barriers, and decontextualized behavioral change interventions. Using ethical speculation, we articulate the consequences of these discourses, raising questions about the framing of emotion AI as positive, inherent, or inevitable in health and care-related contexts. We discuss our findings' implications for patent review processes and advocate for policymakers, researchers, and technologists to use patents and patent applications to access, evaluate, and (re)consider potentially harmful sociotechnical imaginaries before they become our reality.
- PAR ID: 10516800
- Publisher / Repository: ACM
- Date Published:
- Journal Name: Proceedings of the ACM on Human-Computer Interaction
- Volume: 8
- Issue: CSCW1
- ISSN: 2573-0142
- Page Range / eLocation ID: 1 to 43
- Subject(s) / Keyword(s): Emotion Artificial Intelligence, Emotion AI, Emotion Recognition, Mental Health, Data Subjects, Healthcare, AI Ethics, Ethical Speculation, Sociotechnical Imaginary, Patents
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Emotion AI, or AI that claims to infer emotional states from various data sources, is increasingly deployed in myriad contexts, including mental healthcare. While emotion AI is celebrated for its potential to improve care and diagnosis, we know little about the perceptions of data subjects most directly impacted by its integration into mental healthcare. In this paper, we qualitatively analyzed U.S. adults' open-ended survey responses (n = 395) to examine their perceptions of emotion AI use in mental healthcare and its potential impacts on them as data subjects. We identify various perceived impacts of emotion AI use in mental healthcare concerning 1) mental healthcare provisions; 2) data subjects' voices; 3) monitoring data subjects for potential harm; and 4) involved parties' understandings and uses of mental health inferences. Participants' remarks highlight ways emotion AI could address existing challenges data subjects may face by 1) improving mental healthcare assessments, diagnoses, and treatments; 2) facilitating data subjects' mental health information disclosures; 3) identifying potential data subject self-harm or harm posed to others; and 4) increasing involved parties' understanding of mental health. However, participants also described their perceptions of potential negative impacts of emotion AI use on data subjects, such as 1) increasing inaccurate and biased assessments, diagnoses, and treatments; 2) reducing or removing data subjects' voices and interactions with providers in mental healthcare processes; 3) inaccurately identifying potential data subject self-harm or harm posed to others, with negative implications for wellbeing; and 4) involved parties misusing emotion AI inferences, with consequences to (quality) mental healthcare access and data subjects' privacy. We discuss how our findings suggest that emotion AI use in mental healthcare is an insufficient techno-solution that may exacerbate various mental healthcare challenges, with implications for potential distributive, procedural, and interactional injustices and potentially disparate impacts on marginalized groups.
- For autistic individuals, navigating social and emotional interactions can be complex, often involving disproportionately high cognitive labor in contrast to neurotypical conversation partners. Through a novel approach to speculative co-design, autistic adults explored affective imaginaries (imagined futuristic technology interventions) to probe a provocative question: What if technology could translate emotions like it can translate spoken language? The resulting speculative prototype for an image-enabled emotion translator chat application included: (1) a visual system for representing personalized emotion taxonomies, and (2) a Wizard of Oz implementation of these taxonomies in a low-fidelity chat application. Although wary of technology that purports to understand emotions, autistic participants saw value in being able to deploy visual emotion taxonomies during chats with neurotypical conversation partners. This work shows that affective technology should enable users to: (1) curate encodings of emotions used in system artifacts, (2) enhance interactive emotional understanding, and (3) have agency over how and when to use emotion features.
- Mental health disorders, affecting nearly one billion people globally, pose a silent yet pervasive threat to well-being, reducing life expectancy and straining families, workplaces, and healthcare systems. Traditional management tools (clinical interviews, questionnaires, and infrequent check-ins) fall short, hampered by subjective biases and their inability to capture the nature of these conditions. This chapter explores how wearable technologies, powered by advanced sensors, artificial intelligence (AI), and machine learning (ML), are revolutionizing mental health care by enabling continuous, objective monitoring. Focusing on four approaches (physiological, neurotechnological, contactless, and multimodal), we analyze their mechanisms, applications, and transformative potential. These innovations promise proactive care, early intervention, and greater accessibility, yet face challenges. By integrating AI and refining device design, wearable technologies could redefine mental health management, though their success hinges on overcoming technical and ethical hurdles.
- What should we do with emotion AI? Should we regulate, ban, promote, or re-imagine it? Emotion AI, a class of affective computing technologies used in personal and social computing, comprises emergent and controversial techniques aiming to classify human emotion and other affective phenomena. Industry, policy, and scientific actors debate potential benefits and harms, arguing for polarized futures ranging from panoptic expansion to complete bans. Emotion AI is proposed, deployed, and sometimes withdrawn in collaborative contexts such as education, hiring, healthcare, and service work. Proponents expound these technologies' benefits for well-being and security, while critics decry privacy harms, civil liberties risks, bias, shaky scientific foundations, and gaps between the technologies' capabilities and how they are marketed and legitimized. This panel brings diverse disciplinary perspectives into discussion about the history of emotions (as an example of 'intimate' data) in computing, how emotion AI is legitimized, people's experiences with and perceptions of emotion AI in social and collaborative settings, emotion AI's development practices, and the use of design research to re-imagine emotion AI. These issues are relevant to the CSCW community in designing, evaluating, and regulating algorithmic sensing technologies, including and beyond emotion-sensing.