Patent applications provide insight into how inventors imagine and legitimize uses of their prospective technologies; as part of this imagining, they envision social worlds and produce sociotechnical imaginaries. Examining sociotechnical imaginaries is important for emerging technologies in high-stakes contexts, such as emotion AI for mental health care. We analyzed emotion AI patent applications (N=58) filed in the U.S. that concern monitoring and detecting emotions and/or mental health. We examined the described technologies' imagined uses and the problems they were positioned to address. We found that inventors justified emotion AI inventions as solutions to issues surrounding data accuracy, care provision and experience, patient-provider communication, emotion regulation, and the prevention of harms attributed to mental health causes. We then applied an ethical speculation lens to anticipate the potential implications of the promissory emotion AI-enabled futures described in the patent applications. We argue that such a future is one marked by the stigmatization of mental health conditions (or 'non-expected' emotions), the equation of mental health conditions with a propensity for crime, and a lack of data subjects' agency. By framing individuals with mental health conditions as unpredictable and incapable of exercising their own agency, emotion AI mental health patent applications propose solutions that intervene in this imagined future: intensive surveillance, an emphasis on individual responsibility over structural barriers, and decontextualized behavioral change interventions. Using ethical speculation, we articulate the consequences of these discourses, raising questions about framings of emotion AI as positive, inherent, or inevitable in health and care-related contexts. We discuss our findings' implications for patent review processes and advocate for policy makers, researchers, and technologists to refer to patents and patent applications to access, evaluate, and (re)consider potentially harmful sociotechnical imaginaries before they become our reality.
This content will become publicly available on May 16, 2026
Mental Health Management Through Wearables and AI Innovation
Mental health disorders, affecting nearly one billion people globally, pose a silent yet pervasive threat to well-being, reducing life expectancy and straining families, workplaces, and healthcare systems. Traditional management tools – clinical interviews, questionnaires, and infrequent check-ins – fall short, hampered by subjective biases and their inability to capture the nature of these conditions. This chapter explores how wearable technologies, powered by advanced sensors, artificial intelligence (AI), and machine learning (ML), are revolutionizing mental health care by enabling continuous, objective monitoring. Focusing on four approaches – physiological, neurotechnological, contactless, and multimodal – we analyze their mechanisms, applications, and transformative potential. These innovations promise proactive care, early intervention, and greater accessibility, yet they still face challenges. By integrating AI and refining device design, wearable technologies could redefine mental health management and empower the field, though their success hinges on overcoming technical and ethical hurdles.
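To make the continuous-monitoring idea concrete, the sketch below shows one common pattern under stated assumptions: windowed wearable signals are reduced to summary features and passed to an off-the-shelf classifier. The heart-rate and electrodermal-activity streams, the window sizes, and the "elevated stress" labels are all synthetic placeholders, not data or methods from the chapter.

```python
# Minimal sketch: turn windowed wearable signals into features and classify them.
# All signal values and labels below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def window_features(hr: np.ndarray, eda: np.ndarray) -> np.ndarray:
    """Summarize one monitoring window with simple statistics."""
    return np.array([hr.mean(), hr.std(), eda.mean(), eda.std()])

# Synthetic dataset: 200 windows, each with 300 samples per signal.
X = np.array([
    window_features(rng.normal(75, 8, 300), rng.normal(2.0, 0.5, 300))
    for _ in range(200)
])
y = rng.integers(0, 2, 200)  # placeholder labels, e.g. "elevated stress" yes/no

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

In practice the feature set, window length, and labels would come from validated protocols rather than the toy choices shown here.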
- PAR ID: 10647852
- Publisher / Repository: IGI Global
- Date Published:
- Page Range / eLocation ID: 193 to 212
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Advances in computer science and data-analytic methods are driving a new era in mental health research and application. Artificial intelligence (AI) technologies hold the potential to enhance the assessment, diagnosis, and treatment of people experiencing mental health problems and to increase the reach and impact of mental health care. However, AI applications will not mitigate mental health disparities if they are built from historical data that reflect underlying social biases and inequities. AI models biased against sensitive classes could reinforce and even perpetuate existing inequities if these models create legacies that differentially impact who is diagnosed and treated, and how effectively. The current article reviews the health-equity implications of applying AI to mental health problems, outlines state-of-the-art methods for assessing and mitigating algorithmic bias, and presents a call to action to guide the development of fairness-aware AI in psychological science.
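As one illustration of what assessing algorithmic bias can involve, the sketch below computes two commonly used group-fairness gaps (demographic parity and equal opportunity) from a model's predictions and a sensitive attribute. The group labels, outcomes, and predictions are synthetic placeholders, not the metrics or data of the reviewed work.

```python
# Minimal sketch of a bias-assessment step: compare positive-prediction rates
# and true-positive rates across groups defined by a sensitive attribute.
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)   # hypothetical sensitive attribute (0/1)
y_true = rng.integers(0, 2, 1000)  # observed diagnosis/outcome (placeholder)
y_pred = rng.integers(0, 2, 1000)  # model's predicted diagnosis (placeholder)

def rate(mask: np.ndarray, values: np.ndarray) -> float:
    """Mean of `values` restricted to `mask`, or NaN if the subgroup is empty."""
    return float(values[mask].mean()) if mask.any() else float("nan")

# Demographic parity gap: difference in positive-prediction rates between groups.
dp_gap = abs(rate(group == 0, y_pred) - rate(group == 1, y_pred))

# Equal-opportunity gap: difference in true-positive rates between groups.
tpr0 = rate((group == 0) & (y_true == 1), y_pred)
tpr1 = rate((group == 1) & (y_true == 1), y_pred)
eo_gap = abs(tpr0 - tpr1)

print(f"demographic parity gap: {dp_gap:.3f}, equal-opportunity gap: {eo_gap:.3f}")
```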
-
Emotion AI, or AI that claims to infer emotional states from various data sources, is increasingly deployed in myriad contexts, including mental healthcare. While emotion AI is celebrated for its potential to improve care and diagnosis, we know little about the perceptions of data subjects most directly impacted by its integration into mental healthcare. In this paper, we qualitatively analyzed U.S. adults' open-ended survey responses (n = 395) to examine their perceptions of emotion AI use in mental healthcare and its potential impacts on them as data subjects. We identify various perceived impacts of emotion AI use in mental healthcare concerning 1) mental healthcare provisions; 2) data subjects' voices; 3) monitoring data subjects for potential harm; and 4) involved parties' understandings and uses of mental health inferences. Participants' remarks highlight ways emotion AI could address existing challenges data subjects may face by 1) improving mental healthcare assessments, diagnoses, and treatments; 2) facilitating data subjects' mental health information disclosures; 3) identifying potential data subject self-harm or harm posed to others; and 4) increasing involved parties' understanding of mental health. However, participants also described their perceptions of potential negative impacts of emotion AI use on data subjects such as 1) increasing inaccurate and biased assessments, diagnoses, and treatments; 2) reducing or removing data subjects' voices and interactions with providers in mental healthcare processes; 3) inaccurately identifying potential data subject self-harm or harm posed to others with negative implications for wellbeing; and 4) involved parties misusing emotion AI inferences with consequences to (quality) mental healthcare access and data subjects' privacy. We discuss how our findings suggest that emotion AI use in mental healthcare is an insufficient techno-solution that may exacerbate various mental healthcare challenges with implications for potential distributive, procedural, and interactional injustices and potentially disparate impacts on marginalized groups.
The development of digital instruments for mental health monitoring using biosensor data from wearable devices can enable remote, longitudinal, and objective quantitative benchmarks. To survey developments and trends in this field, we conducted a systematic review of artificial intelligence (AI) models using data from wearable biosensors to predict mental health conditions and symptoms. Following PRISMA guidelines, we identified 48 studies using a variety of wearable and smartphone biosensors including heart rate, heart rate variability (HRV), electrodermal activity/galvanic skin response (EDA/GSR), and digital proxies for biosignals such as accelerometry, location, audio, and usage metadata. We observed several technical and methodological challenges across studies in this field, including lack of ecological validity, data heterogeneity, small sample sizes, and battery drainage issues. We outline several corresponding opportunities for advancement in the field of AI-driven biosensing for mental health.
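For readers unfamiliar with how such biosensor streams become model inputs, the sketch below computes two standard heart-rate-variability features (SDNN and RMSSD) from a series of inter-beat intervals. The interval values are synthetic and the feature choice is illustrative, not drawn from the reviewed studies.

```python
# Minimal sketch: two standard HRV features from inter-beat intervals (IBIs).
import numpy as np

# Placeholder inter-beat intervals in milliseconds (synthetic data).
ibi_ms = np.random.default_rng(2).normal(800, 50, 300)

sdnn = ibi_ms.std(ddof=1)                       # standard deviation of intervals
rmssd = np.sqrt(np.mean(np.diff(ibi_ms) ** 2))  # root mean square of successive differences

print(f"SDNN: {sdnn:.1f} ms, RMSSD: {rmssd:.1f} ms")
```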
Chronic stress has been associated with a variety of pathophysiological risks including developing mental illness. Conversely, appropriate stress management can be used to foster mental wellness proactively. Yet, there is no existing method that accurately and objectively monitors stress. With recent advances in electronic-skin (e-skin) and wearable technologies, it is possible to design devices that continuously measure physiological parameters linked to chronic stress and other mental health and wellness conditions. However, the design approach should be different from conventional wearables due to considerations like signal-to-noise ratio and the risk of stigmatization. Here, we present a multi-part study that combines user-centered design with engineering-centered data collection to inform future design efforts. To assess human factors, we conducted an n=24 participant design probe study that examined perceptions of an e-skin for mental health and wellness as well as preferred wear locations. We complement this with n=10 and n=16 participant data collection studies to measure physiological signals at several potential wear locations. By balancing human factors and biosignals, we conclude that the upper arm and forearm are optimal wear locations.
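As a rough illustration of the signal-to-noise-ratio consideration mentioned above, the sketch below compares SNR (in decibels) for hypothetical recordings from candidate wear locations. The recordings, noise estimates, and location names are placeholders, not the study's measurements.

```python
# Minimal sketch: compare signal-to-noise ratio across candidate wear locations.
import numpy as np

rng = np.random.default_rng(3)

def snr_db(recording: np.ndarray, noise_estimate: np.ndarray) -> float:
    """10 * log10 of mean signal power over mean noise power."""
    return 10 * np.log10(np.mean(recording ** 2) / np.mean(noise_estimate ** 2))

# Synthetic recordings: same nominal signal, different noise levels per location.
locations = {
    "upper arm": (rng.normal(0, 1.0, 5000), rng.normal(0, 0.2, 5000)),
    "forearm":   (rng.normal(0, 1.0, 5000), rng.normal(0, 0.3, 5000)),
    "wrist":     (rng.normal(0, 1.0, 5000), rng.normal(0, 0.6, 5000)),
}
for name, (signal, noise) in locations.items():
    print(f"{name}: {snr_db(signal, noise):.1f} dB")
```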