Patent applications provide insight into how inventors imagine and legitimize uses of their proposed technologies; in doing so, inventors envision social worlds and produce sociotechnical imaginaries. Examining sociotechnical imaginaries is important for emerging technologies in high-stakes contexts, such as emotion AI for mental health care. We analyzed emotion AI patent applications (N=58) filed in the U.S. that concern monitoring and detecting emotions and/or mental health. We examined the described technologies' imagined uses and the problems they were positioned to address. We found that inventors justified emotion AI inventions as solutions to issues surrounding data accuracy, care provision and experience, patient-provider communication, emotion regulation, and the prevention of harms attributed to mental health causes. We then applied an ethical speculation lens to anticipate the potential implications of the promissory emotion AI-enabled futures described in patent applications. We argue that such a future is one marked by the stigmatization of mental health conditions (or 'non-expected' emotions), the equation of mental health conditions with a propensity for crime, and a lack of data subjects' agency. By framing individuals with mental health conditions as unpredictable and incapable of exercising their own agency, emotion AI mental health patent applications propose solutions that intervene in this imagined future: intensive surveillance, an emphasis on individual responsibility over structural barriers, and decontextualized behavioral change interventions. Using ethical speculation, we articulate the consequences of these discourses, raising questions about the framing of emotion AI as positive, inherent, or inevitable in health and care-related contexts. We discuss our findings' implications for patent review processes, and advocate that policymakers, researchers, and technologists turn to patents and patent applications to access, evaluate, and (re)consider potentially harmful sociotechnical imaginaries before they become our reality.
This content will become publicly available on May 2, 2026
There's No "I" in TEAMMAIT: Impacts of Domain and Expertise on Trust in AI Teammates for Mental Health Work
The mental health crisis in the United States spotlights the need for more scalable training for mental health workers. While present-day AI systems have sparked hope for addressing this problem, we must not be too quick to incorporate or solely focus on technological advancements. We must ask empirical questions about how to ethically collaborate with and integrate autonomous AI into the clinical workplace. For these Human-Autonomy Teams (HATs), poised to make the leap into the mental health domain, special consideration around the construct of trust is in order. A reflexive look toward the multidisciplinary nature of such HAT projects illuminates the need for a deeper dive into varied stakeholder considerations of ethics and trust. In this paper, we investigate the impact of domain, and of the range of expertise within domains, on ethics- and trust-related considerations for HATs in mental health. We outline our engagement of 23 participants in two speculative activities: design fiction and factorial survey vignettes. Grounded by a video storyboard prototype, AI- and Psychotherapy-domain experts and novices alike imagined TEAMMAIT, a prospective AI system for psychotherapy training. From our inductive analysis emerged ten themes surrounding ethics, trust, and collaboration. Three can be seen as substantial barriers to trust and collaboration, where participants imagined they would not work with an AI teammate that did not meet these ethical standards. Another five of the themes can be seen as interrelated, context-dependent, and variable factors of trust that impact collaboration with an AI teammate. The final two themes represent more explicit engagement with the prospective role of an AI teammate in psychotherapy training practices. We conclude by evaluating our findings through the lens of Mayer et al.'s Integrative Model of Organizational Trust to discuss the risks of HATs and to adapt models of ability-, benevolence-, and integrity-based trust. These updates motivate implications for the design and integration of HATs in mental health work.
- PAR ID: 10630750
- Publisher / Repository: Association for Computing Machinery
- Date Published:
- Journal Name: Proceedings of the ACM on Human-Computer Interaction
- Volume: 9
- Issue: 2
- ISSN: 2573-0142
- Page Range / eLocation ID: 1 to 36
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
This research examines the relationship between anticipatory pushing of information and trust in human–autonomy teaming in a remotely piloted aircraft system-synthetic task environment. Two participants and one AI teammate emulated by a confederate executed a series of missions under routine and degraded conditions. We addressed the following questions: (1) How do anticipatory pushing of information and trust toward human and autonomous team members change across the two sessions? and (2) How is anticipatory pushing of information associated with the trust placed in a teammate across the two sessions? This study demonstrated two main findings: (1) anticipatory pushing of information and trust differed between human-human and human-AI dyads, and (2) anticipatory pushing of information and trust scores increased among human-human dyads under degraded conditions but decreased in human-AI dyads.
-
This study aimed to investigate the key technical and psychological factors that impact architecture, engineering, and construction (AEC) professionals' trust in collaborative robots (cobots) powered by artificial intelligence (AI). It seeks to address the critical knowledge gaps surrounding the establishment and reinforcement of trust among AEC professionals in their collaboration with AI-powered cobots. In the context of the construction industry, where the complexities of tasks often necessitate human–robot teamwork, understanding the technical and psychological factors influencing trust is paramount. Such trust dynamics play a pivotal role in determining the effectiveness of human–robot collaboration on construction sites. This research employed a nationwide survey of 600 AEC industry practitioners to shed light on these influential factors, providing valuable insights to calibrate trust levels and facilitate the seamless integration of AI-powered cobots into the AEC industry. Additionally, it aimed to gather insights into opportunities for promoting the adoption, cultivation, and training of a skilled workforce to effectively leverage this technology. A structural equation modeling (SEM) analysis revealed that safety and reliability are significant factors for the adoption of AI-powered cobots in construction. Fear of being replaced as a result of cobot use can have a substantial effect on the mental health of the affected workers. A lower error rate in jobs involving cobots, safety measures, and the security of data collected by cobots from jobsites significantly impact reliability, and the transparency of cobots' inner workings can benefit accuracy, robustness, security, privacy, and communication and result in higher levels of automation, all of which were demonstrated to contribute to trust. The study's findings provide critical insights into the perceptions and experiences of AEC professionals toward the adoption of cobots in construction and help project teams determine an adoption approach that aligns with the company's goals and workers' welfare.
-
This paper suggests how reflective design can aid informal participatory algorithm auditing. Drawing from reflective design, we designed a simple web-form probe to invite critical reflection on Emotion AI, an ethically controversial set of techniques for predicting individuals' emotions. Participants engaged with the probe throughout their daily lives for about a week. Then, we interviewed participants about their experiences and reflections. Our findings surface themes around participants' (i) critiques of Emotion AI, (ii) factors contributing to inaccuracy, and (iii) patterns of miscategorization. Our discussion contributes (1) recommendations for Emotion AI and (2) considerations reflective design may offer to inform algorithm auditing. Overall, our paper suggests ways critically oriented design research can engage AI ethics through informal, participatory, exploratory algorithm auditing.
-
The rise of complex AI systems in healthcare and other sectors has led to a growing area of research called Explainable AI (XAI), designed to increase transparency. In this area, quantitative and qualitative studies focus on improving user trust and task performance by providing system- and prediction-level XAI features. We analyze stakeholder engagement events (interviews and workshops) on the use of AI for kidney transplantation. From this, we identify themes that we use to frame a scoping literature review on current XAI features. The stakeholder engagement process lasted over nine months, covering three stakeholder groups' workflows, determining where AI could intervene, and assessing a mock XAI decision support system. Based on the stakeholder engagement, we identify four major themes relevant to designing XAI systems: (1) use of AI predictions, (2) information included in AI predictions, (3) personalization of AI predictions for individual differences, and (4) customization of AI predictions for specific cases. Using these themes, our scoping literature review finds that providing AI predictions before, during, or after decision-making can be beneficial depending on the complexity of the stakeholder's task. Additionally, expert stakeholders such as surgeons prefer minimal to no XAI features beyond the AI prediction and uncertainty estimates for easy use cases. However, almost all stakeholders prefer to have optional XAI features to review when needed, especially in hard-to-predict cases. The literature also suggests that providing both system- and prediction-level information is necessary to build the user's mental model of the system appropriately. Although XAI features improve users' trust in the system, human-AI team performance is not always enhanced. Overall, stakeholders prefer to have agency over the XAI interface to control the level of information based on their needs and task complexity. We conclude with suggestions for future research, especially on customizing XAI features based on preferences and tasks.
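To make the interaction pattern described in the preceding abstract more concrete, here is a minimal, hypothetical sketch (not code from the reviewed paper) of a prediction service that returns a prediction and an uncertainty estimate by default and exposes additional, system-level explanation only when the user asks for it. The synthetic data, the random-forest model, and the placeholder feature names are all assumptions made for illustration.

```python
# Illustrative sketch only: on-demand XAI details on top of a basic prediction.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-in data; a real system would use clinical records.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(6)]  # placeholder feature names

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def predict_with_optional_details(case, show_details=False):
    """Return the prediction and an uncertainty proxy; add explanation only on request."""
    case = case.reshape(1, -1)
    # Spread of per-tree probabilities serves as a simple uncertainty proxy.
    per_tree = np.array([tree.predict_proba(case)[0, 1] for tree in clf.estimators_])
    result = {
        "predicted_probability": float(per_tree.mean()),
        "uncertainty": float(per_tree.std()),
    }
    if show_details:
        # System-level information: which features the model relies on overall.
        ranked = sorted(zip(feature_names, clf.feature_importances_),
                        key=lambda pair: -pair[1])
        result["top_features"] = ranked[:3]
    return result

# Routine case: just the prediction and uncertainty.
print(predict_with_optional_details(X[0]))
# Hard-to-predict case: the user requests more detail.
print(predict_with_optional_details(X[1], show_details=True))
```

The spread of per-tree probabilities stands in for the uncertainty estimates stakeholders asked for; a deployed system could substitute calibrated uncertainty and richer, prediction-level explanations.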