The mental health crisis in the United States spotlights the need for more scalable training for mental health workers. While present-day AI systems have sparked hope for addressing this problem, we must not be too quick to incorporate, or focus solely on, technological advancements. We must ask empirical questions about how to ethically collaborate with and integrate autonomous AI into the clinical workplace. For these Human-Autonomy Teams (HATs), poised to make the leap into the mental health domain, special consideration of the construct of trust is in order. A reflexive look at the multidisciplinary nature of such HAT projects illuminates the need for a deeper dive into varied stakeholder considerations of ethics and trust. In this paper, we investigate the impact of domain, and of the range of expertise within domains, on ethics- and trust-related considerations for HATs in mental health. We outline our engagement of 23 participants in two speculative activities: design fiction and factorial survey vignettes. Grounded by a video storyboard prototype, AI- and psychotherapy-domain experts and novices alike imagined TEAMMAIT, a prospective AI system for psychotherapy training. From our inductive analysis emerged 10 themes surrounding ethics, trust, and collaboration. Three can be seen as substantial barriers to trust and collaboration, where participants imagined they would not work with an AI teammate that did not meet these ethical standards. Another five of the themes can be seen as interrelated, context-dependent, and variable factors of trust that impact collaboration with an AI teammate. The final two themes represent more explicit engagement with the prospective role of an AI teammate in psychotherapy training practices. We conclude by evaluating our findings through the lens of Mayer et al.'s Integrative Model of Organizational Trust to discuss the risks of HATs and adapt models of ability-, benevolence-, and integrity-based trust. These updates motivate implications for the design and integration of HATs in mental health work.
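As a concrete illustration of the factorial survey vignette method mentioned above, the sketch below generates a fully crossed set of vignette conditions. The factor names and levels are hypothetical placeholders for illustration, not the ones used in the TEAMMAIT study.

```python
# Sketch: generating a fully crossed factorial vignette design, as used in
# factorial survey studies. Factors and levels are hypothetical illustrations.
from itertools import product

factors = {
    "participant_expertise": ["AI expert", "AI novice", "therapy expert", "therapy novice"],
    "ai_role": ["observer", "co-trainer", "autonomous trainer"],
    "uncertainty_disclosure": ["discloses uncertainty", "does not disclose"],
}

# Each vignette is one combination of levels across all factors.
vignettes = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(vignettes))   # 4 * 3 * 2 = 24 fully crossed conditions
print(vignettes[0])
```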
Assessing Communication and Trust in an AI Teammate in a Dynamic Task Environment
This research examines the relationship between anticipatory pushing of information and trust in human–autonomy teaming in a remotely piloted aircraft system synthetic task environment. Two participants and one AI teammate emulated by a confederate executed a series of missions under routine and degraded conditions. We addressed the following questions: (1) How do anticipatory pushing of information and trust change between human–human and human–autonomy team member pairings across the two sessions? and (2) How is anticipatory pushing of information associated with the trust placed in a teammate across the two sessions? This study demonstrated two main findings: (1) anticipatory pushing of information and trust differed between human–human and human–AI dyads, and (2) anticipatory pushing of information and trust scores increased among human–human dyads under degraded conditions but decreased in human–AI dyads.
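Anticipatory pushing is typically quantified from coded communication logs as an anticipation ratio: unsolicited information transfers (pushes) divided by requests for information (pulls), with values above 1 indicating anticipatory behavior. A minimal sketch, assuming a hypothetical log format and teammate labels:

```python
# Sketch: computing an anticipation ratio from a coded communication log.
# The log format and role labels are assumptions for illustration.
from collections import Counter

# (sender, receiver, kind): "push" = unsolicited transfer, "pull" = request.
comm_log = [
    ("pilot", "navigator", "push"),
    ("navigator", "pilot", "pull"),
    ("pilot", "photographer", "push"),
    ("photographer", "pilot", "push"),
    ("navigator", "photographer", "pull"),
]

def anticipation_ratio(log, sender=None):
    """Pushes divided by pulls, team-wide or for a single sender."""
    kinds = Counter(kind for s, _, kind in log if sender is None or s == sender)
    return kinds["push"] / max(kinds["pull"], 1)  # guard against zero pulls

print(anticipation_ratio(comm_log))           # team-level: 3 pushes / 2 pulls = 1.5
print(anticipation_ratio(comm_log, "pilot"))  # 2 pushes, 0 pulls (guarded) -> 2.0
```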
- Award ID(s): 1828010
- PAR ID: 10344408
- Date Published:
- Journal Name: 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS)
- Page Range / eLocation ID: 1 to 6
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Objective This work examines two human–autonomy team (HAT) training approaches that target communication and trust calibration to improve team effectiveness under degraded conditions. Background Human–autonomy teaming presents challenges to teamwork, some of which may be addressed through training. Factors vital to HAT performance include communication and calibrated trust. Method Thirty teams of three, including one confederate acting as an autonomous agent, received either entrainment-based coordination training, trust calibration training, or control training before executing a series of missions operating a simulated remotely piloted aircraft. Automation and autonomy failures simulating degraded conditions were injected during missions, and measures of team communication, trust, and task efficiency were collected. Results Teams receiving coordination training had higher communication anticipation ratios, took photos of targets faster, and overcame more autonomy failures. Although autonomy failures were introduced in all conditions, teams receiving the calibration training reported that their overall trust in the agent was more robust over time. However, they did not perform better than the control condition. Conclusions Training based on entrainment of communications, wherein introduction of timely information exchange through one team member has lasting effects throughout the team, was positively associated with improvements in HAT communications and performance under degraded conditions. Training that emphasized the shortcomings of the autonomous agent appeared to calibrate expectations and maintain trust. Applications Team training that includes an autonomous agent that models effective information exchange may positively impact team communication and coordination. Training that emphasizes the limitations of an autonomous agent may help calibrate trust.
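Degraded conditions in studies like this one are produced by scripting failures into the mission timeline. Below is a minimal sketch of such a failure-injection schedule; the failure types, timing bounds, and mission length are illustrative assumptions, not the study's protocol.

```python
# Sketch: scheduling injected automation/autonomy failures across simulated
# missions. All parameters here are hypothetical illustrations.
import random

FAILURE_TYPES = ["automation_failure", "autonomy_failure"]

def build_mission_schedule(n_missions, mission_length_s=300, seed=42):
    rng = random.Random(seed)  # fixed seed so every team sees the same script
    schedule = []
    for mission in range(n_missions):
        failure = rng.choice(FAILURE_TYPES)
        onset = rng.randrange(60, mission_length_s - 60)  # avoid mission edges
        schedule.append({"mission": mission, "failure": failure, "onset_s": onset})
    return schedule

for event in build_mission_schedule(5):
    print(event)
```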
- In activities such as dancing and sports, people synchronize behaviors in many different ways. Synchronization between people has traditionally been characterized as either perfect mirroring (1:1 in-phase synchronization, spontaneous synchrony, and mimicry) or reflectional mirroring (1:1 antiphase synchronization), but most activities require partners to synchronize more complicated patterns. We asked visually coupled dyads to coordinate finger movements to perform multifrequency ratios (1:1, 2:1, 3:1, 4:1, and 5:1). Because these patterns are coordinated across and not just within individual physiological and motor systems, we based our predictions on frequency-locking dynamics, which is a general coordination principle that is not limited to physiological explanations. Twenty dyads performed five multifrequency ratios under three levels of visual coupling, with half using a subcritical visual information update rate. The dynamical principle was supported, such that multifrequency performance tends to abide by the strictures of frequency locking. However, these constraints are relaxed if the visual information rate is beyond the critical information update rate. An analysis of turning points in the oscillatory finger movements suggested that dyads did not rely on this visual information to stabilize coordination. How the laboratory findings align with naturalistic observations of multifrequency performance in actual sports teams (Double Dutch) is discussed. Frequency-locking accounts not only for the human propensity for perfect mirroring but also for variations in performance when dyads deviate from mirroring.
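Frequency locking at an n:m ratio can be illustrated with a pair of coupled phase oscillators whose generalized relative phase (m·θ1 − n·θ2) settles to a constant. A minimal sketch under assumed coupling and detuning parameters:

```python
# Sketch: n:m frequency locking between two coupled phase oscillators, a
# minimal dynamical analogue of multifrequency coordination. The coupling
# form and parameter values are illustrative assumptions.
import math

def relative_phase_after_locking(n=2, m=1, k=0.8, detune=0.1, steps=20000, dt=0.001):
    w1 = 2.0 * n + detune  # oscillator 1 slightly off a perfect n:m ratio
    w2 = 2.0 * m
    p1, p2 = 0.3, 0.0      # initial phases
    for _ in range(steps):
        rel = m * p1 - n * p2          # generalized relative phase for n:m
        p1 += dt * (w1 - k * math.sin(rel))
        p2 += dt * (w2 + k * math.sin(rel))
    # wrap the final relative phase into (-pi, pi]
    return (m * p1 - n * p2 + math.pi) % (2 * math.pi) - math.pi

# The relative phase converges to a small constant: the pattern is locked
# despite the detuning, as frequency-locking dynamics predict.
print(relative_phase_after_locking())      # ~0.04 rad for 2:1
print(relative_phase_after_locking(n=3))   # 3:1 also locks
```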
- While there is increased interest in how trust spreads in Human Autonomy Teams (HATs), most trust measurements are subjective and do not examine real-time changes in trust. To develop a trust metric that consists of objective variables influenced by trust/distrust manipulations, we conducted an Interactive hybrid Cognitive Task Analysis (IhCTA) for a Remotely Piloted Aerial System (RPAS) HAT. The IhCTA adapted parts of the hybrid Cognitive Task Analysis (hCTA) framework. In this paper, we present the four steps of the IhCTA approach, including 1) generating a scenario task overview, 2) generating teammate-specific event flow diagrams, 3) identifying interactions and interdependencies impacted by trust/distrust manipulations, and 4) processing RPAS variables based on the IhCTA to create a metric. We demonstrate the application of the metric through a case study that examines how the influence of specific interactions on team state changes before and after the spread of distrust.
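Step 4 of such a pipeline amounts to combining trust-sensitive objective variables into a single time-varying score. A minimal sketch follows; the variable names, signs, and equal weighting are assumptions for illustration, not the IhCTA metric itself.

```python
# Sketch: combining objective interaction variables into a time-varying
# composite score. Variable names and weights are hypothetical.
import statistics

def zscores(xs):
    mu, sd = statistics.fmean(xs), statistics.pstdev(xs) or 1.0
    return [(x - mu) / sd for x in xs]

def composite_trust_metric(windows):
    """windows: per-time-window dicts of objective interaction variables."""
    keys = ["response_latency_s", "info_push_count", "override_count"]
    cols = {k: zscores([w[k] for w in windows]) for k in keys}
    # Assumed directions: lower latency and fewer overrides of the agent,
    # plus more information pushing, read as higher trust.
    signs = {"response_latency_s": -1, "info_push_count": 1, "override_count": -1}
    return [sum(signs[k] * cols[k][i] for k in keys) / len(keys)
            for i in range(len(windows))]

windows = [
    {"response_latency_s": 4.0, "info_push_count": 6, "override_count": 0},
    {"response_latency_s": 9.5, "info_push_count": 2, "override_count": 3},
    {"response_latency_s": 5.0, "info_push_count": 5, "override_count": 1},
]
print(composite_trust_metric(windows))
```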
- Remote Patient Monitoring (RPM) devices transmit patients' medical indicators (e.g., blood pressure) from the patient's home testing equipment to their healthcare providers, in order to monitor chronic conditions such as hypertension. AI systems have the potential to enhance access to timely medical advice based on the data that RPM devices produce. In this paper, we report on three studies investigating how the severity of users' medical condition (normal vs. high blood pressure), security risk (low vs. modest vs. high risk), and medical advice source (human doctor vs. AI) influence user perceptions of advisor trustworthiness and willingness to disclose RPM-acquired information. We found that trust mediated the relationship between the advice source and users' willingness to disclose health information: users trust doctors more than AI and are more willing to disclose their RPM-acquired health information to a more trusted advice source. However, we unexpectedly discovered that conditional on trust, users disclose RPM-acquired information more readily to AI than to doctors. We observed that the advice source did not influence perceptions of security and privacy risks. We conclude by discussing how our findings can support the design of RPM applications.
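The mediation finding above, in which trust carries the effect of advice source on willingness to disclose, can be tested with a product-of-coefficients analysis. A minimal sketch on simulated data whose effect directions mirror the reported findings; the variable coding and effect sizes are assumptions, and a real analysis would bootstrap the indirect effect.

```python
# Sketch: product-of-coefficients mediation test on simulated data.
# Coding assumption: source 0 = human doctor, 1 = AI.
import numpy as np

rng = np.random.default_rng(0)
n = 500
source = rng.integers(0, 2, n).astype(float)
trust = 3.5 - 0.8 * source + rng.normal(0, 1, n)             # doctors trusted more
disclose = 1.0 + 0.9 * trust + 0.3 * source + rng.normal(0, 1, n)

def ols(y, *xs):
    """Least-squares slopes for y on the given predictors (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols(trust, source)[0]                   # path a: source -> trust
b, c_prime = ols(disclose, trust, source)   # path b and direct effect c'
print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_prime:.3f}")
# Negative indirect effect (AI is trusted less), positive direct effect
# (conditional on trust, more disclosure to AI), matching the abstract.
```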