The mental health crisis in the United States spotlights the need for more scalable training for mental health workers. While present-day AI systems have sparked hope for addressing this problem, we must not be too quick to incorporate or solely focus on technological advancements. We must ask empirical questions about how to ethically collaborate with and integrate autonomous AI into the clinical workplace. For these Human-Autonomy Teams (HATs), poised to make the leap into the mental health domain, special consideration around the construct of trust is in order. A reflexive look toward the multidisciplinary nature of such HAT projects illuminates the need for a deeper dive into varied stakeholder considerations of ethics and trust. In this paper, we investigate the impact of domain, and the ranges of expertise within domains, on ethics- and trust-related considerations for HATs in mental health. We outline our engagement of 23 participants in two speculative activities: design fiction and factorial survey vignettes. Grounded by a video storyboard prototype, AI- and psychotherapy-domain experts and novices alike imagined TEAMMAIT, a prospective AI system for psychotherapy training. From our inductive analysis emerged 10 themes surrounding ethics, trust, and collaboration. Three can be seen as substantial barriers to trust and collaboration, where participants imagined they would not work with an AI teammate that did not meet these ethical standards. Another five of the themes can be seen as interrelated, context-dependent, and variable factors of trust that impact collaboration with an AI teammate. The final two themes represent more explicit engagement with the prospective role of an AI teammate in psychotherapy training practices. We conclude by evaluating our findings through the lens of Mayer et al.'s Integrative Model of Organizational Trust to discuss the risks of HATs and adapt models of ability-, benevolence-, and integrity-based trust. These updates motivate implications for the design and integration of HATs in mental health work.
                    
                            
Trust Junk and Evil Knobs: Calibrating Trust in AI Visualization
                        
                    
    
Many papers make claims about specific visualization techniques that are said to enhance or calibrate trust in AI systems. But a design choice that enhances trust in some cases appears to damage it in others. In this paper, we explore this inherent duality through an analogy with “knobs”. Turning a knob too far in one direction may result in under-trust; too far in the other, in over-trust; and, turned further still, in a confusing distortion. While the designs or so-called “knobs” are not inherently evil, they can be misused or used in an adversarial context and thereby manipulated to mislead users or promote unwarranted levels of trust in AI systems. When a visualization that has no meaningful connection with the underlying model or data is employed to enhance trust, we refer to the result as “trust junk.” From a review of 65 papers, we identify nine commonly made claims about trust calibration. We synthesize them into a framework of knobs that can be used for good or “evil,” and distill our findings into observed pitfalls for the responsible design of human-AI systems.
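The knob analogy is easier to see with a toy model. The sketch below is not drawn from the paper; it assumes a hypothetical design knob (for example, how prominently confidence cues are displayed) that inflates perceived trust regardless of the underlying model, and labels each setting as under-trust, over-trust, or roughly calibrated relative to the model's actual reliability. The function names and constants are illustrative only.

```python
# Toy illustration (not from the paper): a design "knob" that shifts
# perceived trust independently of a model's actual reliability.

def perceived_trust(knob: float, base_trust: float = 0.4) -> float:
    """Hypothetical reader trust in [0, 1] as a function of a design knob in [0, 1]."""
    # Assume the knob monotonically inflates trust no matter what the model does.
    return min(1.0, base_trust + 0.6 * knob)

def calibration(knob: float, model_reliability: float, tolerance: float = 0.1) -> str:
    """Label the trust a knob setting induces relative to actual reliability."""
    gap = perceived_trust(knob) - model_reliability
    if gap < -tolerance:
        return "under-trust"
    if gap > tolerance:
        return "over-trust"
    return "calibrated"

if __name__ == "__main__":
    model_reliability = 0.7  # the level of trust the model actually warrants
    for knob in (0.0, 0.3, 0.5, 0.8, 1.0):
        print(f"knob={knob:.1f} -> perceived={perceived_trust(knob):.2f} "
              f"({calibration(knob, model_reliability)})")
```

Because `perceived_trust` here ignores `model_reliability` entirely, a knob like this is exactly the kind of decoration the abstract calls “trust junk”: it moves trust without any connection to the underlying model or data.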
        
    
- Award ID(s): 2311574
- PAR ID: 10533367
- Publisher / Repository: IEEE
- Date Published:
- ISBN: 979-8-3503-9380-4
- Page Range / eLocation ID: 22 to 31
- Format(s): Medium: X
- Location: Tokyo, Japan
- Sponsoring Org: National Science Foundation
More Like this
- The dark side of AI has been a persistent focus in discussions of popular science and academia (Appendix A), with some claiming that AI is “evil” [1]. Many commentators make compelling arguments for their concerns. Techno-elites have also contributed to the polarization of these discussions, with ultimatums that in this new era of industrialized AI, citizens will need to “[join] with the AI or risk being left behind” [2]. With such polarizing language, debates about AI adoption run the risk of being oversimplified. Discussion of technological trust frequently takes an all-or-nothing approach. All technologies – cognitive, social, material, or digital – introduce tradeoffs when they are adopted, and contain both ‘light and dark’ features [3]. But descriptions of these features can take on deceptively (or unintentionally) anthropomorphic tones, especially when stakeholders refer to the features as ‘agents’ [4], [5]. When used as an analogical heuristic, this can inform the design of AI, provide knowledge for AI operations, and potentially even predict its outcomes [6]. However, if AI agency is accepted at face value, we run the risk of having unrealistic expectations for the capabilities of these systems.
- Machine learning technology has become ubiquitous, but, unfortunately, often exhibits bias. As a consequence, disparate stakeholders need to interact with and make informed decisions about using machine learning models in everyday systems. Visualization technology can support stakeholders in understanding and evaluating trade-offs between, for example, accuracy and fairness of models. This paper aims to empirically answer “Can visualization design choices affect a stakeholder's perception of model bias, trust in a model, and willingness to adopt a model?” Through a series of controlled, crowd-sourced experiments with more than 1,500 participants, we identify a set of strategies people follow in deciding which models to trust. Our results show that men and women prioritize fairness and performance differently and that visual design choices significantly affect that prioritization. For example, women trust fairer models more often than men do, participants value fairness more when it is explained using text than as a bar chart, and being explicitly told a model is biased has a bigger impact than showing past biased performance. We test the generalizability of our results by comparing the effect of multiple textual and visual design choices and offer potential explanations of the cognitive mechanisms behind the difference in fairness perception and trust. Our research guides design considerations to support future work developing visualization systems for machine learning.
- Trust is fundamental to effective visual data communication between the visualization designer and the reader. Although personal experience and preference influence readers’ trust in visualizations, visualization designers can leverage design techniques to create visualizations that evoke a "calibrated trust," at which readers arrive after critically evaluating the information presented. To systematically understand what drives readers to engage in "calibrated trust," we must first equip ourselves with reliable and valid methods for measuring trust. Computer science and data visualization researchers have not yet reached a consensus on a trust definition or metric, which are essential to building a comprehensive trust model in human-data interaction. On the other hand, social scientists and behavioral economists have developed and perfected metrics that can measure generalized and interpersonal trust, which the visualization community can reference, modify, and adapt for our needs. In this paper, we gather existing methods for evaluating trust from other disciplines and discuss how we might use them to measure, define, and model trust in data visualization research. Specifically, we discuss quantitative surveys from social sciences, trust games from behavioral economics, measuring trust through belief updating, and measuring trust through perceptual methods. We assess the potential issues with these methods and consider how we can systematically apply them to visualization research. (A minimal sketch of the trust-game protocol appears after this list.)
- Artificial intelligence (AI) represents technologies with human-like cognitive abilities to learn, perform, and make decisions. AI in precision agriculture (PA) enables farmers and farm managers to deploy highly targeted and precise farming practices based on site-specific agroclimatic field measurements. The foundational and applied development of AI has matured considerably over the last 30 years. The time is now right to engage seriously with the ethics and responsible practice of AI for the well-being of farmers and farm managers. In this paper, we identify and discuss both challenges and opportunities for improving farmers’ trust in those providing AI solutions for PA. We highlight that farmers’ trust can be moderated by how the benefits and risks of AI are perceived, shared, and distributed. We propose four recommendations for improving farmers’ trust. First, AI developers should improve model transparency and explainability. Second, clear responsibility and accountability should be assigned to AI decisions. Third, concerns about the fairness of AI need to be overcome to improve human-machine partnerships in agriculture. Finally, regulation and voluntary compliance of data ownership, privacy, and security are needed if AI systems are to become accepted and used by farmers.
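One concrete instrument mentioned in the trust-measurement summary above is the investment or “trust” game from behavioral economics, in which the fraction of an endowment an investor sends to a trustee is read as a behavioral measure of trust. The sketch below is a minimal, generic simulation of that protocol, not code from any of the papers summarized here; the endowment, multiplier, and return rule are illustrative defaults.

```python
# Minimal sketch of the behavioral-economics "trust game": an investor sends part
# of an endowment, the amount is multiplied in transit, and the trustee returns
# some share of what they received. The specific numbers are illustrative.

from dataclasses import dataclass

@dataclass
class TrustGameOutcome:
    sent: float             # behavioral measure of the investor's trust
    returned: float         # behavioral measure of the trustee's trustworthiness
    investor_payoff: float
    trustee_payoff: float

def play_trust_game(endowment: float = 10.0,
                    send_fraction: float = 0.5,
                    return_fraction: float = 0.33,
                    multiplier: float = 3.0) -> TrustGameOutcome:
    """Simulate one round of the trust game with the given choices."""
    sent = endowment * send_fraction
    received = sent * multiplier
    returned = received * return_fraction
    return TrustGameOutcome(
        sent=sent,
        returned=returned,
        investor_payoff=endowment - sent + returned,
        trustee_payoff=received - returned,
    )

if __name__ == "__main__":
    outcome = play_trust_game(send_fraction=0.6, return_fraction=0.4)
    print(outcome)
```

In a visualization study, `send_fraction` might be elicited after participants view a chart or model explanation, so that the amount actually risked, rather than a self-reported rating, serves as the trust measure.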