Automated Driving Systems (ADS), like many other systems people use today, depend on reliable Artificial Intelligence (AI) for safe roadway operation. One essential function AI performs in an ADS is computer-vision-based detection of roadway signs. AI, however, is not always reliable and sometimes requires human intelligence to complete a task. For humans to collaborate effectively with AI, it is critical to understand how humans perceive AI. In the present study, we investigated how human drivers perceive the AI's capabilities in a driving context where a stop sign is compromised, and how knowledge of, experience with, and trust in AI play a role. We found that participants with more knowledge of AI tended to trust AI more, and those who reported more experience with AI had a greater understanding of it. Participants correctly deduced that a maliciously manipulated stop sign would be more difficult for AI to identify. Nevertheless, participants still overestimated the AI's ability to recognize the malicious stop sign. Our findings suggest that the public does not yet have a sufficiently accurate understanding of specific AI systems, which leads them to over-trust AI under certain conditions.
American public opinion on artificial intelligence in healthcare
Billions of dollars are being invested into developing medical artificial intelligence (AI) systems, and yet public opinion of AI in the medical field seems to be mixed. Although the American public does hold high expectations for the future of medical AI, anxiety and uncertainty about what it can do and how it works are widespread. Continuing evaluation of public opinion on AI in healthcare is necessary to ensure alignment between patient attitudes and the technologies adopted. We conducted a representative-sample survey (total N = 203) to measure the American public's trust in medical AI. Primarily, we contrasted preferences for AI versus human professionals as medical decision-makers. Additionally, we measured expectations for the impact and use of medical AI in the future. We present four noteworthy results: (1) The general public strongly prefers that human medical professionals make medical decisions, while at the same time believing they are more likely than AI to make culturally biased decisions. (2) The general public is more comfortable with a human reading their medical records than an AI, both now and "100 years from now." (3) The general public is nearly evenly split between those who would trust their own doctor to use AI and those who would not. (4) Respondents expect that AI will improve medical treatment, but more so in the distant future than immediately.
- Award ID(s): 1927227
- PAR ID: 10478014
- Editor(s): Mahmoud, Ali B.
- Publisher / Repository: Public Library of Science
- Date Published:
- Journal Name: PLOS ONE
- Volume: 18
- Issue: 11
- ISSN: 1932-6203
- Page Range / eLocation ID: e0294028
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- AI algorithms are increasingly influencing decision-making in criminal justice, including tasks such as predicting recidivism and identifying suspects by their facial features. The increasing reliance on machine-assisted legal decision-making affects the rights of criminal defendants, the work of law enforcement agents, the legal strategies taken by attorneys, the decisions made by judges, and the public's trust in courts. As such, it is crucial to understand how the use of AI is perceived by the professionals who interact with algorithms. The analysis explores the connection between the stated and behavioral trust of law enforcement and legal professionals. Results from three rigorous survey experiments suggest that law enforcement and legal professionals express skepticism about algorithms but demonstrate a willingness to integrate algorithmic recommendations into their own decisions and thus do not exhibit "algorithm aversion." These findings suggest a tendency toward increased reliance on machine-assisted legal decision-making despite concerns about the impact of AI on the rights of criminal defendants.
- AI-enabled decision-support systems aim to help medical providers rapidly make decisions with limited information during medical emergencies. A critical challenge in developing these systems is supporting providers in interpreting the system output to make optimal treatment decisions. In this study, we designed and evaluated an AI-enabled decision-support system to aid providers in treating patients with traumatic injuries. We first conducted user research with physicians to identify and design information types and AI outputs for a decision-support display. We then conducted an online experiment with 35 medical providers from six health systems to evaluate two human-AI interaction strategies: (1) AI information synthesis and (2) AI information and recommendations. We found that providers were more likely to make correct decisions when AI information and recommendations were provided than when they received no AI support. We also identified two socio-technical barriers to providing AI recommendations during time-critical medical events: (1) an accuracy-time trade-off in providing recommendations and (2) polarizing perceptions of recommendations among providers. We discuss three implications for developing AI-enabled decision support used in time-critical events, contributing to the limited research on human-AI interaction in this context.
- The increased integration of artificial intelligence (AI) technologies in human workflows has resulted in a new paradigm of AI-assisted decision making, in which an AI model provides decision recommendations while humans make the final decisions. To best support humans in decision making, it is critical to obtain a quantitative understanding of how humans interact with and rely on AI. Previous studies often model humans' reliance on AI as an analytical process, i.e., reliance decisions are made based on cost-benefit analysis. However, theoretical models in psychology suggest that reliance decisions can often be driven by affective factors such as humans' trust in AI models. In this paper, we propose a hidden Markov model to capture the affective process underlying human-AI interaction in AI-assisted decision making, characterizing how decision makers adjust their trust in AI over time and make reliance decisions based on that trust. Evaluations on real human behavior data collected from human-subject experiments show that the proposed model outperforms various baselines in accurately predicting humans' reliance behavior in AI-assisted decision making. Based on the proposed model, we further provide insights into how humans' trust and reliance dynamics in AI-assisted decision making are influenced by contextual factors like decision stakes and their interaction experiences.
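The modeling approach this abstract describes can be illustrated with a minimal sketch: a two-state hidden Markov model whose hidden state is the decision maker's trust in the AI (low or high) and whose observations are per-trial reliance decisions (override vs. follow the AI). All probabilities below are illustrative assumptions for exposition, not the paper's fitted parameters, and the function names are hypothetical.

```python
# Hidden states: 0 = low trust, 1 = high trust.
# Observations:  0 = override the AI, 1 = follow (rely on) the AI.
# These parameter values are made up for illustration only.
INIT = [0.5, 0.5]              # initial trust distribution
TRANS = [[0.8, 0.2],           # trust tends to persist across trials
         [0.1, 0.9]]
EMIT = [[0.7, 0.3],            # P(override | low trust) = 0.7, P(follow | low) = 0.3
        [0.2, 0.8]]            # high trust makes following the AI likely


def forward(observations):
    """Forward algorithm: return P(observation sequence) and the filtered
    trust distribution after the last observed decision."""
    alpha = [INIT[s] * EMIT[s][observations[0]] for s in (0, 1)]
    for obs in observations[1:]:
        alpha = [
            sum(alpha[prev] * TRANS[prev][s] for prev in (0, 1)) * EMIT[s][obs]
            for s in (0, 1)
        ]
    likelihood = sum(alpha)
    filtered = [a / likelihood for a in alpha]
    return likelihood, filtered


def predict_reliance(observations):
    """Probability that the next decision follows the AI, given the history."""
    _, filtered = forward(observations)
    # Propagate the filtered trust estimate one step forward in time.
    next_state = [
        sum(filtered[prev] * TRANS[prev][s] for prev in (0, 1)) for s in (0, 1)
    ]
    return sum(next_state[s] * EMIT[s][1] for s in (0, 1))
```

Under these assumed parameters, a history of following the AI yields a higher predicted reliance probability than a history of overrides, which captures the qualitative claim that trust accumulated over past interactions drives future reliance.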
- The Internet certainly disrupted our understanding of what communication can be, who does it, how, and to what effect. What constitutes the Internet has always been an evolving suite of technologies and a dynamic set of social norms, rules, and patterns of use. But the shape and character of digital communications are shifting again: the browser is no longer the primary means by which most people encounter information infrastructure. The bulk of digital communications are no longer between people but between devices, about people, over the Internet of things. Political actors make use of technological proxies in the form of proprietary algorithms and semiautomated social actors, political bots, in subtle attempts to manipulate public opinion. These tools are scaffolding for human control, but the way they work to afford such control over interaction and organization can be unpredictable, even to those who build them. So to understand contemporary political communication, and modern communication broadly, we must now investigate the politics of algorithms and automation.