The use of AI-enabled recommender systems in construction activities has the potential to improve worker performance and reduce errors; however, the effectiveness of such systems' suggestions depends on the quality of their training data. A within-subjects experimental study using a simulated recommender system for installation tasks investigated how system reliability and construction task complexity affect worker trust, workload, and performance. Results indicate that overall trust in the AI agent was higher in the high-reliability condition but remained consistent across levels of task complexity. Workload was higher under low reliability and high task complexity, and the effect of reliability on performance depended on task complexity. These findings offer insights for designing recommender systems that support construction workers in completing procedural tasks.
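The reliability-by-complexity interaction the abstract describes can be illustrated with a small sketch. This is not the study's data or analysis code; all numbers below are invented, and the simple-effects contrast is only one minimal way to summarize a 2x2 within-subjects interaction.

```python
# Illustrative sketch (hypothetical data, not from the study): a 2x2
# within-subjects design crossing recommender reliability (high/low)
# with task complexity (simple/complex). A difference between the two
# simple effects of reliability indicates that complexity moderates
# the effect of reliability on performance (an interaction).
from statistics import mean

# Invented per-participant performance scores for each condition
scores = {
    ("high", "simple"):  [0.92, 0.88, 0.90, 0.91],
    ("high", "complex"): [0.84, 0.80, 0.86, 0.82],
    ("low",  "simple"):  [0.85, 0.83, 0.86, 0.84],
    ("low",  "complex"): [0.62, 0.60, 0.66, 0.64],
}

cell_means = {cond: mean(vals) for cond, vals in scores.items()}

# Simple effect of reliability at each level of task complexity
effect_simple = cell_means[("high", "simple")] - cell_means[("low", "simple")]
effect_complex = cell_means[("high", "complex")] - cell_means[("low", "complex")]

# Interaction contrast: does the reliability effect grow with complexity?
interaction = effect_complex - effect_simple

print(f"reliability effect (simple tasks):  {effect_simple:.3f}")
print(f"reliability effect (complex tasks): {effect_complex:.3f}")
print(f"interaction contrast:               {interaction:.3f}")
```

In a full analysis this pattern would be tested with a repeated-measures model rather than raw cell-mean contrasts; the sketch only shows the shape of the effect.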
- NSF-PAR ID: 10470019
- Publisher / Repository: SAGE Publications
- Journal Name: Proceedings of the Human Factors and Ergonomics Society Annual Meeting
- Volume: 67
- Issue: 1
- ISSN: 1071-1813
- Pages: 2005-2006
- Sponsoring Org: National Science Foundation
More Like this
-
The rise of complex AI systems in healthcare and other sectors has led to a growing area of research called Explainable AI (XAI), which aims to increase transparency. In this area, quantitative and qualitative studies focus on improving user trust and task performance by providing system- and prediction-level XAI features. We analyze stakeholder engagement events (interviews and workshops) on the use of AI for kidney transplantation and identify themes that frame a scoping literature review of current XAI features. The stakeholder engagement process lasted over nine months, covering three stakeholder groups' workflows, determining where AI could intervene, and assessing a mock XAI decision support system. Based on the stakeholder engagement, we identify four major themes relevant to designing XAI systems: 1) use of AI predictions, 2) information included in AI predictions, 3) personalization of AI predictions for individual differences, and 4) customization of AI predictions for specific cases. Using these themes, our scoping literature review finds that providing AI predictions before, during, or after decision-making can be beneficial depending on the complexity of the stakeholder's task. Expert stakeholders such as surgeons prefer minimal to no XAI features, relying on just the AI prediction and uncertainty estimates, for easy use cases; however, almost all stakeholders prefer optional XAI features they can review when needed, especially in hard-to-predict cases. The literature also suggests that both system- and prediction-level information is necessary for users to build an appropriate mental model of the system. Although XAI features improve users' trust in the system, human-AI team performance is not always enhanced. Overall, stakeholders prefer to have agency over the XAI interface to control the level of information based on their needs and task complexity. We conclude with suggestions for future research, especially on customizing XAI features based on user preferences and tasks.
-
This study investigated the key technical and psychological factors that affect architecture, engineering, and construction (AEC) professionals' trust in collaborative robots (cobots) powered by artificial intelligence (AI), addressing critical knowledge gaps in how such trust is established and reinforced. In the construction industry, where task complexity often necessitates human-robot teamwork, understanding the technical and psychological factors that influence trust is paramount: these trust dynamics play a pivotal role in determining the effectiveness of human-robot collaboration on construction sites. This research employed a nationwide survey of 600 AEC industry practitioners to shed light on these factors, providing insights to calibrate trust levels and facilitate the seamless integration of AI-powered cobots into the AEC industry. It also gathered insights into opportunities for promoting adoption and for cultivating and training a skilled workforce to leverage this technology effectively. A structural equation modeling (SEM) analysis revealed that safety and reliability are significant factors in the adoption of AI-powered cobots in construction. Fear of being replaced by cobots can have a substantial effect on the mental health of the affected workers. Lower error rates in jobs involving cobots, safety measures, and the security of data that cobots collect from jobsites significantly affect perceived reliability, and the transparency of cobots' inner workings benefits accuracy, robustness, security, privacy, and communication, resulting in higher levels of automation; all of these were shown to contribute to trust. The study's findings provide critical insights into AEC professionals' perceptions of and experiences with cobot adoption in construction and help project teams choose an adoption approach that aligns with both company goals and workers' welfare.
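The path relationships the SEM analysis examines can be sketched in miniature. A real SEM models latent constructs from multiple indicators and reports fit indices; this minimal stand-in regresses an observed trust composite on safety and reliability composites using the closed-form two-predictor solution for standardized coefficients. All variable names and data below are invented for illustration.

```python
# Hypothetical sketch of the kind of path estimate an SEM yields:
# standardized coefficients for trust ~ safety + reliability, computed
# from pairwise correlations via the two-predictor closed form
#   beta_1 = (r_y1 - r_12 * r_y2) / (1 - r_12**2).
# All data are invented; a real SEM uses latent variables and ML fitting.
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    mx, my = mean(x), mean(y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (pstdev(x) * pstdev(y))

# Invented 1-5 survey composites for eight respondents
safety      = [3.1, 4.2, 2.8, 4.8, 3.9, 4.5, 2.5, 4.0]
reliability = [3.4, 4.0, 3.0, 4.6, 4.1, 4.4, 2.7, 3.8]
trust       = [3.0, 4.1, 2.6, 4.9, 4.0, 4.3, 2.4, 3.9]

r_ts = pearson(trust, safety)
r_tr = pearson(trust, reliability)
r_sr = pearson(safety, reliability)

# Standardized path coefficients for two correlated predictors
beta_safety = (r_ts - r_sr * r_tr) / (1 - r_sr**2)
beta_reliability = (r_tr - r_sr * r_ts) / (1 - r_sr**2)

print(f"path trust <- safety:      {beta_safety:.3f}")
print(f"path trust <- reliability: {beta_reliability:.3f}")
```

The sketch conveys only the logic of a path coefficient; dedicated SEM tooling additionally estimates measurement models, standard errors, and model fit.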
-
Workplace environments have a significant impact on worker performance, health, and well-being. With machine learning capabilities, artificial intelligence (AI) can be developed to automate individualized adjustments to work environments (e.g., lighting, temperature) and to facilitate healthier worker behaviors (e.g., posture). Worker perspectives on incorporating AI into office workspaces are largely unexplored. Thus, the purpose of this study was to explore office workers' views on including AI in their office workspace. Six focus group interviews with a total of 45 participants were conducted. Interview questions were designed to generate discussion on benefits, challenges, and pragmatic considerations for incorporating AI into office settings. Sessions were audio-recorded, transcribed, and analyzed using an iterative approach. Two primary constructs emerged. First, participants shared preferences and concerns regarding communication and interaction with the technology. Second, numerous conversations highlighted the dualistic nature of a system that collects large amounts of data: the potential benefits of behavior change to improve health and the pitfalls of trust and privacy. Across both constructs, there was an overarching discussion of how AI intersects with the complexity of work performance, and numerous thoughts were shared on future AI solutions that could enhance the office workplace. The findings indicate that the acceptability of AI in the workplace is complex and depends on the benefits outweighing the potential detriments. Office worker needs are complex and diverse, and AI systems should aim to accommodate individual needs.
-
Introducing robots to future construction sites will impose extra uncertainties and necessitate workers' situational awareness (SA) of them. While previous literature suggests that system errors, trust changes, and time pressure may affect SA, the link between these factors and workers' SA in the future construction industry is understudied. This study therefore filled the research gap by simulating a future bricklaying worker-robot collaborative task in which participants experienced robot errors and time pressure during the interaction. The results indicated that robot errors significantly impacted participants' trust in robots. However, under time pressure in time-critical construction tasks, workers tended to recover their reduced trust in the faulty robots (sometimes over-trusting them) and exhibited reduced situational awareness. This study's contributions lie in providing insights into the importance of SA on future jobsites and the need to investigate effective strategies for better preparing future workers.
-
Our paper explores the integration of generative artificial intelligence (GenAI) into organizations' innovation and new product development processes, focusing on when and how to trust AI-generated outcomes in this context. We propose a framework to assess the level of trust required based on task-specific needs and the distinction between general and expert AI models. While inaccuracies in GenAI outputs can foster creativity during ideation, higher accuracy and trust are essential for tasks requiring domain-specific expertise. The paper concludes by discussing the human capabilities and organizational strategies necessary to deploy GenAI effectively in innovation management.