

Title: Effect of Tactile Affordance During the Design of Extended Reality-Based Training Environments for Healthcare Contexts
In this paper, the effect of tactile affordance during the design of Extended Reality (XR) based environments is presented. Tactile affordance is one of the Human eXtended Reality Interaction (HXRI) criteria that help lay the foundation for human-centric XR-based training environments. XR-based training environments developed for two surgical procedures have been used to study the role of tactile affordance. The first XR environment was developed for the condylar plating surgical procedure, which is performed to treat fractures of the femur; the second XR environment was developed to train users in endotracheal intubation. Three studies have been conducted to understand the influence of different interaction methods on elevating tactile affordance in XR-based environments. The studies and their results are discussed exhaustively in this paper.
Award ID(s):
2106901 2028077
NSF-PAR ID:
10435919
Author(s) / Creator(s):
; ; ; ;
Editor(s):
Chen, J.Y.C.
Date Published:
Journal Name:
International Conference on Human-Computer Interaction, July 23-28, 2023
Volume:
14027
Page Range / eLocation ID:
441–452
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This article provides a systematic review of research related to Human–Computer Interaction techniques supporting training and learning in various domains including medicine, healthcare, and engineering. The focus is on HCI techniques involving Extended Reality (XR) technology, which encompasses Virtual Reality, Augmented Reality, and Mixed Reality. HCI-based research is assuming more importance with the rapid adoption of XR tools and techniques in various training and learning contexts, including education. There are many challenges in the adoption of HCI approaches, which creates a need for a comprehensive and systematic review of such HCI methods across domains. This article addresses this need by providing a systematic literature review of a cross-section of the HCI approaches proposed so far. The PRISMA-guided search strategy identified 1156 articles for abstract review. Irrelevant abstracts were discarded. The full text of each remaining article was reviewed, and those not linked to the scope of our specific issue were also eliminated. Following the application of inclusion/exclusion criteria, 69 publications were chosen for review. This article is divided into the following sections: Introduction; Research methodology; Literature review; Threats to validity; Future research and Conclusion. Detailed classifications (pertaining to HCI criteria and concepts, such as affordance, training, and learning techniques) have also been included, based on an analysis of the research techniques adopted by various investigators. The article concludes with a discussion of the key challenges for this HCI area along with future research directions. A review of the research outcomes from these publications underscores the potential for greater success when such HCI-based approaches are adopted during 3D-based training interactions.
Such a higher degree of success may be due to the emphasis on the design of user-friendly (and user-centric) training environments, interactions, and processes that positively impact the cognitive abilities of users and their respective learning/training experiences. Through answers to three exploratory research questions, we found evidence validating XR-HCI as an ascending approach that brings a new paradigm by enhancing skills and safety while reducing costs and learning time. We believe that the findings of this study will aid academics in developing new research avenues that will help XR-HCI applications mature and become more widely adopted.
  2. The combination of Visual Guidance and Extended Reality (XR) technology holds the potential to greatly improve the performance of human workforces in numerous areas, particularly industrial environments. Focusing on virtual assembly tasks and making use of different forms of supportive visualisations, this study investigates the potential of XR Visual Guidance. Conducted in a web-based immersive environment, the study draws on a heterogeneous pool of 199 participants. This research is designed to differ significantly from previous exploratory studies, which yielded conflicting results on user performance and associated human factors. Our results clearly show the advantages of XR Visual Guidance, based on an over 50% reduction in task completion times and mistakes made; this may be further enhanced and refined using specific frameworks and other forms of visualisations/Visual Guidance. Discussing the role of other factors, such as cognitive load, motivation, and usability, this paper also seeks to provide concrete avenues for future research and practical takeaways for practitioners.
  3. Gonzalez, D. (Ed.)

    Today’s research on human-robot teaming requires the ability to test artificial intelligence (AI) algorithms for perception and decision-making in complex real-world environments. Field experiments, also referred to as experiments “in the wild,” do not provide the level of detailed ground truth necessary for thorough performance comparisons and validation. Experiments on pre-recorded real-world data sets are also significantly limited in their usefulness because they do not allow researchers to test the effectiveness of active robot perception and control or decision strategies in the loop. Additionally, research on large human-robot teams requires tests and experiments that are too costly even for the industry and may result in considerable time losses when experiments go awry. The novel Real-Time Human Autonomous Systems Collaborations (RealTHASC) facility at Cornell University interfaces real and virtual robots and humans with photorealistic simulated environments by implementing new concepts for the seamless integration of wearable sensors, motion capture, physics-based simulations, robot hardware and virtual reality (VR). The result is an extended reality (XR) testbed by which real robots and humans in the laboratory are able to experience virtual worlds, inclusive of virtual agents, through real-time visual feedback and interaction. VR body tracking by DeepMotion is employed in conjunction with the OptiTrack motion capture system to transfer every human subject and robot in the real physical laboratory space into a synthetic virtual environment, thereby constructing corresponding human/robot avatars that not only mimic the behaviors of the real agents but also experience the virtual world through virtual sensors and transmit the sensor data back to the real human/robot agent, all in real time. 
New cross-domain synthetic environments are created in RealTHASC using Unreal Engine™, bridging the simulation-to-reality gap and allowing for the inclusion of underwater/ground/aerial autonomous vehicles, each equipped with a multi-modal sensor suite. The experimental capabilities offered by RealTHASC are demonstrated through three case studies showcasing mixed real/virtual human/robot interactions in diverse domains, leveraging and complementing the benefits of experimentation in simulation and in the real world.

  4.
    Though virtual reality (VR) has advanced to a certain level of maturity in recent years, the general public, especially the population of the blind and visually impaired (BVI), still cannot enjoy the benefits provided by VR. Current VR accessibility applications have been developed either on expensive head-mounted displays or with extra accessories and mechanisms, which are either inaccessible or inconvenient for BVI individuals. In this paper, we present a mobile VR app that enables BVI users to access a virtual environment on an iPhone in order to build their skills of perception and recognition of the virtual environment and the virtual objects in it. The app uses the iPhone on a selfie stick to simulate a long cane in VR, and applies Augmented Reality (AR) techniques to track the iPhone’s real-time poses in an empty space of the real world, which are then synchronized to the long cane in the VR environment. Due to the use of mixed reality (the integration of VR and AR), we call it the Mixed Reality cane (MR Cane), which provides BVI users with auditory and vibrotactile feedback whenever the virtual cane comes in contact with objects in VR. Thus, the MR Cane allows BVI individuals to interact with virtual objects and identify their approximate sizes and locations in the virtual environment. We performed preliminary user studies with blindfolded participants to investigate the effectiveness of the proposed mobile approach, and the results indicate that the proposed MR Cane could be effective in helping BVI individuals understand interaction with virtual objects and explore 3D virtual environments. The MR Cane concept can be extended to new applications in navigation, training, and entertainment for BVI individuals without significant additional effort.
  5. Purpose
    The architecture, engineering and construction (AEC) industry exists in a dynamic environment and requires several stakeholders to communicate regularly. However, evidence indicates current communication practices fail to meet the requirements of increasingly complex projects. With the advent of Industry 4.0, a trend is noted to create a digital communication environment between stakeholders. Identified as a central technology in Industry 4.0, virtual reality (VR) has the potential to supplement current communication and facilitate the digitization of the AEC industry. This paper aims to explore how VR has been applied for communication purposes and to identify future research directions.
    Design/methodology/approach
    This research follows a systematic literature assessment methodology to summarize the results of 41 research articles from the last 15 years and outlines the applications of VR in facilitating communication in the AEC domain.
    Findings
    Relevant VR applications are mainly found in building inspection, facility management, safety training, construction education, and design and review. Communication tools and affordances are provided or built in several forms: text-based tools, voice chat tools, visual sharing affordances, and avatars. Both objective and subjective communication assessments are observed in those publications.
    Originality/value
    This review contributes to identifying the recent application areas and future research directions of VR for facilitating communication in the AEC domain. The outcome can serve as a practical resource to guide both industry professionals and researchers in recognizing the potential of VR, and will ultimately facilitate the creation of digital construction environments.