

Title: The physical-virtual table: exploring the effects of a virtual human's physical influence on social interaction
In this paper, we investigate the effects of the physical influence of a virtual human (VH) in the context of face-to-face interaction in augmented reality (AR). In our study, participants played a tabletop game with a VH, in which each player takes a turn and moves their own token along the designated spots on the shared table. We compared two conditions: in the virtual condition, the VH moves a virtual token that can only be seen through the AR glasses, while in the physical condition, the VH moves a physical token just as the participants do; the VH's token therefore remains visible even in the periphery of the AR glasses. For the physical condition, we designed an actuator system underneath the table. The actuator moves a magnet under the table, which in turn moves the VH's physical token over the surface of the table. Our results indicate that participants felt higher co-presence with the VH in the physical condition and assessed the VH as a more physical entity compared to the VH in the virtual condition. We further observed transference effects, in which participants extended the VH's ability to move physical objects to other elements in the real world. The VH's physical influence also improved participants' overall experience with the VH. We discuss potential explanations for the findings and implications for future shared AR tabletop setups.
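As an illustration only (the paper does not publish its control code), the under-table actuator could be driven as a two-axis stage that carries the magnet between board spots; the spot coordinates and the send_xy() command below are assumptions about the hardware interface, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' implementation): driving a two-axis
# stage that carries a magnet under the table so the virtual human's physical
# token glides to its next board spot. The board layout and the send_xy()
# command are assumptions about the actuator interface.
import time

# Assumed board layout: spot index -> (x, y) position in millimeters.
BOARD_SPOTS = {i: (50.0 + 40.0 * i, 120.0) for i in range(12)}

def send_xy(x_mm: float, y_mm: float) -> None:
    """Placeholder for the actuator command (e.g., a serial or G-code packet)."""
    print(f"move magnet to ({x_mm:.1f}, {y_mm:.1f}) mm")

def move_token(from_spot: int, to_spot: int, steps: int = 40, dt: float = 0.02) -> None:
    """Interpolate the magnet between two spots so the token moves smoothly."""
    x0, y0 = BOARD_SPOTS[from_spot]
    x1, y1 = BOARD_SPOTS[to_spot]
    for k in range(1, steps + 1):
        t = k / steps
        send_xy(x0 + (x1 - x0) * t, y0 + (y1 - y0) * t)
        time.sleep(dt)

if __name__ == "__main__":
    move_token(2, 5)  # the VH advances its physical token by three spots
```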
Award ID(s):
1800961
NSF-PAR ID:
10105839
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
24th ACM Symposium on Virtual Reality Software and Technology
Page Range / eLocation ID:
1 to 11
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In a social context where two or more interlocutors interact with each other in the same space, one's sense of copresence with the others is an important factor for the quality of communication and engagement in the interaction. Although augmented reality (AR) technology enables the superposition of virtual humans (VHs) as interlocutors in the real world, the resulting sense of copresence is usually far lower than with a real human interlocutor. In this paper, we describe a human-subject study in which we investigated the effects that subtle multi-modal interaction between the virtual environment and the real world, where a VH and human participants were co-located, can have on copresence. We compared two levels of gradually increased multi-modal interaction: (i) virtual objects being affected by real airflow, as commonly experienced with fans in summer, and (ii) a VH showing awareness of this airflow. We chose airflow as one example of an environmental factor that can noticeably affect both the real and virtual worlds and also cause subtle responses in interlocutors. We hypothesized that our two levels of treatment would gradually increase the sense of being together with the VH, i.e., participants would report higher copresence with airflow influence than without it, and copresence would be even higher when the VH shows awareness of the airflow. The statistical analysis of the participant-reported copresence scores showed an improvement in perceived copresence with the VH when both the physical–virtual interactivity via airflow and the VH's awareness behaviors were present together. As the considered environmental factors are directed at the VH, i.e., they are not part of the direct interaction with the real human, they can provide a reasonably generalizable approach to support copresence in AR beyond the particular use case in the present experiment.
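As a purely illustrative sketch (the paper does not publish its implementation), the two treatment levels could be realized by applying a wind vector as a simple force on lightweight virtual objects each frame and triggering an awareness behavior in the VH when the airflow is strong enough to notice; the frame-loop structure, thresholds, and behavior names below are assumptions.

```python
# Hedged sketch of the two treatment levels: (i) virtual objects respond to a
# real airflow vector, (ii) the virtual human reacts when the airflow is
# noticeable. Masses, drag model, and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class VirtualObject:
    position: list   # [x, y, z] in meters
    velocity: list   # [vx, vy, vz] in m/s
    mass: float      # kg; light objects (paper, leaves) react visibly

def apply_airflow(obj: VirtualObject, wind: list, drag: float, dt: float) -> None:
    """Accelerate a virtual object toward the wind velocity with a simple drag force."""
    for i in range(3):
        accel = drag * (wind[i] - obj.velocity[i]) / obj.mass
        obj.velocity[i] += accel * dt
        obj.position[i] += obj.velocity[i] * dt

def vh_awareness(wind: list, threshold: float = 1.0) -> str:
    """Pick an awareness behavior when the airflow exceeds a noticeable speed."""
    speed = sum(w * w for w in wind) ** 0.5
    return "glance_at_fan_and_hold_papers" if speed > threshold else "idle"
```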
  2. Extended reality (XR) technologies, such as virtual reality (VR) and augmented reality (AR), provide users, their avatars, and embodied agents a shared platform to collaborate in a spatial context. Traditional face-to-face communication is limited by users' proximity, meaning that another human's non-verbal embodied cues become more difficult to perceive the farther one is away from that person, so researchers and practitioners have started to look into ways to accentuate or amplify such embodied cues and signals to counteract the effects of distance with XR technologies. In this article, we describe and evaluate the Big Head technique, in which a human's head in VR/AR is scaled up relative to their distance from the observer as a mechanism for enhancing the visibility of non-verbal facial cues, such as facial expressions or eye gaze. To better understand and explore this technique, we present two complementary human-subject experiments. In our first experiment, we conducted a VR study with a head-mounted display to understand the impact of increased or decreased head scales on participants' ability to perceive facial expressions as well as their sense of comfort and feeling of "uncanniness" over distances of up to 10 m. We explored two different scaling methods and compared perceptual thresholds and user preferences. Our second experiment was performed in an outdoor AR environment with an optical see-through head-mounted display. Participants were asked to estimate facial expressions and eye gaze, and to identify a virtual human over large distances of 30, 60, and 90 m. In both experiments, our results show significant differences in minimum, maximum, and ideal head scales for different distances and tasks related to perceiving faces, facial expressions, and eye gaze, and we also found that participants were more comfortable with slightly bigger heads at larger distances. We discuss our findings with respect to the technologies used, and we discuss implications and guidelines for practical applications that aim to leverage XR-enhanced facial cues.
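A compact way to express the core of such distance-dependent scaling is sketched below; the linear mapping, onset distance, and clamp limits are illustrative assumptions, not the thresholds the article derives empirically.

```python
# Illustrative sketch of a distance-dependent head-scale function in the spirit
# of the Big Head technique. Onset distance, gain, and clamp limits are
# assumptions; the article determines its thresholds per task and distance.
def head_scale(distance_m: float,
               onset_m: float = 5.0,
               gain_per_m: float = 0.15,
               min_scale: float = 1.0,
               max_scale: float = 6.0) -> float:
    """Return a uniform scale factor for the avatar's head at a given distance."""
    if distance_m <= onset_m:
        return min_scale                      # natural size up close
    scale = min_scale + gain_per_m * (distance_m - onset_m)
    return min(scale, max_scale)              # clamp to avoid extreme sizes

# Example: scales at the AR viewing distances used in the second experiment.
for d in (30.0, 60.0, 90.0):
    print(d, round(head_scale(d), 2))
```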
  3. Augmented reality (AR) technologies present an exciting new medium for human-robot interactions, enabling new opportunities for both implicit and explicit human-robot communication. For example, these technologies enable physically limited robots to execute non-verbal interaction patterns such as deictic gestures despite lacking the physical morphology necessary to do so. However, a wealth of HRI research has demonstrated real benefits to physical embodiment (compared to, e.g., virtual robots on screens), suggesting that AR augmentation of virtual robot parts could face challenges. In this work, we present empirical evidence comparing the use of virtual (AR) and physical arms to perform deictic gestures that identify virtual or physical referents. Our subjective and objective results demonstrate the success of mixed reality deictic gestures in overcoming these potential limitations, and their successful use regardless of differences in physicality between gesture and referent. These results help to motivate the further deployment of mixed reality robotic systems and provide nuanced insight into the role of mixed-reality technologies in HRI contexts.
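As a purely illustrative sketch (not the authors' system), aiming a rendered virtual arm at a referent reduces to computing the direction from the arm's anchor point on the robot to the referent's position; the coordinate convention and function names below are assumptions.

```python
# Hypothetical sketch: orienting a rendered (AR) arm so it points at a referent.
# Coordinate frame (x forward, y left, z up) and the anchor point are assumptions.
import math

def point_at(anchor, referent):
    """Return (yaw, pitch) in radians that aim the virtual arm at the referent."""
    dx, dy, dz = (referent[i] - anchor[i] for i in range(3))
    yaw = math.atan2(dy, dx)                     # rotation about the up axis
    pitch = math.atan2(dz, math.hypot(dx, dy))   # elevation toward the target
    return yaw, pitch

# Example: arm anchored at the robot's shoulder, referent on a nearby table.
print(point_at((0.0, 0.0, 1.2), (1.5, -0.4, 0.9)))
```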
  4. Objective

    This study used a virtual environment to examine how older and younger pedestrians responded to simulated augmented reality (AR) overlays that indicated the crossability of gaps in a continuous stream of traffic.

    Background

    Older adults represent a vulnerable group of pedestrians. AR has the potential to make the task of street-crossing safer and easier for older adults.

    Method

    We used an immersive virtual environment to conduct a study with age group and condition as between-subjects factors. In the control condition, older and younger participants crossed a continuous stream of traffic without simulated AR overlays. In the AR condition, older and younger participants crossed with simulated AR overlays signaling whether gaps between vehicles were safe or unsafe to cross. Participants were subsequently interviewed about their experience.

    Results

    We found that participants were more selective in their crossing decisions and took safer gaps in the AR condition as compared to the control condition. Older adult participants also reported reduced mental and physical demand in the AR condition compared to the control condition.

    Conclusion

    AR overlays that display the crossability of gaps between vehicles have the potential to make street-crossing safer and easier for older adults. Additional research is needed in more complex real-world scenarios to further examine how AR overlays impact pedestrian behavior.

    Application

    With rapid advances in autonomous vehicle and vehicle-to-pedestrian communication technologies, it is critical to study how pedestrians can be better supported. Our research provides key insights for ways to improve pedestrian safety applications using emerging technologies like AR.
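The crossability signal described in the Method section above could be approximated as a simple time-gap rule; the sketch below is one possible implementation under assumed threshold values and parameter names, not the simulator's actual logic.

```python
# Illustrative sketch of a gap-crossability classifier for an AR overlay:
# a gap is marked "safe" when the time until the next vehicle reaches the
# crossing point exceeds the pedestrian's estimated crossing time plus a
# safety margin. All threshold values are assumptions, not the study's parameters.
def crossing_time(street_width_m: float, walking_speed_mps: float) -> float:
    return street_width_m / walking_speed_mps

def gap_is_safe(lead_vehicle_distance_m: float,
                vehicle_speed_mps: float,
                street_width_m: float = 3.5,      # one travel lane
                walking_speed_mps: float = 1.2,   # slower, older-adult pace
                margin_s: float = 1.5) -> bool:
    """Classify a traffic gap for the overlay: True -> render as crossable."""
    time_to_arrival = lead_vehicle_distance_m / max(vehicle_speed_mps, 0.1)
    return time_to_arrival > crossing_time(street_width_m, walking_speed_mps) + margin_s

# Example: a vehicle 40 m away approaching at 11 m/s (~25 mph).
print(gap_is_safe(40.0, 11.0))
```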

     
  5. The goal of this study was to evaluate driver risk behavior in response to changes in their risk perception inputs, specifically focusing on the effect of augmented visual representation technologies. This experiment was conducted for a purely real driving scenario, establishing a baseline against which future augmented visual representation scenarios can be compared. Virtual reality (VR), augmented reality (AR), and mixed reality (MR) simulation technologies have rapidly improved over the last three decades to the point where, today, they are widely used and more heavily relied upon than before, particularly in the areas of training, research, and design. The resulting utilization of these capabilities has proven simulation technologies to be a versatile and powerful tool. Virtual immersion, however, introduces a layer of abstraction and safety between the participant and the designed artifact, which includes an associated risk compensation. Quantifying and modeling the relationship between this risk compensation and levels of virtual immersion is the greater goal of this project. This study focuses on the first step, which is to determine the level of risk perception in a purely real environment for a specific man-machine system (a ground vehicle) operated in a common risk scenario (traversing a curve at high speed). Specifically, passengers are asked to assess whether the vehicle speed within a constant-radius curve is perceived as comfortable. Because learning effects can influence risk perception, the experiment was split into two separate protocols: the latent response protocol and the learned response protocol. The latent response protocol applied to the subject's first exposure to an experimental condition. It consisted of having the subjects in the passenger seat assess comfort or discomfort within a vehicle that was driven around a curve at a speed randomly chosen from a selection of test speeds; subjects were asked to indicate when they felt uncomfortable by pressing a brake pedal that was instrumented to alert the driver. Next, the learned response protocol assessed the subjects over repeated exposures, allowing them to use brake and throttle pedals to indicate whether they wanted to go faster or slower; the goal was to allow subjects to iterate toward their maximum comfortable speed. These pedals were instrumented to alert the driver, who responded accordingly. Both protocols were repeated for a second curve with a different radius. Questionnaires were also administered after each trial that addressed the subjective perception of risk and provided a means to substantiate the measured risk compensation behavior. The results showed that, as expected, the latent perception of risk for a passenger traversing a curve was higher than the learned perception for successive exposures to the same curve; in other words, as drivers 'learned' a curve, they were more comfortable with higher speeds. Both the latent and learned speeds provide a suitable metric by which to compare future replications of this experiment at different levels of virtual immersion. Correlations were found between uncomfortable subject responses and the yaw acceleration of the vehicle. Additional correlation of driver discomfort was found to occur at specific locations on the curves. The yaw acceleration reflects the driver's ability to maintain a steady steering input, whereas the location on the curve was found to correlate with variations in the lane markings and environmental cues.
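One way such a yaw-acceleration analysis could be run offline is sketched below; the sampling rate, signal names, synthetic data, and use of a point-biserial correlation are assumptions about the analysis pipeline, not the study's code.

```python
# Hedged sketch: relating discomfort (brake-pedal) presses to vehicle yaw
# acceleration. Signal names, sampling rate, and the point-biserial correlation
# are assumptions; the data below are synthetic for demonstration only.
import numpy as np
from scipy.stats import pointbiserialr

def yaw_acceleration(yaw_rate_dps: np.ndarray, fs_hz: float) -> np.ndarray:
    """Differentiate the logged yaw-rate signal (deg/s) to deg/s^2."""
    return np.gradient(yaw_rate_dps, 1.0 / fs_hz)

# Synthetic example: 60 s of data at 100 Hz with varying yaw activity.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
yaw_rate = 5 * np.sin(0.5 * t) + np.random.normal(0, 0.2, t.size)
yaw_accel = np.abs(yaw_acceleration(yaw_rate, fs))
discomfort = (yaw_accel > 2.0).astype(int)  # stand-in for pedal-press events

r, p = pointbiserialr(discomfort, yaw_accel)
print(f"point-biserial r = {r:.2f}, p = {p:.3g}")
```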