

This content will become publicly available on May 1, 2026

Title: Tap into Reality: Understanding the Impact of Interactions on Presence and Reaction Time in Mixed Reality
Award ID(s): 2237485
PAR ID: 10630386
Author(s) / Creator(s): ; ;
Publisher / Repository: IEEE
Date Published:
Journal Name: IEEE Transactions on Visualization and Computer Graphics
Volume: 31
Issue: 5
ISSN: 1077-2626
Page Range / eLocation ID: 2557 to 2567
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Ghandeharizadeh S. (Ed.)
    This paper provides an overview of different forms of reality, comparing and contrasting them with one another. It argues that the definition of the term "reality" is ambiguous, which motivates examining its constituent elements from a technology standpoint, e.g., biological, 3D-printed, and Flying Light Speck illuminations.
  2. As the development of extended reality technologies brings us closer to what some call the metaverse, it is valuable to investigate how our perception of color translates from physical, reflective objects to emissive and transparent virtual renderings. Colorimetry quantifies color stimuli and color differences, and color appearance models account for adaptation and illuminance level. However, these tools do not extend satisfactorily to the novel viewing experiences of extended reality. Ongoing research aims to understand the perception of layered virtual stimuli in optical see-through augmented reality with the goal of improving or extending color appearance models. This will help ensure robust, predictable color reproduction in extended reality experiences.
  3. As applications for virtual reality (VR) and augmented reality (AR) technology increase, it will be important to understand how users perceive their action capabilities in virtual environments. Feedback about actions may help to calibrate perception of action opportunities (affordances) so that action judgments in VR and AR mirror actors' real abilities. Previous work indicates that walking through a virtual doorway while wielding an object can calibrate the perception of one's passability through feedback from collisions. In the current study, we aimed to replicate this calibration through feedback using a different paradigm in VR while also testing whether this calibration transfers to AR. Participants held a pole at 45° and made passability judgments in AR (pretest phase). Then, they made passability judgments in VR and received feedback on those judgments by walking through a virtual doorway while holding the pole (calibration phase). Participants then returned to AR to make posttest passability judgments. Results indicate that feedback calibrated participants' judgments in VR. Moreover, this calibration transferred to the AR environment. In other words, after experiencing feedback in VR, passability judgments in VR and in AR became closer to an actor's actual ability, which could make training applications in these technologies more effective.
  4. Augmented Reality (AR) experiences tightly associate virtual content with environmental entities. However, dissimilarity across environments limits how adaptively AR content can behave under large-scale deployment. We propose ScalAR, an integrated workflow enabling designers to author semantically adaptive AR experiences in Virtual Reality (VR). First, potential AR consumers collect local scenes with a semantic understanding technique. ScalAR then synthesizes numerous similar scenes. In VR, a designer authors the AR content's semantic associations and validates the design while immersed in the provided scenes. We adopt a decision-tree-based algorithm to fit the designer's demonstrations as a semantic adaptation model that deploys the authored AR experience in a physical scene. We further showcase two application scenarios authored with ScalAR and conduct a two-session user study, where the quantitative results demonstrate the accuracy of the AR content rendering and the qualitative results show the usability of ScalAR.
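The core idea in item 4 — fitting a decision tree to a designer's demonstrations so that content placement rules generalize to unseen scenes — can be sketched in miniature. The following is a hypothetical, dependency-free illustration, not ScalAR's actual implementation: the surface features, thresholds, and anchoring labels are invented for the example, and a single one-split stump stands in for a full decision tree.

```python
# Hypothetical sketch of a decision-tree-style adaptation model:
# learn a rule from designer demonstrations mapping scene-surface
# features to an AR anchoring choice. All names here are invented.
from collections import Counter

def fit_stump(samples, labels):
    """Learn a one-split decision stump (feature, threshold, and a
    majority label on each side) minimizing misclassifications."""
    best = None
    for f in range(len(samples[0])):
        for t in sorted({s[f] for s in samples}):
            below = [l for s, l in zip(samples, labels) if s[f] < t]
            above = [l for s, l in zip(samples, labels) if s[f] >= t]
            lb = Counter(below).most_common(1)[0][0] if below else None
            la = Counter(above).most_common(1)[0][0] if above else None
            errs = sum(l != lb for l in below) + sum(l != la for l in above)
            if best is None or errs < best[0]:
                best = (errs, f, t, lb, la)
    _, f, t, lb, la = best
    return lambda s: lb if s[f] < t else la

# Designer demonstrations: [surface_height_m, is_vertical]
demos  = [[0.75, 0], [0.72, 0], [1.50, 1], [1.40, 1]]
labels = ["place_on_top", "place_on_top", "attach_flat", "attach_flat"]

predict = fit_stump(demos, labels)

# Deploying in an unseen physical scene: predict an anchor per surface.
print(predict([0.80, 0]))  # → place_on_top
```

A real system would use richer semantic features and a deeper tree, but the workflow is the same: demonstrations in synthesized scenes become training samples, and the fitted model decides content behavior in each new physical scene.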