Ghandeharizadeh S.; Oria V.
(First International Conference on Holodecks)
Ghandeharizadeh S.
(Ed.)
This paper provides an overview of different forms of reality, comparing and contrasting them with one another. It argues that the definition of the term "reality" is ambiguous, which motivates an examination of its elements from a technology standpoint, e.g., biological, 3D-printed, Flying Light Speck illuminations, etc.
Murdoch, Michael J.
(Proceedings of 3rd International Symposium for Color Science and Art 2022)
As the development of extended reality technologies brings us closer to what some call the metaverse, it is valuable to investigate how our perception of color translates from physical, reflective objects to emissive and transparent virtual renderings. Colorimetry quantifies color stimuli and color differences, and color appearance models account for adaptation and illuminance level. However, these tools do not extend satisfactorily to the novel viewing experiences of extended reality. Ongoing research aims to understand the perception of layered virtual stimuli in optical see-through augmented reality with the goal of improving or extending color appearance models. This will help ensure robust, predictable color reproduction in extended reality experiences.
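As a concrete anchor for the colorimetry mentioned in this abstract, the following minimal sketch computes CIE76 Delta E, the simplest standard color difference, between a hypothetical physical patch and its virtual rendering. It illustrates the kind of metric the abstract says falls short in extended reality; it is not code from the paper, and the Lab values are invented.

import math

# Illustrative sketch, not from the paper: CIE76 Delta E, the simplest
# colorimetric color difference, between two CIELAB triples. Color
# appearance models (e.g., CIECAM02) add the adaptation and luminance
# terms that the abstract says still fall short in extended reality.
def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two (L*, a*, b*) colors."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Hypothetical values: a measured physical patch vs. its emissive
# virtual rendering in an optical see-through display.
physical_lab = (52.0, 42.5, 20.1)
virtual_lab = (55.3, 40.0, 24.8)
print(f"Delta E (CIE76): {delta_e_cie76(physical_lab, virtual_lab):.2f}")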
Gagnon, Holly; Stefanucci, Jeanine; Creem-Regehr, Sarah; Bodenheimer, Bobby
(ACM Transactions on Applied Perception)
As applications for virtual reality (VR) and augmented reality (AR) technology increase, it will be important to understand how users perceive their action capabilities in virtual environments. Feedback about actions may help to calibrate perception for action opportunities (affordances) so that action judgments in VR and AR mirror actors’ real abilities. Previous work indicates that walking through a virtual doorway while wielding an object can calibrate the perception of one’s passability through feedback from collisions. In the current study, we aimed to replicate this calibration through feedback using a different paradigm in VR while also testing whether this calibration transfers to AR. Participants held a pole at 45°and made passability judgments in AR (pretest phase). Then, they made passability judgments in VR and received feedback on those judgments by walking through a virtual doorway while holding the pole (calibration phase). Participants then returned to AR to make posttest passability judgments. Results indicate that feedback calibrated participants’ judgments in VR. Moreover, this calibration transferred to the AR environment. In other words, after experiencing feedback in VR, passability judgments in VR and in AR became closer to an actor’s actual ability, which could make training applications in these technologies more effective.
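The passability judgment described above has a simple geometric reading, sketched below under stated assumptions: a pole of length L held at 45° projects a horizontal span of L·cos(45°), and a doorway affords passage when its aperture exceeds that span by a safety margin. The 1.16 margin is the classic critical aperture-to-shoulder ratio from the affordance literature, used here purely as a placeholder; this is not the study's actual model.

import math

# Illustrative sketch, not the study's code: a geometric reading of the
# passability affordance. A pole of length L held at 45 degrees spans
# L * cos(45 deg) horizontally; the doorway affords passage when the
# aperture exceeds that span by a safety margin. The 1.16 default is a
# placeholder taken from the classic critical aperture-to-shoulder
# ratio in the affordance literature.
def fits_through(aperture_m, pole_len_m, angle_deg=45.0, margin=1.16):
    horizontal_span_m = pole_len_m * math.cos(math.radians(angle_deg))
    return aperture_m >= margin * horizontal_span_m

# A 1.20 m pole at 45 deg spans about 0.85 m; a 1.00 m doorway passes.
print(fits_through(aperture_m=1.00, pole_len_m=1.20))  # True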
Qian, Xun; He, Fengming; Hu, Xiyun; Wang, Tianyi; Ipsita, Ananya; Ramani, Karthik
(CHI Conference on Human Factors in Computing Systems)
Augmented Reality (AR) experiences tightly associate virtual contents with environmental entities. However, the dissimilarity of different environments limits the adaptive AR content behaviors under large-scale deployment. We propose ScalAR, an integrated workflow enabling designers to author semantically adaptive AR experiences in Virtual Reality (VR). First, potential AR consumers collect local scenes with a semantic understanding technique. ScalAR then synthesizes numerous similar scenes. In VR, a designer authors the AR contents’ semantic associations and validates the design while being immersed in the provided scenes. We adopt a decision-tree-based algorithm to fit the designer’s demonstrations as a semantic adaptation model to deploy the authored AR experience in a physical scene. We further showcase two application scenarios authored by ScalAR and conduct a two-session user study where the quantitative results prove the accuracy of the AR content rendering and the qualitative results show the usability of ScalAR.
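To make the decision-tree step in this abstract concrete, the sketch below shows one plausible reading of fitting a designer's demonstrations as a semantic adaptation model: semantic scene features map to a demonstrated placement choice, and the fitted tree predicts a placement for a new physical scene. The feature set, labels, and use of scikit-learn are assumptions for illustration, not ScalAR's implementation.

# Illustrative sketch, not ScalAR's implementation: fit a designer's
# demonstrations as a decision tree mapping semantic scene features to a
# placement choice, then reuse it in a new physical scene. The feature
# set, labels, and use of scikit-learn are assumptions for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row: [has_table, has_wall, free_surface_area_m2, distance_to_user_m]
demonstrations = [
    [1, 0, 0.8, 1.2],  # designer placed the virtual widget on a table
    [1, 1, 0.5, 2.0],
    [0, 1, 0.0, 1.5],  # no table available: anchored to a wall instead
    [0, 1, 0.1, 3.0],
]
placements = ["table", "table", "wall", "wall"]  # demonstrated choices

model = DecisionTreeClassifier(max_depth=3).fit(demonstrations, placements)

# Deployment: predict where the content should go in an unseen scene.
new_scene = [[1, 1, 0.6, 1.0]]
print(model.predict(new_scene))  # e.g., ['table']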
Chandio, Yasra; Interrante, Victoria; Anwar, Fatima M.
(IEEE Transactions on Visualization and Computer Graphics, 31(5))
"Tap into Reality: Understanding the Impact of Interactions on Presence and Reaction Time in Mixed Reality". doi:10.1109/TVCG.2025.3549580. https://par.nsf.gov/biblio/10630386