
Title: Exploring Virtual Environments by Visually Impaired Using a Mixed Reality Cane Without Visual Feedback
Though virtual reality (VR) has matured considerably in recent years, the general public, and especially the blind and visually impaired (BVI) population, still cannot enjoy the benefits it provides. Current VR accessibility applications have been developed either on expensive head-mounted displays or with extra accessories and mechanisms, which are either not readily accessible or inconvenient for BVI individuals. In this paper, we present a mobile VR app that enables BVI users to access a virtual environment on an iPhone in order to build their skills of perceiving and recognizing the virtual environment and the virtual objects in it. The app mounts the iPhone on a selfie stick to simulate a long cane in VR, and applies Augmented Reality (AR) techniques to track the iPhone’s real-time pose in an empty space of the real world, which is then synchronized to the long cane in the VR environment. Because it integrates VR and AR, i.e., mixed reality, we call it the Mixed Reality Cane (MR Cane); it provides BVI users auditory and vibrotactile feedback whenever the virtual cane comes in contact with objects in VR. Thus, the MR Cane allows BVI individuals to interact with virtual objects and identify the approximate sizes and locations of the objects in the virtual environment. We performed preliminary user studies with blindfolded participants to investigate the effectiveness of the proposed mobile approach; the results indicate that the MR Cane can effectively help BVI individuals understand interaction with virtual objects and explore 3D virtual environments. The MR Cane concept can be extended to new applications in navigation, training, and entertainment for BVI individuals without significant additional effort.
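The core loop the abstract describes, namely tracking the phone's pose, extending that pose along the stick to a virtual cane tip, and firing audio and haptic feedback on contact, can be sketched in plain Python. This is a minimal illustration under stated assumptions, not the authors' implementation: the names (`Pose`, `VirtualCane`, `check_contact`) and the sphere-shaped obstacles are hypothetical, and a real app would obtain the pose from ARKit world tracking and use iOS audio/haptic APIs instead of the placeholder return values.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    """Phone pose from AR tracking: grip position and unit forward direction."""
    position: tuple   # (x, y, z) of the hand-held end, in meters
    direction: tuple  # unit vector pointing along the selfie stick

@dataclass
class Obstacle:
    """A virtual object, approximated here as a sphere for simplicity."""
    center: tuple
    radius: float

class VirtualCane:
    def __init__(self, length=1.2):
        self.length = length  # virtual cane length in meters (assumed value)

    def tip(self, pose):
        """Cane tip = grip position extended along the stick direction."""
        return tuple(p + self.length * d
                     for p, d in zip(pose.position, pose.direction))

    def check_contact(self, pose, obstacles):
        """Return the first virtual object the cane tip touches, if any."""
        tip = self.tip(pose)
        for obj in obstacles:
            if math.dist(tip, obj.center) <= obj.radius:
                return obj
        return None

def feedback_loop(cane, pose, obstacles):
    """One frame of the loop: on contact, a real app would play a sound
    and trigger vibration; here we just report what would happen."""
    hit = cane.check_contact(pose, obstacles)
    if hit is not None:
        return ("audio+vibration", hit)  # placeholder for audio/haptic calls
    return ("silent", None)
```

Running this once per AR frame with the latest tracked pose reproduces the paper's interaction model: the user sweeps the physical stick, and contact events in the virtual scene drive the non-visual feedback.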
Journal Name: ISMAR 2020 - International Symposium on Mixed and Augmented Reality
Page Range or eLocation-ID: 51 to 56
Sponsoring Org: National Science Foundation
More Like this
  1. The popular concepts of Virtual Reality (VR) and Augmented Reality (AR) arose from our ability to interact with objects and environments that appear to be real, but are not. One of the most powerful aspects of these paradigms is the ability of virtual entities to embody a richness of behavior and appearance that we perceive as compatible with reality, and yet unconstrained by reality. The freedom to be or do almost anything helps to reinforce the notion that such virtual entities are inherently distinct from the real world—as if they were magical. This independent magical status is reinforced by the typical need for the use of “magic glasses” (head-worn displays) and “magic wands” (spatial interaction devices) that are ceremoniously bestowed on a chosen few. For those individuals, the experience is inherently egocentric in nature—the sights and sounds effectively emanate from the magic glasses, not the real world, and unlike the magic we are accustomed to from cinema, the virtual entities are unable to affect the real world. This separation of real and virtual is also inherent in our related conceptual frameworks, such as Milgram’s Virtuality Continuum, where the real and virtual are explicitly distinguished and mixed. While these frameworks are indeed conceptual, we often feel the need to position our systems and research somewhere in the continuum, further reinforcing the notion that real and virtual are distinct. The very structures of our professional societies, our research communities, our journals, and our conferences tend to solidify the evolutionary separation of the virtual from the real. However, independent forces are emerging that could reshape our notions of what is real and virtual, and transform our sense of what it means to interact with technology. First, even within the VR/AR communities, as the appearance and behavioral realism of virtual entities improves, virtual experiences will become more real.
Second, as domains such as artificial intelligence, robotics, and the Internet of Things (IoT) mature and permeate throughout our lives, experiences with real things will become more virtual. The convergence of these various domains has the potential to transform the egocentric magical nature of VR/AR into more pervasive allocentric magical experiences and interfaces that interact with and can affect the real world. This transformation will blur traditional technological boundaries such that experiences will no longer be distinguished as real or virtual, and our sense for what is natural will evolve to include what we once remember as cinematic magic.
  2. Dini, Petre (Ed.)
    The National Academy of Engineering’s “Fourteen Grand Challenges for Engineering in the Twenty-First Century” identifies challenges in science and technology that are both feasible and sustainable to help people and the planet prosper. Four of these challenges are: advance personalized learning, enhance virtual reality, make solar energy affordable and provide access to clean water. In this work, the authors discuss the development of applications using immersive technologies, such as Virtual Reality (VR) and Augmented Reality (AR), and their significance in addressing these four challenges. The Drinking Water AR mobile application helps users easily locate drinking water sources on the Auburn University (AU) campus, thus providing easy access to clean water. The Sun Path mobile application helps users visualize the Sun’s path at any given time and location. Students study the Sun’s path in various fields but often have a hard time visualizing and conceptualizing it, so the application can help. Similarly, the application could assist users in efficient solar panel placement. Architects often study the Sun’s path to evaluate solar panel placement at a particular location; effective solar panel placement helps optimize the efficiency of solar energy use. The Solar System Oculus Quest VR application enables users to view all eight planets and the Sun in the solar system. Planets are simulated to mimic their position, scale, and rotation relative to the Sun. Using the Oculus Quest controllers, disguised as human hands in the scene, users can teleport within the world view and get closer to each planet and the Sun to have a better view of the objects and the text associated with them. As a result, tailored learning is aided, and Virtual Reality is enhanced. In a camp held virtually due to Covid-19, K12 students were introduced to the concept and usability of the applications.
A Likert-scale metric was used to assess the efficacy of application usage. The data show that participants of this camp benefited from an immersive learning experience that allowed for simulation with the inclusion of VR and AR.
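The Sun-path visualization described in the abstract above rests on standard solar-position geometry. As a rough sketch (using the common approximate declination and hour-angle formulas, not the app's actual code; the function name and argument conventions here are hypothetical):

```python
import math

def solar_elevation(day_of_year, latitude_deg, solar_hour):
    """Approximate solar elevation angle in degrees for a given day of year,
    latitude, and local solar time (hours from midnight; solar noon = 12)."""
    # Approximate solar declination: -23.44 deg at the December solstice.
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle: the Sun moves 15 degrees per hour relative to solar noon.
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, d, h = map(math.radians, (latitude_deg, decl, hour_angle))
    # Standard elevation formula from latitude, declination, and hour angle.
    elevation = math.asin(math.sin(lat) * math.sin(d)
                          + math.cos(lat) * math.cos(d) * math.cos(h))
    return math.degrees(elevation)
```

For example, near the March equinox at Auburn's latitude (about 32.6° N), solar-noon elevation comes out close to 90° minus the latitude, which matches the geometry such an app would render.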
  3. Spatial reasoning skills contribute to performance in many STEM fields. For example, drawing sectional views of three-dimensional objects is an essential skill for engineering students. There is considerable variation in the spatial reasoning skills of prospective engineering students, putting some at risk for compromised performance in their classes. This study takes place in a first-year engineering Spatial Visualization course to integrate recent practices in engineering design education with cognitive psychology research on the nature of spatial learning. We employed three main pedagogical strategies in the course - 1) in-class instruction on sketching; 2) spatial visualization training; and 3) manipulation of physical objects (CAD/3D print creations). This course endeavors to use current technology, online accessibility, and implementation of the three pedagogical strategies to bring about student growth in spatial reasoning. This study is designed to determine the effect of adding two different spatial reasoning training apps to this environment. Over 230 students (three sections) participated in our study. In two of the three sections, students received interactive spatial visualization training using either a spatial visualization mobile touchscreen app in one section or an Augmented Reality (AR) app in the other section. Research suggests that there are benefits to using the Spatial Vis Classroom mobile app for college students. The app has been shown to increase student persistence, resulting in large learning gains as measured by the Purdue assessment of spatial visualization (PSVT-R), especially for students starting with poor spatial visualization skills. The Spatial Vis Classroom app can be used in the classroom or assigned as homework. The AR app is designed to help users develop their mental rotation abilities.
It is designed to support a holistic understanding of 3-dimensional objects, and research has shown that, in combination with a traditional curriculum, it increases students’ abilities as measured by the PSVT-R. Of particular interest, the data suggest that the app overcomes the advantage found for males over females in a traditional class alone focused on spatial reasoning. Both of the course sections were required to use the apps for approximately the same time in class and outside of class. Students in the control section were required to do hand-sketching activities in class and outside of class, with roughly the same completion time as for the sections with the apps. Students’ grades were not affected by using the three different approaches, as grading was based on completion only. Based on current literature, we hypothesize that overall benefits (PSVT-R gains) will be comparable across the 3 treatments but there will be different effects on attitude and engagement (confidence, enjoyment, and self-efficacy). Lastly, we hypothesize that the treatments will have different effects on male/female and ethnic categories of the study participants. The final paper will include an analysis of results and a report of the findings.
  4. The goal of this study was to evaluate driver risk behavior in response to changes in their risk perception inputs, specifically focusing on the effect of augmented visual representation technologies. This experiment was conducted for the purely real-driving scenario, establishing a baseline by which future, augmented visual representation scenarios can be compared. Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) simulation technologies have rapidly improved over the last three decades to where, today, they are widely used and more heavily relied upon than before, particularly in the areas of training, research, and design. The resulting utilization of these capabilities has proven simulation technologies to be a versatile and powerful tool. Virtual immersion, however, introduces a layer of abstraction and safety between the participant and the designed artifact, which includes an associated risk compensation. Quantifying and modeling the relationship between this risk compensation and levels of virtual immersion is the greater goal of this project. This study focuses on the first step, which is to determine the level of risk perception for a purely real environment for a specific man-machine system – a ground vehicle – operated in a common risk scenario – traversing a curve at high speeds. Specifically, passengers are asked to assess whether the vehicle speed within a constant-radius curve is perceived as comfortable. Due to the potential for learning effects to influence risk perception, the experiment was split into two separate protocols: the latent response protocol and the learned response protocol. The latent response protocol applied to the first exposure of an experimental condition to the subject.
It consisted of having the subjects in the passenger seat assess comfort or discomfort within a vehicle that was driven around a curve at a randomly chosen value among a selection of test speeds; subjects were asked to indicate when they felt uncomfortable by pressing a brake pedal that was instrumented to alert the driver. Next, the learned response protocol assessed the subjects for repeated exposures but allowed subjects to use brake and throttle pedals to indicate if they wanted to go faster or slower; the goal was to allow subjects to iterate toward their maximum comfortable speed. These pedals were instrumented to alert the driver, who responded accordingly. Both protocols were repeated for a second curve with a different radius. Questionnaires were also administered after each trial that addressed the subjective perception of risk and provided a means to substantiate the measured risk compensation behavior. The results showed that, as expected, the latent perception of risk for a passenger traversing a curve was higher than the learned perception for successive exposures to the same curve; in other words, as drivers ‘learned’ a curve, they were more comfortable with higher speeds. Both the latent and learned speeds provide a suitable metric by which to compare future replications of this experiment at different levels of virtual immersion. Correlations were found between uncomfortable subject responses and the yaw acceleration of the vehicle. Additional correlation of driver discomfort was found to occur at specific locations on the curves. The yaw acceleration is a reflection of the driver’s ability to maintain a steady steering input, whereas the location on the curve was found to correlate with variations in the lane-markings and environmental cues.
  5. Virtual reality (VR) systems have been increasingly used in recent years in various domains, such as education and training. Presence, which can be described as ‘the sense of being there’, is one of the most important user experience aspects in VR. There are several components which may affect the level of presence, such as interaction, visual fidelity, and auditory cues. In recent years, a significant effort has been put into increasing the sense of presence in VR. This study focuses on improving user experience in VR by increasing presence through increased interaction fidelity and enhanced illusions. Interaction in real life includes mutual and bidirectional encounters between two or more individuals through shared tangible objects. However, the majority of VR interaction to date has been unidirectional. This research aims to bridge this gap by enabling bidirectional mutual tangible embodied interactions between human users and virtual characters in world-fixed VR through real-virtual shared objects that extend from the virtual world into the real world. I hypothesize that the proposed novel interaction will shrink the boundary between the real and virtual worlds (through virtual characters that affect the physical world), increase the seamlessness of the VR system (enhance the illusion) and the fidelity of interaction, and increase the level of presence and social presence, enjoyment and engagement. This paper includes the motivation, design and development details of the proposed novel world-fixed VR system along with future directions.