

Search for: All records

Award ID contains: 2046072


  1. The traditional freehand placement of an external ventricular drain (EVD) relies on empirical craniometric landmarks to guide the craniostomy and subsequent passage of the EVD catheter. The diameter and trajectory of the craniostomy physically limit the possible trajectories that can be achieved during the passage of the catheter. In this study, the authors implemented a mixed reality–guided craniostomy procedure to evaluate the benefit of an optimally drilled craniostomy to the accurate placement of the catheter. Optical marker–based tracking using an OptiTrack system was used to register the brain ventricular hologram and drilling guidance for craniostomy using a HoloLens 2 mixed reality headset. A patient-specific 3D-printed skull phantom embedded with intracranial camera sensors was developed to automatically calculate the EVD accuracy for evaluation. User trials consisted of one blind and one mixed reality–assisted craniostomy followed by a routine, unguided EVD catheter placement for each of two different drill bit sizes. A total of 49 participants were included in the study (mean age 23.4 years, 59.2% female). The mean distance from the catheter target improved from 18.6 ± 12.5 mm to 12.7 ± 11.3 mm (p = 0.0008) using mixed reality guidance for trials with a large drill bit and from 19.3 ± 12.7 mm to 10.1 ± 8.4 mm with a small drill bit (p < 0.0001). Accuracy using mixed reality was improved using a smaller diameter drill bit compared with a larger bit (p = 0.039). Overall, the majority of the participants were positive about the helpfulness of mixed reality guidance and the overall mixed reality experience. Appropriate indications and use cases for the application of mixed reality guidance to neurosurgical procedures remain an area of active inquiry. While prior studies have demonstrated the benefit of mixed reality–guided catheter placement using predrilled craniostomies, the authors demonstrate that real-time quantitative and visual feedback of a mixed reality–guided craniostomy procedure can independently improve procedural accuracy and represents an important tool for trainee education and eventual clinical implementation. 
    Free, publicly-accessible full text available January 1, 2025
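The headline result in this record is the reduction in catheter-tip-to-target distance under mixed reality guidance, compared with paired freehand trials. As a minimal, hypothetical sketch (not the authors' analysis code), the snippet below shows how such a placement error and a paired comparison could be computed; the data, the sensor interface, and the choice of a paired t-test are assumptions.

```python
# Hedged sketch: not the authors' code. Illustrates how catheter placement error
# (distance from catheter tip to the ventricular target) and a paired comparison
# between freehand and mixed reality-guided trials could be computed.
# All names and data below are hypothetical.
import numpy as np
from scipy import stats

def placement_error(tip_xyz, target_xyz):
    """Euclidean distance (mm) from the catheter tip to the intended target."""
    return float(np.linalg.norm(np.asarray(tip_xyz) - np.asarray(target_xyz)))

# e.g. placement_error((12.0, 18.5, 25.0), (10.0, 20.0, 30.0)) -> ~5.6 mm

# Hypothetical per-participant errors (mm) for the two conditions.
freehand_mm = np.array([22.1, 15.4, 30.2, 9.8, 18.7])
mr_guided_mm = np.array([14.3, 9.1, 21.5, 7.2, 11.0])

# Paired test, since each participant performed both conditions.
t_stat, p_value = stats.ttest_rel(freehand_mm, mr_guided_mm)
print(f"mean freehand = {freehand_mm.mean():.1f} mm, "
      f"mean MR-guided = {mr_guided_mm.mean():.1f} mm, p = {p_value:.4f}")
```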
  2. Virtual content instability caused by device pose tracking error remains a prevalent issue in markerless augmented reality (AR), especially on smartphones and tablets. However, when examining environments which will host AR experiences, it is challenging to determine where those instability artifacts will occur; we rarely have access to ground truth pose to measure pose error, and even if pose error is available, traditional visualizations do not connect that data with the real environment, limiting their usefulness. To address these issues, we present SiTAR (Situated Trajectory Analysis for Augmented Reality), the first situated trajectory analysis system for AR that incorporates estimates of pose tracking error. We start by developing the first uncertainty-based pose error estimation method for visual-inertial simultaneous localization and mapping (VI-SLAM), which allows us to obtain pose error estimates without ground truth; we achieve an average accuracy of up to 96.1% and an average F1 score of up to 0.77 in our evaluations on four VI-SLAM datasets. Next, we present our SiTAR system, implemented for ARCore devices, combining a backend that supplies uncertainty-based pose error estimates with a frontend that generates situated trajectory visualizations. Finally, we evaluate the efficacy of SiTAR in realistic conditions by testing three visualization techniques in an in-the-wild study with 15 users and 13 diverse environments; this study reveals the impact that both environment scale and the properties of surfaces present can have on user experience and task performance. 
    Free, publicly-accessible full text available October 16, 2024
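SiTAR's backend supplies uncertainty-based pose error estimates without ground truth. The paper's estimation method is not reproduced here; the sketch below only illustrates the general pattern of reducing a per-pose covariance to a scalar uncertainty and binning it into error classes that a situated visualization could colour-code. The covariance source, thresholds, and class labels are all assumptions.

```python
# Hedged sketch: a generic illustration of turning per-pose uncertainty into
# discrete error classes for a situated trajectory visualization. This is NOT
# SiTAR's actual estimation method; thresholds and inputs are hypothetical.
import numpy as np

def pose_uncertainty(cov_6x6):
    """Scalar uncertainty from a 6x6 pose covariance (translation + rotation)."""
    trans_cov = cov_6x6[:3, :3]                 # translational block
    return float(np.sqrt(np.trace(trans_cov)))  # ~ std of the position estimate

def classify(uncertainty, low=0.01, high=0.05):
    """Map uncertainty (metres, hypothetical thresholds) to an error class."""
    if uncertainty < low:
        return "low"
    return "medium" if uncertainty < high else "high"

# Hypothetical trajectory: one covariance per estimated pose.
trajectory_covs = [np.eye(6) * 1e-5, np.eye(6) * 3e-4, np.eye(6) * 1e-2]
labels = [classify(pose_uncertainty(c)) for c in trajectory_covs]
print(labels)  # ['low', 'medium', 'high'] -> colour-code the rendered trajectory
```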
  3. Mobile augmented reality (AR) has a wide range of promising applications, but its efficacy is subject to the impact of environment texture on both machine and human perception. Performance of the machine perception algorithm underlying accurate positioning of virtual content, visual-inertial SLAM (VI-SLAM), is known to degrade in low-texture conditions, but there is a lack of data in realistic scenarios. We address this through extensive experiments using a game engine-based emulator, with 112 textures and over 5000 trials. Conversely, human task performance and response times in AR have been shown to increase in environments perceived as textured. We investigate and provide encouraging evidence for invisible textures, which result in good VI-SLAM performance with minimal impact on human perception of virtual content. This arises from fundamental differences between VI-SLAM-based machine perception, and human perception as described by the contrast sensitivity function. Our insights open up exciting possibilities for deploying ambient IoT devices that display invisible textures, as part of systems which automatically optimize AR environments. 
    Free, publicly-accessible full text available October 6, 2024
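The core claim here is that some textures are rich enough for VI-SLAM feature detection while contributing little to the contrast humans perceive. The sketch below is an illustrative probe of that gap, not the paper's method: it renders a faint, fine-grained pattern, counts corner features with a generic detector, and reports Michelson contrast as a crude proxy for human-visible contrast. Pattern parameters and detector settings are assumptions.

```python
# Hedged sketch: an illustrative (not the paper's) way to probe the gap between
# machine and human perception of a texture. A faint, fine-grained pattern can
# still produce detectable corner features for a SLAM front end even though its
# contrast is low. Pattern amplitude, frequency, and detector settings are
# hypothetical.
import numpy as np
import cv2

h, w = 480, 640
yy, xx = np.mgrid[0:h, 0:w]

# Faint checker-like pattern: +/-4 grey levels around mid-grey.
img = (128 + 4 * np.sign(np.sin(xx * 0.25) * np.sin(yy * 0.25))).astype(np.uint8)

# Machine-perception proxy: Shi-Tomasi corners (a stand-in for the FAST/ORB
# detectors commonly used in VI-SLAM front ends).
corners = cv2.goodFeaturesToTrack(img, maxCorners=500, qualityLevel=0.05,
                                  minDistance=5)
n_features = 0 if corners is None else len(corners)

# Human-perception proxy: Michelson contrast of the pattern.
i_min, i_max = int(img.min()), int(img.max())
contrast = (i_max - i_min) / (i_max + i_min)

print(f"{n_features} corner features, Michelson contrast = {contrast:.3f}")
```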
  4. Edge computing is increasingly proposed as a solution for reducing the resource consumption of mobile devices running simultaneous localization and mapping (SLAM) algorithms. However, most edge-assisted SLAM systems assume the communication resources between the mobile device and the edge server to be unlimited, or rely on heuristics to choose the information to be transmitted to the edge. This paper presents AdaptSLAM, an edge-assisted visual (V) and visual-inertial (VI) SLAM system that adapts to the available communication and computation resources, based on a theoretically grounded method we developed to select the subset of keyframes (the representative frames) for constructing the best local and global maps in the mobile device and the edge server under resource constraints. We implemented AdaptSLAM to work with the state-of-the-art open-source V- and VI-SLAM framework ORB-SLAM3, and demonstrated that, under constrained network bandwidth, AdaptSLAM reduces the tracking error by 62% compared to the best baseline method. 
    Free, publicly-accessible full text available May 17, 2024
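AdaptSLAM's contribution is a theoretically grounded selection of the keyframe subset that builds the best local and global maps under resource constraints. That method is not reproduced here; the sketch below is only a generic greedy illustration of budgeted keyframe selection, with hypothetical per-keyframe transmission costs and an assumed utility score.

```python
# Hedged sketch: a generic greedy budgeted selection of keyframes, purely to
# illustrate the "best map under a resource constraint" framing. This is NOT
# AdaptSLAM's algorithm; the utility model and costs are hypothetical.
from dataclasses import dataclass

@dataclass
class Keyframe:
    frame_id: int
    cost_kb: float   # data to transmit to the edge (hypothetical)
    utility: float   # assumed contribution to map quality (hypothetical)

def select_keyframes(keyframes, budget_kb):
    """Greedily pick keyframes by utility-per-cost until the budget is spent."""
    chosen, spent = [], 0.0
    for kf in sorted(keyframes, key=lambda k: k.utility / k.cost_kb, reverse=True):
        if spent + kf.cost_kb <= budget_kb:
            chosen.append(kf.frame_id)
            spent += kf.cost_kb
    return chosen

kfs = [Keyframe(0, 120, 0.9), Keyframe(1, 300, 1.0), Keyframe(2, 80, 0.5)]
print(select_keyframes(kfs, budget_kb=250))  # -> [0, 2]
```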
  5. External ventricular drain (EVD) placement is a common yet challenging neurosurgical procedure in which a catheter is inserted into the brain ventricular system; it requires prolonged training for surgeons to achieve accurate catheter placement. In this paper, we introduce NeuroLens, an Augmented Reality (AR) system that provides neurosurgeons with guidance that aids them in completing an EVD catheter placement. NeuroLens builds on prior work in AR-assisted EVD to present a registered hologram of a patient’s ventricles to the surgeons, and uniquely incorporates guidance on the EVD catheter’s trajectory, angle of insertion, and distance to the target. The guidance is enabled by tracking the EVD catheter. We evaluate NeuroLens via a study with 33 medical students, in which we analyzed students’ EVD catheter insertion accuracy and completion time, eye gaze patterns, and qualitative responses. Our study, in which NeuroLens was used to aid students in inserting an EVD catheter into a realistic phantom model of a human head, demonstrated the potential of NeuroLens as a tool that will aid and educate novice neurosurgeons. On average, the use of NeuroLens improved the EVD placement accuracy of year 1 students by 39.4% and of year 2-4 students by 45.7%. Furthermore, students who focused more on NeuroLens-provided contextual guidance achieved better results. 
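NeuroLens overlays the tracked catheter's trajectory, angle of insertion, and distance to target. The sketch below is a minimal geometric illustration (not the NeuroLens implementation) of those last two quantities, assuming the tracker reports the catheter tip and a second point along the shaft in the same registered coordinate frame as the planned entry point and target.

```python
# Hedged sketch: geometric quantities an AR guidance overlay could display,
# assuming a tracked catheter tip, a point further up the shaft, and a planned
# entry point and target in the same (registered) coordinate frame.
# Not the NeuroLens implementation; all coordinates below are hypothetical.
import numpy as np

def guidance(tip, shaft_point, entry, target):
    tip, shaft_point = np.asarray(tip, float), np.asarray(shaft_point, float)
    entry, target = np.asarray(entry, float), np.asarray(target, float)

    catheter_dir = tip - shaft_point   # current pointing direction
    planned_dir = target - entry       # planned trajectory
    cos_a = np.dot(catheter_dir, planned_dir) / (
        np.linalg.norm(catheter_dir) * np.linalg.norm(planned_dir))
    angle_deg = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    dist_mm = np.linalg.norm(target - tip)
    return angle_deg, dist_mm

angle, dist = guidance(tip=[10, 20, 30], shaft_point=[10, 20, 80],
                       entry=[10, 22, 85], target=[12, 18, 25])
print(f"angle to planned trajectory: {angle:.1f} deg, "
      f"distance to target: {dist:.1f} mm")
```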
  6. Demand is growing for markerless augmented reality (AR) experiences, but designers of the real-world spaces that host them still have to rely on inexact, qualitative guidelines on the visual environment to try to facilitate accurate pose tracking. Furthermore, the need for visual texture to support markerless AR is often at odds with human aesthetic preferences, and understanding how to balance these competing requirements is challenging due to the siloed nature of the relevant research areas. To address this, we present an integrated design methodology for AR spaces that incorporates both tracking and human factors into the design process. On the tracking side, we develop the first VI-SLAM evaluation technique that combines the flexibility and control of virtual environments with real inertial data. We use it to perform systematic, quantitative experiments on the effect of visual texture on pose estimation accuracy; through 2000 trials in 20 environments, we reveal the impact of both texture complexity and edge strength. On the human side, we show how virtual reality (VR) can be used to evaluate user satisfaction with environments, and highlight how this can be tailored to AR research and use cases. Finally, we demonstrate our integrated design methodology with a case study on AR museum design, in which we conduct both VI-SLAM evaluations and a VR-based user study of four different museum environments. 
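The tracking-side experiments report the impact of texture complexity and edge strength on pose estimation accuracy. The sketch below computes two simple, commonly used image proxies in that spirit (grey-level entropy and mean Sobel gradient magnitude); the metrics actually used in the paper may differ, so treat these as assumptions.

```python
# Hedged sketch: simple per-image proxies for "texture complexity" and
# "edge strength" of an environment photo. Generic metrics shown for
# illustration only, not necessarily those used in the paper.
import numpy as np
import cv2

def texture_metrics(image_path):
    grey = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if grey is None:
        raise FileNotFoundError(image_path)

    # Complexity proxy: Shannon entropy of the grey-level histogram.
    hist = np.bincount(grey.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    entropy = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))

    # Edge-strength proxy: mean Sobel gradient magnitude.
    gx = cv2.Sobel(grey, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(grey, cv2.CV_64F, 0, 1, ksize=3)
    edge_strength = float(np.mean(np.hypot(gx, gy)))
    return entropy, edge_strength

# Usage (hypothetical path): print(texture_metrics("museum_wall.png"))
```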
  7. Meditation, a mental and physical exercise which helps to focus attention and reduce stress, has gained popularity in recent years. However, meditation requires a concerted effort and regular practice. To explore the feasibility of using Augmented Reality (AR) devices to assist in meditation, we recruited ten subjects to perform a five-minute meditation task integrated into AR devices. Heart rate, heart rate variability, and skin conductance response (SCR) were analyzed from electrocardiogram (ECG) and electrodermal activity recordings to monitor physiological changes during and after a meditation session. Additionally, participants filled out surveys containing the Perceived Stress Questionnaire (PSQ), a clinically validated survey designed to evaluate stress levels, before and after meditation to analyze the change in stress levels. Finally, we found significant differences in heart rate and mean SCR recovery time for participants between the three study procedure periods (before, during, and after guided meditation). 
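The physiological measures in this record include heart rate and heart rate variability from ECG, and SCR recovery time from electrodermal activity. The sketch below computes two standard ECG-derived quantities (mean heart rate and RMSSD) from a hypothetical series of R-R intervals; it is generic signal processing, not the study's analysis pipeline.

```python
# Hedged sketch: standard heart-rate metrics from R-R intervals (seconds).
# Generic formulas, not the study's analysis code; the intervals are hypothetical.
import numpy as np

rr_s = np.array([0.86, 0.84, 0.88, 0.90, 0.85, 0.83, 0.87])  # R-R intervals (s)

mean_hr_bpm = 60.0 / rr_s.mean()                          # mean heart rate
rmssd_ms = 1000.0 * np.sqrt(np.mean(np.diff(rr_s) ** 2))  # common HRV measure

print(f"mean HR = {mean_hr_bpm:.1f} bpm, RMSSD = {rmssd_ms:.1f} ms")
```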
  8. Mobile augmented reality (AR) has the potential to enable immersive, natural interactions between humans and cyber-physical systems. In particular, markerless AR, by not relying on fiducial markers or predefined images, provides great convenience and flexibility for users. However, unwanted virtual object movement frequently occurs in markerless smartphone AR due to inaccurate scene understanding and the resulting errors in device pose tracking. We examine the factors which may affect virtual object stability, design experiments to measure it, and conduct systematic quantitative characterizations across six different user actions and five different smartphone configurations. Our study demonstrates noticeable instances of spatial instability in virtual objects in all but the simplest settings (with position errors of greater than 10 cm even on the best-performing smartphones), and underscores the need for further enhancements to pose tracking algorithms for smartphone-based markerless AR. 
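Virtual object instability is reported here as position error during user actions, exceeding 10 cm in some settings. The sketch below shows one simple way such an error could be computed from a log of the virtual object's world-space position over a trial, measuring drift relative to the initial placement; the logging format and data are assumptions, not the study's measurement pipeline.

```python
# Hedged sketch: quantify virtual object drift from a logged trace of its
# world-space position during a user action. Log format and data are
# hypothetical; this is not the study's measurement methodology.
import numpy as np

# positions[i] = (x, y, z) of the anchored virtual object at frame i (metres).
positions = np.array([
    [0.000, 0.000, 0.500],
    [0.004, 0.001, 0.503],
    [0.060, 0.010, 0.540],   # large jump, e.g. after fast device motion
    [0.110, 0.020, 0.590],
])

drift_m = np.linalg.norm(positions - positions[0], axis=1)  # error vs. placement
print(f"max position error = {100 * drift_m.max():.1f} cm")
if drift_m.max() > 0.10:
    print("instability exceeds 10 cm")
```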
  9. Recent advances in eye tracking have given birth to a new genre of gaze-based context sensing applications, ranging from cognitive load estimation to emotion recognition. To achieve state-of-the-art recognition accuracy, a large-scale, labeled eye movement dataset is needed to train deep learning-based classifiers. However, due to the heterogeneity in human visual behavior, as well as the labor-intensive and privacy-compromising data collection process, datasets for gaze-based activity recognition are scarce and hard to collect. To alleviate the sparse gaze data problem, we present EyeSyn, a novel suite of psychology-inspired generative models that leverages only publicly available images and videos to synthesize a realistic and arbitrarily large eye movement dataset. Taking gaze-based museum activity recognition as a case study, our evaluation demonstrates that EyeSyn can not only replicate the distinct patterns in the actual gaze signals that are captured by an eye tracking device, but also simulate the signal diversity that results from different measurement setups and subject heterogeneity. Moreover, in the few-shot learning scenario, EyeSyn can be readily incorporated with either transfer learning or meta-learning to achieve 90% accuracy, without the need for a large-scale dataset for training. 
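EyeSyn synthesizes eye movement data from publicly available images and videos using psychology-inspired generative models, which are not reproduced here. The sketch below is only a toy illustration of the general idea: sample fixation locations from visually salient points of an image and assign plausible fixation durations. The saliency proxy, durations, and all parameters are hypothetical.

```python
# Hedged sketch: a toy gaze-sequence generator in the spirit of image-driven
# synthesis. Not EyeSyn's models; the saliency proxy, durations, and counts are
# hypothetical.
import numpy as np
import cv2

def toy_gaze_sequence(image_path, n_fixations=10, seed=0):
    rng = np.random.default_rng(seed)
    grey = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if grey is None:
        raise FileNotFoundError(image_path)

    # Saliency proxy: strongest corners are candidate fixation targets.
    corners = cv2.goodFeaturesToTrack(grey, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)
    if corners is None:
        raise RuntimeError("no salient points found")
    candidates = corners.reshape(-1, 2)

    sequence = []
    for _ in range(n_fixations):
        x, y = candidates[rng.integers(len(candidates))]
        duration_ms = float(rng.normal(250, 50))   # typical fixation ~250 ms
        sequence.append((float(x), float(y), max(duration_ms, 80.0)))
    return sequence  # list of (x_px, y_px, fixation_duration_ms)

# Usage (hypothetical path): print(toy_gaze_sequence("museum_painting.jpg")[:3])
```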