

Search for: All records

Award ID contains: 2046072

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. External ventricular drain (EVD) placement is a common yet challenging neurosurgical procedure in which a catheter is inserted into the brain's ventricular system; surgeons require prolonged training to achieve accurate catheter placement. In this paper, we introduce NeuroLens, an Augmented Reality (AR) system that provides neurosurgeons with guidance that aids them in completing an EVD catheter placement. NeuroLens builds on prior work in AR-assisted EVD to present a registered hologram of a patient's ventricles to the surgeons, and uniquely incorporates guidance on the EVD catheter's trajectory, angle of insertion, and distance to the target. The guidance is enabled by tracking the EVD catheter. We evaluate NeuroLens via a study with 33 medical students, in which we analyzed students' EVD catheter insertion accuracy and completion time, eye gaze patterns, and qualitative responses. Our study, in which NeuroLens was used to aid students in inserting an EVD catheter into a realistic phantom model of a human head, demonstrated the potential of NeuroLens as a tool that will aid and educate novice neurosurgeons. On average, the use of NeuroLens improved the EVD placement accuracy of year 1 students by 39.4% and of year 2-4 students by 45.7%. Furthermore, students who focused more on NeuroLens-provided contextual guidance achieved better results.
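The abstract describes guidance derived from tracking the catheter: its trajectory, insertion angle, and distance to the target. The paper's actual computation is not given here; the sketch below shows one straightforward way such quantities could be derived from tracked 3D positions, where `tip`, `hub`, and `target` are illustrative names rather than NeuroLens's API.

```python
import numpy as np

def guidance_metrics(tip, hub, target):
    """Illustrative guidance quantities from tracked 3D positions (meters).

    tip, hub: points along the tracked catheter (tip and proximal end).
    target:   planned target point inside the ventricle hologram.
    Returns the distance from tip to target and the angle (degrees) between
    the catheter axis and the tip-to-target direction.
    """
    tip, hub, target = map(np.asarray, (tip, hub, target))
    axis = tip - hub                      # current catheter direction
    to_target = target - tip              # remaining path to the target
    distance = np.linalg.norm(to_target)
    cos_angle = np.dot(axis, to_target) / (
        np.linalg.norm(axis) * distance + 1e-9)
    angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return distance, angle_deg

# Example: tip about 4 cm from the target, catheter tilted off the ideal path.
d, a = guidance_metrics(tip=[0.0, 0.0, 0.0],
                        hub=[0.0, 0.0, -0.10],
                        target=[0.01, 0.0, 0.04])
print(f"distance to target: {d*100:.1f} cm, angular deviation: {a:.1f} deg")
```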
  2. Demand is growing for markerless augmented reality (AR) experiences, but designers of the real-world spaces that host them still have to rely on inexact, qualitative guidelines on the visual environment to try to facilitate accurate pose tracking. Furthermore, the need for visual texture to support markerless AR is often at odds with human aesthetic preferences, and understanding how to balance these competing requirements is challenging due to the siloed nature of the relevant research areas. To address this, we present an integrated design methodology for AR spaces that incorporates both tracking and human factors into the design process. On the tracking side, we develop the first VI-SLAM evaluation technique that combines the flexibility and control of virtual environments with real inertial data. We use it to perform systematic, quantitative experiments on the effect of visual texture on pose estimation accuracy; through 2000 trials in 20 environments, we reveal the impact of both texture complexity and edge strength. On the human side, we show how virtual reality (VR) can be used to evaluate user satisfaction with environments, and highlight how this can be tailored to AR research and use cases. Finally, we demonstrate our integrated design methodology with a case study on AR museum design, in which we conduct both VI-SLAM evaluations and a VR-based user study of four different museum environments.
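The abstract reports quantitative experiments on pose estimation accuracy but does not name the error metric. A common choice for VI-SLAM evaluation is absolute trajectory error (ATE); the sketch below computes an RMSE ATE with a simple translation-only alignment, which is an assumption rather than the paper's protocol.

```python
import numpy as np

def ate_rmse(est, gt):
    """Root-mean-square absolute trajectory error between an estimated
    trajectory and ground truth (N x 3 arrays of positions, meters).

    Uses a translation-only alignment (subtracting each trajectory's
    centroid); full evaluations typically also align rotation and scale.
    """
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    est_aligned = est - est.mean(axis=0)
    gt_aligned = gt - gt.mean(axis=0)
    errors = np.linalg.norm(est_aligned - gt_aligned, axis=1)
    return np.sqrt(np.mean(errors ** 2))

# Toy example: an estimated trajectory with a constant offset plus noise.
t = np.linspace(0, 1, 100)
gt = np.stack([t, np.sin(t), np.zeros_like(t)], axis=1)
est = gt + 0.02 + np.random.default_rng(0).normal(0, 0.01, gt.shape)
print(f"ATE RMSE: {ate_rmse(est, gt):.3f} m")
```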
  3. Meditation, a mental and physical exercise that helps focus attention and reduce stress, has gained popularity in recent years. However, meditation requires a concerted effort and regular practice. To explore the feasibility of using Augmented Reality (AR) devices to assist in meditating, we recruited ten subjects to perform a five-minute meditation task integrated into an AR device. Heart rate, heart rate variability, and skin conductance response (SCR) were analyzed from electrocardiogram (ECG) and electrodermal activity (EDA) recordings to monitor physiological changes during and after a meditation session. Additionally, participants filled out surveys containing the Perceived Stress Questionnaire (PSQ), a clinically validated survey designed to evaluate stress levels, before and after the meditation session to analyze the change in stress levels. Finally, we found significant differences in heart rate and mean SCR recovery time across the three study periods (before, during, and after guided meditation).
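The study analyzes heart rate, heart rate variability, and SCR recovery time, but the abstract does not specify the exact measures or processing pipeline. A minimal sketch, assuming RMSSD as the HRV measure and half-recovery time as the SCR recovery measure, operating on pre-extracted RR intervals and an SCR waveform (all illustrative assumptions):

```python
import numpy as np

def mean_hr_and_rmssd(rr_intervals_s):
    """Mean heart rate (bpm) and RMSSD (ms) from RR intervals in seconds.

    RMSSD is one common time-domain HRV measure; the paper's exact
    metrics are not specified in the abstract.
    """
    rr = np.asarray(rr_intervals_s, dtype=float)
    mean_hr = 60.0 / rr.mean()
    rmssd_ms = np.sqrt(np.mean(np.diff(rr) ** 2)) * 1000.0
    return mean_hr, rmssd_ms

def scr_half_recovery_time(scr, fs):
    """Time (s) for an SCR waveform to fall from its peak back to half of
    its peak amplitude; returns NaN if it never recovers that far."""
    scr = np.asarray(scr, dtype=float)
    peak = int(np.argmax(scr))
    half = scr[peak] / 2.0
    below = np.nonzero(scr[peak:] <= half)[0]
    return below[0] / fs if below.size else float("nan")

rr = [0.82, 0.85, 0.80, 0.88, 0.84, 0.86]            # toy RR intervals (s)
hr, rmssd = mean_hr_and_rmssd(rr)
t = np.linspace(0, 10, 1000)                         # ~100 Hz toy signal
scr = np.exp(-0.5 * ((t - 2.0) / 0.5) ** 2)          # toy SCR bump at t = 2 s
print(f"HR {hr:.1f} bpm, RMSSD {rmssd:.1f} ms, "
      f"SCR half-recovery {scr_half_recovery_time(scr, fs=100):.2f} s")
```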
  4. Mobile augmented reality (AR) has the potential to enable immersive, natural interactions between humans and cyber-physical systems. In particular, markerless AR, by not relying on fiducial markers or predefined images, provides great convenience and flexibility for users. However, unwanted virtual object movement frequently occurs in markerless smartphone AR due to inaccurate scene understanding and the resulting errors in device pose tracking. We examine the factors that may affect virtual object stability, design experiments to measure it, and conduct systematic quantitative characterizations across six different user actions and five different smartphone configurations. Our study demonstrates noticeable instances of spatial instability of virtual objects in all but the simplest settings (with position errors of greater than 10 cm even on the best-performing smartphones), and underscores the need for further enhancements to pose tracking algorithms for smartphone-based markerless AR.
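The abstract quantifies instability as position error of virtual objects (exceeding 10 cm in some settings) without defining the metric here. One plausible measurement is the drift of an anchored object's reported world position relative to where it was placed; the sketch below implements that definition, which may differ from the paper's.

```python
import numpy as np

def position_instability(anchor_positions_m):
    """Spatial instability of a virtual object from a log of its reported
    world positions over time (N x 3, meters).

    Returns the mean and maximum displacement from the initial placement.
    This is one plausible definition; the paper's exact metric may differ.
    """
    p = np.asarray(anchor_positions_m, dtype=float)
    drift = np.linalg.norm(p - p[0], axis=1)
    return drift.mean(), drift.max()

# Toy log: an object that slowly drifts about 12 cm during a user action.
log = [[0.00, 0.00, 0.00],
       [0.02, 0.00, 0.01],
       [0.05, 0.01, 0.03],
       [0.08, 0.02, 0.06],
       [0.09, 0.02, 0.08]]
mean_d, max_d = position_instability(log)
print(f"mean drift {mean_d*100:.1f} cm, max drift {max_d*100:.1f} cm")
```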
  5. Recent advances in eye tracking have given birth to a new genre of gaze-based context sensing applications, ranging from cognitive load estimation to emotion recognition. To achieve state-of-the-art recognition accuracy, a large-scale, labeled eye movement dataset is needed to train deep learning-based classifiers. However, due to the heterogeneity in human visual behavior, as well as the labor-intensive and privacy-compromising data collection process, datasets for gaze-based activity recognition are scarce and hard to collect. To alleviate the sparse gaze data problem, we present EyeSyn, a novel suite of psychology-inspired generative models that leverages only publicly available images and videos to synthesize a realistic and arbitrarily large eye movement dataset. Taking gaze-based museum activity recognition as a case study, our evaluation demonstrates that EyeSyn can not only replicate the distinct patterns in the actual gaze signals that are captured by an eye tracking device, but also simulate the signal diversity that results from different measurement setups and subject heterogeneity. Moreover, in the few-shot learning scenario, EyeSyn can be readily incorporated with either transfer learning or meta-learning to achieve 90% accuracy, without the need for a large-scale dataset for training.
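EyeSyn's psychology-inspired generative models are not detailed in the abstract. Purely as an illustration of image-driven gaze synthesis, the sketch below samples a toy fixation sequence from a stand-in saliency map (gradient magnitude); it is not EyeSyn's actual model.

```python
import numpy as np

def synthesize_fixations(image_gray, n_fixations=10, seed=0):
    """Sample a toy fixation sequence from an image.

    Uses gradient magnitude as a stand-in saliency map and draws fixation
    locations proportionally to it. This only illustrates the general idea
    of image-driven gaze synthesis, not EyeSyn's actual generative models.
    """
    rng = np.random.default_rng(seed)
    gy, gx = np.gradient(image_gray.astype(float))
    saliency = np.hypot(gx, gy) + 1e-6
    prob = (saliency / saliency.sum()).ravel()
    idx = rng.choice(prob.size, size=n_fixations, p=prob)
    ys, xs = np.unravel_index(idx, image_gray.shape)
    durations_ms = rng.normal(250, 50, n_fixations).clip(80, 600)
    return list(zip(xs.tolist(), ys.tolist(), durations_ms.round(1).tolist()))

# Toy "painting": a bright square on a dark background attracts fixations.
img = np.zeros((64, 64))
img[20:40, 25:45] = 1.0
for x, y, dur in synthesize_fixations(img, n_fixations=5):
    print(f"fixation at ({x}, {y}) for {dur} ms")
```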
  6. Robust pervasive context-aware augmented reality (AR) has the potential to enable a range of applications that support users in reaching their personal and professional goals. In such applications, AR can be used to deliver richer, more immersive, and more timely just-in-time adaptive interventions (JITAIs) than conventional mobile solutions, leading to more effective support of the user. This position paper defines a research agenda centered on improving AR applications' environmental, user, and social context awareness. Specifically, we argue for two key architectural approaches that will allow pushing AR context awareness to the next level: the use of wearable and Internet of Things (IoT) devices as additional data streams that complement the data captured by the AR devices, and the development of edge computing-based mechanisms for enriching existing scene understanding and simultaneous localization and mapping (SLAM) algorithms. The paper outlines a collection of specific research directions in the development of such architectures and in the design of next-generation environmental, user, and social context awareness algorithms.
  7. Augmented Reality (AR) is increasingly used in medical applications for visualizing medical information. In this paper, we present an AR-assisted surgical guidance system that aims to improve the accuracy of catheter placement in ventriculostomy, a common neurosurgical procedure. We build upon previous work on neurosurgical AR, which has focused on enabling the surgeon to visualize a patient's ventricular anatomy, to additionally integrate surgical tool tracking and contextual guidance. Specifically, using accurate tracking of optical markers via an external multi-camera OptiTrack system, we enable Microsoft HoloLens 2-based visualizations of the ventricular anatomy, the catheter placement, and how far the catheter tip is from its target. We describe the system we developed, present initial hologram registration results, and comment on the next steps that will prepare our system for clinical evaluations.
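Rendering registered holograms from externally tracked markers requires expressing OptiTrack poses in the HoloLens coordinate frame. The calibration procedure is not described in the abstract; the sketch below shows a generic least-squares rigid registration (Kabsch/SVD) from corresponding points, with all data being illustrative.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst
    (both N x 3), via the Kabsch/SVD method. This is a generic registration
    step, not necessarily the paper's actual calibration procedure.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Toy calibration: marker positions seen in an external tracker frame and
# the same points expressed in the headset's world frame.
rng = np.random.default_rng(1)
pts_tracker = rng.uniform(-1, 1, (6, 3))
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
pts_headset = pts_tracker @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = estimate_rigid_transform(pts_tracker, pts_headset)
tip_tracker = np.array([0.1, 0.0, 0.3])          # hypothetical catheter tip
print("catheter tip in headset frame:", R @ tip_tracker + t)
```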
  8. Mobile Augmented Reality (AR) demands realistic rendering of virtual content that blends seamlessly into the physical environment. For this reason, AR headsets and recent smartphones are increasingly equipped with Time-of-Flight (ToF) cameras to acquire depth maps of a scene in real time. ToF cameras are cheap and fast; however, they suffer from several issues that affect the quality of depth data, ultimately hampering their use for mobile AR. Among them, scale errors, where virtual objects appear much bigger or smaller than they should, are particularly noticeable and unpleasant. This article specifically addresses these challenges by proposing InDepth, a real-time depth inpainting system based on edge computing. InDepth employs a novel deep neural network (DNN) architecture to improve the accuracy of depth maps obtained from ToF cameras. The DNN fills holes and corrects artifacts in the depth maps with high accuracy and eight times lower inference time than the state of the art. An extensive performance evaluation in real settings shows that InDepth reduces the mean absolute error by a factor of four with respect to ARCore DepthLab. Finally, a user study reveals that InDepth is effective in rendering correctly scaled virtual objects, outperforming DepthLab.
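The evaluation reports a roughly fourfold reduction in mean absolute error relative to ARCore DepthLab. The abstract does not give the scoring protocol; the sketch below shows one common way to compute depth-map MAE over valid ground-truth pixels, with array names being illustrative.

```python
import numpy as np

def depth_mae_mm(pred_depth_m, gt_depth_m):
    """Mean absolute error (mm) between a predicted and a ground-truth
    depth map, ignoring pixels where ground truth is missing (<= 0).

    One common way to score depth completion; the paper's exact protocol
    is not given in the abstract.
    """
    pred = np.asarray(pred_depth_m, dtype=float)
    gt = np.asarray(gt_depth_m, dtype=float)
    valid = gt > 0
    return float(np.abs(pred[valid] - gt[valid]).mean() * 1000.0)

# Toy example: a flat wall at 2 m, a prediction with ~2 cm noise, and a
# ground-truth map with some invalid (zero) pixels.
gt = np.full((240, 320), 2.0)
gt[:10, :10] = 0.0                                   # missing ground truth
pred = 2.0 + np.random.default_rng(0).normal(0, 0.02, gt.shape)
print(f"MAE: {depth_mae_mm(pred, gt):.1f} mm")
```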