

Search for: All records

Creators/Authors contains: "Borst, Christoph W."


  1. Free, publicly-accessible full text available October 9, 2024
  2. Free, publicly-accessible full text available October 9, 2024
  3. Abstract

    A “virtual mirror” is a promising interface for virtual or augmented reality applications in which users benefit from seeing themselves within the environment, such as serious games for rehabilitation exercise or biological education. While there is extensive work analyzing pointing and providing assistance for first-person perspectives, mirrored third-person perspectives have received minimal consideration, limiting the quality of user interactions in current virtual mirror applications. We address this gap with two user studies aimed at understanding pointing motions with a mirror view and assessing visual cues that assist pointing. An initial two-phase preliminary study had users tune and test nine different visual aids. This was followed by in-depth testing of the best four of those visual aids compared with unaided pointing. Results give insight into both aided and unaided pointing with this mirrored third-person view and compare the visual cues. We note that users consistently point far in front of targets when first introduced to the pointing task, but that this initial unaided motion improves after practice with visual aids. We found that stereoscopy alone is not sufficient for enhancing accuracy, supporting the use of the other visual cues we developed. We show that users point differently when pointing behind versus in front of themselves. Finally, we suggest which visual aids are most promising for 3D pointing in virtual mirror interfaces.

     
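    As a rough illustration of the geometry underlying such a mirrored third-person view, the sketch below reflects a tracked 3D point (e.g., the user's hand or a pointing target) across a mirror plane. It is an assumption for illustration, not the paper's implementation; the function and example values are placeholders.

```python
# Minimal sketch (assumption, not from the paper): a "virtual mirror" view can be
# built by reflecting tracked 3D points across the mirror plane so users see a
# mirrored third-person image of themselves.
import numpy as np

def reflect_point(p, plane_point, plane_normal):
    """Reflect 3D point p across the plane through plane_point with normal plane_normal."""
    p = np.asarray(p, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)          # unit normal
    d = np.dot(p - plane_point, n)     # signed distance from p to the plane
    return p - 2.0 * d * n             # mirror image of p

# Example: a hand position reflected across a mirror lying in the z = 0 plane.
hand = np.array([0.3, 1.2, 0.8])
mirror_origin = np.array([0.0, 0.0, 0.0])
mirror_normal = np.array([0.0, 0.0, 1.0])
print(reflect_point(hand, mirror_origin, mirror_normal))  # -> [ 0.3  1.2 -0.8]
```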
  4. Emerging Virtual Reality (VR) displays with embedded eye trackers are becoming commodity hardware (e.g., the HTC Vive Pro Eye). Eye-tracking data can be utilized for several purposes, including gaze monitoring, privacy protection, and user authentication/identification. Identifying users is an integral part of many applications due to security and privacy concerns. In this paper, we explore methods and eye-tracking features that can be used to identify users. Prior VR researchers have explored machine learning on motion-based data (such as body motion, head tracking, eye tracking, and hand tracking data) to identify users. Such systems usually require an explicit VR task and many features to train the machine learning model for user identification. We propose a system to identify users utilizing minimal eye-gaze-based features without designing any identification-specific tasks. We collected gaze data from an educational VR application and tested our system with two machine learning (ML) models, random forest (RF) and k-nearest-neighbors (kNN), and two deep learning (DL) models, convolutional neural networks (CNN) and long short-term memory (LSTM). Our results show that ML and DL models could identify users with over 98% accuracy using only six simple eye-gaze features. We discuss our results, their implications for security and privacy, and the limitations of our work.
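    The sketch below is a minimal illustration of the kind of classical-ML identification pipeline the abstract in item 4 describes, assuming scikit-learn. The feature count matches the paper's "six simple eye-gaze features", but the feature semantics, the synthetic per-user data, and the parameter choices are placeholders, not taken from the paper.

```python
# Minimal sketch (assumptions throughout): identify users from a handful of
# eye-gaze features with a random forest and a k-nearest-neighbors classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for per-sample gaze features (e.g., gaze direction x/y,
# pupil diameters, fixation duration, saccade amplitude) for 10 hypothetical users.
n_users, samples_per_user, n_features = 10, 200, 6
X = np.vstack([
    rng.normal(loc=u, scale=1.0, size=(samples_per_user, n_features))
    for u in range(n_users)
])
y = np.repeat(np.arange(n_users), samples_per_user)   # user labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

for name, clf in [
    ("random forest", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("kNN", KNeighborsClassifier(n_neighbors=5)),
]:
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(f"{name}: accuracy = {accuracy_score(y_test, pred):.3f}")
```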