Title: Investigating a Combination of Input Modalities, Canvas Geometries, and Inking Triggers on On-Air Handwriting in Virtual Reality
Humans communicate by writing, often taking notes that assist thinking. With the growing popularity of collaborative Virtual Reality (VR) applications, it is imperative that we better understand the factors that affect writing in these virtual experiences. On-air writing is a popular writing paradigm in VR because it is simple to implement and requires no specialized hardware. A host of factors can affect the efficacy of this paradigm, and in this work we investigated the effects of a combination of them on users' on-air writing performance, aiming to understand the circumstances under which users can write in VR both effectively and efficiently. Specifically, we studied the effects of the following factors: (1) input modality: brush vs. near-field raycast vs. pointing gesture; (2) inking trigger method: haptic feedback vs. button-based trigger; and (3) canvas geometry: plane vs. hemisphere. To evaluate writing performance, we conducted an empirical evaluation with thirty participants, requiring them to write indicated words under different combinations of these factors. Dependent measures, including writing speed, accuracy rates, and perceived workload, were analyzed. Results revealed that the brush-based input modality produced the best writing performance, that haptic feedback was not always more effective than button-based triggering, and that there are trade-offs associated with the different canvas geometries. This work lays a foundation for future investigations that seek to understand and further improve the on-air writing experience in immersive virtual environments.
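The canvas geometry factor can be made concrete with a small sketch. Below is a minimal, hypothetical illustration (not the authors' implementation) of how an ink point might be computed for the near-field raycast modality on the two canvas geometries studied: a ray-plane intersection for the planar canvas and a ray-sphere intersection for the hemispherical one. All names, parameters, and orientation conventions here are illustrative assumptions.

```python
import numpy as np

def ink_point_on_plane(origin, direction, plane_point, plane_normal):
    """Ink point where a (unit-length) pointing ray hits a planar canvas, or None."""
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-8:                        # ray parallel to the canvas
        return None
    t = np.dot(plane_point - origin, plane_normal) / denom
    return origin + t * direction if t >= 0 else None

def ink_point_on_hemisphere(origin, direction, center, radius):
    """Ink point where a (unit-length) pointing ray hits a hemispherical canvas.

    Assumes the hemisphere is the half of the sphere in front of the user
    (+z relative to its center) with its concave side facing the user.
    """
    oc = origin - center
    b = np.dot(oc, direction)
    disc = b * b - (np.dot(oc, oc) - radius ** 2)
    if disc < 0:                                 # ray misses the sphere entirely
        return None
    t = -b + np.sqrt(disc)                       # far intersection: concave inner surface
    if t < 0:
        return None
    hit = origin + t * direction
    return hit if hit[2] >= center[2] else None  # keep only the front half
```

The brush modality, by contrast, would ink directly at the tracked controller or fingertip position whenever the inking trigger condition holds, so no intersection test would be needed.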
Award ID(s):
2007435
PAR ID:
10437598
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
ACM Transactions on Applied Perception
Volume:
19
Issue:
4
ISSN:
1544-3558
Page Range / eLocation ID:
1 to 19
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. As virtual reality (VR) technology sees wider use across fields, there is a growing need to understand how to design dynamic virtual environments effectively. It remains unclear how well users of a VR system can track moving targets in virtual space. In this work, we examined the influence of sensory modality and visual feedback on the accuracy of head-gaze tracking of moving targets. To this end, we conducted a between-subjects study in which participants received visual, auditory, or audiovisual targets. Each participant performed two blocks of experimental trials, with a calibration block in between. Results indicate that audiovisual targets promoted greater improvement in tracking performance than single-modality targets, and that audio-only targets are more difficult to track than targets of other modalities.
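Tracking accuracy in a study like this is typically quantified as the angular error between the head-gaze ray and the direction to the moving target; a minimal sketch of that computation follows, with all names assumed rather than taken from the paper.

```python
import numpy as np

def angular_tracking_error(gaze_dir, head_pos, target_pos):
    """Angle (degrees) between the head-gaze ray and the line of sight to the target."""
    to_target = target_pos - head_pos
    to_target = to_target / np.linalg.norm(to_target)
    gaze = gaze_dir / np.linalg.norm(gaze_dir)
    cos_angle = np.clip(np.dot(gaze, to_target), -1.0, 1.0)  # guard against rounding
    return np.degrees(np.arccos(cos_angle))
```

Averaging this error over the samples of a trial would yield a per-trial tracking score that can be compared across modality conditions.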
  2. We present VRHapticDrones, a system that uses quadcopters as levitating haptic feedback proxies. A touchable surface attached to the side of each quadcopter provides unintrusive, flexible, and programmable haptic feedback in virtual reality. Since users' sense of presence in virtual reality is a crucial factor in the overall user experience, our system simulates haptic feedback for virtual objects. Quadcopters are dynamically positioned to provide haptic feedback relative to the user's physical interaction space. In a first user study, we demonstrate that haptic feedback provided by VRHapticDrones significantly increases users' sense of presence compared to vibrotactile controllers and to interaction without additional haptic feedback. In a second user study, we explored the quality of the induced feedback with respect to the expected feel of different objects. Results show that VRHapticDrones is best suited to simulating objects that are expected to feel lightweight or to have yielding surfaces. With VRHapticDrones, we contribute a solution for unintrusive and flexible haptic feedback as well as insights for future VR haptic feedback systems.
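One plausible way to realize such a proxy, sketched below under assumed names and conventions (the paper's actual control scheme may differ), is to place the drone so that its touchable pad coincides with the virtual object's surface at the expected contact point.

```python
import numpy as np

def drone_target_pose(contact_point, surface_normal, pad_offset):
    """Pose the quadcopter so its touchable pad sits on the virtual surface.

    The drone body hovers behind the pad, inside the (non-physical) virtual
    object, so the user's hand meets the pad exactly where the surface appears.
    contact_point: expected hand-surface contact point in world coordinates.
    surface_normal: outward normal of the virtual surface, pointing at the user.
    pad_offset: distance from the pad to the drone's body center.
    """
    n = surface_normal / np.linalg.norm(surface_normal)
    body_center = contact_point - pad_offset * n  # body behind the pad
    yaw = np.arctan2(n[0], n[2])                  # turn the pad toward the user (y-up assumed)
    return body_center, yaw
```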
  3. Recent immersive mixed reality (MR) and virtual reality (VR) displays enable users to use their hands to interact with both veridical and virtual environments simultaneously. Therefore, it becomes important to understand the performance of human hand-reaching movement in MR. Studies have shown that different virtual environment visualization modalities can affect point-to-point reaching performance using a stylus, but it is not yet known if these effects translate to direct human-hand interactions in mixed reality. This paper focuses on evaluating human point-to-point motor performance in MR and VR for both finger-pointing and cup-placement tasks. Six performance measures relevant to haptic interface design were measured for both tasks under several different visualization conditions (“MR with indicator,” “MR without indicator,” and “VR”) to determine what factors contribute to hand-reaching performance. A key finding was evidence of a trade-off between reaching “motion confidence” measures (indicated by throughput, number of corrective movements, and peak velocity) and “accuracy” measures (indicated by end-point error and initial movement error). Specifically, we observed that participants tended to be more confident in the “MR without Indicator” condition for finger-pointing tasks. These results contribute critical knowledge to inform the design of VR/MR interfaces based on the application's user performance requirements.
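Among the measures named above, throughput is conventionally derived from Fitts' law; the sketch below shows the standard Shannon formulation as a worked example (the textbook definition, not necessarily the exact variant used in the paper).

```python
import math

def fitts_throughput(distance, width, movement_time):
    """Throughput (bits/s) from the Shannon form of Fitts' index of difficulty."""
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return index_of_difficulty / movement_time             # bits per second

# Example: a 0.30 m reach to a 0.05 m target completed in 0.8 s gives
# ID = log2(0.30 / 0.05 + 1) = log2(7) ≈ 2.81 bits, so ≈ 3.5 bits/s.
```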
  4.
    Technological advancements and increased access have prompted the adoption of head-mounted display based virtual reality (VR) for neuroscientific research, manual skill training, and neurological rehabilitation. Applications that focus on manual interaction within the virtual environment (VE), especially haptic-free VR, critically depend on virtual hand-object collision detection. Knowledge about how multisensory integration related to hand-object collisions affects perception-action dynamics and reach-to-grasp coordination is needed to enhance the immersiveness of interactive VR. Here, we explored whether and to what extent sensory substitution for haptic feedback of hand-object collision (visual, audio, or audiovisual) and collider size (size of spherical pointers representing the fingertips) influence reach-to-grasp kinematics. In Study 1, visual, auditory, or combined feedback were compared as sensory substitutes to indicate the successful grasp of a virtual object during reach-to-grasp actions. In Study 2, participants reached to grasp virtual objects using spherical colliders of different diameters to test if virtual collider size impacts reach-to-grasp. Our data indicate that collider size but not sensory feedback modality significantly affected the kinematics of grasping. Larger colliders led to a smaller size-normalized peak aperture. We discuss this finding in the context of a possible influence of spherical collider size on the perception of the virtual object’s size and hence effects on motor planning of reach-to-grasp. Critically, reach-to-grasp spatiotemporal coordination patterns were robust to manipulations of sensory feedback modality and spherical collider size, suggesting that the nervous system adjusted the reach (transport) component commensurately to the changes in the grasp (aperture) component. These results have important implications for research, commercial, industrial, and clinical applications of VR.
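The role of the spherical fingertip colliders can be illustrated with a minimal overlap test, assuming for simplicity a spherical virtual object (all names here are hypothetical, not the study's code):

```python
import numpy as np

def collider_touches(fingertip_pos, collider_radius, obj_center, obj_radius):
    """True when the spherical fingertip collider overlaps the (spherical) object."""
    return np.linalg.norm(fingertip_pos - obj_center) <= collider_radius + obj_radius

def grasp_detected(thumb_pos, index_pos, collider_radius, obj_center, obj_radius):
    """Register a grasp when both the thumb and index colliders contact the object."""
    return (collider_touches(thumb_pos, collider_radius, obj_center, obj_radius)
            and collider_touches(index_pos, collider_radius, obj_center, obj_radius))
```

Note that enlarging collider_radius makes contact register at a wider fingertip separation, which is consistent with the smaller size-normalized peak apertures reported for larger colliders.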
  5. Domain users (DUs) with a knowledge base in specialized fields are frequently excluded from authoring virtual reality (VR)-based applications in their fields, largely because authoring these applications requires VR programming expertise. To address this concern, we developed VRFromX, a system workflow designed to make virtual content creation accessible to DUs irrespective of their programming skills and experience. VRFromX provides an in situ process of content creation in VR that (a) allows users to select regions of interest in scanned point clouds or sketch in mid-air using a brush tool to retrieve virtual models and (b) attach behavioral properties to those objects. Using a welding use case, we performed a usability evaluation of VRFromX with 20 DUs, of whom 12 were novices in VR programming. Study results indicated positive user ratings for the system features, with no significant differences between users with and without VR programming expertise. Based on the qualitative feedback, we also implemented two other use cases to demonstrate potential applications. We envision that the solution can facilitate the adoption of immersive technology to create meaningful virtual environments.
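The brush-based region selection that VRFromX describes can be approximated by accumulating the point-cloud indices swept by a spherical brush; a minimal sketch under assumed names follows (not the system's actual implementation):

```python
import numpy as np

def brush_select(points, brush_center, brush_radius, selected=None):
    """Add indices of point-cloud points inside the spherical brush to the selection.

    Call once per frame while the brush is active to accumulate a stroke.
    points: (N, 3) array of scanned point-cloud coordinates.
    """
    selected = set() if selected is None else selected
    dists = np.linalg.norm(points - brush_center, axis=1)  # distance of every point to the brush
    selected.update(np.nonzero(dists <= brush_radius)[0].tolist())
    return selected
```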