
Title: A Comparison of Table, Wall, and Midair Mixed Reality Keyboard Locations
Typing on a midair keyboard in mixed reality can be difficult due to the lack of tactile feedback when virtual keys are tapped. Locating the keyboard over a real-world surface offers a potential way to mitigate this issue. We measured user performance and preference when a virtual keyboard was located on a table, on a wall, or in midair. Despite the additional tactile feedback offered by the table and wall locations, we found the midair location had a significantly higher entry rate with a similar error rate compared to the other locations. Participants also preferred the midair location over the other locations.
Award ID(s):
1909089
PAR ID:
10383460
Journal Name:
MobileHCI 2022 Workshop on Shaping Text Entry Research for 2030
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We investigate typing on a QWERTY keyboard rendered in virtual reality. Our system tracks users’ hands in the virtual environment via a Leap Motion sensor mounted on the front of a head-mounted display. This allows typing on an auto-correcting midair keyboard without the need for auxiliary input devices such as gloves or handheld controllers. It supports input via the index fingers of one or both hands. We compare two keyboard designs: a normal QWERTY layout and a split layout. We found users typed at around 16 words per minute using one or both index fingers on the normal layout, and about 15 words per minute using both index fingers on the split layout. Users had a corrected error rate below 2% in all cases. To explore midair typing with limited or no visual feedback, we also had users type on an invisible keyboard. Users typed on this keyboard at 11 words per minute with an error rate of 3.3%, despite the keyboard providing almost no visual feedback.
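The entry rates and error rates quoted throughout these abstracts follow the standard text-entry conventions: a "word" is five characters including spaces, and character error rate is the edit distance from the target phrase divided by its length. A minimal sketch of those two metrics (illustrative only; not code from any of the papers):

```python
# Standard text-entry metrics, sketched for illustration.
def words_per_minute(transcribed: str, seconds: float) -> float:
    # One "word" is conventionally five characters, spaces included.
    return (len(transcribed) / 5.0) / (seconds / 60.0)

def character_error_rate(target: str, transcribed: str) -> float:
    # Levenshtein edit distance over the target length.
    m, n = len(target), len(transcribed)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if target[i - 1] == transcribed[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution
        prev = cur
    return prev[n] / max(m, 1)
```

For example, transcribing a 19-character phrase in 60 seconds gives 3.8 words per minute, and one substituted character in a 5-character target gives a 20% character error rate.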
  2. Accuracy and speed are pivotal when it comes to typing. Mixed reality headsets offer users the groundbreaking ability to project virtual objects into the physical world. However, when typing on a virtual keyboard in mixed reality, users lose the tactile feedback that comes with a physical keyboard, making typing much more difficult. Our goal was to explore users’ ability to type with all ten fingers on a virtual keyboard in mixed reality. We measured user performance when typing with index fingers versus all ten fingers. We also examined using eye-tracking to disable all keys the user was not looking at, and the effect this had on speed and accuracy. Our findings so far indicate that, while eye-tracking seems to help accuracy, it is not enough to bring ten-finger typing up to the same level of performance as index-finger typing.
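The gaze-gating idea in the abstract above can be sketched as a simple filter: only keys near the gaze point accept taps, so stray finger contacts elsewhere on the keyboard are ignored. The key layout, coordinates, and radius below are hypothetical assumptions for illustration, not the study's actual parameters:

```python
import math

# Hypothetical key layout: key name -> (x, y) center in keyboard units.
KEY_CENTERS = {"q": (0.0, 0.0), "w": (1.0, 0.0), "e": (2.0, 0.0),
               "a": (0.25, 1.0), "s": (1.25, 1.0)}

def active_keys(gaze_xy, radius=1.5):
    """Return the set of keys within `radius` of the gaze point.
    All other keys are treated as disabled."""
    gx, gy = gaze_xy
    return {k for k, (x, y) in KEY_CENTERS.items()
            if math.hypot(x - gx, y - gy) <= radius}

def accept_tap(key, gaze_xy):
    # A tap registers only when the tapped key is gaze-enabled.
    return key in active_keys(gaze_xy)
```

Under this sketch, a tap on "q" while gazing at "q" registers, while a tap on a key far from the gaze point is discarded as accidental.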
  3. In this work, we investigate the influence that audio and visual feedback have on a manipulation task in virtual reality (VR). Without the tactile feedback of a controller, grasping virtual objects using one’s hands can result in slower interactions because it may be unclear to the user that a grasp has occurred. Providing alternative feedback, such as visual or audio cues, may lead to faster and more precise interactions, but might also affect user preference and perceived ownership of the virtual hands. In this study, we test four feedback conditions for virtual grasping. Three of the conditions provide feedback for when a grasp or release occurs, either visual, audio, or both, and one provides no feedback for these occurrences. We analyze the effect each feedback condition has on interaction performance, measure their effect on the perceived ownership of the virtual hands, and gauge user preference. In an experiment, users perform a pick-and-place task with each feedback condition. We found that audio feedback for grasping is preferred over visual feedback even though it seems to decrease grasping performance, and found that there were little to no differences in ownership between our conditions.
  4. Relocation of haptic feedback from the fingertips to the wrist has been considered as a way to enable haptic interaction with mixed reality virtual environments while leaving the fingers free for other tasks. We present a pair of wrist-worn tactile haptic devices and a virtual environment to study how various mappings between fingers and tactors affect task performance. The haptic feedback rendered to the wrist reflects the interaction forces occurring between a virtual object and virtual avatars controlled by the index finger and thumb. We performed a user study comparing four different finger-to-tactor haptic feedback mappings and one no-feedback condition as a control. We evaluated users' ability to perform a simple pick-and-place task via the metrics of task completion time, path length of the fingers and virtual cube, and magnitudes of normal and shear forces at the fingertips. We found that multiple mappings were effective, and there was a greater impact when visual cues were limited. We discuss the limitations of our approach and describe next steps toward multi-degree-of-freedom haptic rendering for wrist-worn devices to improve task performance in virtual environments. 
  5. In many situations, it may be impractical or impossible to enter text by selecting precise locations on a physical or touchscreen keyboard. We present an ambiguous keyboard with four character groups that has potential applications for eyes-free text entry, as well as text entry using a single switch or a brain-computer interface. We develop a procedure for optimizing these character groupings based on a disambiguation algorithm that leverages a long-span language model. We produce both alphabetically-constrained and unconstrained character groups in an offline optimization experiment and compare them in a longitudinal user study. Our results did not show a significant difference between the constrained and unconstrained character groups after four hours of practice. As expected, participants had significantly more errors with the unconstrained groups in the first session, suggesting a higher barrier to learning the technique. We therefore recommend the alphabetically-constrained character groups, where participants were able to achieve an average entry rate of 12.0 words per minute with a 2.03% character error rate using a single hand and with no visual feedback.
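The ambiguous-keyboard approach described above works like a four-key T9: each letter maps to one of four groups, a word becomes a sequence of group indices, and a language model picks the most probable word among those sharing that sequence. A minimal sketch, with an illustrative alphabetically-constrained grouping and a unigram lexicon standing in for the paper's optimized groups and long-span language model:

```python
# Illustrative four-group ambiguous keyboard (not the paper's groupings).
GROUPS = ["abcdefg", "hijklm", "nopqrst", "uvwxyz"]
CHAR_TO_GROUP = {c: i for i, g in enumerate(GROUPS) for c in g}

def encode(word):
    # A word's ambiguous code is its sequence of group indices.
    return tuple(CHAR_TO_GROUP[c] for c in word)

def disambiguate(code, lexicon_with_probs):
    """Return the most probable lexicon word whose group sequence
    matches `code`. A real system would score candidates with a
    long-span language model rather than unigram probabilities."""
    matches = [(p, w) for w, p in lexicon_with_probs.items()
               if encode(w) == code]
    return max(matches)[1] if matches else None
```

For example, "hat" and "jar" collide on the code (1, 0, 2) under this grouping, and the disambiguator resolves the collision in favor of the higher-probability word.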