Title: Accelerating Text Communication via Abbreviated Sentence Input
Typing every character in a text message may require more time or effort than strictly necessary. Skipping spaces or other characters may speed input and reduce a user's physical effort, which can be particularly important for people with motor impairments. In a large crowdsourced study, we found workers frequently abbreviated text by omitting mid-word vowels. We designed a recognizer optimized for noisy input in which users often omit spaces and mid-word vowels, and we show that using neural language models to select training text and rescore sentences improved accuracy. On noisy touchscreen data collected from hundreds of users, we found accurate abbreviated input was possible even when a third of the characters were omitted. Finally, in a study where users had to dwell on each key for one second, abbreviated sentence input was competitive with a conventional keyboard with word predictions: after practice, users wrote abbreviated sentences at 9.6 words-per-minute versus 9.9 words-per-minute for word input.
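The abbreviation style described above can be sketched in a few lines. The rule set below is an illustrative assumption, not the paper's exact scheme: keep each word's first letter, drop vowels from the remaining positions, and join words without spaces. The paper's recognizer then has to recover the original sentence from such noisy, partially abbreviated input.

```python
def abbreviate(sentence: str) -> str:
    """Abbreviate a sentence by dropping spaces and mid-word vowels.

    Illustrative rule (an assumption): keep the first letter of every
    word, remove vowels elsewhere, and concatenate without spaces.
    """
    vowels = set("aeiou")
    result = []
    for word in sentence.lower().split():
        # Keep the word-initial character; filter vowels from the rest.
        result.append(word[0] + "".join(c for c in word[1:] if c not in vowels))
    return "".join(result)

print(abbreviate("the quick brown fox"))  # -> thqckbrwnfx
```

Recovering "the quick brown fox" from "thqckbrwnfx" is the hard inverse problem that the recognizer and language-model rescoring address.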
Award ID(s):
1750193
PAR ID:
10283249
Journal Name:
Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
Page Range / eLocation ID:
6574 to 6588
Sponsoring Org:
National Science Foundation
More Like This
  1.
    We investigate typing on a QWERTY keyboard rendered in virtual reality. Our system tracks users’ hands in the virtual environment via a Leap Motion mounted on the front of a head-mounted display. This allows typing on an auto-correcting midair keyboard without the need for auxiliary input devices such as gloves or handheld controllers. It supports input via the index fingers of one or both hands. We compare two keyboard designs: a normal QWERTY layout and a split layout. We found users typed at around 16 words-per-minute using one or both index fingers on the normal layout, and about 15 words-per-minute using both index fingers on the split layout. Users had a corrected error rate below 2% in all cases. To explore midair typing with limited or no visual feedback, we had users type on an invisible keyboard. Users typed on this keyboard at 11 words-per-minute with an error rate of 3.3% despite the keyboard providing almost no visual feedback.
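    The words-per-minute figures reported in these abstracts are conventionally computed with the standard text-entry definition: a "word" is five characters, and the first character is excluded because timing starts at the first keypress. A minimal sketch of that convention:

    ```python
    def wpm(transcribed: str, seconds: float) -> float:
        """Entry rate in words-per-minute under the standard convention:
        one word = 5 characters; the first character is not counted
        because timing begins at the first keypress."""
        return (len(transcribed) - 1) / 5 * (60 / seconds)
    ```

    For example, a 51-character sentence entered in 60 seconds gives 10.0 words-per-minute.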
  2. Theories of reading posit that decisions about “where” and “when” to move the eyes are driven by visual and linguistic factors, extracted from the perceptual span and word identification span, respectively. We tested this hypothesized dissociation by masking, outside of a visible window, either the spaces between the words (to assess the perceptual span, Experiment 1) or the letters within the words (to assess the word identification span, Experiment 2). We also investigated whether deaf readers’ previously reported larger reading span was specifically linked to one of these spans. We analyzed reading rate to test overall reading efficiency, as well as average saccade length to test “where” decisions and average fixation duration to test “when” decisions. Both hearing and deaf readers’ perceptual spans extended between 10 and 14 characters, and their word identification spans extended to eight characters to the right of fixation. Despite similar sized rightward spans, deaf readers read more efficiently overall and showed a larger increase in reading rate when leftward text was available, suggesting they attend more to leftward information. Neither rightward span was specifically related to where or when decisions for either group. Our results challenge the assumed dissociation between type of reading span and type of saccade decision and indicate that reading efficiency requires access to both perceptual and linguistic information in the parafovea. 
  3. Text input on mobile devices without physical keys can be challenging for people who are blind or low-vision. We interview 12 blind adults about their experiences with current mobile text input to provide insights into what sorts of interface improvements may be the most beneficial. We identify three primary themes that were experiences or opinions shared by participants: the poor accuracy of dictation, difficulty entering text in noisy environments, and difficulty correcting errors in entered text. We also discuss an experimental non-visual text input method with each participant to solicit opinions on the method and probe their willingness to learn a novel method. We find that the largest concern was the time required to learn a new technique. We find that the majority of our participants do not use word predictions while typing but instead find it faster to finish typing words manually. Finally, we distill five future directions for non-visual text input: improved dictation, less reliance on or improved audio feedback, improved error correction, reducing the barrier to entry for new methods, and more fluid non-visual word predictions. 
  4. Word predictions in a text entry interface can help accelerate a user’s input. This may be especially true for users who have a slow input rate due to some form of motor impairment. The choice of how many word predictions to offer in a text entry interface is an important design decision. In this work, we offered different numbers of word predictions in a keyboard where able-bodied users had to dwell on a key for one second to click it. We found participants’ text entry rate did not improve as the number of predictions increased.
  5. Text entry is a common and important part of many intelligent user interfaces. However, inferring a user’s intended text from their input can be challenging: motor actions can be imprecise, input sensors can be noisy, and situations or disabilities can hamper a user’s perception of interface feedback. Numerous prior studies have explored input on touchscreen phones, smartwatches, in midair, and on desktop keyboards. Based on these prior studies, we are releasing a large and diverse data set of noisy typing input consisting of thousands of sentences written by hundreds of users on QWERTY-layout keyboards. This paper describes the various subsets contained in this new research dataset as well as the data format. 