Title: Typing Slowly but Screen-Free: Exploring Navigation over Entirely Auditory Keyboards
Accessible onscreen keyboards require people who are blind to keep their phone out at all times to search for visual affordances they cannot see. Is it possible to re-imagine text entry without a reference screen? To explore this question, we introduce screenless keyboards as aural flows (keyflows): rapid auditory streams of Text-To-Speech (TTS) characters controllable by hand gestures. In a study, 20 screen-reader users experienced keyflows to perform initial text entry. Typing took considerably longer than with current screen-based keyboards, but most participants preferred screen-free text entry to current methods, especially for short messages on the go. We model the navigation strategies that participants enacted to aurally browse entirely auditory keyboards and discuss their limitations and benefits for daily access. Our work points to trade-offs in user performance and user experience for situations in which blind users may trade typing speed for the benefit of being untethered from the screen.
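The keyflow concept — an auditory stream of spoken characters that the user steers with gestures — can be illustrated with a minimal event loop. The sketch below is our reading of the idea, not the authors' implementation; speak() is a stand-in for a real TTS call, and the gesture vocabulary ('select', 'stop') is an assumption.

```python
# Minimal sketch of a "keyflow": a rapid auditory stream of characters
# that a user can select from with hand gestures, with no screen involved.
# Illustration only; speak() stands in for a real TTS engine call.
import itertools
import time

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def speak(char: str) -> None:
    """Stand-in for a Text-To-Speech call announcing one character."""
    print(f"[TTS] {char}")

def run_keyflow(get_gesture, rate_hz: float = 4.0) -> str:
    """Stream characters aurally until a 'stop' gesture; return the text.

    get_gesture() polls the gesture sensor and returns 'select', 'stop',
    or None (hypothetical gesture vocabulary).
    """
    typed = []
    for char in itertools.cycle(ALPHABET):
        speak(char)                    # announce the current character
        time.sleep(1.0 / rate_hz)      # flow speed: characters per second
        gesture = get_gesture()
        if gesture == "select":        # commit the character just heard
            typed.append(char)
        elif gesture == "stop":        # end of message
            break
    return "".join(typed)

# Demo with a scripted gesture sequence standing in for a real sensor.
gestures = iter([None, "select", None, "select", "stop"])
print(run_keyflow(lambda: next(gestures), rate_hz=50))  # -> "bd"
```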
Award ID(s): 1909845
PAR ID: 10276808
Journal Name: ASSETS '19: The 21st International ACM SIGACCESS Conference on Computers and Accessibility
Page Range / eLocation ID: 427 to 439
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Bottoni, Paolo; Panizzi, Emanuele (Ed.)
    Many questions regarding single-handed text entry on modern smartphones (in particular, large-screen smartphones) remain under-explored, such as: (i) are the existing prevailing single-handed keyboards a good fit for large-screen smartphone users? and (ii) will individual customization improve single-handed keyboard performance? In this paper we study single-handed typing behaviors on several representative keyboards on large-screen mobile devices. We found that (i) the user-adaptable-shape curved keyboard performs best among all the studied keyboards; (ii) users' familiarity with the Qwerty layout plays a significant role at the beginning, but after several sessions of training, the user-adaptable curved keyboard has the best learning curve and performs best; and (iii) statistical decoding algorithms based on spatial and language models can generally handle the input noise from single-handed typing well.
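    The spatial-plus-language-model decoding the abstract alludes to is a standard technique: score each noisy tap against key centers with a Gaussian spatial model, and let a language model break ties. Below is a toy sketch under our own assumptions (key positions, noise scale, and the tiny lexicon are all made up for illustration).

```python
# Toy sketch of spatial + language model decoding for noisy taps.
# Each tap is scored against key centers (Gaussian spatial model);
# a unigram word prior acts as the language model.
import math

KEY_CENTERS = {"q": (0, 0), "w": (1, 0), "e": (2, 0)}  # toy layout fragment
LEXICON = {"we": 0.6, "ew": 0.1, "qe": 0.05}           # toy word priors

def spatial_logprob(tap, key, sigma=0.5):
    """Log-likelihood of a tap (x, y) given the intended key (Gaussian)."""
    kx, ky = KEY_CENTERS[key]
    d2 = (tap[0] - kx) ** 2 + (tap[1] - ky) ** 2
    return -d2 / (2 * sigma ** 2)

def decode(taps):
    """Pick the lexicon word maximizing spatial likelihood x word prior."""
    best_word, best_score = None, -math.inf
    for word, prior in LEXICON.items():
        if len(word) != len(taps):
            continue
        score = math.log(prior) + sum(
            spatial_logprob(tap, ch) for tap, ch in zip(taps, word)
        )
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# Two noisy taps near 'w' then 'e' decode to "we" rather than "ew".
print(decode([(0.9, 0.2), (2.2, -0.1)]))  # -> "we"
```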
  2. Text entry is a common and important part of many intelligent user interfaces. However, inferring a user's intended text from their input can be challenging: motor actions can be imprecise, input sensors can be noisy, and situations or disabilities can hamper a user's perception of interface feedback. Numerous prior studies have explored input on touchscreen phones, smartwatches, in midair, and on desktop keyboards. Based on these prior studies, we are releasing a large and diverse dataset of noisy typing input consisting of thousands of sentences written by hundreds of users on QWERTY-layout keyboards. This paper describes the various subsets contained in this new research dataset as well as the data format.
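    Noisy typing data of this kind is commonly scored by aligning each noisy input against its reference sentence. A minimal character error rate computation — generic tooling on our part, not this dataset's format or scripts — looks like:

```python
# Generic illustration of scoring noisy typing input against a reference:
# Levenshtein edit distance normalized by reference length (character
# error rate). Not tied to any particular dataset format.
def levenshtein(a: str, b: str) -> int:
    """Edit distance between strings a and b (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def char_error_rate(noisy: str, reference: str) -> float:
    return levenshtein(noisy, reference) / max(len(reference), 1)

print(char_error_rate("teh quick bown fox", "the quick brown fox"))
```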
  3. Texting relies on screen-centric prompts designed for sighted users, which still pose significant barriers to people who are blind and visually impaired (BVI). Can we re-imagine texting untethered from a visual display? In an interview study, 20 BVI adults shared the situations surrounding their texting practices, recurrent topics of conversation, and challenges. Informed by these insights, we introduce TextFlow: a mixed-initiative, context-aware system that generates entirely auditory message options relevant to the user's location, activity, and time of day. Users can browse and select suggested aural messages using finger-taps supported by an off-the-shelf finger-worn device, without having to hold or attend to a mobile screen. In an evaluative study, 10 BVI participants successfully interacted with TextFlow to browse and send messages in screen-free mode. The users' experiential responses shed light on the importance of bypassing the phone and accessing rapidly controllable messages at their fingertips while preserving privacy and accuracy relative to speech or screen-based input. We discuss how non-visual access to proactive, contextual messaging can support blind people in a variety of daily scenarios.
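    The mixed-initiative idea as we read it: rank candidate messages by how well they match the current context, then let the user step through the top options with taps. The sketch below is our illustration, not TextFlow's implementation; the context fields, tag matching, and message templates are all assumptions.

```python
# Sketch of context-aware message suggestion: score canned messages by
# overlap between their tags and the current context (location, activity,
# time of day). A finger-tap could cycle through the spoken results.
from dataclasses import dataclass

@dataclass
class Context:
    location: str   # e.g. "bus_stop"
    activity: str   # e.g. "commuting"
    hour: int       # 0-23

# (tags, message) pairs standing in for generated aural options
TEMPLATES = [
    ({"bus_stop", "commuting"}, "On the bus, be there in 20 minutes."),
    ({"home", "evening"}, "Heading to bed, talk tomorrow."),
    ({"commuting", "morning"}, "Running a bit late this morning."),
]

def context_tags(ctx: Context) -> set:
    period = "morning" if ctx.hour < 12 else "evening"
    return {ctx.location, ctx.activity, period}

def suggest(ctx: Context, k: int = 2):
    """Return the k messages whose tags overlap the context most."""
    tags = context_tags(ctx)
    ranked = sorted(TEMPLATES, key=lambda t: len(t[0] & tags), reverse=True)
    return [msg for _, msg in ranked[:k]]

print(suggest(Context("bus_stop", "commuting", 8)))
```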
  4. We present a method for mining the web for text entered on mobile devices. Using searching, crawling, and parsing techniques, we locate text that can be reliably identified as originating from 300 mobile devices. This includes 341,000 sentences written on iPhones alone. Our data enables a richer understanding of how users type "in the wild" on their mobile devices. We compare the text and error characteristics of different device types, such as touchscreen phones, phones with physical keyboards, and tablet computers. Using our mined data, we train language models and evaluate them on mobile test data. A mixture model trained on our mined data plus Twitter, blog, and forum data predicts mobile text better than baseline models. Using phone and smartwatch typing data from 135 users, we demonstrate that our models improve the recognition accuracy and word predictions of a state-of-the-art touchscreen virtual keyboard decoder. Finally, we make our language models and mined dataset available to other researchers.
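    A mixture model of this kind is typically a linear interpolation of per-source language models with weights summing to 1. Below is a toy unigram version under our own assumptions (the paper's models are presumably n-gram models; the probabilities and weights here are made up).

```python
# Toy linear-interpolation mixture of per-source language models:
# P(w) = sum_s weight[s] * P_s(w). Unigram probabilities and weights
# are illustrative assumptions, not the paper's trained models.
SOURCES = {
    "mined":   {"omw": 0.02,  "meeting": 0.01},
    "twitter": {"omw": 0.03,  "meeting": 0.005},
    "blog":    {"omw": 0.001, "meeting": 0.02},
}
WEIGHTS = {"mined": 0.5, "twitter": 0.3, "blog": 0.2}  # sum to 1

def mixture_prob(word: str) -> float:
    """P(word) under the interpolated mixture of source models."""
    return sum(WEIGHTS[s] * model.get(word, 1e-6)
               for s, model in SOURCES.items())

# Mobile-flavored text ("omw") scores higher under the mixture than under
# the blog model alone, suggesting why the mixture fits mobile test data.
print(mixture_prob("omw"), SOURCES["blog"]["omw"])
```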