This content will become publicly available on June 25, 2026

Title: Perceptions of Blind Adults on Non-Visual Mobile Text Entry
Text input on mobile devices without physical keys can be challenging for people who are blind or have low vision. We interview 12 blind adults about their experiences with current mobile text input to provide insight into which interface improvements may be most beneficial. We identify three primary themes among the experiences and opinions participants shared: the poor accuracy of dictation, difficulty entering text in noisy environments, and difficulty correcting errors in entered text. We also discuss an experimental non-visual text input method with each participant to solicit opinions on the method and to probe their willingness to learn a novel method. We find that the largest concern was the time required to learn a new technique. We find that the majority of our participants do not use word predictions while typing, instead finding it faster to finish typing words manually. Finally, we distill five future directions for non-visual text input: improved dictation, less reliance on or improved audio feedback, improved error correction, a lower barrier to entry for new methods, and more fluid non-visual word predictions.
Award ID(s):
1909248
PAR ID:
10628755
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
ISBN:
9798400714023
Page Range / eLocation ID:
146 to 153
Format(s):
Medium: X
Location:
Corfu Island, Greece
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Typing every character in a text message may require more time or effort than strictly necessary. Skipping spaces or other characters may speed input and reduce a user's physical input effort. This can be particularly important for people with motor impairments. In a large crowdsourced study, we found workers frequently abbreviated text by omitting mid-word vowels (a toy sketch of this scheme appears after this list). We designed a recognizer optimized for noisy input where users often omit spaces and mid-word vowels. We show that using neural language models to select training text and to rescore sentences improves accuracy. On noisy touchscreen data collected from hundreds of users, we found accurate abbreviated input was possible even if a third of characters were omitted. Finally, in a study where users had to dwell for a second on each key, abbreviated sentence input was competitive with a conventional keyboard with word predictions. After practice, users wrote abbreviated sentences at 9.6 words per minute versus word input at 9.9 words per minute.
  2. Certain colors are strongly associated with certain adjectives (e.g., red is hot, blue is cold). Some of these associations are grounded in visual experiences, like seeing hot embers glow red. Surprisingly, many congenitally blind people show similar color associations, despite lacking all visual experience of color. Presumably, they learn these associations via language. Can we detect these associations in the statistics of language? And if so, what form do they take? We apply a projection method to word embeddings trained on corpora of spoken and written text to identify color-adjective associations as they are represented in language (a sketch of one such projection appears after this list). We show that these projections are predictive of color-adjective ratings collected from blind and sighted people, and that the effect size depends on the training corpus. Finally, we examine how color-adjective associations might be represented in language by training word embeddings on corpora from which various sources of color-semantic information are removed.
  3.
    Accessible onscreen keyboards require people who are blind to keep their phone out at all times to search for visual affordances they cannot see. Is it possible to re-imagine text entry without a reference screen? To explore this question, we introduce screenless keyboards as aural flows (keyflows): rapid auditory streams of Text-To-Speech (TTS) characters controllable by hand gestures. In a study, 20 screen-reader users experienced keyflows to perform initial text entry. Typing took far longer than with current screen-based keyboards, but most participants preferred screen-free text entry to current methods, especially for short messages on the go. We model the navigation strategies participants enacted to aurally browse entirely auditory keyboards and discuss their limitations and benefits for daily access. Our work points to trade-offs in user performance and user experience for situations in which blind users may trade typing speed for the benefit of being untethered from the screen.
  4. We present a method for mining the web for text entered on mobile devices. Using searching, crawling, and parsing techniques, we locate text that can be reliably identified as originating from 300 mobile devices. This includes 341,000 sentences written on iPhones alone. Our data enables a richer understanding of how users type "in the wild" on their mobile devices. We compare the text and error characteristics of different device types, such as touchscreen phones, phones with physical keyboards, and tablet computers. Using our mined data, we train language models and evaluate them on mobile test data. A mixture model trained on our mined data plus Twitter, blog, and forum data predicts mobile text better than baseline models (a toy interpolation sketch appears after this list). Using phone and smartwatch typing data from 135 users, we demonstrate that our models improve the recognition accuracy and word predictions of a state-of-the-art touchscreen virtual keyboard decoder. Finally, we make our language models and mined dataset available to other researchers.
  5. We explore a method for presenting word suggestions for non-visual text input using simultaneous voices. We conduct two perceptual studies and investigate the impact of different presentations of voices on a user's ability to detect which voice, if any, spoke their desired word. Our word sets simulated the word suggestions of a predictive keyboard during real-world text input. We find that when voices are simultaneous, user accuracy decreases significantly with each added word suggestion. However, adding a slight 0.15 s delay between the start of each subsequent word allows two simultaneous words to be presented with no significant decrease in accuracy compared to presenting two words sequentially (84% simultaneous versus 86% sequential). This allows two word suggestions to be presented to the user 32% faster than sequential playback without decreasing accuracy (the timing arithmetic is sketched after this list).
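The short sketches below illustrate techniques named in the abstracts above. They are toy illustrations under stated assumptions, not the papers' implementations.

For item 1, a minimal sketch of the abbreviation scheme, assuming the simplest rule: keep each word's first character, drop its remaining vowels, and drop spaces. The paper's recognizer additionally handles far noisier, less consistent input.

```python
VOWELS = set("aeiou")

def abbreviate_word(word: str) -> str:
    """Keep the first character; drop any later vowels."""
    return word[:1] + "".join(c for c in word[1:] if c.lower() not in VOWELS)

def abbreviate_sentence(sentence: str) -> str:
    """Abbreviate each word and omit the spaces between words."""
    return "".join(abbreviate_word(w) for w in sentence.split())

print(abbreviate_sentence("the quick brown fox"))  # thqckbrwnfx
```

The recognizer's job is then the inverse: search for the most probable original sentence given such an abbreviated string, which is where the neural language models come in.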
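For item 2, a sketch of one plausible projection: score a color-adjective pair by the cosine between the adjective's embedding and an axis running from one color word to another. The random vectors and the axis construction are assumptions standing in for trained embeddings and the paper's exact method.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Random stand-ins for trained 50-dimensional word embeddings.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in ["red", "blue", "hot", "cold"]}

# Project "hot" onto the red-minus-blue axis: a positive score suggests
# an association with "red", a negative score with "blue".
axis = emb["red"] - emb["blue"]
print(cosine(emb["hot"], axis))
```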
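For item 4, a toy sketch of linear interpolation, a common way to combine component language models into the kind of mixture described. The component models and weights here are hypothetical placeholders.

```python
# P_mix(w | h) = sum_i lambda_i * P_i(w | h), with the lambdas summing to 1.
def mixture_prob(word, history, models, weights):
    """Interpolate the component models' conditional probabilities."""
    return sum(lam * m(word, history) for m, lam in zip(models, weights))

# Hypothetical components: callables returning P(word | history).
mined = lambda w, h: 0.012    # model trained on mined mobile text
twitter = lambda w, h: 0.008  # model trained on Twitter text
print(mixture_prob("ok", (), [mined, twitter], [0.7, 0.3]))  # ~0.0108
```

In practice the interpolation weights would be tuned on held-out mobile text rather than fixed by hand.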
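For item 5, the timing arithmetic behind the staggered playback. Only the 0.15 s stagger comes from the abstract; the 0.45 s clip length is a hypothetical stand-in for the TTS duration of one word.

```python
STAGGER = 0.15  # seconds between the start of each subsequent word

def start_times(n_words: int) -> list[float]:
    """Start the i-th simultaneous word i * STAGGER seconds in."""
    return [i * STAGGER for i in range(n_words)]

clip = 0.45                 # hypothetical TTS duration of one word (s)
sequential = 2 * clip       # 0.90 s: two words played back to back
staggered = STAGGER + clip  # 0.60 s: second word starts 0.15 s in
print(start_times(2), sequential, staggered)  # [0.0, 0.15] 0.9 0.6
```

With these stand-in numbers the staggered presentation finishes about a third sooner, consistent with the roughly 32% speedup the study reports.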