Title: Typing Slowly but Screen-Free: Exploring Navigation over Entirely Auditory Keyboards
Accessible onscreen keyboards require people who are blind to keep their phone out at all times to search for visual affordances they cannot see. Is it possible to re-imagine text entry without a reference screen? To explore this question, we introduce screenless keyboards as aural flows (keyflows): rapid auditory streams of Text-To-Speech (TTS) characters controllable by hand gestures. In a study, 20 screen-reader users experienced keyflows to perform initial text entry. Typing took inordinately longer than with current screen-based keyboards, but most participants preferred screen-free text entry to current methods, especially for short messages on-the-go. We model navigation strategies that participants enacted to aurally browse entirely auditory keyboards and discuss their limitations and benefits for daily access. Our work points to trade-offs in user performance and user experience for situations where blind users may trade typing speed for the benefit of being untethered from the screen.
Zhang, Kunpeng; Deng, Zhigang (AVI 2022: Proceedings of the 2022 International Conference on Advanced Visual Interfaces)
Bottoni, Paolo; Panizzi, Emanuele (Eds.)
Many questions regarding single-handed text entry on modern smartphones (in particular, large-screen smartphones) remain under-explored, such as: (i) will the existing prevailing single-handed keyboards suit large-screen smartphone users? and (ii) will individual customization improve single-handed keyboard performance? In this paper we study single-handed typing behaviors on several representative keyboards on large-screen mobile devices. We found that (i) the user-adaptable-shape curved keyboard performs best among all the studied keyboards; (ii) users’ familiarity with the Qwerty layout plays a significant role at the beginning, but after several sessions of training, the user-adaptable curved keyboard has the best learning curve and performs best; and (iii) generally, statistical decoding algorithms based on spatial and language models can handle the input noise from single-handed typing well.
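The statistical decoding the last finding refers to can be illustrated with a small noisy-channel sketch. This is not the studied keyboards' actual decoder; the vocabulary, unigram probabilities, key coordinates, and the `sigma` noise parameter below are all hypothetical. Each candidate word is scored by a Gaussian spatial likelihood of the touch points plus a language-model prior:

```python
import math

# Toy vocabulary with hypothetical unigram probabilities (language model).
LANGUAGE_MODEL = {"hello": 0.6, "help": 0.3, "held": 0.1}

# Hypothetical key-center coordinates on a normalized 1x1 keyboard.
KEY_CENTERS = {
    "h": (0.55, 0.5), "e": (0.25, 0.3), "l": (0.85, 0.5),
    "o": (0.8, 0.3), "p": (0.95, 0.3), "d": (0.25, 0.5),
}

def spatial_log_likelihood(word, touches, sigma=0.05):
    """Log-likelihood of the touch points under an isotropic Gaussian
    centered on each intended key (constant terms dropped)."""
    if len(word) != len(touches):
        return float("-inf")
    total = 0.0
    for ch, (x, y) in zip(word, touches):
        cx, cy = KEY_CENTERS[ch]
        total += -((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2)
    return total

def decode(touches):
    """Pick the word maximizing spatial likelihood times language-model prior."""
    return max(
        LANGUAGE_MODEL,
        key=lambda w: spatial_log_likelihood(w, touches)
        + math.log(LANGUAGE_MODEL[w]),
    )
```

For instance, four touches landing near the h, e, l, and p key centers decode to "help" even though "hello" has a higher prior, because the spatial term dominates when the touch sequence length and positions disagree with the prior's favorite.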
Vertanen, Keith; Kristensson, Per Ola (Companion Proceedings of the 28th International Conference on Intelligent User Interfaces)
Text entry is a common and important part of many intelligent user interfaces. However, inferring a user’s intended text from their input can be challenging: motor actions can be imprecise, input sensors can be noisy, and situations or disabilities can hamper a user’s perception of interface feedback. Numerous prior studies have explored input on touchscreen phones, smartwatches, in midair, and on desktop keyboards. Based on these prior studies, we are releasing a large and diverse data set of noisy typing input consisting of thousands of sentences written by hundreds of users on QWERTY-layout keyboards. This paper describes the various subsets contained in this new research dataset as well as the data format.
Ray, Aratrika; Hou, Daqing; Schuckers, Stephanie; Barbir, Abbie (7th International Conference on Information Systems Security and Privacy)
Mobile devices typically rely on entry-point and other one-time authentication mechanisms such as a password, PIN, fingerprint, iris, or face. But these authentication types are prone to a wide attack vector and, worse
still, once compromised, fail to protect the user’s account and data. In contrast, continuous authentication, based on traits of human behavior, can offer additional security measures on the device to authenticate against unauthorized users, even after the entry-point, one-time authentication has been compromised. To this end, we have collected a new dataset of multiple behavioral biometric modalities (49 users), captured while seated users fill out an account recovery form using an Android app. These include motion events (acceleration and angular velocity), touch and swipe events, keystrokes, and pattern tracing. In this paper, we focus on authentication based on motion events by evaluating a set of score-level fusion techniques to authenticate users based on the acceleration and angular velocity data. The best EERs of 2.4% and 6.9%, for intra- and inter-session respectively, are achieved by fusing acceleration and angular velocity using Nandakumar et al.’s likelihood ratio (LR) based score fusion.
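As a rough illustration of the evaluation pipeline described above (not the authors' implementation: the single-Gaussian class models below are a simplification of Nandakumar et al.'s density-based likelihood-ratio fusion, and all scores are made up), per-modality scores can be fused as a sum of log likelihood ratios and then summarized by the equal error rate:

```python
import math
import statistics

def gaussian_logpdf(x, mu, sigma):
    """Log-density of a univariate Gaussian."""
    return -math.log(sigma * math.sqrt(2 * math.pi)) - (x - mu) ** 2 / (2 * sigma ** 2)

def lr_fusion_score(sample, genuine_train, impostor_train):
    """Fused score = sum over modalities of the log likelihood ratio,
    with each class's score distribution modeled as one Gaussian
    (a toy stand-in for density-based LR fusion)."""
    fused = 0.0
    for m, s in enumerate(sample):
        g = [row[m] for row in genuine_train]
        i = [row[m] for row in impostor_train]
        fused += gaussian_logpdf(s, statistics.mean(g), statistics.stdev(g))
        fused -= gaussian_logpdf(s, statistics.mean(i), statistics.stdev(i))
    return fused

def eer(genuine_scores, impostor_scores):
    """Equal error rate: the operating point where the false-accept
    rate and false-reject rate are (approximately) equal."""
    best_gap, best_eer = float("inf"), 1.0
    for t in sorted(set(genuine_scores + impostor_scores)):
        far = sum(1 for s in impostor_scores if s >= t) / len(impostor_scores)
        frr = sum(1 for s in genuine_scores if s < t) / len(genuine_scores)
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer
```

A sample whose acceleration and angular-velocity scores both resemble the genuine training data gets a large positive fused score; thresholding the fused scores and sweeping the threshold yields the FAR/FRR trade-off from which the EER is read off.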
Karimi, Pegah; Plebani, Emanuele; Bolchini, Davide (IUI '21: 26th International Conference on Intelligent User Interfaces)
Texting relies on screen-centric prompts designed for sighted users, still posing significant barriers to people who are blind and visually impaired (BVI). Can we re-imagine texting untethered from a visual display? In an interview study, 20 BVI adults shared situations surrounding their texting practices, recurrent topics of conversations, and challenges. Informed by these insights, we introduce TextFlow: a mixed-initiative context-aware system that generates entirely auditory message options relevant to the users’ location, activity, and time of the day. Users can browse and select suggested aural messages using finger-taps supported by an off-the-shelf finger-worn device, without having to hold or attend to a mobile screen. In an evaluative study, 10 BVI participants successfully interacted with TextFlow to browse and send messages in screen-free mode. The experiential response of the users shed light on the importance of bypassing the phone and accessing rapidly controllable messages at their fingertips while preserving privacy and accuracy with respect to speech or screen-based input. We discuss how non-visual access to proactive, contextual messaging can support the blind in a variety of daily scenarios.
Mathur, Reeti, Sheth, Aishwarya, Vyas, Parimal, and Bolchini, Davide. Typing Slowly but Screen-Free: Exploring Navigation over Entirely Auditory Keyboards. Retrieved from https://par.nsf.gov/biblio/10276808. ASSETS '19: The 21st International ACM SIGACCESS Conference on Computers and Accessibility. Web. doi:10.1145/3308561.3353789.
Mathur, Reeti, Sheth, Aishwarya, Vyas, Parimal, and Bolchini, Davide. "Typing Slowly but Screen-Free: Exploring Navigation over Entirely Auditory Keyboards". ASSETS '19: The 21st International ACM SIGACCESS Conference on Computers and Accessibility. Country unknown/Code not available. https://doi.org/10.1145/3308561.3353789. https://par.nsf.gov/biblio/10276808.
@article{osti_10276808,
place = {Country unknown/Code not available},
title = {Typing Slowly but Screen-Free: Exploring Navigation over Entirely Auditory Keyboards},
url = {https://par.nsf.gov/biblio/10276808},
DOI = {10.1145/3308561.3353789},
abstractNote = {Accessible onscreen keyboards require people who are blind to keep their phone out at all times to search for visual affordances they cannot see. Is it possible to re-imagine text entry without a reference screen? To explore this question, we introduce screenless keyboards as aural flows (keyflows): rapid auditory streams of Text-To-Speech (TTS) characters controllable by hand gestures. In a study, 20 screen-reader users experienced keyflows to perform initial text entry. Typing took inordinately longer than with current screen-based keyboards, but most participants preferred screen-free text entry to current methods, especially for short messages on-the-go. We model navigation strategies that participants enacted to aurally browse entirely auditory keyboards and discuss their limitations and benefits for daily access. Our work points to trade-offs in user performance and user experience for situations where blind users may trade typing speed for the benefit of being untethered from the screen.},
journal = {ASSETS '19: The 21st International ACM SIGACCESS Conference on Computers and Accessibility},
author = {Mathur, Reeti and Sheth, Aishwarya and Vyas, Parimal and Bolchini, Davide},
}