

Title: A Comparative Study on Single-handed Keyboards on Large-screen Mobile Devices
Many questions regarding single-handed text entry on modern smartphones (in particular, large-screen smartphones) remain under-explored, such as: (i) are the existing prevailing single-handed keyboards a good fit for large-screen smartphone users? and (ii) does individual customization improve single-handed keyboard performance? In this paper we study single-handed typing behaviors on several representative keyboards on large-screen mobile devices. We found that (i) the user-adaptable-shape curved keyboard performs best among all the studied keyboards; (ii) users’ familiarity with the Qwerty layout plays a significant role at the beginning, but after several sessions of training, the user-adaptable curved keyboard shows the best learning curve and ultimately performs best; and (iii) statistical decoding algorithms that combine spatial and language models can generally handle the input noise from single-handed typing well.
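The statistical decoding referenced in finding (iii) is commonly framed as noisy-channel inference: choose the word that maximizes a language-model prior combined with a spatial likelihood of the observed touch points. Below is a minimal illustrative sketch of that framing, assuming a Gaussian touch model around per-key centroids and a toy unigram prior; the paper's actual decoder is not detailed in this abstract.

```python
import math

# Hypothetical per-key centroids on a touch keyboard (x, y in pixels).
KEY_CENTERS = {"q": (30, 40), "w": (90, 40), "e": (150, 40)}

# Toy unigram language model P(word); a real decoder would use n-grams.
UNIGRAM = {"we": 0.02, "qe": 1e-7}

SIGMA = 25.0  # assumed std. dev. of touch scatter around a key center

def log_spatial_likelihood(word, taps):
    """log P(taps | word) under an isotropic Gaussian per intended key."""
    if len(word) != len(taps):
        return float("-inf")
    total = 0.0
    for ch, (x, y) in zip(word, taps):
        cx, cy = KEY_CENTERS[ch]
        d2 = (x - cx) ** 2 + (y - cy) ** 2
        total += -d2 / (2 * SIGMA**2) - math.log(2 * math.pi * SIGMA**2)
    return total

def decode(taps, vocabulary):
    """Return argmax over words of log P(w) + log P(taps | w)."""
    return max(
        vocabulary,
        key=lambda w: math.log(UNIGRAM.get(w, 1e-12))
        + log_spatial_likelihood(w, taps),
    )

# Noisy single-handed taps drifting toward 'q' still decode to "we":
# the language-model prior outweighs the spatial ambiguity.
print(decode([(55, 48), (148, 39)], ["we", "qe"]))
```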
Award ID(s):
2005430
NSF-PAR ID:
10359115
Author(s) / Creator(s):
;
Editor(s):
Bottoni, Paolo; Panizzi, Emanuele
Date Published:
Journal Name:
AVI 2022: Proceedings of the 2022 International Conference on Advanced Visual Interfaces
Page Range / eLocation ID:
4:1 - 4:9
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Mobile devices typically rely on entry-point and other one-time authentication mechanisms such as a password, PIN, fingerprint, iris, or face. But these authentication types are prone to a wide attack vector and, worse still, once compromised, fail to protect the user’s account and data. In contrast, continuous authentication, based on traits of human behavior, can offer additional security measures in the device to authenticate against unauthorized users, even after the entry-point and one-time authentication has been compromised. To this end, we have collected a new dataset of multiple behavioral biometric modalities (49 users) as a user fills out an account recovery form while seated, using an Android app. These include motion events (acceleration and angular velocity), touch and swipe events, keystrokes, and pattern tracing. In this paper, we focus on authentication based on motion events by evaluating a set of score-level fusion techniques to authenticate users based on the acceleration and angular velocity data. The best EERs of 2.4% and 6.9% for intra- and inter-session, respectively, are achieved by fusing acceleration and angular velocity using Nandakumar et al.’s likelihood ratio (LR) based score fusion.
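The likelihood-ratio score fusion named in the abstract works by modeling genuine and impostor score distributions for each modality and summing per-modality log-likelihood ratios. Here is a minimal sketch of that idea, assuming Gaussian-mixture density estimates and synthetic scores; the paper's actual score distributions and classifiers are not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_density(scores, n_components=2):
    """Fit a 1-D Gaussian mixture to a set of match scores."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(np.asarray(scores).reshape(-1, 1))
    return gmm

def fused_log_lr(score_vector, genuine_models, impostor_models):
    """Sum over modalities of log p(s|genuine) - log p(s|impostor)."""
    total = 0.0
    for s, g, i in zip(score_vector, genuine_models, impostor_models):
        x = np.array([[s]])
        total += g.score_samples(x)[0] - i.score_samples(x)[0]
    return total

# Illustrative training scores per modality (accelerometer, gyroscope);
# a real system would use scores produced by the enrolled classifiers.
rng = np.random.default_rng(0)
gen_acc, imp_acc = rng.normal(0.8, 0.1, 500), rng.normal(0.4, 0.15, 500)
gen_gyr, imp_gyr = rng.normal(0.7, 0.12, 500), rng.normal(0.35, 0.15, 500)

genuine_models = [fit_density(gen_acc), fit_density(gen_gyr)]
impostor_models = [fit_density(imp_acc), fit_density(imp_gyr)]

# Accept if the fused log-LR exceeds a threshold tuned for the target EER.
print(fused_log_lr([0.75, 0.68], genuine_models, impostor_models) > 0.0)
```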
  2.
    Accessible onscreen keyboards require people who are blind to keep their phone out at all times to search for visual affordances they cannot see. Is it possible to re-imagine text entry without a reference screen? To explore this question, we introduce screenless keyboards as aural flows (keyflows): rapid auditory streams of Text-To-Speech (TTS) characters controllable by hand gestures. In a study, 20 screen-reader users experienced keyflows to perform initial text entry. Typing took inordinately longer than with current screen-based keyboards, but most participants preferred screen-free text entry to current methods, especially for short messages on-the-go. We model the navigation strategies that participants enacted to aurally browse entirely auditory keyboards and discuss their limitations and benefits for daily access. Our work points to trade-offs in user performance and user experience for situations in which blind users may trade typing speed for the benefit of being untethered from the screen.
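One way to picture a keyflow is as a timed loop that speaks characters in sequence until a selection gesture arrives. The sketch below is an assumption-laden illustration; `speak` and `poll_gesture` are stand-ins, not the authors' implementation.

```python
import time

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def speak(ch):
    """Stand-in for a TTS call such as Android's TextToSpeech.speak()."""
    print(f"(TTS) {ch}")

def poll_gesture():
    """Stand-in for gesture detection; returns 'select', 'reverse', or None."""
    return None  # wired to real touch/motion events in a real system

def keyflow(rate_cps=4.0):
    """Stream characters aurally at rate_cps chars/sec until one is selected."""
    chars = list(ALPHABET)
    direction = 1
    i = 0
    while True:
        speak(chars[i])
        deadline = time.time() + 1.0 / rate_cps
        while time.time() < deadline:
            g = poll_gesture()
            if g == "select":
                return chars[i]      # user picked the character just spoken
            if g == "reverse":
                direction *= -1      # let users back up past an overshot target
        i = (i + direction) % len(chars)
```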
  3.
    Many computing tasks, such as comparison shopping, two-factor authentication, and checking movie reviews, require using multiple apps together. On large screens, "windows, icons, menus, pointer" (WIMP) graphical user interfaces (GUIs) support easy sharing of content and context between multiple apps. So, it is straightforward to see the content from one application and write something relevant in another application, such as looking at the map around a place and typing walking instructions into an email. However, although today's smartphones also use GUIs, they have small screens and limited windowing support, making it hard to switch contexts and exchange data between apps. We introduce DoThisHere, a multimodal interaction technique that streamlines cross-app tasks and reduces the burden these tasks impose on users. Users can use voice to refer to information or app features that are off-screen and touch to specify where the relevant information should be inserted or is displayed. With DoThisHere, users can access information from or carry information to other apps with less context switching. We conducted a survey to find out what cross-app tasks people are currently performing or wish to perform on their smartphones. Among the 125 tasks that we collected from 75 participants, we found that 59 of these tasks are not well supported currently. DoThisHere is helpful in completing 95% of these unsupported tasks. A user study, where users are shown the list of supported voice commands when performing a representative sample of such tasks, suggests that DoThisHere may reduce expert users' cognitive load; the Query action, in particular, can help users reduce task completion time. 
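The voice-plus-touch pattern described above can be pictured as a small dispatcher: a spoken utterance is resolved against data exposed by other apps, and the result is inserted at the touched field. Everything below, including `resolve_reference` and the `APP_DATA` registry, is hypothetical and not DoThisHere's actual API.

```python
from dataclasses import dataclass

@dataclass
class TouchTarget:
    app: str
    field_id: str

# Hypothetical registry of data that other apps expose for cross-app queries.
APP_DATA = {
    ("Maps", "current_address"): "123 Main St",
    ("Email", "last_sender"): "alice@example.com",
}

def resolve_reference(utterance):
    """Map a spoken reference to an (app, key) pair; toy keyword matching."""
    if "address" in utterance:
        return ("Maps", "current_address")
    if "sender" in utterance:
        return ("Email", "last_sender")
    return None

def do_this_here(utterance, touch: TouchTarget):
    """Fetch the referenced off-screen value and insert it at the touch point."""
    ref = resolve_reference(utterance)
    if ref is None:
        return f"Could not resolve: {utterance!r}"
    value = APP_DATA[ref]
    return f"Inserted {value!r} into {touch.app}:{touch.field_id}"

print(do_this_here("put the address", TouchTarget("Email", "body")))
```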
  4. We present a method for mining the web for text entered on mobile devices. Using searching, crawling, and parsing techniques, we locate text that can be reliably identified as originating from 300 mobile devices. This includes 341,000 sentences written on iPhones alone. Our data enables a richer understanding of how users type “in the wild” on their mobile devices. We compare the text and error characteristics of different device types, such as touchscreen phones, phones with physical keyboards, and tablet computers. Using our mined data, we train language models and evaluate these models on mobile test data. A mixture model trained on our mined data plus Twitter, blog, and forum data predicts mobile text better than baseline models. Using phone and smartwatch typing data from 135 users, we demonstrate that our models improve the recognition accuracy and word predictions of a state-of-the-art touchscreen virtual keyboard decoder. Finally, we make our language models and mined dataset available to other researchers.
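A mixture language model of this kind is typically a linear interpolation of component models, P(w | h) = sum_i lambda_i * P_i(w | h), with the weights tuned on held-out mobile text. Here is a minimal sketch using unigram components; the paper's models are n-gram based, and the probabilities below are made up.

```python
# Linear interpolation of component language models:
#   P(w | h) = sum_i lambda_i * P_i(w | h)
# Unigram components keep the sketch short; real mixtures use n-grams.

mined = {"ok": 0.03, "lol": 0.02, "meeting": 0.001}
blog = {"ok": 0.005, "lol": 0.001, "meeting": 0.004}
weights = [0.7, 0.3]  # illustrative; tuned on held-out mobile text in practice

def mixture_prob(word, components, weights, floor=1e-9):
    """P(word) under the interpolated mixture, with an unseen-word floor."""
    return sum(w * comp.get(word, floor) for comp, w in zip(components, weights))

for w in ("ok", "lol", "meeting"):
    print(w, mixture_prob(w, [mined, blog], weights))
```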
  5. Web data records are usually accompanied by auxiliary webpage segments, such as filters, sort options, search forms, and multi-page links, to enhance interaction efficiency and convenience for end users. However, blind and visually impaired (BVI) persons are presently unable to fully exploit these auxiliary segments like their sighted peers, since the segments are scattered all across the screen, and the assistive technologies used by BVI users, i.e., screen readers and screen magnifiers, are not geared for efficient interaction with such scattered content. Specifically, for blind screen-reader users, content navigation is predominantly one-dimensional despite the support for skipping content, and therefore navigating to-and-fro between different parts of a webpage is tedious and frustrating. Similarly, low-vision screen-magnifier users have to continuously pan back-and-forth between different portions of a webpage, given that only a portion of the screen is viewable at any instant due to content enlargement. The extant techniques to overcome inefficient web interaction for BVI users have mostly focused on general web-browsing activities, and as such they provide little to no support for data record-specific interaction activities such as filtering and sorting – activities that are equally important for facilitating quick and easy access to desired data records. To fill this void, we present InSupport, a browser extension that: (i) employs custom machine learning-based algorithms to automatically extract auxiliary segments on any webpage containing data records; and (ii) provides an instantly accessible proxy one-stop interface for easily navigating the extracted auxiliary segments using either basic keyboard shortcuts or mouse actions. Evaluation studies with 14 blind participants and 16 low-vision participants showed significant improvement in web usability with InSupport, driven by reductions in interaction time and user effort compared to state-of-the-art solutions.
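The segment extraction in (i) can be pictured as a classifier over candidate DOM regions using structural features. The sketch below is illustrative only; the features, labels, and random-forest choice are assumptions, not InSupport's actual model.

```python
from sklearn.ensemble import RandomForestClassifier

# Toy features per DOM segment: [num_links, num_inputs, num_select_elements,
# fraction_numeric_text]. Labels: 0 = other, 1 = filter, 2 = sort, 3 = pager.
X_train = [
    [2, 6, 3, 0.05],   # many inputs/selects -> filter panel
    [1, 0, 1, 0.00],   # lone select element -> sort options
    [9, 0, 0, 0.80],   # many numeric links  -> multi-page navigation
    [30, 0, 0, 0.02],  # link-heavy content  -> other
]
y_train = [1, 2, 3, 0]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# A new segment with several inputs and selects is classified as a filter,
# so the proxy interface can bind a shortcut to it.
print(clf.predict([[3, 5, 2, 0.04]]))
```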