Title: Knock, Knock. Who's There? On the Security of LG's Knock Codes
Knock Codes are a knowledge-based unlock authentication scheme used on LG smartphones where a user enters a code by tapping or "knocking" a sequence on a 2x2 grid. While a lesser-used authentication method compared to PINs or Android patterns, there is likely a large number of Knock Code users; we estimate 700,000--2,500,000 in the US alone. In this paper, we studied the security of Knock Codes by asking participants in an online study to select codes on mobile devices in three settings: a control treatment, a blocklist treatment, and a treatment with a larger, 2x3 grid. We find that Knock Codes are significantly weaker than other deployed authentication schemes, e.g., PINs or Android patterns. In a simulated attacker setting, 2x3 grids offered no additional security. Blocklisting, on the other hand, was more beneficial, making the security of Knock Codes similar to that of Android patterns. Participants expressed positive perceptions of Knock Codes, yet usability was challenged: System Usability Scale (SUS) scores were "marginal" or "ok" across treatments. Based on these findings, we recommend deploying blocklists for Knock Code selection because they improve security while having a limited impact on usability perceptions.
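As a rough illustration of why raw code-space size alone does not determine security, the sketch below counts all possible tap sequences for each grid. The 6-to-10-tap length range is an assumption for illustration only; in practice, user choice concentrates on a much smaller set of popular codes, which is what guessing attacks exploit.

```python
# A minimal sketch (not from the paper): counting the raw Knock Code space
# for a given grid size. The 6-10 tap length range is an assumed example;
# check LG's implementation for the exact limits.

def code_space(grid_rows: int, grid_cols: int, min_len: int, max_len: int) -> int:
    """Number of distinct tap sequences on a grid_rows x grid_cols grid."""
    regions = grid_rows * grid_cols
    # Each tap independently hits one of `regions` cells, so there are
    # regions**L codes of length L; sum over all allowed lengths.
    return sum(regions ** length for length in range(min_len, max_len + 1))

if __name__ == "__main__":
    print("2x2:", code_space(2, 2, 6, 10))  # 4^6 + ... + 4^10 = 1,396,736
    print("2x3:", code_space(2, 3, 6, 10))  # 6^6 + ... + 6^10 = 72,550,080
```

The 2x3 grid enlarges the theoretical space by about 50x, yet the paper found no security gain against a simulated attacker, consistent with users picking from a small set of predictable codes regardless of grid size.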
Award ID(s):
1845300
NSF-PAR ID:
10190631
Journal Name:
2020 Symposium on Usable Security and Privacy
Page Range / eLocation ID:
37-59
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Authentication has become increasingly ubiquitous for controlling access to personal computing devices (e.g., laptops, tablets, and smartphones). In this paper, we aim to understand the authentication process used by people with upper extremity impairment (UEI). A person with UEI lacks range of motion, strength, endurance, speed, and/or accuracy associated with arms, hands, or fingers. To this end, we conducted semi-structured interviews with eight adults with UEI about their use of authentication on their personal computing devices. We found that our participants primarily use passwords and PINs as verification credentials during authentication, and that the authentication process poses several accessibility issues for them. Consequently, our participants implemented a variety of workarounds that prioritized usability over security throughout the authentication process. Based on these findings, we present six broad subareas of research that should be explored in order to create more accessible authentication for people with UEI.
  2. As account compromises and malicious online attacks are on the rise, multi-factor authentication (MFA) has been adopted to defend against them. One-time passwords (OTPs) and mobile push notifications are two of the most widely adopted MFA factors. Although MFA improves security, it also adds steps or hardware to the authentication process, increasing authentication time and introducing friction. Keystroke dynamics-based authentication, on the other hand, is believed to be a promising MFA factor that can increase security while reducing friction. While there have been several studies on the usability of other MFA factors, the usability of keystroke dynamics has not been studied. To this end, we built a web authentication system with the standard features of signup, login, and account recovery, and integrated keystroke dynamics as an additional factor. We then conducted a user study in which 20 participants completed tasks related to signup, login, and account recovery. We also evaluated a new approach to completing user enrollment, which reduces friction by naturally falling back on an alternative MFA factor (OTP in our study) when keystroke dynamics is not yet ready for use. Our study shows that while maintaining strong security (0% false-positive rate), adding keystroke dynamics avoids 66.3% of OTPs at login and 85.8% of OTPs at account recovery, which in turn reduces authentication time by 63.3% for login and 78.9% for account recovery. In an exit survey, all participants rated the integration of keystroke dynamics with OTP as preferable to conventional OTP-only authentication.
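To make the keystroke-dynamics factor concrete, here is a minimal sketch of timing-based verification. It is not the authors' system; the choice of features (dwell and flight times) and the threshold are illustrative assumptions.

```python
# Illustrative keystroke-dynamics verification (not the authors' system).
# Timing features are extracted from key-down/key-up events and compared
# against an enrolled template; the threshold is an assumed example value.
from statistics import mean

def features(events):
    """events: list of (key, down_time_ms, up_time_ms) for a fixed string."""
    dwell = [up - down for _, down, up in events]        # key hold times
    flight = [events[i + 1][1] - events[i][2]            # gaps between keys
              for i in range(len(events) - 1)]
    return dwell + flight

def verify(template, sample, threshold=25.0):
    """Accept if mean absolute deviation from the enrolled template is small."""
    diffs = [abs(t - s) for t, s in zip(template, sample)]
    return mean(diffs) <= threshold
```

In a real deployment, the verdict from a verifier like this would decide whether to skip the OTP step (the "friction avoided" measured in the study) or to fall back to OTP when the match is uncertain.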
  3. In this paper, we provide the first comprehensive study of user-chosen 4- and 6-digit PINs (n = 1220) collected on smartphones with participants explicitly primed for device unlocking. We find that against a throttled attacker (with 10, 30, or 100 guesses, matching the smartphone unlock setting), using 6-digit PINs instead of 4-digit PINs provides little to no increase in security, and surprisingly may even decrease it. We also study the effects of blacklists, where a set of "easy to guess" PINs is disallowed during selection. Two such blacklists are in use today by iOS: one for 4-digit PINs (274 PINs) and one for 6-digit PINs (2910 PINs). We extracted both blacklists and compared them with four other blacklists, including a small 4-digit blacklist (27 PINs), a large 4-digit blacklist (2740 PINs), and two placebo blacklists for 4- and 6-digit PINs that always excluded the first-choice PIN. We find that the relatively small blacklists in use today by iOS offer little or no benefit against a throttled guessing attack. Security gains are only observed when blacklists are much larger, which in turn comes at the cost of increased user frustration. Our analysis suggests that a blacklist covering about 10% of the PIN space may provide the best balance between usability and security.
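As a minimal sketch of how a selection-time blacklist works (illustrative only; the entries below are common weak PINs, not the actual iOS blacklist):

```python
# Illustrative blocklist-enforced PIN selection. The entries are assumed
# examples of weak PINs, not extracted from iOS.
EASY_TO_GUESS = {"0000", "1111", "1234", "2580", "1212", "7777"}

def pin_allowed(candidate: str, blacklist: set = EASY_TO_GUESS) -> bool:
    """Reject malformed PINs and anything on the blacklist."""
    return (len(candidate) == 4
            and candidate.isdigit()
            and candidate not in blacklist)

# e.g., pin_allowed("1234") -> False; pin_allowed("8371") -> True.
# A blacklist at ~10% of the 4-digit space would hold about 1,000 entries.
```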
  4. It takes great effort to manually or semi-automatically convert free-text phenotype narratives (e.g., morphological descriptions in taxonomic works) to a computable format before they can be used in large-scale analyses. We argue that neither a manual curation approach nor an information extraction approach based on machine learning is a sustainable solution for producing computable phenotypic data that are FAIR (Findable, Accessible, Interoperable, Reusable) (Wilkinson et al. 2016). These approaches do not scale to all of biodiversity, and they do not stop the publication of free-text phenotypes that would need post-publication curation. In addition, both approaches face great challenges: inter-curator variation (curators interpreting or converting a phenotype differently from each other) in manual curation, and keyword-to-ontology-concept translation in automated information extraction, make it difficult for either approach to produce data that are truly FAIR. Our empirical studies show that inter-curator variation in translating phenotype characters to Entity-Quality statements (Mabee et al. 2007) is as high as 40% even within a single project. With this level of variation, curated data integrated from multiple curation projects may still not be FAIR. The key causes of this variation have been identified as semantic vagueness in original phenotype descriptions and difficulties in using standardized vocabularies (ontologies). We argue that the authors describing characters are the key to the solution. Given the right tools and appropriate attribution, authors should be in charge of developing a project's semantics and ontology. This will speed up ontology development and improve the semantic clarity of descriptions from the moment of publication. In this presentation, we will introduce the Platform for Author-Driven Computable Data and Ontology Production for Taxonomists, which consists of three components: a web-based, ontology-aware software application called 'Character Recorder,' which features a spreadsheet as the data entry platform and provides authors with the flexibility of using their preferred terminology in recording characters for a set of specimens (this application also facilitates semantic clarity and consistency across species descriptions); a set of services that produce RDF graph data, collect terms added by authors, detect potential conflicts between terms, dispatch conflicts to the third component, and update the ontology with resolutions; and an Android mobile application, 'Conflict Resolver,' which displays ontological conflicts and accepts solutions proposed by multiple experts. Fig. 1 shows the system diagram of the platform.
The presentation will consist of: a report on findings from a recent survey of 90+ participants on the need for a tool like Character Recorder; a methods section that describes how we provide semantics to an existing vocabulary of quantitative characters through a set of properties that explain where and how a measurement (e.g., length of perigynium beak) is taken, and how a custom color palette of RGB values obtained from real specimens or high-quality specimen images can be used to help authors choose standardized color descriptions for plant specimens; and a software demonstration, where we show how Character Recorder and Conflict Resolver work together to construct both human-readable descriptions and RDF graphs using morphological data derived from species in the plant genus Carex (sedges). The key difference between this system and other ontology-aware systems is that authors can directly add needed terms to the ontology as they wish and can update their data according to ontology updates. The software modules currently incorporated in Character Recorder and Conflict Resolver have undergone formal usability studies. We are actively recruiting Carex experts to participate in a 3-day usability study of the entire Platform for Author-Driven Computable Data and Ontology Production for Taxonomists. Participants will use the platform to record 100 characters about one Carex species. In addition to usability data, we will collect the terms that participants submit to the underlying ontology and the data related to conflict resolution. Such data allow us to examine the types and quantities of logical conflicts that may result from user-added terms, and to use Discrete Event Simulation models to understand if and how term additions and conflict resolutions converge. We look forward to a discussion on how the tools described in our presentation (Character Recorder is online at http://shark.sbs.arizona.edu/chrecorder/public) can contribute to producing and publishing FAIR data in taxonomic studies.
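For readers unfamiliar with RDF graph output, the following minimal sketch shows how a single recorded character might be serialized with the rdflib library; the namespace, class, and property names are invented placeholders, not the platform's actual schema.

```python
# Illustrative only: serializing one recorded character as RDF with rdflib.
# All IRIs and term names below are invented placeholders.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/carex/")  # placeholder namespace

g = Graph()
obs = EX.observation1
g.add((obs, RDF.type, EX.CharacterObservation))
g.add((obs, EX.entity, EX.perigyniumBeak))   # the Entity being measured
g.add((obs, EX.quality, EX.length))          # the Quality recorded
g.add((obs, EX.value, Literal("2.5 mm")))    # the measured value

print(g.serialize(format="turtle"))          # emits human-inspectable Turtle
```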
  5.
    Our world today increasingly relies on the orchestration of digital and physical systems to ensure the successful operation of many complex and critical infrastructures. Simulation-based testbeds are useful tools for engineering those cyber-physical systems and evaluating their efficiency, security, and resilience. In this article, we present a cyber-physical system testing platform that combines distributed physical computing and networking hardware with simulation models. A core component is the distributed virtual time system, which enables efficient synchronization of virtual clocks among distributed embedded Linux devices. Virtual clocks also enable high-fidelity experimentation by interrupting real and emulated cyber-physical applications to inject offline simulation data. We design and implement two modes of distributed virtual time: a periodic mode for scheduling repetitive events like sensor device measurements, and a dynamic mode for on-demand, interrupt-based synchronization. We also analyze the performance of both approaches to synchronization, including the overhead, accuracy, and error introduced by each. By interconnecting the embedded devices' general-purpose IO pins, the devices can coordinate and synchronize with low overhead: under 50 microseconds for eight processes across four embedded Linux devices. Finally, we demonstrate the usability of our testbed and the differences between the two approaches in a power grid control application.
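As a rough sketch of the periodic mode's core idea (an illustration under assumptions, not the authors' implementation): each process advances a shared notion of virtual time in fixed steps and waits at a barrier so that no process runs ahead of the others.

```python
# Illustrative periodic virtual-time synchronization using threads as
# stand-ins for distributed processes. Step size and process count are
# assumed example values, not the paper's parameters.
import threading

STEP_US = 1000                    # virtual-time advance per round (microseconds)
NUM_PROCS = 4                     # e.g., four synchronized devices/processes
barrier = threading.Barrier(NUM_PROCS)

def run(proc_id: int, rounds: int):
    virtual_time = 0
    for _ in range(rounds):
        barrier.wait()            # all processes align before each step
        virtual_time += STEP_US   # advance the local copy of the virtual clock
        # ... schedule repetitive events (e.g., sensor reads) at virtual_time

threads = [threading.Thread(target=run, args=(i, 10)) for i in range(NUM_PROCS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The dynamic mode described in the abstract would replace the fixed-step barrier with on-demand, interrupt-driven synchronization, trading scheduling regularity for lower idle overhead.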