

Title: A Promise Is A Promise: The Effect of Commitment Devices on Computer Security Intentions
Commitment devices are a technique from behavioral economics that have been shown to mitigate the effects of present bias—the tendency to discount future risks and gains in favor of immediate gratifications. In this paper, we explore the feasibility of using commitment devices to nudge users towards complying with varying online security mitigations. Using two online experiments, with over 1,000 participants total, we offered participants the option to be reminded or to schedule security tasks in the future. We find that both reminders and commitment nudges can increase users’ intentions to install security updates and enable two-factor authentication, but not to configure automatic backups. Using qualitative data, we gain insights into the reasons for postponement and how to improve future nudges. We posit that current nudges may not live up to their full potential, as the timing options offered to users may be too rigid.
Award ID(s):
1817249
NSF-PAR ID:
10108889
Journal Name:
CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019)
Page Range / eLocation ID:
1 to 12
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Knock Codes are a knowledge-based unlock authentication scheme used on LG smartphones, where a user enters a code by tapping or "knocking" a sequence on a 2x2 grid. While a lesser-used authentication method compared to PINs or Android patterns, there is likely a large number of Knock Code users; we estimate 700,000--2,500,000 in the US alone. In this paper, we studied Knock Code security by asking participants in an online study to select codes on mobile devices in three settings: a control treatment, a blocklist treatment, and a treatment with a larger, 2x3 grid. We find that Knock Codes are significantly weaker than other deployed authentication methods, e.g., PINs or Android patterns. In a simulated attacker setting, 2x3 grids offered no additional security. Blocklisting, on the other hand, was more beneficial, making Knock Codes' security similar to that of Android patterns. Participants expressed positive perceptions of Knock Codes, yet usability was a challenge: System Usability Scale (SUS) values were "marginal" or "ok" across treatments. Based on these findings, we recommend deploying blocklists for selecting a Knock Code because they improve security while having limited impact on usability perceptions.
  2. Struggling to curb misinformation, social media platforms are experimenting with design interventions to enhance consumption of credible news on their platforms. Some of these interventions, such as the use of warning messages, are examples of nudges---a choice-preserving technique to steer behavior. Despite their application, we do not know whether nudges could steer people into making conscious news credibility judgments online and, if they do, under what constraints. To answer these questions, we combine nudge techniques with heuristic-based information processing to design NudgeCred--a browser extension for Twitter. NudgeCred directs users' attention to two design cues: authority of a source and other users' collective opinion on a report by activating three design nudges---Reliable, Questionable, and Unreliable, each denoting a particular level of credibility for news tweets. In a controlled experiment, we found that NudgeCred significantly helped users (n=430) distinguish news tweets' credibility, unrestricted by three behavioral confounds---political ideology, political cynicism, and media skepticism. A five-day field deployment with twelve participants revealed that NudgeCred improved their recognition of news items and attention towards all of our nudges, particularly towards Questionable. Among other considerations, participants proposed that designers should incorporate heuristics that users would trust. Our work informs nudge-based system design approaches for online media.
  3. Our research aims to highlight and alleviate the complex tensions around online safety, privacy, and smartphone usage in families so that parents and teens can work together to better manage mobile privacy and security-related risks. We developed a mobile application ("app") for Community Oversight of Privacy and Security ("CO-oPS") and had parents and teens assess whether it would be applicable for use with their families. CO-oPS is an Android app that allows a group of users to co-monitor the apps installed on one another's devices and the privacy permissions granted to those apps. We conducted a study with 19 parent-teen (ages 13-17) pairs to understand how they currently managed mobile safety and app privacy within their family and then had them install, use, and evaluate the CO-oPS app. We found that both parents and teens gave little consideration to online safety and privacy before installing new apps or granting privacy permissions. When using CO-oPS, participants liked how the app increased transparency into one another's devices in a way that facilitated communication, but were less inclined to use features for in-app messaging or to hide apps from one another. Key themes related to power imbalances between parents and teens surfaced that made co-management challenging. Parents were more open to collaborative oversight than teens, who felt that it was not their place to monitor their parents, even though both often believed parents lacked the technological expertise to monitor themselves. Our study sheds light on why collaborative practices for managing online safety and privacy within families may be beneficial but also quite difficult to implement in practice. We provide recommendations for overcoming these challenges based on the insights gained from our study. 
  4.
    The proliferation of the Internet of Things (IoT) has started transforming our lifestyle through automation of home appliances. However, there are users who are hesitant to adopt IoT devices due to various privacy and security concerns. In this paper, we elicit people's attitudes and concerns towards adopting IoT devices. We conduct an online survey and collect responses from 232 participants from three different geographic regions (United States, Europe, and India); the participants consist of both adopters and non-adopters of IoT devices. Through data analysis, we determine that there are both similarities and differences in perceptions and concerns between adopters and non-adopters. For example, even though IoT and non-IoT users share similar security and privacy concerns, IoT users are more comfortable using IoT devices in private settings compared to non-IoT users. Furthermore, when comparing users' attitudes and concerns across different geographic regions, we found similarities between participants from the US and Europe, yet participants from India showcased contrasting behavior. For instance, we found that participants from India were more trusting of their government to properly protect consumer data and were more comfortable using IoT devices in a variety of public settings, compared to participants from the US and Europe. Based on our findings, we provide recommendations to reduce users' concerns in adopting IoT devices, and thereby enhance user trust towards adopting them.
  5. Research has shown that trigger-action programming (TAP) is an intuitive way to automate smart home IoT devices, but can also lead to undesirable behaviors. For instance, if two TAP rules have the same trigger condition, but one locks a door while the other unlocks it, the user may believe the door is locked when it is not. Researchers have developed tools to identify buggy or undesirable TAP programs, but little work investigates the usability of the different user-interaction approaches implemented by the various tools. This paper describes an exploratory study of the usability and utility of techniques proposed by TAP security analysis tools. We surveyed 447 Prolific users to evaluate their ability to write declarative policies, identify undesirable patterns in TAP rules (anti-patterns), and correct TAP program errors, as well as to understand whether proposed tools align with users’ needs. We find considerable variation in participants’ success rates writing policies and identifying anti-patterns. For some scenarios over 90% of participants wrote an appropriate policy, while for others nobody was successful. We also find that participants did not necessarily perceive the TAP anti-patterns flagged by tools as undesirable. Our work provides insight into real smart-home users’ goals, highlights the importance of more rigorous evaluation of users’ needs and usability issues when designing TAP security tools, and provides guidance to future tool development and TAP research. 
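The door lock/unlock conflict described above can be sketched programmatically. The following is a minimal illustration (the rule format and action names are hypothetical, not from the paper) of how a TAP analysis tool might flag the anti-pattern where two rules share a trigger but take contradictory actions:

```python
# Hypothetical TAP rules as (trigger, action) pairs; a real tool would parse
# actual trigger-action programs (e.g., IFTTT-style applets).
CONTRADICTORY = {("lock_door", "unlock_door"), ("unlock_door", "lock_door")}

def find_conflicts(rules):
    """Return pairs of rules that share a trigger but perform contradictory actions."""
    conflicts = []
    for i, (trig_a, act_a) in enumerate(rules):
        for trig_b, act_b in rules[i + 1:]:
            if trig_a == trig_b and (act_a, act_b) in CONTRADICTORY:
                conflicts.append(((trig_a, act_a), (trig_b, act_b)))
    return conflicts

rules = [("motion_at_front_door", "lock_door"),
         ("motion_at_front_door", "unlock_door"),
         ("sunset", "lock_door")]
print(find_conflicts(rules))
# Flags the two motion_at_front_door rules: the door's final state depends on
# rule execution order, so the user cannot know whether it is locked.
```

This pairwise check is only a sketch; the tools surveyed in the paper use richer techniques such as model checking and declarative policies.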