Title: Runtime Permissions for Privacy in Proactive Intelligent Assistants
Intelligent voice assistants may soon become proactive, offering suggestions without being directly invoked. Such behavior increases privacy risks, since proactive operation requires continuous monitoring of conversations. To mitigate this problem, our study proposes and evaluates one potential privacy control, in which the assistant requests permission for the information it wishes to use immediately after hearing it. To find out how people would react to runtime permission requests, we recruited 23 pairs of participants to hold conversations while receiving ambient suggestions from a proactive assistant, which we simulated in real time using the Wizard of Oz technique. The interactive sessions featured different modes and designs of runtime permission requests and were followed by in-depth interviews about people's preferences and concerns. Most participants were excited about the devices despite their continuous listening, but wanted control over the assistant's actions and their own data. They generally prioritized an interruption-free experience above more fine-grained control over what the device would hear.
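The runtime permission control described in the abstract can be illustrated with a minimal sketch: the assistant asks for permission immediately after hearing information it wants to use, and discards the information if the user declines. This is an illustrative sketch only, not the authors' implementation; the class and field names (`ProactiveAssistant`, `granted_topics`) are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class ProactiveAssistant:
    """Sketch of a runtime-permission flow for a proactive assistant.
    Names are illustrative, not from the paper."""
    granted_topics: set = field(default_factory=set)

    def hear(self, utterance: str, topic: str,
             ask_user: Callable[[str], bool]) -> Optional[str]:
        # Request permission at runtime, right after the information is heard.
        if topic not in self.granted_topics:
            if not ask_user(f"May I use what I just heard about '{topic}'?"):
                return None  # denied: discard the information, no suggestion
            self.granted_topics.add(topic)
        return f"Suggestion based on {topic}: ..."

# Usage: simulate a user who grants every request.
assistant = ProactiveAssistant()
print(assistant.hear("Let's plan the trip to Tokyo", "travel plans",
                     lambda prompt: True))
# → Suggestion based on travel plans: ...
```

Once a topic is granted, the sketch does not ask again, which mirrors the trade-off the participants raised: fewer interruptions at the cost of coarser control.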
Award ID(s):
1801501
PAR ID:
10353647
Journal Name:
Proceedings of the Eighteenth Symposium on Usable Privacy and Security (SOUPS 2022)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Recent advances in Natural Language Interfaces (NLIs) and Large Language Models (LLMs) have transformed the way we tackle NLP tasks, shifting the focus toward a more pragmatics-based perspective. This shift enables more natural interactions between humans and voice assistants, which have historically been difficult to achieve. Pragmatics involves understanding how users often speak out of turn, interrupt one another, or provide relevant information without being explicitly asked (the maxim of quantity). To explore this, we developed a digital assistant that continuously listens to conversations and proactively generates relevant visualizations during data exploration tasks. In a within-subject study, participants interacted with both proactive and non-proactive versions of a voice assistant while exploring the Hawaii Climate Data Portal (HCDP). Results suggest that interaction with the proactive assistant increased the total number of utterances and discoveries, facilitated quicker and more reliable insights, and led to greater usage of the system's chart capabilities. Our study highlights the potential of proactive AI in NLIs and identifies key challenges in its implementation, offering insights for future research.
  2. We conducted a user study with 380 Android users, profiling them according to two key privacy behaviors: the number of apps installed, and the Dangerous permissions granted to those apps. We identified four unique privacy profiles: 1) Privacy Balancers (49.74% of participants), 2) Permission Limiters (28.68%), 3) App Limiters (14.74%), and 4) the Privacy Unconcerned (6.84%). App and Permission Limiters were significantly more concerned about perceived surveillance than Privacy Balancers and the Privacy Unconcerned. App Limiters had the lowest number of apps installed on their devices with the lowest intention of using apps and sharing information with them, compared to Permission Limiters who had the highest number of apps installed and reported higher intention to share information with apps. The four profiles reflect the differing privacy management strategies, perceptions, and intentions of Android users that go beyond the binary decision to share or withhold information via mobile apps. 
  3. As data privacy continues to be a crucial human-rights concern, as recognized by the UN, regulatory agencies have required developers to obtain user permission before accessing sensitive user data. Developers fulfill their legal obligation to keep users informed of requests for their data mainly through privacy policy statements. In addition, platforms such as Android enforce explicit permission requests through their permission models. Nonetheless, recent research has shown that service providers rarely make full disclosure when requesting data in these statements, nor is the current permission model designed to provide adequate informed consent. Users often have no clear understanding of the purpose and scope of use of the requested data. This paper proposes an unambiguous informed-consent process that provides developers with a standardized method for declaring Intent. Our proposed Intent-aware permission architecture extends the current Android permission model with a precise mechanism for full disclosure of purpose and scope limitation, based on an ontology study of data request purposes. The overarching objective of this model is to ensure that end users are adequately informed before making decisions about their data. Additionally, this model has the potential to improve trust between end users and developers.
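The intent-aware permission request described in abstract 3 can be sketched as a small data model in which every data request must carry an explicit purpose and scope, which are then surfaced to the user before consent. This is a sketch under assumed field names (`data_type`, `purpose`, `scope`), not the paper's actual architecture or Android's real API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IntentDeclaration:
    """Sketch of an intent-aware permission request: the developer must
    declare purpose and scope alongside the data being requested.
    Field names are illustrative, not from the paper."""
    data_type: str   # e.g. "location"
    purpose: str     # why the data is needed
    scope: str       # how long / how broadly it may be used

def render_consent_prompt(decl: IntentDeclaration) -> str:
    # Full-disclosure prompt shown to the user before access is granted.
    return (f"App requests your {decl.data_type} for: {decl.purpose} "
            f"(scope: {decl.scope}). Allow?")

print(render_consent_prompt(
    IntentDeclaration("location", "nearby restaurant search",
                      "this session only")))
```

Making the declaration a required, immutable value is one way to guarantee that no data request can reach the user without a stated purpose and scope.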
  4. Interdependent privacy (IDP) violations occur when users share personal information about others without permission, resulting in potential embarrassment, reputation loss, or harassment. There are several strategies that can be applied to protect IDP, but little is known regarding how social media users perceive IDP threats or how they prefer to respond to them. We utilized a mixed-method approach with a replication study to examine user beliefs about various government-, platform-, and user-level strategies for managing IDP violations. Participants reported that IDP represented a 'serious' online threat, and identified themselves as primarily responsible for responding to violations. IDP strategies that felt more familiar and provided greater perceived control over violations (e.g., flagging, blocking, unfriending) were rated as more effective than platform or government driven interventions. Furthermore, we found users were more willing to share on social media if they perceived their interactions as protected. Findings are discussed in relation to control paradox theory. 
    Objective To understand how aspects of vishing calls (phishing phone calls) influence perceived visher honesty. Background Little is understood about how targeted individuals behave during vishing attacks. According to truth-default theory, people assume others are being honest until something triggers their suspicion. We investigated whether that was true during vishing attacks. Methods Twenty-four participants read written descriptions of eight real-world vishing calls. Half included highly sensitive requests; the remainder included seemingly innocuous requests. Participants rated visher honesty at multiple points during conversations. Results Participants initially perceived vishers to be honest. Honesty ratings decreased before requests occurred. Honesty ratings decreased further in response to highly sensitive requests, but not seemingly innocuous requests. Honesty ratings recovered somewhat, but only after highly sensitive requests. Conclusions The present results revealed five important insights: (1) people begin vishing conversations in the truth-default state, (2) certain aspects of vishing conversations serve as triggers, (3) other aspects of vishing conversations do not serve as triggers, (4) in certain situations, people’s perceptions of visher honesty improve, and, more generally, (5) truth-default theory may be a useful tool for understanding how targeted individuals behave during vishing attacks. Application Those developing systems that help users deal with suspected vishing attacks or penetration testing plans should consider (1) targeted individuals’ truth-bias, (2) the influence of visher demeanor on the likelihood of deception detection, (3) the influence of fabricated situations surrounding vishing requests on the likelihood of deception detection, and (4) targeted individuals’ lack of concern about seemingly innocuous requests. 