

Title: PowerCut and Obfuscator: An Exploration of the Design Space for Privacy-Preserving Interventions for Smart Speakers
The pervasive use of smart speakers has raised numerous privacy concerns. While work to date provides an understanding of user perceptions of these threats, limited research focuses on how we can mitigate these concerns, either through redesigning the smart speaker or through dedicated privacy-preserving interventions. In this paper, we present the design and prototyping of two privacy-preserving interventions: 'Obfuscator', targeted at disabling recording at the microphones, and 'PowerCut', targeted at disabling power to the smart speaker. We present our findings from a technology probe study involving 24 households that interacted with our prototypes; the primary objective was to gain a better understanding of the design space for technological interventions that might address these concerns. Our data and findings reveal complex trade-offs among utility, privacy, and usability and stress the importance of multi-functionality, aesthetics, ease of use, and form factor. We discuss the implications of our findings for the development of subsequent interventions and the future design of smart speakers.
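The paper presents PowerCut and Obfuscator as physical prototypes and does not publish code. As a rough, hypothetical illustration of the two intervention concepts, the Python sketch below simulates a PowerCut-style power relay and an Obfuscator-style masking-noise source for the microphones; all class and function names are invented for illustration, and the real interventions are hardware devices.

```python
# Hypothetical sketch of the two intervention concepts; the actual prototypes
# are physical devices, and these names are invented for illustration only.
import numpy as np


class PowerCutRelay:
    """Simulated smart-plug relay that cuts mains power to the smart speaker."""

    def __init__(self) -> None:
        self.powered = True

    def cut_power(self) -> None:
        # In a real prototype this would drive a relay or GPIO pin.
        self.powered = False

    def restore_power(self) -> None:
        self.powered = True


def obfuscator_noise(duration_s: float = 1.0, sample_rate: int = 16_000) -> np.ndarray:
    """White-noise burst that could be played near the microphones to mask speech."""
    rng = np.random.default_rng()
    return rng.uniform(-1.0, 1.0, int(duration_s * sample_rate)).astype(np.float32)


if __name__ == "__main__":
    relay = PowerCutRelay()
    relay.cut_power()              # speaker loses power entirely
    noise = obfuscator_noise(0.5)  # masking audio for the always-on microphones
    print(relay.powered, noise.shape)
```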
Award ID(s):
1838733 2003129 1942014
NSF-PAR ID:
10299750
Author(s) / Creator(s):
Date Published:
Journal Name:
Seventeenth Symposium on Usable Privacy and Security (SOUPS 2021)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Smart speaker voice assistants (VAs) such as Amazon Echo and Google Home have been widely adopted due to their seamless integration with smart home devices and Internet of Things (IoT) technologies. These VA services raise privacy concerns, especially due to their access to our speech. This work considers one such use case: the unaccountable and unauthorized surveillance of a user's emotion via speech emotion recognition (SER). This paper presents DARE-GP, a solution that creates additive noise to mask users' emotional information while preserving the transcription-relevant portions of their speech. DARE-GP does this by using a constrained genetic programming approach to learn the spectral frequency traits that depict target users' emotional content, and then generating a universal adversarial audio perturbation that provides this privacy protection. Unlike existing works, DARE-GP provides: a) real-time protection of previously unheard utterances, b) against previously unseen black-box SER classifiers, c) while protecting speech transcription, and d) does so in a realistic acoustic environment. Further, this evasion is robust against defenses employed by a knowledgeable adversary. The evaluations in this work culminate with acoustic evaluations against two off-the-shelf commercial smart speakers, using a small-form-factor device (Raspberry Pi) integrated with a wake-word system, to assess the efficacy of real-world, real-time deployment.
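    The abstract above describes DARE-GP's approach (a genetic-programming-learned universal additive perturbation) but does not include code. The Python sketch below only illustrates the deployment-time idea of adding a fixed, pre-learned perturbation to an utterance before it reaches an SER classifier; the function name, scaling factor, and placeholder arrays are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: applying a pre-learned universal additive perturbation
# to audio before it reaches an SER classifier. Not the DARE-GP training code.
import numpy as np


def apply_universal_perturbation(audio: np.ndarray,
                                 perturbation: np.ndarray,
                                 scale: float = 0.1) -> np.ndarray:
    """Tile the fixed perturbation over the utterance and add it to the waveform."""
    reps = int(np.ceil(len(audio) / len(perturbation)))
    noise = np.tile(perturbation, reps)[: len(audio)]
    # Keep the result in a valid signal range so transcription is not badly degraded.
    return np.clip(audio + scale * noise, -1.0, 1.0)


if __name__ == "__main__":
    sr = 16_000
    utterance = np.random.default_rng(0).normal(0, 0.05, sr).astype(np.float32)
    # Placeholder for the perturbation that would be learned offline.
    learned_perturbation = np.random.default_rng(1).normal(0, 1.0, sr // 10).astype(np.float32)
    protected = apply_universal_perturbation(utterance, learned_perturbation)
    print(protected.shape)
```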

     
  2. Abstract
    As devices with always-on microphones located in people's homes, smart speakers have significant privacy implications. We surveyed smart speaker owners about their beliefs, attitudes, and concerns about the recordings that are made and shared by their devices. To ground participants' responses in concrete interactions, rather than collecting their opinions abstractly, we framed our survey around randomly selected recordings of saved interactions with their devices. We surveyed 116 owners of Amazon and Google smart speakers and found that almost half did not know that their recordings were being permanently stored and that they could review them; only a quarter reported reviewing interactions, and very few had ever deleted any. While participants did not consider their own recordings especially sensitive, they were more protective of others' recordings (such as children and guests) and were strongly opposed to use of their data by third parties or for advertising. They also considered permanent retention, the status quo, unsatisfactory. Based on our findings, we make recommendations for more agreeable data retention policies and future privacy controls.
  3. Smart voice assistants such as Amazon Alexa and Google Home are becoming increasingly pervasive in our everyday environments. Despite their benefits, their miniaturized and embedded cameras and microphones raise important privacy concerns related to surveillance and eavesdropping. Recent work on the privacy concerns of people in the vicinity of these devices has highlighted the need for 'tangible privacy', where control and feedback mechanisms can provide a more assured sense of whether the camera or microphone is 'on' or 'off'. However, current designs of these devices lack adequate mechanisms to provide such assurances. To address this gap in the design of smart voice assistants, especially in the case of disabling microphones, we evaluate several designs that do or do not incorporate tangible control and feedback mechanisms. By comparing people's perceptions of risk, trust, reliability, usability, and control for these designs in a between-subjects online experiment (N=261), we find that devices with tangible built-in physical controls are perceived as more trustworthy and usable than those with non-tangible mechanisms. Our findings present an approach for tangible, assured privacy, especially in the context of embedded microphones.

     
  4. Background

    Mobile mental health systems (MMHS) have been increasingly developed and deployed in support of monitoring, management, and intervention with regard to patients with mental disorders. However, many of these systems rely on patient data collected by smartphones or other wearable devices to infer patients' mental status, which raises privacy concerns. Such a value-privacy paradox poses significant challenges to patients' adoption and use of MMHS, yet it remains poorly understood.

    Objective

    To address this significant gap in the literature, this research aims to investigate both the antecedents of patients' privacy concerns and the effects of privacy concerns on their continuous usage intention with regard to MMHS.

    Methods

    Using a web-based survey, this research collected data from 170 participants with MMHS experience, recruited from online mental health communities and a university community. The data analyses used both repeated analysis of variance and partial least squares regression; a minimal, hypothetical sketch of this analysis style appears after the abstract.

    Results

    The results showed that data type (P=.003), data stage (P<.001), privacy victimization experience (P=.01), and privacy awareness (P=.08) had positive effects on privacy concerns. Specifically, users reported higher privacy concerns for social interaction data (P=.007) and self-reported data (P=.001) than for biometrics data; privacy concerns were higher for data transmission (P=.01) and data sharing (P<.001) than for data collection. Our results also reveal that privacy concerns affect attitude toward privacy protection (P=.001), which in turn affects continuous usage intention with regard to MMHS.

    Conclusions

    This study contributes to the literature by deepening our understanding of the data value-privacy paradox in MMHS research. The findings offer practical guidelines for breaking the paradox through the design of user-centered and privacy-preserving MMHS.
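    The abstract above names repeated analysis of variance and partial least squares regression as the analysis methods but provides no code or data. The Python sketch below is a minimal, hypothetical illustration of that style of analysis on synthetic data, using statsmodels' AnovaRM and scikit-learn's PLSRegression; the variable names and data are invented and do not reproduce the study's models or results.

```python
# Hypothetical analysis sketch on synthetic data, mirroring the methods named
# in the abstract (repeated ANOVA and PLS regression); not the authors' code or data.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_subjects, data_types = 170, ["biometrics", "self_report", "social"]

# Long-format table: each participant rates privacy concern for each data type.
long = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), len(data_types)),
    "data_type": data_types * n_subjects,
    "concern": rng.normal(4.0, 1.0, n_subjects * len(data_types)),
})
rm = AnovaRM(long, depvar="concern", subject="subject", within=["data_type"]).fit()
print(rm.anova_table)  # within-subject effect of data type on privacy concern

# PLS regression of continuous usage intention on concern-related predictors.
X = rng.normal(size=(n_subjects, 4))  # e.g., concern, awareness, victimization, attitude
y = rng.normal(size=n_subjects)       # continuous usage intention
pls = PLSRegression(n_components=2).fit(X, y)
print(pls.predict(X).shape)
```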

     
  5. Abstract

    Recent advancements in artificial intelligence (AI) have seen the emergence of smart video surveillance (SVS) in many practical applications, particularly for building safer and more secure communities in our urban environments. Cognitive tasks, such as identifying objects, recognizing actions, and detecting anomalous behaviors, can produce data capable of providing valuable insights to the community through statistical and analytical tools. However, the design of artificially intelligent surveillance systems requires special consideration of ethical challenges and concerns. The use and storage of personally identifiable information (PII) commonly pose an increased risk to personal privacy. To address these issues, this paper identifies the privacy concerns and requirements that need to be addressed when designing AI-enabled smart video surveillance. Further, we propose the first end-to-end AI-enabled privacy-preserving smart video surveillance system that holistically combines computer vision analytics, statistical data analytics, cloud-native services, and end-user applications. Finally, we propose quantitative and qualitative metrics to evaluate intelligent video surveillance systems. The system achieves 17.8 frames per second (FPS) of processing in extreme video scenes. However, accounting for privacy in the design of such a system leads to preferring the pose-based algorithm over the pixel-based one. This choice reduces accuracy in both the action detection and anomaly detection tasks: anomaly detection accuracy drops from 97.48% to 73.72%, and action detection accuracy drops from 96% to 83.07%. On average, the latency of the end-to-end system is 36.1 seconds.
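    The abstract above reports that a pose-based algorithm was preferred over a pixel-based one to reduce PII exposure, at some cost in accuracy, but gives no implementation details. The Python sketch below is a hypothetical illustration of that design choice: raw frames are discarded and only pose keypoints are passed to downstream analytics. The estimate_pose stub stands in for a real pose estimator and is not the authors' system.

```python
# Hypothetical sketch of the privacy-motivated design choice described above:
# keep only pose keypoints (no PII) and drop the raw pixels before analytics.
import numpy as np


def estimate_pose(frame: np.ndarray) -> np.ndarray:
    """Placeholder for a real pose estimator; returns 17 (x, y) keypoints."""
    rng = np.random.default_rng(int(frame.sum()) % 2**32)
    return rng.uniform(0.0, 1.0, size=(17, 2))


def process_frame(frame: np.ndarray) -> np.ndarray:
    keypoints = estimate_pose(frame)
    # The raw frame (and any PII in it) is discarded here; only keypoints move on
    # to action/anomaly detection, trading some accuracy for privacy.
    return keypoints


if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
    print(process_frame(frame).shape)
```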

     