The Amazon Alexa voice assistant provides convenience through automation and control of smart home appliances using voice commands. Amazon allows third-party applications, known as skills, to run on top of Alexa to further extend its capability. However, as multiple skills can share the same invocation phrase and request access to sensitive user data, growing security and privacy concerns surround third-party skills. In this paper, we study the availability and effectiveness of existing security indicators, or the lack thereof, in helping users properly comprehend the risk of interacting with different types of skills. We conduct an interactive user study (inviting active users of Amazon Alexa) in which participants listen to and interact with real-world skills using the official Alexa app. We find that most participants fail to identify the skill developer correctly (i.e., they assume Amazon also develops the third-party skills) and cannot correctly determine which skills will be automatically activated through the voice interface. We also propose and evaluate several voice-based skill type indicators, showcasing how users would benefit from such indicators.
Why Am I Seeing Double? An Investigation of Device Management Flaws in Voice Assistant Platforms
In Voice Assistant (VA) platforms, when users add devices to their accounts and give voice commands, complex interactions occur between the devices, skills, VA clouds, and vendor clouds. These interactions are governed by the device management capabilities (DMC) of VA platforms, which rely on device names, types, and associated skills in the user account. Prior work studied vulnerabilities in specific VA components, such as hidden voice commands and bypassing skill vetting. However, the security and privacy implications of device management flaws have largely been unexplored. In this paper, we introduce DMC-Xplorer, a testing framework for the automated discovery of VA device management flaws. We first introduce VA description language (VDL), a new domain-specific language to create VA environments for testing, using VA and skill developer APIs. DMC-Xplorer then selects VA parameters (device names, types, vendors, actions, and skills) in a combinatorial approach and creates VA environments with VDL. It issues real voice commands to the environment via developer APIs and logs event traces. It validates the traces against three formal security properties that define the secure operation of VA platforms. Lastly, DMC-Xplorer identifies the root cause of property violations through intervention analysis to identify VA device management flaws. We exercised DMC-Xplorer on Amazon Alexa and Google Home and discovered two design flaws that can be exploited to launch four attacks. We show that malicious skills with default permissions can eavesdrop on privacy-sensitive device states, prevent users from controlling their devices, and disrupt the services on the VA cloud.
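The combinatorial parameter selection described above can be sketched roughly as follows. This is an illustrative sketch only: the parameter values and the `Environment` record are hypothetical placeholders, since the actual VDL syntax and developer-API calls are specific to DMC-Xplorer and not shown here.

```python
# Illustrative sketch of combinatorial VA-environment enumeration.
# All parameter values and the Environment structure are hypothetical.
from itertools import product
from dataclasses import dataclass

@dataclass(frozen=True)
class Environment:
    device_name: str
    device_type: str
    vendor: str
    action: str
    skill: str

DEVICE_NAMES = ["lamp", "desk lamp"]   # names may collide across vendors
DEVICE_TYPES = ["LIGHT", "PLUG"]
VENDORS = ["vendor_a", "vendor_b"]
ACTIONS = ["TurnOn", "TurnOff"]
SKILLS = ["skill_a", "skill_b"]

def generate_environments():
    """Enumerate every combination of VA parameters (full factorial)."""
    return [Environment(*combo)
            for combo in product(DEVICE_NAMES, DEVICE_TYPES,
                                 VENDORS, ACTIONS, SKILLS)]

envs = generate_environments()
# 2 * 2 * 2 * 2 * 2 = 32 candidate environments, each of which would then
# be instantiated via VDL and exercised with real voice commands
```

In practice a pairwise (rather than full-factorial) covering array would keep the number of environments manageable as the parameter lists grow.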
- Award ID(s): 2145744
- PAR ID: 10660666
- Publisher / Repository: The annual Privacy Enhancing Technologies Symposium (PETS)
- Date Published:
- Journal Name: Proceedings on Privacy Enhancing Technologies
- Volume: 2025
- Issue: 2
- ISSN: 2299-0984
- Page Range / eLocation ID: 719 to 733
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Amazon's voice-based assistant, Alexa, enables users to directly interact with various web services through natural language dialogues. It provides developers with the option to create third-party applications (known as Skills) to run on top of Alexa. While such applications ease users' interaction with smart devices and bolster a number of additional services, they also raise security and privacy concerns due to the personal setting they operate in. This paper aims to perform a systematic analysis of the Alexa skill ecosystem. We perform the first large-scale analysis of Alexa skills, obtained from seven different skill stores and totaling 90,194 unique skills. Our analysis reveals several limitations in the current skill vetting process. We show that not only can a malicious user publish a skill under any arbitrary developer/company name, but she can also make backend code changes after approval to coax users into revealing unwanted information. Next, we formalize the different skill-squatting techniques and evaluate their efficacy. We find that while certain approaches are more favorable than others, there is no substantial abuse of skill squatting in the real world. Lastly, we study the prevalence of privacy policies across different categories of skills and, more importantly, the policy content of skills that use the Alexa permission model to access sensitive user data. We find that around 23.3% of such skills do not fully disclose the data types associated with the permissions requested. We conclude by providing some suggestions for strengthening the overall ecosystem, thereby enhancing transparency for end-users.
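To illustrate the kind of skill-squatting analysis described above, a simple (hypothetical) check might flag invocation names that sit within a small edit distance of a target skill's name. The names and threshold below are made up, and real phonetic confusability requires more than string edit distance, but the sketch conveys the core idea:

```python
# Hypothetical skill-squatting screen: flag catalog invocation names
# that are suspiciously close to a target skill's invocation name.
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def squatting_candidates(target: str, catalog: list[str], threshold: int = 2):
    """Return catalog names close to (but not equal to) the target name."""
    return [name for name in catalog
            if name != target and edit_distance(target, name) <= threshold]
```

A production vetting pipeline would combine this with phoneme-level comparison (e.g. homophones like "fax"/"facts"), which plain edit distance misses.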
Many Internet of Things devices have voice user interfaces. One of the most popular voice user interfaces is Amazon's Alexa, which supports more than 50,000 third-party applications ("skills"). We study how Alexa's integration of these skills may confuse users. Our survey of 237 participants found that users do not understand that skills are often operated by third parties, that they often confuse third-party skills with native Alexa functions, and that they are unaware of the functions that the native Alexa system supports. Surprisingly, users who interact with Alexa more frequently are more likely to conclude that a third-party skill is a native Alexa function. The potential for misunderstanding creates new security and privacy risks: attackers can develop third-party skills that operate without users' knowledge or masquerade as native Alexa functions. To mitigate this threat, we make design recommendations to help users better distinguish native functionality and third-party skills, including audio and visual indicators of native and third-party contexts, as well as a consistent design standard to help users learn what functions are and are not possible on Alexa.
Voice assistants are becoming increasingly pervasive due to the convenience and automation they provide through the voice interface. However, such convenience often comes with unforeseen security and privacy risks. For example, encrypted traffic from voice assistants can leak sensitive information about their users' habits and lifestyles. In this paper, we present a taxonomy of fingerprinting voice commands on the most popular voice assistant platforms (Google, Alexa, and Siri). We also provide a deeper understanding of the feasibility of fingerprinting third-party applications and streaming services over the voice interface. Our analysis not only improves the state-of-the-art technique but also studies a more realistic setup for fingerprinting voice activities over encrypted traffic. Our proposed technique considers a passive network eavesdropper observing encrypted traffic from various devices within a home and, therefore, first detects the invocation/activation of voice assistants and then identifies which specific voice command is issued. Using an end-to-end system design, we show that it is possible to detect when a voice assistant is activated with 99% accuracy and then utilize the subsequent traffic pattern to infer more fine-grained user activities with around 77-80% accuracy.
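The fingerprinting idea can be sketched with a toy example: summarize an encrypted flow by coarse packet-size statistics, then match it against labeled traces with a nearest-neighbor rule. The feature set and the synthetic flows below are illustrative only, not the paper's actual classifier:

```python
# Toy traffic-fingerprinting sketch: an eavesdropper sees only packet
# sizes, yet coarse per-flow statistics can still distinguish commands.
from math import dist

def features(packet_sizes):
    """Coarse per-flow features: total bytes, packet count, mean size."""
    total = sum(packet_sizes)
    return (total, len(packet_sizes), total / len(packet_sizes))

def classify(trace, labeled_traces):
    """Label an unknown trace with the nearest labeled flow's command."""
    f = features(trace)
    return min(labeled_traces, key=lambda lt: dist(f, features(lt[1])))[0]

training = [
    ("weather", [310, 420, 1500, 1500, 900]),          # synthetic flows
    ("music",   [310, 1500, 1500, 1500, 1500, 1400]),
]
print(classify([300, 430, 1480, 1500, 910], training))  # prints "weather"
```

A realistic attack, as the paper's setup suggests, would first detect assistant activation in mixed household traffic and use richer features (packet timing, direction, bursts) with a trained classifier.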
In the smart home landscape, there is an increasing trend of homeowners sharing device access outside their homes. This practice presents unique challenges in terms of security and privacy. In this study, we evaluated the co-management features in smart home management systems to investigate 1) how homeowners establish and authenticate shared users' access, 2) the access control mechanisms, and 3) the management, monitoring, and revocation of access for shared devices. We conducted a systematic feature analysis of 11 Android and iOS mobile applications ("apps") and 2 open-source platforms designed for smart home management. Our study revealed that most smart home systems adopt a centralized control model that requires shared users to use the primary app for device access, while providing diverse sharing mechanisms, such as email or phone invitations and unique codes, each presenting distinct security and privacy advantages. Moreover, we discovered a variety of access control options, ranging from full access to granular controls such as time-based restrictions which, while enhancing security and convenience, necessitate careful management to avoid user confusion. Additionally, our findings highlighted the prevalence of comprehensive methods for monitoring shared users' access, with most systems providing detailed logs for added transparency and security, although there are some restrictions to safeguard homeowner privacy. Based on our findings, we recommend enhanced access control features to improve user experience in shared settings.
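The time-based restriction pattern mentioned above can be illustrated with a minimal, hypothetical access check; real smart home platforms enforce this server-side with their own policy models, so the structure below is purely a sketch:

```python
# Hypothetical time-windowed access grant for a shared smart home user.
from dataclasses import dataclass

@dataclass
class SharedAccess:
    user: str
    device: str
    start_hour: int   # inclusive, 0-23
    end_hour: int     # exclusive, 1-24

    def allowed(self, hour: int) -> bool:
        """True if the shared user may control the device at this hour."""
        if self.start_hour <= self.end_hour:
            return self.start_hour <= hour < self.end_hour
        # window wraps past midnight, e.g. 22 -> 6
        return hour >= self.start_hour or hour < self.end_hour

guest = SharedAccess("guest", "front_door_lock", 8, 20)
print(guest.allowed(9), guest.allowed(22))  # prints "True False"
```

The wrap-around branch is the kind of detail the study flags as a source of user confusion: a "10 PM to 6 AM" window behaves differently from a same-day window unless handled explicitly.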

