Title: Light Ears: Information Leakage via Smart Lights
Modern Internet-enabled smart lights promise energy efficiency and many additional capabilities over traditional lamps. However, these connected lights also create a new attack surface, which can be maliciously used to violate users' privacy and security. In this paper, we design and evaluate novel attacks that take advantage of light emitted by modern smart bulbs in order to infer users' private data and preferences. The first two attacks infer users' audio and video playback through systematic observation and analysis of the multimedia-visualization functionality of smart light bulbs. The third attack utilizes the infrared capabilities of such bulbs to create a covert channel, which can be used as a gateway to exfiltrate users' private data out of their secured home or office network. A comprehensive evaluation of these attacks in various real-life settings confirms their feasibility and affirms the need for new privacy protection mechanisms.
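The infrared covert channel described above can be illustrated with a minimal sketch: the exfiltration reduces to on-off keying, where each bit of the secret toggles the bulb's infrared emitter for a fixed time slot. The bit framing below is an illustrative assumption, not the paper's actual protocol.

```python
# Hypothetical sketch: encoding a payload as an on-off keyed (OOK) bit
# stream, the kind of modulation an infrared covert channel could use.
# Slot timing and hardware control are omitted; only the framing is shown.

def bytes_to_bits(data: bytes) -> list[int]:
    """Flatten bytes into a most-significant-bit-first bit list."""
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]

def bits_to_bytes(bits: list[int]) -> bytes:
    """Reassemble an MSB-first bit list back into bytes."""
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

# A receiver observing the bulb's infrared intensity recovers the bit
# stream, then the payload; the round trip below checks the framing.
secret = b"exfiltrated-data"
assert bits_to_bytes(bytes_to_bits(secret)) == secret
```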
Maiti, Anindya; Jadliwala, Murtuza
(GetMobile: Mobile Computing and Communications)
Modern Internet-enabled smart lights promise energy efficiency and many additional capabilities over traditional bulbs. However, these connected lights also expose a new attack surface, which can be maliciously used to violate users' privacy and security. We design and evaluate novel inference attacks that take advantage of the light emitted by these smart lights to infer sensitive user data and preferences.
Alshehri, Ahmed; Granley, Jacob; Yue, Chuan
(ACM Conference on Data and Application Security and Privacy)
The number of smart home IoT (Internet of Things) devices has grown rapidly in recent years. Alongside the great benefits brought by smart home devices, new threats have emerged. One major threat to smart home users is the compromise of their privacy by traffic analysis (TA) attacks. Researchers have shown that TA attacks can be performed successfully on either plain or encrypted traffic to identify smart home devices and infer user activities. Tunneling traffic is a very strong countermeasure to existing TA attacks. However, in this work, we design a Signature based Tunneled Traffic Analysis (STTA) attack that can be effective even on tunneled traffic. Using a popular smart home traffic dataset, we demonstrate that our attack can achieve 83% accuracy in identifying 14 smart home devices. We further design a simple defense mechanism based on adding uniform random noise to effectively protect against our TA attack without introducing excessive overhead. We prove that our defense mechanism achieves approximate differential privacy.
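The uniform-noise defense mentioned in this abstract can be sketched in a few lines: each observed packet size is padded with a uniformly distributed amount of filler so that per-device size signatures are blurred. The padding range is an assumed parameter, not one taken from the paper.

```python
# Hypothetical sketch of a uniform-random-noise defense against traffic
# analysis: pad every packet size with a random amount drawn uniformly
# from [0, max_pad]. The max_pad value here is illustrative.
import random

def pad_sizes(sizes: list[int], max_pad: int = 128, seed: int = 0) -> list[int]:
    """Add uniform random padding in [0, max_pad] to each packet size."""
    rng = random.Random(seed)
    return [s + rng.randint(0, max_pad) for s in sizes]

observed = [60, 1500, 340, 60]
padded = pad_sizes(observed)
# Padding only ever grows a packet, so sizes never shrink.
assert all(p >= s for p, s in zip(padded, observed))
```

A larger `max_pad` blurs signatures more aggressively at the cost of higher bandwidth overhead, which is the trade-off the abstract alludes to.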
Extensive recent research has shown that it is surprisingly easy to infer Amazon Alexa voice commands from their network traffic data. To prevent these traffic analytics (TA)-based inference attacks, smart home owners are considering deploying virtual private networks (VPNs) to safeguard their smart speakers. In this work, we design a new machine learning-powered attack framework—VoiceAttack—that can still accurately fingerprint voice commands in VPN-encrypted voice speaker network traffic. We evaluate VoiceAttack under 5 different real-world settings using Amazon Alexa and Google Home. Our results show that VoiceAttack could correctly infer voice command sentences with a Matthews Correlation Coefficient (MCC) of 0.68 in a closed-world setting and infer voice command categories with an MCC of 0.84 in an open-world setting by eavesdropping on VPN-encrypted network traffic data. This presents a significant risk to user privacy and security, as it suggests that external on-path attackers could still potentially intercept and decipher users' voice commands despite the VPN encryption. We then further examine the sensitivity of voice speaker commands to VoiceAttack and find that 134 voice speaker commands are highly vulnerable to it. We also present a defense approach—VoiceDefense—which injects appropriate traffic "noise" into voice speaker traffic. Our evaluation results show that VoiceDefense can effectively mitigate VoiceAttack on Amazon Echo and Google Home.
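The MCC values reported in this evaluation come from the standard Matthews Correlation Coefficient, which can be computed directly from a binary confusion matrix; the sketch below shows the formula, not the paper's multi-class evaluation pipeline.

```python
# Sketch of the Matthews Correlation Coefficient (MCC), the metric the
# evaluation reports: +1 is a perfect classifier, 0 is chance level,
# and -1 is total disagreement between prediction and ground truth.
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """MCC from binary confusion-matrix counts; 0.0 when undefined."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

assert mcc(tp=50, tn=50, fp=0, fn=0) == 1.0   # perfect classification
assert mcc(tp=0, tn=0, fp=10, fn=10) == -1.0  # perfectly wrong
```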
Yu, Keyang; Li, Qi; Chen, Dong; Hu, Liting
(ACM Transactions on Internet Technology)
Internet of Things (IoT) devices have been increasingly deployed in smart homes to automatically monitor and control their environments. Unfortunately, extensive recent research has shown that on-path external adversaries can infer and further fingerprint people's sensitive private information by analyzing IoT network traffic traces. In addition, most recent approaches that aim to defend against these malicious IoT traffic analytics cannot adequately protect user privacy with reasonable traffic overhead. In particular, these approaches often did not consider practical traffic reshaping limitations, users' daily routines, and users' privacy protection preferences in their design. To address these issues, we design a new low-cost, open-source, user-centric defense system—PrivacyGuard—that enables people to regain control over the privacy leakage of their IoT devices while still permitting the sophisticated IoT data analytics that are necessary for smart home automation. In essence, our approach employs deep convolutional generative adversarial network-assisted IoT device traffic signature learning, long short-term memory-based artificial traffic signature injection, and partial traffic reshaping to obfuscate private information that can be observed in IoT device traffic traces. We evaluate PrivacyGuard using IoT network traffic traces of 31 IoT devices from five smart homes and buildings. We find that PrivacyGuard can effectively prevent a wide range of state-of-the-art adversarial machine learning and deep learning based user in-home activity inference and fingerprinting attacks, and help users strike a balance between IoT data utility and privacy preservation.
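The artificial traffic-signature injection idea in this abstract can be sketched without the GAN/LSTM machinery: dummy packets shaped like a learned device signature are interleaved into the real trace, so an on-path observer cannot distinguish genuine activity from decoys. The signature here is a stand-in list of packet sizes and the injection cadence is an assumption, not PrivacyGuard's actual mechanism.

```python
# Hypothetical sketch of artificial traffic-signature injection: after
# every `every` real packets, emit one dummy packet whose size follows a
# (here hard-coded) learned signature, obfuscating the true trace.

def inject(real: list[int], signature: list[int], every: int = 3) -> list[int]:
    """Interleave signature-shaped dummy packets into a real packet trace."""
    out: list[int] = []
    sig_i = 0
    for i, pkt in enumerate(real, 1):
        out.append(pkt)
        if i % every == 0:
            out.append(signature[sig_i % len(signature)])
            sig_i += 1
    return out

trace = [100, 200, 300, 400, 500, 600]
mixed = inject(trace, signature=[999])
assert mixed == [100, 200, 300, 999, 400, 500, 600, 999]
```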
Shao, Minglai; Li, Jianxin; Yan, Qiben; Chen, Feng; Huang, Hongyi; Chen, Xunxun
(IEEE Transactions on Dependable and Secure Computing)
Mobile devices have become an integral part of our everyday lives. Users' increasing interaction with mobile devices raises significant concerns about various types of potential privacy leakage, among which location privacy draws the most attention. Specifically, mobile users' trajectories constructed from location data may be captured by adversaries to infer sensitive information. In previous studies, differential privacy has been utilized to protect published trajectory data with a rigorous privacy guarantee. The strong protection provided by differential privacy distorts the original locations or trajectories using stochastic noise to avoid privacy leakage. In this paper, we propose a novel location inference attack framework, iTracker, which simultaneously recovers multiple trajectories from differentially private trajectory data using the structured sparsity model. Compared with traditional recovery methods based on single trajectory prediction, iTracker, which takes advantage of the correlation among trajectories discovered by the structured sparsity model, is more effective in recovering multiple private trajectories simultaneously. iTracker successfully attacks existing privacy protection mechanisms based on differential privacy. We theoretically demonstrate the near-linear runtime of iTracker, and the experimental results on two real-world datasets show that iTracker outperforms existing recovery algorithms in recovering multiple trajectories.
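The differentially private trajectory publishing that iTracker attacks typically rests on the Laplace mechanism: each released coordinate is perturbed with Laplace noise scaled to sensitivity/epsilon. The sketch below shows that mechanism with illustrative parameters; it is not the specific scheme the paper attacks.

```python
# Sketch of the Laplace mechanism for trajectory perturbation: noise with
# scale = sensitivity / epsilon is added to each coordinate. Smaller
# epsilon means stronger privacy but noisier published locations.
import math
import random

def laplace_noise(rng: random.Random, scale: float) -> float:
    """Inverse-CDF sample from a zero-mean Laplace distribution."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def perturb(traj: list[tuple[float, float]],
            epsilon: float = 1.0,
            sensitivity: float = 1.0,
            seed: int = 0) -> list[tuple[float, float]]:
    """Return a differentially private version of an (x, y) trajectory."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    return [(x + laplace_noise(rng, scale), y + laplace_noise(rng, scale))
            for x, y in traj]

published = perturb([(0.0, 0.0), (1.0, 1.0), (2.0, 0.5)])
assert len(published) == 3
```

The structured-sparsity recovery in iTracker exploits the fact that correlated trajectories share support, letting an attacker denoise several perturbed traces jointly rather than one at a time.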
Maiti, Anindya, and Jadliwala, Murtuza. Light Ears: Information Leakage via Smart Lights. Retrieved from https://par.nsf.gov/biblio/10219758. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 3.3 Web. doi:10.1145/3351256.
Maiti, Anindya, & Jadliwala, Murtuza. Light Ears: Information Leakage via Smart Lights. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 3 (3). Retrieved from https://par.nsf.gov/biblio/10219758. https://doi.org/10.1145/3351256
Maiti, Anindya, and Jadliwala, Murtuza.
"Light Ears: Information Leakage via Smart Lights". Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 3 (3). https://doi.org/10.1145/3351256. https://par.nsf.gov/biblio/10219758.
@article{osti_10219758,
place = {Country unknown/Code not available},
title = {Light Ears: Information Leakage via Smart Lights},
url = {https://par.nsf.gov/biblio/10219758},
DOI = {10.1145/3351256},
abstractNote = {Modern Internet-enabled smart lights promise energy efficiency and many additional capabilities over traditional lamps. However, these connected lights also create a new attack surface, which can be maliciously used to violate users' privacy and security. In this paper, we design and evaluate novel attacks that take advantage of light emitted by modern smart bulbs in order to infer users' private data and preferences. The first two attacks infer users' audio and video playback through systematic observation and analysis of the multimedia-visualization functionality of smart light bulbs. The third attack utilizes the infrared capabilities of such bulbs to create a covert channel, which can be used as a gateway to exfiltrate users' private data out of their secured home or office network. A comprehensive evaluation of these attacks in various real-life settings confirms their feasibility and affirms the need for new privacy protection mechanisms.},
journal = {Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies},
volume = {3},
number = {3},
author = {Maiti, Anindya and Jadliwala, Murtuza},
}