

Title: SpyCon: Adaptation Based Spyware in Human-in-the-Loop IoT
Personalized IoT systems adapt their behavior based on contextual information, such as user behavior and location. Unfortunately, the fact that personalized IoT systems adapt to user context opens a side channel that leaks private information about the user. To that end, we start by studying the extent to which a malicious eavesdropper can monitor the actions taken by an IoT system and extract the user's private information. In particular, we show two concrete instantiations (in the context of mobile phones and smart homes) of a new category of spyware which we refer to as Context-Aware Adaptation Based Spyware (SpyCon). Experimental evaluations show that the developed SpyCon can predict users' daily behavior with an accuracy of 90.3%. Because SpyCon is a new type of spyware with no known prior signature or behavior, traditional spyware detection based on code signatures or system behavior is not adequate to detect it. We discuss possible detection and mitigation mechanisms that can hinder the effect of SpyCon.
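The core inference step behind such an eavesdropper can be illustrated with a minimal sketch: an observer records the adaptation events an IoT system applies (for example, screen brightness and ringer volume on a phone) and maps each observation to the nearest known context signature. The signatures and values below are invented for illustration; the paper's actual attack learns user contexts from real adaptation traces.

```python
# Hypothetical sketch: an eavesdropper classifies observed IoT
# adaptation events into user contexts by nearest-signature matching.
# Signature values are invented for illustration only.
import math

# (brightness %, ringer volume %) typically applied in each context
SIGNATURES = {
    "sleeping":  (5, 0),
    "at_work":   (70, 20),
    "commuting": (90, 100),
}

def infer_context(observed):
    """Classify an observed (brightness, volume) pair by nearest signature."""
    return min(SIGNATURES,
               key=lambda c: math.dist(SIGNATURES[c], observed))

print(infer_context((8, 0)))   # closest to the "sleeping" signature
```

Even this crude nearest-neighbor rule shows why context-driven adaptation acts as a side channel: the device's visible settings are correlated with the private context that triggered them.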
Award ID(s):
1705135
NSF-PAR ID:
10112222
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Workshop on the Internet of Safe Things (SafeThings 2019)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Reinforcement learning (RL) presents numerous benefits compared to rule-based approaches in various applications. Privacy concerns have grown with the widespread use of RL trained with privacy-sensitive data in IoT devices, especially for human-in-the-loop systems. On the one hand, RL methods enhance the user experience by trying to adapt to the highly dynamic nature of humans. On the other hand, trained policies can leak the user's private information. Recent attention has been drawn to designing privacy-aware RL algorithms while maintaining an acceptable system utility. A central challenge in designing privacy-aware RL, especially for human-in-the-loop systems, is that humans have intrinsic variability, and their preferences and behavior evolve. The effect of one privacy leak mitigation can differ for the same human or across different humans over time. Hence, we cannot design one fixed model for privacy-aware RL that fits all. To that end, we propose adaPARL, an adaptive approach for privacy-aware RL, especially for human-in-the-loop IoT systems. adaPARL provides a personalized privacy-utility trade-off depending on human behavior and preference. We validate the proposed adaPARL on two IoT applications, namely (i) Human-in-the-Loop Smart Home and (ii) Human-in-the-Loop Virtual Reality (VR) Smart Classroom. Results obtained on these two applications validate the generality of adaPARL and its ability to provide a personalized privacy-utility trade-off. On average, adaPARL improves the utility by 57% while reducing the privacy leak by 23%.
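One common way to realize such a privacy-utility trade-off is reward shaping: the agent's reward blends task utility with a privacy-leak penalty, and the penalty weight is adapted per user. The functions and constants below are a hypothetical sketch of this idea, not adaPARL's actual algorithm.

```python
# Hypothetical sketch of a personalized privacy-utility trade-off:
# reward = utility minus a weighted privacy-leak penalty, where the
# weight lambda adapts to each user's observed leak level.
def shaped_reward(utility, leak, lam):
    """Blend task utility with a weighted privacy-leak penalty."""
    return utility - lam * leak

def adapt_lambda(lam, observed_leak, target_leak=0.1, step=0.5):
    """Raise lambda when the observed leak exceeds the user's target."""
    return max(0.0, lam + step * (observed_leak - target_leak))

lam = 1.0
lam = adapt_lambda(lam, observed_leak=0.3)   # leak above target -> lambda grows
print(round(lam, 2), round(shaped_reward(10.0, 0.3, lam), 2))
```

Because lambda is updated from each user's own observed leakage, two users with different behavior end up with different operating points on the privacy-utility curve, which is the personalization the abstract describes.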
  2. Vincent Poor and Zhu Han (Ed.)
    Recently, blockchain has received much attention from the mobility-centric Internet of Things (IoT). It is deemed the key to ensuring the built-in integrity of information and security of immutability by design in the peer-to-peer (P2P) network of mobile devices. In a permissioned blockchain, the authority of the system has control over the identities of its users. Such information can allow an ill-intentioned authority to map identities with their spatiotemporal data, which undermines the location privacy of a mobile user. In this paper, we study the location privacy preservation problem in the context of permissioned blockchain-based IoT systems under three conditions. First, the authority of the blockchain holds the public and private key distribution task in the system. Second, there exists a spatiotemporal correlation between consecutive location-based transactions. Third, users communicate with each other through short-range communication technologies, which constitutes a proof of location (PoL) for their actual locations. We show that, in a permissioned blockchain with an authority and in the presence of a PoL, existing approaches cannot be applied in a plug-and-play fashion to protect location privacy. In this context, we propose BlockPriv, an obfuscation technique that quantifies, both theoretically and experimentally, the relationship between privacy and utility in order to dynamically protect the privacy of sensitive locations in the permissioned blockchain.
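The obfuscation idea can be illustrated with a minimal suppression rule: location-based transactions that fall within a radius of a sensitive site are withheld, trading utility (fewer published transactions) for privacy. The coordinates, radius, and decision rule below are invented for illustration and are not BlockPriv's actual quantification.

```python
# Hypothetical sketch of sensitive-location obfuscation: withhold any
# transaction whose location falls inside a sensitive site's radius.
import math

SENSITIVE = [(40.7128, -74.0060)]   # invented sensitive coordinates
RADIUS = 0.01                        # suppression radius in degrees

def publish(location):
    """Publish only if the location is outside every sensitive radius."""
    return all(math.dist(location, s) > RADIUS for s in SENSITIVE)

print(publish((40.7129, -74.0061)))  # near a sensitive site -> False
print(publish((40.8000, -74.1000)))  # far away -> True
```

A real scheme must also account for the spatiotemporal correlation the abstract mentions: suppressing a single point is not enough when adjacent published transactions let an observer interpolate the hidden location.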
  3. As the digital world gets increasingly ingrained in our daily lives, cyberattacks—especially those involving malware—are growing more complex and common, which calls for developing innovative safeguards. Keylogger spyware, which combines keylogging and spyware functionalities, is one of the most insidious types of cyberattacks. This malicious software stealthily monitors and records user keystrokes, amassing sensitive data, such as passwords and confidential personal information, which can then be exploited. This research introduces a novel browser extension designed to effectively thwart keylogger spyware attacks. The extension is underpinned by a cutting-edge algorithm that meticulously analyzes input-related processes, promptly identifying and flagging any malicious activities. Upon detection, the extension empowers users with the immediate choice to terminate the suspicious process or validate its authenticity, thereby placing crucial real-time control in the hands of the end user. The methodology used guarantees the extension's mobility and adaptability across various platforms and devices. This paper extensively details the development of the browser extension, from its first conceptual design to its rigorous performance evaluation. The results show that the extension considerably strengthens end-user protection against cyber risks, resulting in a safer web browsing experience. The research substantiates the extension's efficacy and significant potential in reinforcing online security standards, demonstrating its ability to make web surfing safer through extensive analysis and testing. 
  4. We present and analyze UDM, a new protocol for user discovery in anonymous communication systems that minimizes the information disclosed to the system and users. Unlike existing systems, including those based on private set intersection, UDM learns nothing about the contact lists and social graphs of the users, is not vulnerable to off-line dictionary attacks that expose contact lists, does not reveal platform identifiers to users without the owner’s explicit permission, and enjoys low computation and communication complexity. UDM solves the following user-discovery problem. User Alice wishes to communicate with Bob over an anonymous communication system, such as cMix or Tor. Initially, each party knows each other’s public contact identifier (e.g., email address or phone number), but neither knows the other’s private platform identifier in the communication system. If both parties wish to communicate with each other, UDM enables them to establish a shared key and learn each other’s private platform identifier. UDM uses an untrusted user-discovery system, which processes and stores only public information, hashed values, or values encrypted with keys it does not know. Therefore, UDM cannot learn any information about the social graphs of its users. Using the anonymous communication system, each pair of users who wish to communicate with each other uploads to the user-discovery system their private platform identifier, encrypted with their shared key. Indexing their request by a truncated cryptographic hash of their shared key, each user can then download each other’s encrypted private platform identifier. 
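The exchange described in the last two sentences can be sketched with stdlib primitives: index the request by a truncated hash of the shared key, and store the platform identifier encrypted under that key. The toy XOR stream cipher and names below are stand-ins; a real deployment would use vetted key exchange and authenticated encryption.

```python
# Sketch of the UDM-style upload/download exchange using stdlib
# stand-ins. The "cipher" is a toy XOR keystream for illustration only.
import hashlib

def index_for(shared_key: bytes) -> str:
    """Truncated cryptographic hash of the shared key, used as the index."""
    return hashlib.sha256(shared_key).hexdigest()[:16]

def xor_crypt(shared_key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256-derived keystream."""
    stream = hashlib.sha256(b"enc" + shared_key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

key = b"alice-bob-shared-key"          # assumed established beforehand

# Bob uploads his encrypted private platform identifier under the index.
store = {index_for(key): xor_crypt(key, b"platform-id-bob")}

# Alice derives the same index, downloads, and decrypts (XOR self-inverts).
recovered = xor_crypt(key, store[index_for(key)])
print(recovered)   # b'platform-id-bob'
```

Note that the discovery server only ever sees the truncated hash and ciphertext, which matches the abstract's claim that it stores nothing from which social graphs could be learned.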
  5.
    The standard pipeline for many vision tasks uses a conventional camera to capture an image that is then passed to a digital processor for information extraction. In some deployments, such as private locations, the captured digital imagery contains sensitive information exposed to digital vulnerabilities such as spyware, Trojans, etc. However, in many applications, the full imagery is unnecessary for the vision task at hand. In this paper we propose an optical and analog system that preprocesses the light from the scene before it reaches the digital imager to destroy sensitive information. We explore analog and optical encodings consisting of easily implementable operations such as convolution, pooling, and quantization. We perform a case study to evaluate how such encodings can destroy face identity information while preserving enough information for face detection. The encoding parameters are learned via an alternating optimization scheme based on adversarial learning with deep neural networks. We name our system CAnOPIC (Camera with Analog and Optical Privacy-Integrating Computations) and show that it has better performance in terms of both privacy and utility than conventional optical privacy-enhancing methods such as blurring and pixelation. 
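The spirit of the encoding pipeline can be shown on a tiny grayscale image in a few lines: average pooling followed by coarse quantization discards fine detail (such as identity cues) while keeping coarse structure (enough, say, for detection). This pure-Python sketch is illustrative only; CAnOPIC performs these operations optically and in analog, with parameters learned adversarially.

```python
# Hypothetical digital sketch of the convolution/pooling/quantization
# encoding described above, applied to a tiny 4x4 grayscale image.
def pool_2x2(img):
    """Average-pool non-overlapping 2x2 blocks."""
    return [[sum(img[r + i][c + j] for i in range(2) for j in range(2)) / 4
             for c in range(0, len(img[0]), 2)]
            for r in range(0, len(img), 2)]

def quantize(img, levels=4):
    """Quantize pixel values in [0, 255] down to a few coarse levels."""
    step = 256 / levels
    return [[int(p // step) for p in row] for row in img]

image = [[10, 20, 200, 210],
         [30, 40, 220, 230],
         [90, 90, 15, 15],
         [90, 90, 15, 15]]
print(quantize(pool_2x2(image)))   # [[0, 3], [1, 0]]
```

The 4x4 input collapses to a 2x2 map with four intensity levels: fine per-pixel information is destroyed before any digital processor sees it, which is the privacy property the optical front end is designed to provide.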