Title: Combating False Data Injection Attacks on Human-Centric Sensing Applications
The recent prevalence of machine learning-based techniques and smart-device embedded sensors has enabled widespread human-centric sensing applications. However, these applications are vulnerable to false data injection attacks (FDIA) that replace a portion of the victim's sensory signal with forged data carrying a targeted trait. Such a mixture of forged and valid signals deceives the continuous authentication system (CAS) into accepting it as an authentic signal. Simultaneously, the targeted trait introduced into the signal misleads human-centric applications into generating a specific targeted inference, which may cause adverse outcomes. This paper evaluates the FDIA's deception efficacy against sensor-based authentication and human-centric sensing applications simultaneously, using two modalities: accelerometer and blood volume pulse signals. We identify variations of the FDIA, such as different forged-signal ratios and smoothed versus non-smoothed attack samples. Notably, we present a novel attack detection framework, Siamese-MIL, that leverages the Siamese neural network's generalizable discriminative capability and the multiple instance learning paradigm through a unique sensor data representation. Our exhaustive evaluation demonstrates Siamese-MIL's real-time execution capability and high efficacy across attack variations, sensors, and applications.
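The attack model summarized above (a forged segment carrying a targeted trait spliced into an otherwise valid signal, optionally smoothed at the splice) can be sketched as follows. The function name, the middle-of-signal splice location, and the moving-average smoothing are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def inject_forged_segment(valid, forged, ratio=0.3, smooth_window=0):
    """Replace a fraction `ratio` of `valid` with `forged` samples.

    Optionally smooth around the two splice boundaries with a moving
    average (illustrative; the paper's smoothing method may differ).
    """
    n = len(valid)
    k = int(n * ratio)                 # number of forged samples
    start = (n - k) // 2               # splice into the middle of the signal
    attacked = valid.copy()
    attacked[start:start + k] = forged[:k]
    if smooth_window > 1:
        kernel = np.ones(smooth_window) / smooth_window
        # Smooth only near the splice points, where the forgery is visible.
        for edge in (start, start + k):
            lo = max(0, edge - smooth_window)
            hi = min(n, edge + smooth_window)
            attacked[lo:hi] = np.convolve(attacked[lo:hi], kernel, mode="same")
    return attacked

# Example: 30% of a flat "valid" trace is overwritten with forged samples.
valid = np.zeros(100)
forged = np.ones(100)
attacked = inject_forged_segment(valid, forged, ratio=0.3)
```

A detector such as Siamese-MIL would then score bags of signal instances drawn from `attacked` rather than the raw stream; this sketch only illustrates the attacker's side.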
Award ID(s):
2124285 2526174
PAR ID:
10359528
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
Journal Name:
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Volume:
6
Issue:
2
ISSN:
2474-9567
Page Range / eLocation ID:
1 to 22
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Autonomous vehicles (AVs), equipped with numerous sensors such as camera, LiDAR, radar, and ultrasonic sensors, are revolutionizing the transportation industry. These sensors are expected to sense reliable information from the physical environment, facilitating the critical decision-making process of the AVs. Ultrasonic sensors, which detect obstacles at short distances, play an important role in assisted parking and blind spot detection. However, due to their weak security level, ultrasonic sensors are particularly vulnerable to signal injection attacks, in which attackers inject malicious acoustic signals to create fake obstacles and intentionally mislead the vehicle into making wrong decisions with disastrous consequences. In this paper, we systematically analyze the attack model of signal injection attacks toward moving vehicles. By considering the potential threats, we propose SoundFence, a physical-layer defense system which leverages the sensors' signal processing capability without requiring any additional equipment. SoundFence verifies benign measurement results and detects signal injection attacks by analyzing sensor readings and the physical-layer signatures of ultrasonic signals. Our experiment with commercial sensors shows that SoundFence detects most (more than 95%) of the abnormal sensor readings with very few false alarms, and it can also accurately distinguish the real echo from injected signals to identify injection attacks.
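One simple consistency check in the spirit of SoundFence's sensor-reading analysis (not its actual algorithm, which also uses physical-layer signatures) is to flag distance readings that change faster than vehicle motion permits; the sampling interval and speed bound below are assumed values.

```python
def flag_injected_readings(distances, dt=0.05, v_max=15.0):
    """Flag readings whose change exceeds what physical motion allows.

    distances: successive ultrasonic distance readings in meters.
    dt: sampling interval in seconds (assumed); v_max: maximum plausible
    relative speed in m/s (assumed). A fake obstacle injected mid-stream
    shows up as a physically impossible jump in distance.
    """
    flags = [False]  # the first reading has no predecessor to compare
    for prev, cur in zip(distances, distances[1:]):
        flags.append(abs(cur - prev) > v_max * dt)
    return flags

readings = [2.00, 1.95, 1.90, 0.40, 1.80]  # 0.40 m is an injected echo
print(flag_injected_readings(readings))    # [False, False, False, True, True]
```

Both the jump into and out of the fake obstacle exceed the per-sample motion budget (0.75 m here), so the injected reading is bracketed by two flags.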
  2. Despite the advent of numerous Internet-of-Things (IoT) applications, recent research demonstrates potential side-channel vulnerabilities exploiting sensors which are used for event and environment monitoring. In this paper, we propose a new side-channel attack, where a network of distributed non-acoustic sensors can be exploited by an attacker to launch an eavesdropping attack by reconstructing intelligible speech signals. Specifically, we present PitchIn to demonstrate the feasibility of speech reconstruction from non-acoustic sensor data collected offline across networked devices. Unlike speech reconstruction, which requires a high sampling frequency (e.g., > 5 kHz), typical applications using non-acoustic sensors do not rely on richly sampled data, presenting a challenge to the speech reconstruction attack. Hence, PitchIn leverages a distributed form of Time-Interleaved Analog-to-Digital Conversion (TI-ADC) to approximate a high sampling frequency while maintaining a low per-node sampling frequency. We demonstrate how distributed TI-ADC can be used to achieve intelligibility by processing an interleaved signal composed of different sensors across networked devices. We implement PitchIn and evaluate reconstructed speech signal intelligibility via user studies. PitchIn achieves word recognition accuracy as high as 79%. Though additional work is required to improve accuracy, our results suggest that eavesdropping using a fusion of non-acoustic sensors is a real and practical threat.
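PitchIn's core idea, approximating a high sampling rate by interleaving time-offset low-rate streams, can be illustrated with a toy round-robin merge. A real TI-ADC pipeline must handle clock skew and non-uniform offsets, which this sketch ignores.

```python
def interleave(channels):
    """Merge N per-node sample streams into one higher-rate stream.

    Each node samples at rate f, with node i offset in time by i/(N*f),
    so round-robin interleaving approximates a single stream at N*f.
    Assumes equal-length, time-aligned streams (an idealization).
    """
    return [sample for group in zip(*channels) for sample in group]

# Four nodes at 1 kHz with staggered offsets approximate one 4 kHz stream.
a = [0, 4, 8]
b = [1, 5, 9]
c = [2, 6, 10]
d = [3, 7, 11]
print(interleave([a, b, c, d]))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
```

The example values are chosen so that correct interleaving yields consecutive integers, making ordering errors obvious.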
  3. Video scene analysis is a well-investigated area where researchers have devoted efforts to detecting and classifying people and objects in the scene. However, real-life scenes are more complex: the intrinsic states of objects (e.g., machine operating states or human vital signs) are often overlooked by vision-based scene analysis. Recent work has proposed a radio frequency (RF) sensing technique, wireless vibrometry, that employs wireless signals to sense subtle vibrations from objects and infer their internal states. We envision that the combination of video scene analysis with wireless vibrometry forms a more comprehensive understanding of the scene, namely "rich scene analysis". However, the RF sensors used in wireless vibrometry only provide time series, and it is challenging to associate these time series data with multiple real-world objects. We propose a real-time RF-vision sensor fusion system, Capricorn, that efficiently builds a cross-modal correspondence between visual pixels and RF time series to better understand the complex nature of a scene. The vision sensors in Capricorn model the surrounding environment in 3D and obtain the distances of different objects. In the RF domain, the distance is proportional to the signal time-of-flight (ToF), and we can leverage the ToF to separate the RF time series corresponding to each object. The RF-vision sensor fusion in Capricorn brings multiple benefits. The vision sensors provide environmental contexts to guide the processing of RF data, which helps us select the most appropriate algorithms and models. Meanwhile, the RF sensor yields additional information that is originally invisible to vision sensors, providing insight into objects' intrinsic states. Our extensive evaluations show that Capricorn monitors multiple appliances' operating status in real time with an accuracy of 97%+ and recovers vital signs such as respiration from multiple people.
A video (https://youtu.be/b-5nav3Fi78) demonstrates the capability of Capricorn. 
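The ToF-based separation described above can be sketched as a range-bin lookup: a vision-estimated distance selects which bin of the per-frame RF range profile carries a given object's vibration signal. The uniform range resolution, function names, and example numbers are assumptions for illustration, not Capricorn's implementation.

```python
def range_bin_for_object(distance_m, range_resolution_m):
    """Map a vision-estimated object distance to an RF range bin.

    Round-trip ToF is t = 2*d/c, so with uniform range bins of width
    `range_resolution_m` the bin index is simply d / resolution
    (assumed uniform resolution; real radars may differ).
    """
    return int(round(distance_m / range_resolution_m))

def series_for_object(range_profiles, distance_m, resolution_m):
    """Extract one object's time series from per-frame range profiles."""
    b = range_bin_for_object(distance_m, resolution_m)
    return [frame[b] for frame in range_profiles]

# Example: a camera places an appliance at 1.2 m; 0.05 m bins -> bin 24.
print(range_bin_for_object(1.2, 0.05))  # 24
```

With several tracked objects, repeating this lookup per object yields one RF time series each, which is the cross-modal correspondence the abstract describes.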
  4. In this work, we present two embedded soft optical waveguide sensors designed for real-time onboard configuration sensing in soft actuators for robotic locomotion. Extending the contributions of our collaborators who employed external camera systems to monitor the gaits of twisted-beam structures, we strategically integrate our OptiGap sensor system into these structures to monitor their dynamic behavior. The system is validated through machine learning models that correlate sensor data with camera-based motion tracking, achieving high accuracy in predicting forward or reverse gaits and validating its capability for real-time sensing. Our second sensor, consisting of a square cross-section fiber pre-twisted to 360 degrees, is designed to detect the chirality of reconfigurable twisted beams. Experimental results confirm the sensor’s effectiveness in capturing variations in light transmittance corresponding to twist angle, serving as a reliable chirality sensor. The successful integration of these sensors not only improves the adaptability of soft robotic systems but also opens avenues for advanced control algorithms. 
  5.
    Abstract Nanophotonic resonators can confine light to deep-subwavelength volumes with highly enhanced near-field intensity and therefore are widely used for surface-enhanced infrared absorption spectroscopy in various molecular sensing applications. The enhanced signal is mainly contributed by molecules in photonic hot spots, which are regions of a nanophotonic structure with high-field intensity. Therefore, delivery of the majority of, if not all, analyte molecules to hot spots is crucial for fully utilizing the sensing capability of an optical sensor. However, for most optical sensors, simple and straightforward methods of introducing an aqueous analyte to the device, such as applying droplets or spin-coating, cannot achieve targeted delivery of analyte molecules to hot spots. Instead, analyte molecules are usually distributed across the entire device surface, so the majority of the molecules do not experience enhanced field intensity. Here, we present a nanophotonic sensor design with passive molecule trapping functionality. When an analyte solution droplet is introduced to the sensor surface and gradually evaporates, the device structure can effectively trap most precipitated analyte molecules in its hot spots, significantly enhancing the sensor spectral response and sensitivity performance. Specifically, our sensors produce a reflection change of a few percentage points in response to trace amounts of the amino-acid proline or glucose precipitate with a picogram-level mass, which is significantly less than the mass of a molecular monolayer covering the same measurement area. The demonstrated strategy for designing optical sensor structures may also be applied to sensing nano-particles such as exosomes, viruses, and quantum dots. 