Title: Combating False Data Injection Attacks on Human-Centric Sensing Applications
The recent prevalence of machine learning-based techniques and smart-device embedded sensors has enabled widespread human-centric sensing applications. However, these applications are vulnerable to false data injection attacks (FDIA) that replace a portion of the victim's sensory signal with forged data carrying a targeted trait. Such a mixture of forged and valid signals deceives the continuous authentication system (CAS) into accepting it as an authentic signal. Simultaneously, the targeted trait introduced into the signal misleads human-centric applications into generating a specific targeted inference, which may cause adverse outcomes. This paper evaluates the FDIA's deception efficacy against sensor-based authentication and human-centric sensing applications simultaneously using two modalities: accelerometer and blood volume pulse signals. We identify variations of the FDIA, such as different forged-signal ratios and smoothed versus non-smoothed attack samples. Notably, we present a novel attack detection framework named Siamese-MIL that leverages the generalizable discriminative capability of Siamese neural networks and the multiple instance learning paradigm through a unique sensor data representation. Our exhaustive evaluation demonstrates Siamese-MIL's real-time execution capability and high efficacy across attack variations, sensors, and applications.
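The abstract does not detail the implementation, but the bag-level decision rule behind a Siamese-plus-multiple-instance-learning detector can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: the random projection stands in for a trained Siamese branch, and the window size, embedding dimension, and threshold rule are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained Siamese branch: both inputs are
# embedded with the SAME weights, as in a Siamese network.
W = rng.normal(size=(16, 64))

def embed(window):
    """Map a 64-sample sensor window to a 16-d embedding."""
    return W @ window

def distance(a, b):
    """Dissimilarity between two windows in embedding space."""
    return np.linalg.norm(embed(a) - embed(b))

def bag_score(reference, bag):
    """Multiple-instance pooling: a signal (bag of windows) is scored by
    its MOST anomalous window, so one forged segment flags the whole bag."""
    return max(distance(reference, w) for w in bag)

reference = rng.normal(size=64)                        # enrolled genuine window
genuine = [reference + 0.05 * rng.normal(size=64) for _ in range(8)]
forged = list(genuine)
forged[3] = 5.0 * rng.normal(size=64)                  # one injected segment

threshold = 1.5 * bag_score(reference, genuine)        # illustrative threshold
print(bag_score(reference, genuine) <= threshold)      # genuine bag accepted
print(bag_score(reference, forged) > threshold)        # partial forgery flagged
```

The max-pooling step is what makes the scheme sensitive to FDIA variants that forge only a small ratio of the signal: a single anomalous instance dominates the bag score even when every other window is valid.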
Award ID(s):
2124285
PAR ID:
10602986
Author(s) / Creator(s):
Publisher / Repository:
Association for Computing Machinery (ACM)
Date Published:
Journal Name:
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Volume:
6
Issue:
2
ISSN:
2474-9567
Format(s):
Medium: X
Size(s):
p. 1-22
Sponsoring Org:
National Science Foundation
More Like this
  1. Highly sensitive and multimodal sensors have recently emerged for a wide range of applications, including epidermal electronics, robotics, health-monitoring devices, and human–machine interfaces. However, cross-sensitivity prevents accurate measurement of the target input signals when multiple of them are present simultaneously. Therefore, the selection of multifunctional materials and the design of sensor structures play a significant role in multimodal sensors with decoupled sensing mechanisms. Hence, this review article introduces various methods to decouple different input signals to realize truly multimodal sensors. Early efforts explored different outputs to distinguish the corresponding input signals applied to the sensor in sequence. Next, this study discusses methods for interference suppression, signal correction, and various decoupling strategies based on different outputs to detect multiple inputs simultaneously. Recent insights into material properties, structure effects, and sensing mechanisms for recognizing different input signals are highlighted. These decoupling methods also help avoid complicated signal-processing steps and enable multimodal sensors with high accuracy for applications in bioelectronics, robotics, and human–machine interfaces. Finally, current challenges and potential opportunities are discussed to motivate future technological breakthroughs.
  2.
    Autonomous vehicles (AVs), equipped with numerous sensors such as cameras, LiDAR, radar, and ultrasonic sensors, are revolutionizing the transportation industry. These sensors are expected to sense reliable information from the physical environment, facilitating the critical decision-making processes of AVs. Ultrasonic sensors, which detect obstacles at short range, play an important role in assisted parking and blind-spot detection. However, due to their weak security level, ultrasonic sensors are particularly vulnerable to signal injection attacks, in which attackers inject malicious acoustic signals to create fake obstacles and intentionally mislead the vehicle into making wrong decisions with potentially disastrous consequences. In this paper, we systematically analyze the model of signal injection attacks against moving vehicles. Considering the potential threats, we propose SoundFence, a physical-layer defense system that leverages the sensors' signal-processing capability without requiring any additional equipment. SoundFence verifies benign measurement results and detects signal injection attacks by analyzing sensor readings and the physical-layer signatures of ultrasonic signals. Our experiments with commercial sensors show that SoundFence detects most (more than 95%) abnormal sensor readings with very few false alarms, and it can also accurately distinguish real echoes from injected signals to identify injection attacks.
  3. Despite the advent of numerous Internet-of-Things (IoT) applications, recent research demonstrates potential side-channel vulnerabilities that exploit the sensors used for event and environment monitoring. In this paper, we propose a new side-channel attack in which a network of distributed non-acoustic sensors can be exploited by an attacker to launch an eavesdropping attack by reconstructing intelligible speech signals. Specifically, we present PitchIn to demonstrate the feasibility of speech reconstruction from non-acoustic sensor data collected offline across networked devices. Unlike speech reconstruction, which requires a high sampling frequency (e.g., > 5 kHz), typical applications using non-acoustic sensors do not rely on richly sampled data, presenting a challenge to the speech reconstruction attack. Hence, PitchIn leverages a distributed form of Time-Interleaved Analog-to-Digital Conversion (TI-ADC) to approximate a high sampling frequency while maintaining a low per-node sampling frequency. We demonstrate how distributed TI-ADC can be used to achieve intelligibility by processing an interleaved signal composed of samples from different sensors across networked devices. We implement PitchIn and evaluate the intelligibility of the reconstructed speech via user studies. PitchIn achieves word recognition accuracy as high as 79%. Though additional work is required to improve accuracy, our results suggest that eavesdropping using a fusion of non-acoustic sensors is a real and practical threat.
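The core interleaving idea can be sketched numerically: if n nodes each sample at a low rate fs but with sampling phases staggered by 1/(n·fs), merging their streams round-robin reproduces what a single ADC running at n·fs would capture. The rates, node count, and test tone below are illustrative values, not the paper's parameters.

```python
import numpy as np

fs_node = 200                 # per-node sampling rate (Hz): far too low for speech
n_nodes = 25                  # sensors with staggered sampling phases
fs_eff = fs_node * n_nodes    # effective interleaved rate: 5000 Hz
n_per_node = 20               # samples captured by each node

tone = 440.0                  # test tone well above the per-node Nyquist limit (100 Hz)

def node_samples(k):
    """Node k samples the shared signal at fs_node, delayed by k / fs_eff."""
    t = np.arange(n_per_node) / fs_node + k / fs_eff
    return np.sin(2 * np.pi * tone * t)

# Round-robin interleave: sample i of the merged stream comes from node i % n_nodes.
per_node = np.stack([node_samples(k) for k in range(n_nodes)])
merged = per_node.T.reshape(-1)

# The merged stream matches what one ADC sampling at fs_eff would capture.
direct = np.sin(2 * np.pi * tone * np.arange(n_per_node * n_nodes) / fs_eff)
print(np.allclose(merged, direct))  # True
```

Each node stays well under the 5 kHz needed for speech; only the phase-staggered fusion of all 25 streams reaches the effective rate, which is what makes the attack feasible on commodity low-rate sensors.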
  4. Video scene analysis is a well-investigated area where researchers have devoted efforts to detecting and classifying people and objects in the scene. However, real-life scenes are more complex: the intrinsic states of the objects (e.g., machine operating states or human vital signals) are often overlooked by vision-based scene analysis. Recent work has proposed a radio frequency (RF) sensing technique, wireless vibrometry, that employs wireless signals to sense subtle vibrations from objects and infer their internal states. We envision that combining video scene analysis with wireless vibrometry forms a more comprehensive understanding of the scene, namely "rich scene analysis". However, the RF sensors used in wireless vibrometry only provide time series, and it is challenging to associate these time-series data with multiple real-world objects. We propose a real-time RF-vision sensor fusion system, Capricorn, that efficiently builds a cross-modal correspondence between visual pixels and RF time series to better understand the complex nature of a scene. The vision sensors in Capricorn model the surrounding environment in 3D and obtain the distances of different objects. In the RF domain, distance is proportional to the signal's time-of-flight (ToF), and we can leverage the ToF to separate the RF time series corresponding to each object. The RF-vision sensor fusion in Capricorn brings multiple benefits. The vision sensors provide environmental context to guide the processing of RF data, which helps select the most appropriate algorithms and models. Meanwhile, the RF sensor yields additional information that is invisible to vision sensors, providing insight into objects' intrinsic states. Our extensive evaluations show that Capricorn monitors the operating status of multiple appliances in real time with 97%+ accuracy and recovers vital signals such as respiration from multiple people. 
A video (https://youtu.be/b-5nav3Fi78) demonstrates the capability of Capricorn.
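Since round-trip ToF is 2d/c, a camera-supplied distance directly indexes the radar range bin whose slow-time series belongs to that object. The sketch below is a toy illustration of that gating step; the range resolution, bin layout, and vibration rates are made-up values, not Capricorn's actual pipeline.

```python
import numpy as np

C = 3e8            # speed of light (m/s)
RANGE_RES = 0.05   # hypothetical range-bin resolution (m)

def tof_to_bin(distance_m):
    """Camera distance -> round-trip ToF -> radar range-bin index."""
    tof = 2 * distance_m / C                 # round-trip time of flight (s)
    return int(round(tof * C / 2 / RANGE_RES))

def per_object_series(range_profiles, distance_m):
    """Slice the slow-time series for the object the camera placed at
    `distance_m`; range_profiles has shape (n_frames, n_bins)."""
    return range_profiles[:, tof_to_bin(distance_m)]

# Toy scene: two objects at 1.0 m and 2.5 m, vibrating at different rates.
frames, bins = 256, 100
t = np.arange(frames)
profiles = np.zeros((frames, bins))
profiles[:, tof_to_bin(1.0)] = np.sin(2 * np.pi * 0.05 * t)  # slow (e.g., breathing)
profiles[:, tof_to_bin(2.5)] = np.sin(2 * np.pi * 0.20 * t)  # faster vibration

series_near = per_object_series(profiles, 1.0)  # isolates the slow signal
series_far = per_object_series(profiles, 2.5)   # isolates the fast signal
```

The vision-derived distance does the disambiguation: without it, the two vibration signatures arrive as one undifferentiated set of RF returns, which is exactly the association problem the abstract describes.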
  5. In this work, we present two embedded soft optical waveguide sensors designed for real-time onboard configuration sensing in soft actuators for robotic locomotion. Extending the contributions of our collaborators, who employed external camera systems to monitor the gaits of twisted-beam structures, we strategically integrate our OptiGap sensor system into these structures to monitor their dynamic behavior. The system is validated through machine learning models that correlate sensor data with camera-based motion tracking, achieving high accuracy in predicting forward or reverse gaits and demonstrating real-time sensing capability. Our second sensor, a square cross-section fiber pre-twisted to 360 degrees, is designed to detect the chirality of reconfigurable twisted beams. Experimental results confirm the sensor's effectiveness in capturing variations in light transmittance corresponding to twist angle, serving as a reliable chirality sensor. The successful integration of these sensors not only improves the adaptability of soft robotic systems but also opens avenues for advanced control algorithms.