Title: Unifying Threats against Information Integrity in Participatory Crowd Sensing
This article proposes a unified threat landscape for Participatory Crowd Sensing (P-CS) systems. Specifically, it focuses on attacks from organized malicious actors who may use knowledge of the P-CS platform's operations and exploit algorithmic weaknesses in the AI-based event-trust, user-reputation, decision-making, or recommendation models deployed to preserve information integrity in P-CS. We emphasize intent-driven malicious behaviors by advanced adversaries and how attacks are crafted to achieve the intended impacts. Three dimensions of the threat model are introduced: attack goals, attack types, and attack strategies. We expand on how the various strategies are linked with different attack types and goals, providing formal definitions and underscoring their relevance and impact on the P-CS platform.
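For illustration only, the short Python sketch below shows one way the three dimensions of such a threat model could be linked in code. The enum members and the ThreatProfile structure are hypothetical placeholders, not the article's actual taxonomy.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical enumerations; the concrete goals, types, and strategies
# defined in the article may differ.
class AttackGoal(Enum):
    DEGRADE_EVENT_TRUST = auto()
    INFLATE_USER_REPUTATION = auto()
    BIAS_RECOMMENDATIONS = auto()

class AttackType(Enum):
    DATA_FALSIFICATION = auto()
    COLLUSION = auto()
    SYBIL = auto()

class AttackStrategy(Enum):
    PERSISTENT = auto()
    ON_OFF = auto()
    ORCHESTRATED = auto()

@dataclass(frozen=True)
class ThreatProfile:
    """Links an attack goal to the attack types and strategies that can realize it."""
    goal: AttackGoal
    types: tuple[AttackType, ...]
    strategies: tuple[AttackStrategy, ...]

# Example: an organized adversary inflating reputation through colluding
# Sybil accounts that persistently report falsified events.
profile = ThreatProfile(
    goal=AttackGoal.INFLATE_USER_REPUTATION,
    types=(AttackType.SYBIL, AttackType.COLLUSION, AttackType.DATA_FALSIFICATION),
    strategies=(AttackStrategy.PERSISTENT, AttackStrategy.ORCHESTRATED),
)
print(profile)
```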
Award ID(s):
2030611 2017289
PAR ID:
10434255
Author(s) / Creator(s):
Editor(s):
Kawsar, Fahim
Date Published:
Journal Name:
IEEE Pervasive Computing
ISSN:
1536-1268
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Emerging technologies drive the ongoing transformation of Intelligent Transportation Systems (ITS). This transformation has given rise to cybersecurity concerns, among which data poisoning attacks emerge as a new threat as ITS increasingly relies on data. In data poisoning attacks, attackers inject malicious perturbations into datasets, potentially leading to inaccurate results in offline learning and real-time decision-making processes. This paper concentrates on data poisoning attack models against ITS. We identify the main ITS data sources vulnerable to poisoning attacks and the application scenarios that enable staging such attacks. A general framework is developed following a rigorous study process from cybersecurity while also considering specific ITS application needs. Data poisoning attacks against ITS are reviewed and categorized following the framework. We then discuss the current limitations of these attack models and future research directions. Our work can serve as a guideline to better understand the threat of data poisoning attacks against ITS applications, while also giving a perspective on the future development of trustworthy ITS.
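For intuition only, the following Python sketch shows the basic perturbation-injection idea behind data poisoning on a toy, synthetic traffic dataset. The function name, perturbation bound, and label-flipping rule are illustrative assumptions, not an attack model from the paper.

```python
import numpy as np

def poison_dataset(X, y, frac=0.05, eps=5.0, rng=None):
    """Toy data-poisoning sketch: perturb a fraction of feature rows by a
    bounded amount and flip their binary labels. Purely illustrative; real
    ITS poisoning attacks are far more targeted."""
    rng = np.random.default_rng(rng)
    X_p, y_p = X.copy(), y.copy()
    idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
    X_p[idx] += rng.uniform(-eps, eps, size=X_p[idx].shape)  # bounded perturbation
    y_p[idx] = 1 - y_p[idx]                                  # flip binary labels
    return X_p, y_p

# Example: 1000 synthetic samples with a binary congestion label.
X = np.random.rand(1000, 4) * 100
y = (X[:, 0] > 50).astype(int)
X_p, y_p = poison_dataset(X, y, frac=0.05, eps=5.0, rng=0)
print((y != y_p).sum(), "labels flipped")
```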
  2. With the increasing popularity of containerized applications, container registries have hosted millions of repositories that allow developers to store, manage, and share their software. Unfortunately, they have also become a hotbed for adversaries to spread malicious images to the public. In this paper, we present the first in-depth study on the vulnerability of container registries to typosquatting attacks, in which adversaries intentionally upload malicious images with an identification similar to that of a benign image so that users may accidentally download malicious images due to typos. We demonstrate that such typosquatting attacks could pose a serious security threat in both public and private registries as well as across multiple platforms. To shed light on the container registry typosquatting threat, we first conduct a measurement study and a 210-day proof-of-concept exploitation on public container registries, revealing that human users indeed make random typos and download unwanted container images. We also systematically investigate attack vectors on private registries and reveal that their naming space is open and could be easily exploited for launching a typosquatting attack. In addition, for a typosquatting attack across multiple platforms, we demonstrate that adversaries can easily self-host malicious registries or exploit existing container registries to manipulate repositories with similar identifications. Finally, we propose CRYSTAL, a lightweight extension to existing image management, which effectively defends against typosquatting attacks from both container users and registries.
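A rough sketch of the typosquatting-detection intuition is below: flag a requested image name that closely resembles, but does not exactly match, a trusted name. The allowlist, similarity heuristic, and threshold are hypothetical; this is not CRYSTAL's actual mechanism.

```python
from difflib import SequenceMatcher

TRUSTED_IMAGES = {"ubuntu", "nginx", "postgres", "redis", "alpine"}  # illustrative allowlist

def typosquatting_candidates(requested: str, threshold: float = 0.8):
    """Return trusted names the requested image closely resembles without
    matching exactly -- a simple string-similarity heuristic."""
    hits = []
    for trusted in TRUSTED_IMAGES:
        ratio = SequenceMatcher(None, requested, trusted).ratio()
        if requested != trusted and ratio >= threshold:
            hits.append((trusted, ratio))
    return sorted(hits, key=lambda t: -t[1])

print(typosquatting_candidates("ubunto"))    # likely typo of "ubuntu"
print(typosquatting_candidates("postgres"))  # exact match: no warning
```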
  3. Adversarial examples, inputs designed to induce worst-case behavior in machine learning models, have been extensively studied over the past decade. Yet, our understanding of this phenomenon stems from a rather fragmented pool of knowledge; at present, there are a handful of attacks, each with disparate assumptions in threat models and incomparable definitions of optimality. In this paper, we propose a systematic approach to characterize worst-case (i.e., optimal) adversaries. We first introduce an extensible decomposition of attacks in adversarial machine learning by atomizing attack components into surfaces and travelers. With our decomposition, we enumerate over components to create 576 attacks (568 of which were previously unexplored). Next, we propose the Pareto Ensemble Attack (PEA): a theoretical attack that upper-bounds attack performance. With our new attacks, we measure performance relative to the PEA on both robust and non-robust models, seven datasets, and three extended ℓp-based threat models incorporating compute costs, formalizing the Space of Adversarial Strategies. From our evaluation, we find attack performance to be highly contextual: the domain, model robustness, and threat model can have a profound influence on attack efficacy. Our investigation suggests that future studies measuring the security of machine learning should: (1) be contextualized to the domain & threat models, and (2) go beyond the handful of known attacks used today.
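Conceptually, the PEA's upper bound can be read as picking, for every input, the strongest outcome across an ensemble of attacks. The short Python sketch below renders that reading on toy numbers; it is a loose illustration, not the paper's formal construction.

```python
import numpy as np

def pareto_ensemble_loss(per_attack_losses):
    """Given a (num_attacks, num_samples) array of adversarial losses, return
    the per-sample maximum: the loss an idealized adversary achieves by
    choosing the best attack for each input."""
    losses = np.asarray(per_attack_losses)
    return losses.max(axis=0)

# Toy example with three attacks evaluated on five samples.
losses = np.array([
    [0.9, 0.2, 0.4, 0.7, 0.1],   # attack A
    [0.3, 0.8, 0.5, 0.6, 0.2],   # attack B
    [0.5, 0.4, 0.9, 0.2, 0.3],   # attack C
])
print(pareto_ensemble_loss(losses))  # [0.9 0.8 0.9 0.7 0.3]
```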
  4. False power consumption data injected from compromised smart meters in the Advanced Metering Infrastructure (AMI) of smart grids is a threat that negatively affects both customers and utilities. In particular, organized and stealthy adversaries can launch various types of data falsification attacks from multiple meters using smart or persistent strategies. In this paper, we propose a real-time, two-tier attack detection scheme to detect orchestrated data falsification under a sophisticated threat model in decentralized micro-grids. The first detection tier monitors whether the Harmonic-to-Arithmetic Mean Ratio of aggregated daily power consumption data falls outside a normal range known as the safe margin. To confirm whether a discrepancy flagged by the first detection tier is indeed an attack, the second detection tier monitors the sum of the residuals (differences) between the proposed ratio metric and the safe margin over a frame of multiple days. If the sum of residuals is beyond a standard limit range, the presence of a data falsification attack is confirmed. Both the ‘safe margins’ and the ‘standard limits’ are designed through a ‘system identification phase’, in which the signatures of the proposed metrics under normal conditions are studied using real AMI micro-grid datasets from two different countries over multiple years. Subsequently, we show how the proposed metrics trigger unique signatures under various attacks, which aids attack reconstruction and also limits the impact of persistent attacks. Unlike metrics such as CUSUM or EWMA, the stability of the proposed metrics under normal conditions allows successful real-time detection of various stealthy attacks with ultra-low false alarms.
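A minimal Python sketch of the two-tier logic described above follows, assuming each day's data is a list of per-meter readings and using placeholder values for the safe margin and standard limit (the real values come from the system identification phase on AMI data).

```python
import numpy as np

def harmonic_to_arithmetic_ratio(daily_readings):
    """Harmonic mean / arithmetic mean of a day's aggregated consumption readings."""
    x = np.asarray(daily_readings, dtype=float)
    harmonic = len(x) / np.sum(1.0 / x)
    return harmonic / x.mean()

def two_tier_detect(frame_of_days, safe_margin=(0.99, 1.0), standard_limit=0.05):
    """Tier 1: flag days whose ratio falls outside the safe margin.
    Tier 2: confirm an attack if the summed residuals over the frame exceed
    the standard limit. Margin and limit values here are placeholders."""
    lo, hi = safe_margin
    residuals = []
    for day in frame_of_days:
        r = harmonic_to_arithmetic_ratio(day)
        if r < lo:
            residuals.append(lo - r)
        elif r > hi:
            residuals.append(r - hi)
        else:
            residuals.append(0.0)
    return sum(residuals) > standard_limit  # True => attack confirmed

# Toy 7-day frame of per-meter readings (kWh); a falsified low reading widens
# the spread across meters and pulls the ratio below the safe margin.
normal_day = [10.0, 11.0, 9.5, 10.5, 10.2]
attacked_day = [10.0, 11.0, 2.0, 10.5, 10.2]
print(two_tier_detect([normal_day] * 7))                      # False
print(two_tier_detect([attacked_day] * 4 + [normal_day] * 3)) # True
```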