Graph Neural Networks (GNNs) are deep learning models designed to address the complexities of graph-structured, non-Euclidean data. Because these models are computationally expensive, knowledge distillation (KD) is often employed to transfer knowledge from a GNN to a simpler, more efficient student model, such as a Multi-Layer Perceptron (MLP), enabling deployment in large-scale industrial applications. However, KD can inadvertently leak sensitive information from the teacher to the student, posing significant privacy risks. We present the first membership inference attacks targeting GNNs in the KD pipeline, showing that student MLPs can reveal whether a node appeared in the teacher's training data. Our attacks operate in a black-box setting, requiring access only to the student's outputs, and remain effective in cross-dataset scenarios. Experimental evaluations across four GNN models and eight datasets demonstrate the effectiveness of our approach, achieving up to 0.9014 precision at a low false-positive rate (FPR) of 1% in cross-dataset settings. These results expose significant vulnerabilities in GNN-based KD frameworks, emphasizing the need for strong security measures during the KD process involving GNNs.
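
The abstract does not spell out the attack mechanics, but a common black-box membership inference baseline scores each node by how confidently the student responds to it. Below is a minimal sketch of that idea in Python; `query_student` is a hypothetical stand-in for black-box access to the student MLP's softmax outputs (here it fabricates data so the example runs), and the entropy threshold would in practice be calibrated on shadow data rather than set to a fixed quantile.

```python
import numpy as np

def query_student(nodes):
    """Hypothetical black-box student API: one softmax vector per node.

    Fabricates outputs so the sketch runs end to end; a real attack
    would instead query the deployed student MLP.
    """
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(len(nodes), 7))
    logits[: len(nodes) // 2] *= 3.0  # pretend-members get sharper outputs
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def membership_scores(probs):
    # Prediction entropy: models tend to be more confident (lower
    # entropy) on inputs they were trained on.
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

nodes = list(range(200))
scores = membership_scores(query_student(nodes))
threshold = np.quantile(scores, 0.5)  # in practice, calibrate on shadow data
is_member = scores < threshold        # low entropy => likely training node
print(f"flagged {is_member.sum()} of {len(nodes)} nodes as members")
```

Lower prediction entropy suggests a node was likely in the teacher's training set; the paper's actual attack and its cross-dataset transfer setup may well differ from this simple baseline.
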
Autonomous vehicles rely on deep neural networks (DNNs) for traffic sign recognition, lane centering, and vehicle detection, yet these models are vulnerable to attacks that induce misclassification and threaten safety. Existing defenses (e.g., adversarial training) often fail to generalize and degrade clean accuracy. We introduce Vehicle Vision-Language Models (V2LMs), fine-tuned Vision-Language Models (VLMs) specialized for AV perception, and show that they are inherently more robust to unseen attacks without adversarial training, maintaining substantially higher adversarial accuracy than conventional DNNs. We study two deployments: Solo (one task-specific V2LM per task) and Tandem (a single V2LM for all three tasks). Under attack, DNN accuracy drops by 33%–74%, whereas V2LM accuracy declines by under 8% on average. Tandem achieves robustness comparable to Solo while being more memory-efficient. We also explore integrating V2LMs in parallel with existing perception stacks to enhance resilience. Our results suggest that V2LMs are a promising path toward secure, robust AV perception.
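
As a rough illustration of the Solo versus Tandem trade-off described above, the sketch below routes three AV perception tasks either to per-task models or to one shared model via task-specific prompts. `VLMStub`, the model names, and the prompts are all hypothetical placeholders, not the paper's actual V2LM interface.

```python
from dataclasses import dataclass

@dataclass
class VLMStub:
    """Stand-in for a fine-tuned vision-language model."""
    name: str

    def answer(self, image_path: str, prompt: str) -> str:
        # Real code would run VLM inference here; the stub just echoes.
        return f"[{self.name}] response to '{prompt}' for {image_path}"

PROMPTS = {
    "sign": "What traffic sign is shown in this image?",
    "lane": "Describe the vehicle's lateral offset from the lane center.",
    "vehicle": "List the vehicles visible in this image.",
}

# Solo: a dedicated fine-tuned model per perception task.
solo = {task: VLMStub(f"v2lm-{task}") for task in PROMPTS}

# Tandem: one shared model handles every task, saving memory.
tandem = VLMStub("v2lm-tandem")

def perceive(image_path: str, task: str, mode: str = "tandem") -> str:
    model = tandem if mode == "tandem" else solo[task]
    return model.answer(image_path, PROMPTS[task])

print(perceive("frame_0001.png", "sign", mode="solo"))
print(perceive("frame_0001.png", "lane"))
```

The design point this illustrates is memory: Solo keeps three sets of fine-tuned weights resident, while Tandem amortizes one model across all tasks at the cost of prompt-based task routing.
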
Autonomous driving (AD) systems rely heavily on accurate lane marker detection for safe navigation, particularly during nighttime or low-light conditions. While luminescent lane markers have been introduced to improve visibility and enhance road safety in these scenarios, they also introduce potential vulnerabilities. This paper investigates these risks by introducing novel luminescent adversarial attacks that exploit the lane detection models used in autonomous vehicles (AVs). We demonstrate how these attacks, targeting deep neural network-based perception models, can manipulate the textural properties of the markers to cause misdetection of lanes, leading to safety violations. Through comprehensive experiments in both digital and physical domains, we systematically expose the vulnerabilities of state-of-the-art lane detection models to adversarial luminescent markers. In our digital experiments, we observe complete model failure in the worst cases and a failure rate of approximately 33% in the best cases. Physical experiments using a device running Openpilot further confirm these risks, underscoring a significant safety threat posed by luminescent adversarial attacks. Our findings emphasize the need for robust defenses to protect AVs from such adversarial threats.
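
To make the digital-domain attack surface concrete, here is a hedged PGD-style sketch that confines an adversarial perturbation to the lane-marker region of an image. `TinyLaneNet`, the mask, and all hyperparameters are illustrative stand-ins; the paper's physical attack manipulates marker luminescence, which this purely digital sketch only approximates.

```python
import torch
import torch.nn as nn

class TinyLaneNet(nn.Module):
    """Stand-in perception model: predicts a per-pixel lane probability map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))

torch.manual_seed(0)
model = TinyLaneNet().eval()
image = torch.rand(1, 3, 64, 64)         # stand-in road frame
marker_mask = torch.zeros(1, 1, 64, 64)  # attacker edits marker pixels only
marker_mask[..., 28:36] = 1.0            # a vertical lane-marker strip

delta = torch.zeros_like(image, requires_grad=True)
eps, step = 0.2, 0.05
for _ in range(20):
    pred = model(image + delta * marker_mask)
    loss = (pred * marker_mask).mean()  # detection strength on the marker
    loss.backward()
    with torch.no_grad():
        delta -= step * delta.grad.sign()  # descend: suppress lane detection
        delta.clamp_(-eps, eps)            # keep the texture change subtle
    delta.grad.zero_()

with torch.no_grad():
    before = (model(image) * marker_mask).mean().item()
    after = (model(image + delta * marker_mask) * marker_mask).mean().item()
print(f"mean lane score on marker pixels: {before:.3f} -> {after:.3f}")
```

Restricting the perturbation to the marker mask mirrors the physical constraint that an attacker can alter only the markers themselves, not the rest of the scene.
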