

Search for: All records

Award ID contains: 1929771

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo period (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Free, publicly-accessible full text available February 27, 2025
  2. Free, publicly-accessible full text available February 27, 2025
  3. Free, publicly-accessible full text available February 26, 2025
  4. Free, publicly-accessible full text available February 26, 2025
  5. Free, publicly-accessible full text available October 2, 2024
  6. Free, publicly-accessible full text available October 2, 2024
  7. Free, publicly-accessible full text available October 1, 2024
  8. LiDAR (Light Detection And Ranging) is an indispensable sensor for precise long- and wide-range 3D sensing, which has directly benefited the recent rapid deployment of autonomous driving (AD). Meanwhile, such a safety-critical application strongly motivates security research on LiDAR. A recent line of research demonstrates that one can manipulate the LiDAR point cloud and fool object detection by firing malicious lasers at the LiDAR. However, these efforts evaluate only a specific LiDAR (VLP-16) and do not consider the state-of-the-art defense mechanisms in recent LiDARs, the so-called next-generation LiDARs. In this work-in-progress (WIP) paper, we report our recent progress in the security analysis of next-generation LiDARs. We identify a new type of LiDAR spoofing attack applicable to a much more general and recent set of LiDARs. We find that our attack can remove >72% of points in a 10×10 m² area and can remove real vehicles in the physical world. We also discuss our future plans. (A sketch of this point-removal measurement appears after this list.)
  9. All vehicles must follow the rules that govern traffic behavior, regardless of whether the vehicles are human-driven or Connected, Autonomous Vehicles (CAVs). Road signs indicate locally active rules, such as speed limits and requirements to yield or stop. Recent research has demonstrated attacks, such as adding stickers or dark patches to signs, that cause CAV sign misinterpretation and result in potential safety issues. Humans can see and potentially defend against these attacks, but they cannot detect what they cannot observe. We have developed the first physical-world attack against CAV traffic sign recognition systems that is invisible to humans. Utilizing Infrared Laser Reflection (ILR), we implement an attack that affects CAV cameras but that humans cannot perceive. In this work, we formulate the threat model and requirements for an ILR-based sign perception attack. Next, we evaluate attack effectiveness against popular, CNN-based traffic sign recognition systems, demonstrating a 100% success rate against stop and speed limit signs in our laboratory evaluation. Finally, we discuss the next steps in our research. (A sketch of this success-rate evaluation appears after this list.)
  10. Vehicles controlled by autonomous driving software (ADS) are expected to bring many social and economic benefits, but they are not yet broadly used due to concerns about their safety. Virtual tests, in which autonomous vehicles are tested in software simulation, are common practice because they are more efficient and safer than field operational tests. In particular, search-based approaches are used to find especially critical situations. These approaches provide an opportunity to generate tests automatically; however, systematically producing bug-revealing tests for ADS remains a major challenge. To address this challenge, we introduce DoppelTest, a test generation approach for ADSes that utilizes a genetic algorithm to discover bug-revealing violations by generating scenarios with multiple autonomous vehicles that account for traffic control (e.g., traffic signals and stop signs). Our extensive evaluation shows that DoppelTest efficiently discovers 123 bug-revealing violations for a production-grade ADS (Baidu Apollo), which we then classify into 8 unique bug categories. (A sketch of this genetic-algorithm search appears after this list.)
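
To make the point-removal figure in item 8 concrete, here is a minimal Python sketch of how such a removal rate could be measured: it compares a benign and an attacked point cloud inside a rectangular 10×10 m region. The array layout, the region bounds, and the `removal_rate` helper are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def removal_rate(benign, attacked, x_range=(0.0, 10.0), y_range=(-5.0, 5.0)):
    """Fraction of benign LiDAR points inside a rectangular region that are
    missing from the attacked capture (illustrative metric only)."""
    def in_region(pts):
        return pts[(pts[:, 0] >= x_range[0]) & (pts[:, 0] < x_range[1]) &
                   (pts[:, 1] >= y_range[0]) & (pts[:, 1] < y_range[1])]
    n_benign = len(in_region(benign))
    n_attacked = len(in_region(attacked))
    return 0.0 if n_benign == 0 else 1.0 - n_attacked / n_benign

# Synthetic example: simulate an attack that drops ~75% of points in the region.
rng = np.random.default_rng(0)
benign = rng.uniform([-20.0, -20.0, -2.0], [40.0, 20.0, 2.0], size=(10000, 3))
in_box = ((benign[:, 0] >= 0) & (benign[:, 0] < 10) &
          (benign[:, 1] >= -5) & (benign[:, 1] < 5))
keep = rng.random(len(benign)) > 0.75          # keep ~25% of in-region points
attacked = np.concatenate([benign[~in_box], benign[in_box & keep]])
print(f"removal rate in the 10 x 10 m region: {removal_rate(benign, attacked):.2f}")
```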
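
Item 9 reports a 100% attack success rate against CNN-based sign classifiers. Below is a minimal sketch of how such an untargeted success rate could be computed; `TinySignNet` is a hypothetical stand-in model, not one of the recognition systems evaluated in the paper, and the random tensors merely stand in for photos of ILR-perturbed signs.

```python
import torch
import torch.nn as nn

class TinySignNet(nn.Module):
    """Hypothetical stand-in CNN classifier used only for this illustration."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(8, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def attack_success_rate(model, attacked_images, true_labels):
    """Untargeted success: fraction of attacked sign images that the
    classifier no longer assigns to the correct class."""
    model.eval()
    with torch.no_grad():
        preds = model(attacked_images).argmax(dim=1)
    return (preds != true_labels).float().mean().item()

# Random tensors stand in for ILR-perturbed sign photos.
model = TinySignNet()
images = torch.rand(16, 3, 32, 32)
labels = torch.randint(0, 4, (16,))
print(f"attack success rate: {attack_success_rate(model, images, labels):.2f}")
```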
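
Item 10 describes DoppelTest's genetic-algorithm search for bug-revealing multi-vehicle scenarios. The following is a simplified sketch of such a search loop under an assumed scenario encoding and a toy fitness function; the real system would execute each scenario against the ADS in simulation and score it with violation oracles rather than the closeness proxy used here.

```python
import random

def random_scenario(num_vehicles=3):
    """A scenario as per-vehicle (start_position, target_speed, departure_delay)
    tuples -- a deliberately simplified stand-in for a real scenario encoding."""
    return [(random.uniform(0, 100), random.uniform(5, 15), random.uniform(0, 10))
            for _ in range(num_vehicles)]

def fitness(scenario, horizon=20.0):
    """Toy proxy score: higher when vehicles end up close together at the end
    of the horizon. A real system would run the ADS in simulation and check
    violation oracles (collisions, traffic-rule violations, ...)."""
    ends = [s + v * max(horizon - d, 0.0) for (s, v, d) in scenario]
    score = 0.0
    for i in range(len(ends)):
        for j in range(i + 1, len(ends)):
            score += 1.0 / (1.0 + abs(ends[i] - ends[j]))
    return score

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(scenario, rate=0.2):
    return [(s + random.gauss(0, 5), v + random.gauss(0, 1), d)
            if random.random() < rate else (s, v, d)
            for (s, v, d) in scenario]

def evolve(pop_size=20, generations=30):
    population = [random_scenario() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(evolve())  # best (most "violation-prone") scenario found by the toy search
```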