Search results: All records where Creators/Authors contains "Chen, Qi A"

  1. This paper proposes to use intrinsic examples as a DNN fingerprinting technique for the functionality verification of DNN models implemented on edge devices. The proposed intrinsic examples do not affect normal DNN training and enable black-box testing of DNN models packaged into edge-device applications. We provide three algorithms for deriving intrinsic examples from the pre-trained model (the model before the DNN system design and implementation procedure); these examples retrieve the knowledge learnt from the training dataset and allow detection of adversarial third-party attacks, such as transfer learning and fault-injection attacks, that may occur during the system implementation procedure. In addition, the intrinsic examples accommodate the model transformations introduced by the various DNN model compression methods used by the system designer. (A minimal sketch of the black-box verification step appears after this list.)
  2. Recent work in adversarial machine learning has started to focus on visual perception in autonomous driving and has studied Adversarial Examples (AEs) for object detection models. However, in such a visual perception pipeline the detected objects must also be tracked, in a process called Multiple Object Tracking (MOT), to build the moving trajectories of surrounding obstacles. Since MOT is designed to be robust against errors in object detection, it poses a general challenge to existing attack techniques that blindly target object detection: we find that a success rate of over 98% is needed for them to actually affect the tracking results, a requirement that no existing attack technique can satisfy. In this paper, we are the first to study adversarial machine learning attacks against the complete visual perception pipeline in autonomous driving, and we discover a novel attack technique, tracker hijacking, that can effectively fool MOT using AEs on object detection. Using our technique, successful AEs on as few as one single frame can move an existing object into or out of the headway of an autonomous vehicle to cause potential safety hazards. We perform an evaluation using the Berkeley Deep Drive dataset and find that, on average, when 3 frames are attacked, our attack achieves a nearly 100% success rate, while attacks that blindly target object detection achieve at most 25%. (A conceptual sketch of the detection-to-track association step exploited by tracker hijacking appears after this list.)
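For the fingerprint-based verification summarized in item 1, the sketch below illustrates only the black-box checking step: the deployed edge model is queried through its prediction interface on the intrinsic examples, and its agreement with the labels recorded from the pre-trained model is compared against a tolerance threshold. The function name, query interface, and threshold value are illustrative assumptions; the paper's three algorithms for deriving the intrinsic examples themselves are not reproduced here.

```python
# Hypothetical sketch of black-box fingerprint verification (names and threshold are
# illustrative assumptions, not the paper's code).

def verify_fingerprint(query_fn, intrinsic_examples, expected_labels, threshold=0.9):
    """Return True if the deployed model agrees with the pre-trained model on
    enough intrinsic examples to be considered functionally intact.

    query_fn           -- black-box callable: input -> predicted class id
    intrinsic_examples -- inputs derived offline from the pre-trained model
    expected_labels    -- predictions of the pre-trained model on those inputs
    threshold          -- minimum agreement rate tolerated (e.g. after compression)
    """
    matches = sum(
        int(query_fn(x) == y) for x, y in zip(intrinsic_examples, expected_labels)
    )
    agreement = matches / len(intrinsic_examples)
    # A low agreement rate suggests the packaged model was altered, e.g. by
    # transfer learning or a fault-injection attack during system implementation.
    return agreement >= threshold

# Toy usage (illustrative only): a stand-in "model" mapping an integer to a class id.
deployed_model = lambda x: x % 3
fingerprints = [0, 1, 2, 3, 4, 5]
expected = [0, 1, 2, 0, 1, 2]   # labels recorded from the pre-trained model
print(verify_fingerprint(deployed_model, fingerprints, expected))  # True
```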
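For item 2, the sketch below is a conceptual illustration, not the paper's implementation, of why hijacking the tracker can require far fewer successful adversarial frames than blindly attacking the detector: a typical MOT stage associates new detections to existing tracks by bounding-box overlap (IoU) and lets unmatched tracks coast for a few frames, so a single fabricated detection shifted toward an attacker-chosen location can capture an existing track and drag its trajectory with it. The boxes, threshold, and greedy association rule are illustrative assumptions.

```python
# Conceptual sketch of IoU-based detection-to-track association (illustrative only).

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(track_box, detections, iou_threshold=0.3):
    """Greedy association: attach the best-overlapping detection to the track."""
    best = max(detections, key=lambda d: iou(track_box, d), default=None)
    if best is not None and iou(track_box, best) >= iou_threshold:
        return best        # track follows this detection
    return track_box       # no match: track coasts on its previous position

track = (100, 100, 150, 200)

# Benign frame: the detection sits on the true object, so the track follows it.
print(associate(track, [(102, 101, 152, 202)]))

# Attacked frame: the true detection is suppressed and a fabricated detection,
# shifted toward an attacker-chosen location, still overlaps the track enough to be
# associated, so the track (and any trajectory built on it) moves with the attacker.
print(associate(track, [(125, 100, 175, 200)]))
```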