
Search for: All records

Creators/Authors contains: "Lin, Jing"


  1. Abstract

    Deep neural networks (DNNs) are widely used to handle many difficult tasks, such as image classification and malware detection, and achieve outstanding performance. However, recent studies on adversarial examples, inputs with small, maliciously crafted perturbations that are indistinguishable to the human eye yet mislead machine learning models, show that such models are vulnerable to security attacks. Although various adversarial retraining techniques have been developed in the past few years, none of them is scalable. In this paper, we propose a new iterative adversarial retraining approach that robustifies the model and reduces the effectiveness of adversarial inputs on DNN models. The proposed method retrains the model with both Gaussian noise augmentation and adversarial example generation techniques for better generalization. Furthermore, an ensemble model is used during the testing phase to increase the robust test accuracy. The results of our extensive experiments demonstrate that the proposed approach increases the robustness of the DNN model against various adversarial attacks, specifically the fast gradient sign method (FGSM) attack, the Carlini and Wagner (C&W) attack, the Projected Gradient Descent (PGD) attack, and the DeepFool attack. In particular, the robust classifier obtained by our proposed approach maintains an average accuracy of 99% on the standard test set. Moreover, we empirically evaluate the runtime of two of the most effective adversarial attacks, the C&W attack and the Basic Iterative Method (BIM) attack, and find that the C&W attack can exploit the GPU to generate adversarial examples faster than the BIM attack. For this reason, we further develop a parallel implementation of the proposed approach, which makes it scalable to large datasets and complex models.
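The three ingredients the abstract names, adversarial example generation, Gaussian noise augmentation, and a test-time ensemble, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the toy logistic-regression "model", the function names, and all parameter values are illustrative assumptions, and the FGSM step shown here stands in for the full set of attacks the paper evaluates.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM-style attack on a toy logistic-regression model (illustrative only).
    The gradient of the binary cross-entropy loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; the attack steps eps in its sign direction."""
    grad = (sigmoid(np.dot(w, x) + b) - y) * w
    return x + eps * np.sign(grad)

def augment_with_noise(x, sigma, n, rng):
    """Gaussian noise augmentation: n noisy copies of the sample x,
    which would be added to the retraining set alongside adversarial examples."""
    return x + rng.normal(0.0, sigma, size=(n, x.shape[0]))

def majority_vote(preds):
    """Test-time ensemble: per-sample majority vote over model predictions.
    preds is an (n_models, n_samples) array of integer class labels."""
    preds = np.asarray(preds)
    return np.array([np.bincount(col).argmax() for col in preds.T])
```

An iterative retraining loop in the spirit of the abstract would alternate between generating perturbed samples with an attack such as `fgsm_perturb`, augmenting them with `augment_with_noise`, and retraining; at test time, `majority_vote` aggregates the retrained models' predictions.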

  2. Abstract

    Real-world learning modules have become vital components of computer science and engineering education in general and cybersecurity education in particular. However, as student enrollments increase dramatically, it is increasingly challenging for universities and colleges to maintain the quality of education while offering every student thorough hands-on experimental training. These challenges include the difficulty of providing sufficient computing resources and keeping them upgraded for the growing number of students. To overcome these challenges, some educators have introduced an alternative solution: they develop and deploy virtual lab experiments on cloud platforms such as Amazon AWS and the Global Environment for Network Innovations (GENI), where students can remotely access virtual resources for lab experiments. In addition, Software-Defined Networking (SDN) is an emerging networking technology that enhances the security and performance of networked communications while simplifying management. In this article, we present our efforts to develop learning modules through an efficient deployment of SDN on GENI for computer networking and security education. Specifically, we first give the design methodology of the proposed learning modules and then detail their implementation, starting from user account creation on the GENI testbed and progressing to advanced experimental GENI-enabled SDN labs. It is worth pointing out that, to accommodate students with different backgrounds and knowledge levels, our design varies the difficulty of the learning modules. Finally, student assessment of these pedagogical efforts is discussed to demonstrate the effectiveness of the proposed learning modules.
