Using deep learning to solve computer security challenges: a survey
Abstract: Although using machine learning techniques to solve computer security challenges is not a new idea, the rapidly emerging Deep Learning technology has recently triggered a substantial amount of interest in the computer security community. This paper seeks to provide a dedicated review of recent research on using Deep Learning techniques to solve computer security challenges. In particular, the review covers eight computer security problems being solved by applications of Deep Learning: security-oriented program analysis, defending return-oriented programming (ROP) attacks, achieving control-flow integrity (CFI), defending network attacks, malware classification, system-event-based anomaly detection, memory forensics, and fuzzing for software security.
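One of the covered problem areas, system-event-based anomaly detection, is commonly framed as learning to predict the next event in a parsed log sequence and flagging events the model considers unlikely. The following is a minimal, illustrative PyTorch sketch of that framing; the vocabulary size, window length, and top-k threshold are assumptions chosen for the example and are not taken from the survey.

```python
# Illustrative sketch of system-event-based anomaly detection: an LSTM learns
# to predict the next event ID in a log sequence, and events outside the
# model's top-k predictions are flagged as anomalous. Vocabulary size, window
# length, and k are assumptions chosen for the example.
import torch
import torch.nn as nn

NUM_EVENT_TYPES = 64   # assumed size of the parsed log-key vocabulary
WINDOW = 10            # assumed history length
TOP_K = 3              # assumed tolerance for "normal" predictions

class NextEventLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_EVENT_TYPES, 32)
        self.lstm = nn.LSTM(32, 64, batch_first=True)
        self.head = nn.Linear(64, NUM_EVENT_TYPES)

    def forward(self, window):            # window: (batch, WINDOW) event IDs
        h, _ = self.lstm(self.embed(window))
        return self.head(h[:, -1, :])     # logits over the next event ID

def is_anomalous(model, window, next_event):
    """Flag next_event if it is not among the model's TOP_K likeliest events."""
    with torch.no_grad():
        logits = model(window.unsqueeze(0))
        topk = torch.topk(logits, TOP_K, dim=-1).indices[0]
    return next_event not in topk.tolist()
```

A model like this would be trained on event windows from normal executions only, so that low-probability next events at run time indicate deviations worth investigating.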
- Award ID(s): 1814679
- PAR ID: 10296667
- Date Published:
- Journal Name: Cybersecurity
- Volume: 3
- Issue: 1
- ISSN: 2523-3246
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- For the past decade, botnets have dominated network attacks in spite of significant research advances in defending against them. The distributed attack sources, the network size, and the diverse botnet attack techniques challenge the effectiveness of a single-point, centralized security solution. This paper proposes a distributed security system against large-scale disruptive botnet attacks using SDN/NFV and machine learning. In our system, a set of distributed network functions detects network attacks for each protocol and collects real-time traffic information, which is also relayed to the SDN controller for more sophisticated analysis. The SDN controller then analyzes the real-time traffic using machine learning on only the forwarded information and updates flow rules or takes routing/bandwidth-control measures, which are executed on the nodes implementing the security network functions. Our evaluations show the proposed system to be an efficient and effective defense against botnet attacks: it detects large-scale distributed network attacks from botnets at the SDN controller, while the network functions locally detect known attacks across different networking protocols. (An illustrative sketch of the controller-side analysis appears after this list.)
- In this tutorial, we present our recent work on building trusted, resilient, and interpretable AI models by combining symbolic methods developed for automated reasoning with connectionist learning methods that use deep neural networks. The increasing adoption of artificial intelligence and machine learning in systems, including safety-critical systems, has created a pressing need for scalable techniques that can be used to establish trust in their safe behavior, resilience to adversarial attacks, and interpretability to enable human audits. The tutorial comprises three components: a review of techniques for verification of neural networks, methods for using geometric invariants to defend against adversarial attacks, and techniques for extracting logical symbolic rules by reverse engineering machine learning models. These techniques form the core of TRINITY, the Trusted, Resilient and Interpretable AI framework being developed at SRI. We identify the key challenges in building the TRINITY framework and report recent results on each of these three fronts. (A sketch of rule extraction by model distillation appears after this list.)
- With the rising adoption of deep neural networks (DNNs) for commercial and high-stakes applications that process sensitive user data and make critical decisions, security concerns are paramount. An adversary can undermine the confidentiality of user input or of a DNN model, mislead a DNN into making wrong predictions, or even render a machine learning application unavailable to valid requests. While security vulnerabilities that enable such exploits can exist across multiple levels of the technology stack that supports machine learning applications, hardware-level vulnerabilities can be particularly problematic. In this article, we provide a comprehensive review of the hardware-level vulnerabilities affecting domain-specific DNN inference accelerators and of recent progress in secure hardware design to address them. Because domain-specific DNN accelerators differ in a number of ways from general-purpose processors and cryptographic accelerators, where hardware-level vulnerabilities have been thoroughly investigated, there are unique challenges and opportunities for secure machine learning hardware. We first categorize the hardware-level vulnerabilities into three scenarios based on an adversary's capability: 1) an adversary can only attack the off-chip components, such as the off-chip DRAM and the data bus; 2) an adversary can directly attack the on-chip structures in a DNN accelerator; and 3) an adversary can insert hardware trojans during the manufacturing and design process. For each category, we survey recent studies on attacks that pose practical security challenges to DNN accelerators. Then, we present recent advances in defense solutions for DNN accelerators, addressing those security challenges with circuit-, architecture-, and algorithm-level techniques. (An illustrative sketch of an off-chip integrity check appears after this list.)
- We study the problem of defending deep neural network approaches for image classification against physically realizable attacks. First, we demonstrate that the two most scalable and effective methods for learning robust models, adversarial training with PGD attacks and randomized smoothing, exhibit very limited effectiveness against three of the highest-profile physical attacks. Next, we propose a new abstract adversarial model, rectangular occlusion attacks, in which an adversary places a small adversarially crafted rectangle in an image, and we develop two approaches for efficiently computing the resulting adversarial examples. Finally, we demonstrate that adversarial training using our new attack yields image classification models that exhibit high robustness against the physically realizable attacks we study, offering the first effective generic defense against such attacks. (A sketch of the rectangular occlusion attack appears after this list.)
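The first related item above describes an architecture in which distributed security network functions relay per-flow traffic summaries to the SDN controller, which classifies them with machine learning and responds with flow-rule or routing/bandwidth-control actions. The sketch below is a rough illustration of that controller-side loop; the feature set, the random-forest classifier, and the rule format are placeholders invented for the example, not the paper's actual design.

```python
# Minimal sketch of the controller-side loop from the SDN/NFV botnet-defense
# paper summarized above: network functions relay per-flow statistics, a
# pre-trained classifier labels them, and the controller answers with a flow
# rule. Feature names, the classifier, and the rule format are assumptions.
from dataclasses import dataclass
from sklearn.ensemble import RandomForestClassifier

@dataclass
class FlowReport:
    """Aggregated statistics relayed by a distributed security network function."""
    src_subnet: str
    pkts_per_sec: float
    syn_ratio: float          # fraction of packets that are TCP SYNs
    distinct_dst_ports: int

class ControllerAnalyzer:
    def __init__(self, model: RandomForestClassifier):
        self.model = model    # assumed trained offline on labeled flow records

    def decide(self, r: FlowReport) -> dict:
        features = [[r.pkts_per_sec, r.syn_ratio, r.distinct_dst_ports]]
        if self.model.predict(features)[0] == 1:     # 1 = attack traffic
            # Rate-limit the offending source; the NF node enforces the rule.
            return {"match": {"src": r.src_subnet}, "action": "rate_limit", "kbps": 100}
        return {"match": {"src": r.src_subnet}, "action": "allow"}
```

In a real deployment the returned rule would be translated into the controller's southbound protocol (e.g., OpenFlow) and installed on the node that produced the report.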
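The tutorial item mentions extracting logical symbolic rules by reverse engineering machine learning models. One common baseline for this is distillation into an interpretable surrogate, such as fitting a shallow decision tree to the opaque model's predictions and reading its branches as if-then rules; the sketch below assumes that baseline and is not the TRINITY framework's actual method.

```python
# Illustrative sketch of rule extraction by distillation: a shallow decision
# tree is fit to the predictions of an opaque model, and its branches are
# read out as if-then rules. A generic baseline, not TRINITY's actual method.
from sklearn.tree import DecisionTreeClassifier, export_text

def extract_rules(opaque_model, X_unlabeled, max_depth=3, feature_names=None):
    """Fit a small tree to mimic opaque_model and return readable rules."""
    surrogate_labels = opaque_model.predict(X_unlabeled)
    tree = DecisionTreeClassifier(max_depth=max_depth)
    tree.fit(X_unlabeled, surrogate_labels)
    return export_text(tree, feature_names=feature_names)
```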
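For the accelerator-security survey item, a representative algorithm-level mitigation in the first threat scenario (an adversary who can only tamper with off-chip components) is to authenticate weight blocks before they are streamed from DRAM into the accelerator. The HMAC-based sketch below is a generic illustration under that assumption, not a specific scheme from the article.

```python
# Illustrative sketch of an algorithm-level defense against off-chip tampering:
# weight blocks stored in (untrusted) DRAM carry an HMAC tag, and the tag is
# verified before the block is handed to the accelerator. Key handling and the
# block format are assumptions made for the example.
import hmac
import hashlib

def tag_weights(key: bytes, weight_block: bytes) -> bytes:
    """Compute an authentication tag when weights are written off-chip."""
    return hmac.new(key, weight_block, hashlib.sha256).digest()

def load_weights(key: bytes, weight_block: bytes, stored_tag: bytes) -> bytes:
    """Verify the tag before loading; refuse tampered blocks."""
    expected = hmac.new(key, weight_block, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, stored_tag):
        raise ValueError("off-chip weight block failed integrity check")
    return weight_block
```

Confidentiality of the weights would additionally require encrypting the blocks; the sketch covers integrity only.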
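Finally, the last item's rectangular occlusion attack places a small adversarially crafted rectangle in the image and optimizes its pixels to maximize the classifier's loss. The PyTorch sketch below fixes the rectangle's location and uses plain signed-gradient ascent; the rectangle size, placement, step size, and iteration count are assumptions, and the paper's approaches for efficiently computing the examples (including choosing the location) are not reproduced here.

```python
# Illustrative sketch of a rectangular occlusion attack: pixels inside a fixed
# small rectangle are optimized by gradient ascent on the classification loss.
# Rectangle size/placement, step size, and iteration count are assumptions;
# the paper also searches over rectangle locations, which is omitted here.
import torch
import torch.nn.functional as F

def rectangular_occlusion_attack(model, image, label, top=50, left=50,
                                 h=20, w=20, steps=30, lr=0.05):
    """image: (1, C, H, W) tensor in [0, 1]; label: (1,) tensor of class index."""
    patch = torch.rand(1, image.shape[1], h, w, requires_grad=True)
    for _ in range(steps):
        adv = image.clone()
        adv[:, :, top:top + h, left:left + w] = patch   # paste the rectangle
        loss = F.cross_entropy(model(adv), label)
        grad, = torch.autograd.grad(loss, patch)
        with torch.no_grad():
            patch += lr * grad.sign()        # ascend the loss
            patch.clamp_(0.0, 1.0)           # keep pixel values valid
    adv = image.clone()
    adv[:, :, top:top + h, left:left + w] = patch.detach()
    return adv
```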