


Award ID contains: 2100134

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Quantum machine learning (QML) is an emerging field of research that leverages quantum computing to improve classical machine learning approaches for solving complex real-world problems. QML has the potential to address cybersecurity-related challenges. Given the novelty and complex architecture of QML, resources that can guide cybersecurity learners toward an effective understanding of this emerging technology are not yet readily available. In this research, we design and develop ten QML-based learning modules covering various cybersecurity topics by adopting a student-centered, case-study-based learning approach. Each module applies one QML subtopic to a cybersecurity topic and comprises pre-lab, lab, and post-lab activities that provide learners with hands-on QML experience in solving real-world security problems. To engage and motivate students in a learning environment that encourages all students to learn, the pre-lab offers a brief introduction to both the QML subtopic and the cybersecurity problem. In this paper, we utilize a quantum support vector machine (QSVM) for malware classification and protection, using the open-source PennyLane QML framework on the Drebin-215 dataset. We demonstrate our QSVM model and achieve an accuracy of 95% in malware classification. We will develop all the modules and introduce them to the cybersecurity community in the near future. (An illustrative QSVM sketch follows this record.)
    Free, publicly-accessible full text available June 1, 2024
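
The following is a minimal, illustrative sketch of the quantum-kernel SVM workflow summarized in record 1: it builds a fidelity kernel with PennyLane's AngleEmbedding and feeds the precomputed kernel to scikit-learn's SVC. The synthetic feature matrix stands in for a dimensionality-reduced Drebin-215 dataset, and the circuit width, embedding choice, and data sizes are illustrative assumptions rather than the paper's actual configuration.

```python
# Minimal quantum-kernel SVM sketch (assumes PennyLane and scikit-learn are installed).
# The synthetic data below is a stand-in for preprocessed Drebin-215 features.
import numpy as np
import pennylane as qml
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

n_qubits = 4  # toy width; a real Drebin-215 pipeline would reduce dimensionality first
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def kernel_circuit(x1, x2):
    # Embed the first sample, then apply the inverse embedding of the second;
    # the probability of measuring |0...0> is the kernel (state-overlap) value.
    qml.AngleEmbedding(x1, wires=range(n_qubits))
    qml.adjoint(qml.AngleEmbedding)(x2, wires=range(n_qubits))
    return qml.probs(wires=range(n_qubits))

def quantum_kernel(A, B):
    # Gram matrix of pairwise state overlaps between rows of A and rows of B.
    return np.array([[kernel_circuit(a, b)[0] for b in B] for a in A])

# Synthetic binary data standing in for malware/benign feature vectors.
rng = np.random.default_rng(0)
X = rng.uniform(0, np.pi, size=(40, n_qubits))
y = (X.sum(axis=1) > 2 * np.pi).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Train an SVM on the precomputed quantum kernel and evaluate on held-out data.
svm = SVC(kernel="precomputed").fit(quantum_kernel(X_tr, X_tr), y_tr)
print("test accuracy:", svm.score(quantum_kernel(X_te, X_tr), y_te))
```

On the actual Drebin-215 data, the same pattern would be preceded by feature selection or PCA down to the available qubit count.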
  2. This survey paper provides an overview of the current state of attacks on Artificial Intelligence (AI) and of the risks to AI security and privacy as AI becomes more prevalent in various applications and services. The risks associated with AI attacks and security breaches are becoming increasingly apparent and cause significant financial and social losses. This paper categorizes the different types of attacks on AI models, including adversarial attacks, model inversion attacks, poisoning attacks, data poisoning attacks, data extraction attacks, and membership inference attacks. The paper also emphasizes the importance of developing secure and robust AI models to ensure the privacy and security of sensitive data. Through a systematic literature review, this survey comprehensively analyzes the current state of AI attacks, the risks to AI security and privacy, and the corresponding detection techniques.
    Free, publicly-accessible full text available June 1, 2024
  3. The main objective of authentic learning is to offer students an exciting and stimulating educational setting that provides practical experience in tackling real-world security issues. Each educational theme is composed of pre-lab, lab, and post-lab activities. Through the application of authentic learning, we create and produce portable labware for AI Security and Privacy on Google Colab. This enables students to access and practice these hands-on labs conveniently, without the need for time-consuming installations and configurations. As a result, students can concentrate more on learning concepts and gain more experience in hands-on problem solving.
    Free, publicly-accessible full text available June 1, 2024
  4. Software vulnerabilities have become a serious problem with the emergence of new applications that contain potentially vulnerable or malicious code that can compromise a system. The growing volume and complexity of software source code have created a need for vulnerability detection methods that can successfully identify malicious code before it falls prey to cyberattacks. Relying on humans to check source code requires extensive time and resources, and preexisting static code analyzers are unable to properly detect vulnerable code. Thus, artificial intelligence techniques, mainly deep learning models, have gained traction for detecting source code vulnerabilities. A systematic review is carried out to explore and understand the various deep learning methods employed for the task and their efficacy as prediction models. Additionally, each method and its characteristics are summarized, and its implementation on specific datasets and the corresponding evaluations are discussed.
  5. With rapid technological advancement, security has become a major issue due to the increase in malware activity, which poses a serious threat to the security and safety of both computer systems and stakeholders. To maintain stakeholders', and particularly end users', security, protecting data from fraudulent efforts is one of the most pressing concerns. Malware refers to a set of malicious programming code, scripts, active content, or intrusive software designed to destroy intended computer systems and programs or mobile and web applications. According to a study, naive users are unable to distinguish between malicious and benign applications. Thus, computer systems and mobile applications should be designed to detect malicious activities and protect stakeholders. A number of algorithms are available to detect malware activity by utilizing novel concepts including Artificial Intelligence, Machine Learning, and Deep Learning. In this study, we emphasize Artificial Intelligence (AI)-based techniques for detecting and preventing malware activity. We present a detailed review of current malware detection technologies, their shortcomings, and ways to improve their efficiency. Our study shows that adopting futuristic approaches to the development of malware detection applications can provide significant advantages. This synthesis will help researchers pursue further work on malware detection and prevention using AI.
  6. Traditional network intrusion detection approaches encounter feasibility and sustainability issues when combating modern, sophisticated, and unpredictable security attacks. Deep neural networks (DNN) have been successfully applied to intrusion detection problems. The optimal use of DNN-based classifiers requires careful tuning of the hyperparameters, and tuning them manually is tedious, time-consuming, and computationally expensive. Hence, there is a need for an automatic technique that finds optimal hyperparameters for the best use of DNNs in intrusion detection. This paper proposes a novel Bayesian optimization-based framework for the automatic optimization of hyperparameters, ensuring the best DNN architecture. We evaluated the performance of the proposed framework on NSL-KDD, a benchmark dataset for network intrusion detection. The experimental results show the framework's effectiveness: the resulting DNN architecture demonstrates significantly higher intrusion detection performance than a random-search-based approach in terms of accuracy, precision, recall, and F1-score. (An illustrative Bayesian-optimization sketch follows this record.)
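
The following is a minimal, illustrative sketch of the Bayesian hyperparameter-optimization idea summarized in record 6: scikit-optimize's gp_minimize tunes a small scikit-learn MLPClassifier via cross-validated accuracy. The synthetic data stands in for a preprocessed NSL-KDD feature matrix, and the search space, model, and metric are illustrative assumptions, not the paper's actual framework.

```python
# Minimal Bayesian hyperparameter-optimization sketch
# (assumes scikit-optimize and scikit-learn; synthetic data stands in for NSL-KDD).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from skopt import gp_minimize
from skopt.space import Integer, Real
from skopt.utils import use_named_args

# Synthetic stand-in for a preprocessed NSL-KDD matrix (binary: attack vs. normal).
X, y = make_classification(n_samples=600, n_features=30, n_informative=15, random_state=0)

# Illustrative search space: hidden width, learning rate, and L2 regularization strength.
space = [
    Integer(16, 128, name="hidden_units"),
    Real(1e-4, 1e-1, prior="log-uniform", name="learning_rate"),
    Real(1e-5, 1e-2, prior="log-uniform", name="alpha"),
]

@use_named_args(space)
def objective(hidden_units, learning_rate, alpha):
    model = MLPClassifier(hidden_layer_sizes=(hidden_units,),
                          learning_rate_init=learning_rate,
                          alpha=alpha, max_iter=300, random_state=0)
    # gp_minimize minimizes, so return the negative cross-validated accuracy.
    return -cross_val_score(model, X, y, cv=3, scoring="accuracy").mean()

result = gp_minimize(objective, space, n_calls=20, random_state=0)
print("best CV accuracy:", -result.fun)
print("best hyperparameters:", result.x)
```

The same loop would apply to a deeper Keras or PyTorch DNN by extending the search space to layer counts, dropout rates, and batch sizes.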
  7. Machine Learning (ML) analyzes and processes data to develop patterns. In the case of cybersecurity, it helps to better analyze previous cyber attacks and develop proactive strategies to detect and prevent security threats. Both ML and cybersecurity are important subjects in the computing curriculum, but ML for security is not well represented there. We design and develop case-study-based portable labware on Google Colab for applying ML to cybersecurity, so that students can access, share, collaborate on, and practice these hands-on labs anywhere and anytime without tedious installation and configuration, helping them focus more on learning concepts and gaining more experience in hands-on problem-solving skills.