

Search for: All records

Award ID contains: 1757945


  1. As machine learning classifiers become more widely adopted, opaque “black-box” models remain largely inscrutable for a variety of reasons. Since their applications increasingly involve decisions that affect people's lives, there is growing demand that their predictions be understandable to humans. Of particular interest in eXplainable AI (XAI) is the interpretability of explanations, i.e., that a model’s prediction should be understandable in terms of the input features. One popular approach is LIME, which offers a model-agnostic framework for explaining any classifier. However, questions remain about the limitations and vulnerabilities of such post-hoc explainers. We have built a tool for generating synthetic tabular data sets that lets us probe the explanation system opportunistically based on its architecture. In this paper, we report on our success in revealing a scenario in which LIME’s explanation violates local faithfulness. (An illustrative sketch of applying LIME to a classifier trained on synthetic tabular data appears after this listing.)
  2. Smart grids integrate advanced information and communication technologies (ICTs) into traditional power grids for more efficient and resilient power delivery and management, but they also introduce new security vulnerabilities that adversaries can exploit to launch cyber attacks, causing severe consequences such as massive blackouts and infrastructure damage. Existing machine learning-based methods for detecting cyber attacks in smart grids are mostly based on supervised learning, which requires instances of both normal and attack events for training. In addition, supervised learning requires that the training dataset include representative instances of the various types of attack events, which is sometimes difficult, if not impossible, to obtain. This paper presents a new method for detecting cyber attacks in smart grids using PMU data, based on semi-supervised anomaly detection and deep representation learning. Semi-supervised anomaly detection employs only instances of normal events to train detection models, making it suitable for finding unknown attack events. A number of popular semi-supervised anomaly detection algorithms were investigated in our study using publicly available power system cyber attack datasets to identify the best-performing ones. A performance comparison with popular supervised algorithms demonstrates that semi-supervised algorithms are more capable of finding attack events than supervised ones. Our results also show that the performance of semi-supervised anomaly detection algorithms can be further improved by augmenting them with deep representation learning. (A minimal sketch of training a detector on normal events only appears after this listing.)
  3. In this paper, we propose that the theory of planned behavior (TPB), with the additional factors of awareness and context-based information, can be used to positively influence users' cybersecurity behavior. A research model based on TPB is developed and validated using a user study. As a proof of concept, we developed a mobile cybersecurity news app that incorporates context-based information, such as location, search history, and usage information from other mobile apps, into its article recommendations and warning notifications to better address user awareness. Through a survey of 100 participants, the proposed research model was validated, and it was confirmed that context-based information positively influences users' awareness of cybersecurity.
  4. A mental model is a useful tool for describing a user's general mental processes that underlie certain actions. In this paper, we investigate how to enhance the usability of security applications by considering human factors. Specifically, we study how to better understand and develop the user's mental model in the context of computer security through the reasoned action approach (RAA). RAA holds that a user's behavior is determined by her intention to perform the behavior, and that the intention is, in turn, a function of attitudes toward the behavior, perceived norms (or social pressure), and perceived behavioral control (capacity and relevant skills/abilities). A user study was conducted to test the validity of each of the main components of the model. The study showed that alterations to a computer security application, guided by this mental-model analysis, led to improved user behavior. (A toy numeric form of the RAA relationship appears after this listing.)
  5. PDF, as one of the most popular document file formats, has frequently been used by attackers as a vector to convey malware because of its flexible file structure and its ability to embed different kinds of content. In this paper, we propose a new learning-based method to detect PDF malware using image visualization and processing techniques. The PDF files are first converted to grayscale images using image visualization techniques. Then, various image features representing the distinct visual characteristics of PDF malware and benign PDF files are extracted. Finally, learning algorithms are applied to create classification models that classify a new PDF file as malicious or benign. The performance of the proposed method was evaluated using the Contagio PDF malware dataset. The results show that the proposed method is a viable solution for PDF malware detection. It is also shown that the proposed method is more robust against reverse mimicry attacks than the state-of-the-art learning-based method. (A sketch of the byte-to-grayscale-image conversion appears after this listing.)
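Illustrative sketch for item 1: the snippet below applies LIME's LimeTabularExplainer to a classifier trained on a small synthetic tabular data set with a known generating rule, which is one simple way to check whether the reported feature weights are locally faithful. The feature names, data-generating rule, and classifier are hypothetical stand-ins, not the authors' actual probing tool.

```python
# A minimal sketch, not the paper's tool: probe a LIME explanation against a
# synthetic data set whose true decision rule is known.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Synthetic tabular data: only f0 and f1 actually drive the label.
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=["f0", "f1", "f2", "f3"],
    class_names=["neg", "pos"],
    discretize_continuous=True,
)

# Explain a single prediction; comparing the weights against the known rule
# (f2 and f3 should carry near-zero weight) is a crude local-faithfulness check.
explanation = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
print(explanation.as_list())
```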
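Illustrative sketch for item 2: a semi-supervised (novelty-detection) setup in which a one-class model is fit only on normal-event measurements and then flags out-of-distribution points as attacks. The synthetic arrays stand in for PMU feature vectors; the paper's datasets, feature engineering, and deep representation learning step are not reproduced here.

```python
# A minimal sketch, assuming synthetic stand-in data: train on normal events
# only, then label anomalous test points as attacks.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Stand-in "PMU" feature vectors: normal operating points plus a small set of
# out-of-distribution attack points used only for evaluation.
X_normal = rng.normal(loc=0.0, scale=1.0, size=(2000, 8))
X_attack = rng.normal(loc=4.0, scale=1.5, size=(100, 8))

# Fit on normal events only -- no attack instances are needed for training.
detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
detector.fit(X_normal)

# Score a mixed test set; OneClassSVM labels inliers +1 and outliers -1.
X_test = np.vstack([X_normal[:200], X_attack])
y_true = np.array([0] * 200 + [1] * len(X_attack))      # 1 = attack
y_pred = (detector.predict(X_test) == -1).astype(int)    # -1 -> anomaly

print(classification_report(y_true, y_pred, target_names=["normal", "attack"]))
```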
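Illustrative sketch for item 4: a toy linear form of the RAA relationship, in which behavioral intention is a weighted combination of attitude, perceived norms, and perceived behavioral control. The weights and scales are hypothetical, not estimates from the paper's user study.

```python
# A toy, hypothetical linear instantiation of the RAA intention relationship.
def intention_score(attitude: float, perceived_norms: float,
                    perceived_control: float,
                    w_attitude: float = 0.4, w_norms: float = 0.3,
                    w_control: float = 0.3) -> float:
    """Return a 0-1 intention score from three 0-1 survey-scale inputs."""
    return (w_attitude * attitude
            + w_norms * perceived_norms
            + w_control * perceived_control)

# Example: a user with a favorable attitude but weak perceived behavioral control.
print(intention_score(attitude=0.9, perceived_norms=0.7, perceived_control=0.3))
```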
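Illustrative sketch for item 5: converting a PDF's raw bytes into a fixed-width grayscale image and extracting coarse visual features for a classifier, in the spirit of the visualization-based detection described above. The directory paths, feature choices, and classifier are hypothetical; the Contagio corpus and the paper's exact feature set are not reproduced.

```python
# A minimal sketch, assuming hypothetical benign_pdfs/ and malicious_pdfs/
# directories: bytes -> grayscale image -> simple visual features -> classifier.
import numpy as np
from pathlib import Path
from sklearn.ensemble import RandomForestClassifier

def pdf_to_grayscale(path: Path, width: int = 256) -> np.ndarray:
    """Interpret raw file bytes as pixel intensities in a fixed-width grayscale image."""
    data = np.frombuffer(path.read_bytes(), dtype=np.uint8)
    height = int(np.ceil(len(data) / width))
    padded = np.zeros(height * width, dtype=np.uint8)
    padded[:len(data)] = data
    return padded.reshape(height, width)

def image_features(img: np.ndarray) -> np.ndarray:
    """Coarse visual statistics: a normalized intensity histogram plus mean and std."""
    hist, _ = np.histogram(img, bins=32, range=(0, 255), density=True)
    return np.concatenate([hist, [img.mean(), img.std()]])

# Hypothetical labeled corpus: directories of benign and malicious PDF files.
benign = sorted(Path("benign_pdfs").glob("*.pdf"))
malicious = sorted(Path("malicious_pdfs").glob("*.pdf"))

X = np.array([image_features(pdf_to_grayscale(p)) for p in benign + malicious])
y = np.array([0] * len(benign) + [1] * len(malicious))   # 0 = benign, 1 = malicious

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```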