


Award ID contains: 1718929

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Despite AI’s significant growth, its “black box” nature creates challenges in generating adequate trust; thus, it is seldom used as a standalone unit in high-risk applications. Explainable AI (XAI) has emerged to address this problem, but designing XAI that is both fast and accurate remains challenging, especially in numerical applications. We propose a novel XAI model, Transparency Relying Upon Statistical Theory (TRUST). TRUST XAI models the statistical behavior of the underlying AI’s outputs. Factor analysis is used to transform the input features into a new set of latent variables. We use mutual information to rank these latent variables, keep only those most influential on the AI’s outputs, and call them the “representatives” of the classes. We then use multi-modal Gaussian distributions to determine the likelihood of any new sample belonging to each class. The proposed technique is a surrogate model that does not depend on the type of the underlying AI, and TRUST is suitable for any numerical application. Here, we use cybersecurity of the industrial internet of things (IIoT) as an example application. We analyze the performance of the model on three cybersecurity datasets: “WUSTL-IIoT”, “NSL-KDD”, and “UNSW”. We also show how TRUST is explained to the user. TRUST XAI provides explanations for new random samples with an average success rate of 98%. We further evaluate the advantages of our model over LIME, another popular XAI model, in performance, speed, and method of explainability.
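The likelihood step described in the abstract (per-class Gaussian densities over a few "representative" features) can be sketched roughly as follows. The function names, the single-mode Gaussians, and the naive feature-independence assumption are illustrative simplifications, not the paper's actual implementation (TRUST uses multi-modal Gaussians):

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    # Univariate Gaussian density; variance floored to avoid division by zero.
    var = max(var, 1e-9)
    return np.exp(-((x - mean) ** 2) / (2 * var)) / np.sqrt(2 * np.pi * var)

def fit_class_stats(X, y):
    # Per class, store the mean and variance of each representative feature.
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(axis=0), Xc.var(axis=0))
    return stats

def explainably_classify(sample, stats):
    # Likelihood of the sample under each class's Gaussians; the per-feature
    # densities themselves are the human-readable explanation.
    scores = {}
    for c, (mean, var) in stats.items():
        dens = [gaussian_pdf(s, m, v) for s, m, v in zip(sample, mean, var)]
        scores[c] = float(np.prod(dens))
    return max(scores, key=scores.get), scores
```

Because each class score is a product of per-feature densities, the user can see which representative feature drove (or undermined) the decision, which is the kind of transparency the abstract describes.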
  2. Efficient provisioning of 5G network slices is a major challenge for 5G network slicing technology. Previous slice provisioning methods have considered only network resource attributes and ignored network topology attributes, which may decrease the slice acceptance ratio and the slice provisioning revenue. To address these issues, in this paper we propose RT-CSP, a two-stage heuristic slice provisioning algorithm for the 5G core network that jointly considers network resource attributes and topology attributes. The first stage, the slice node provisioning stage, scores and ranks nodes using network resource attributes (i.e., CPU capacity and bandwidth) and topology attributes (i.e., degree centrality and closeness centrality); slice nodes are then provisioned according to the node ranking results. In the second stage, the slice link provisioning stage, the k-shortest path algorithm is used to provision slice links. To further improve the performance of RT-CSP, we propose RT-CSP+, which uses our strategy, minMaxBWUtilHops, to select the best physical path to host each slice link. The strategy minimizes the product of the maximum link bandwidth utilization of the candidate physical path and its number of hops, avoiding bottlenecks in the physical path and reducing the bandwidth cost. Using extensive simulations, we compared our results with those of state-of-the-art algorithms. The experimental results show that our algorithms increase the slice acceptance ratio and improve the provisioning revenue-to-cost ratio.
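The minMaxBWUtilHops path-selection rule lends itself to a short sketch: among candidate physical paths, pick the one minimizing (maximum link bandwidth utilization) × (hop count). The data representation below (paths as lists of links, a utilization dict) is an illustrative assumption, not the paper's implementation:

```python
def min_max_bw_util_hops(paths, link_util):
    # paths: candidate physical paths, each a list of links (node pairs)
    # link_util: dict mapping link -> current bandwidth utilization in [0, 1]
    def cost(path):
        # Product of the path's worst (most utilized) link and its hop count.
        max_util = max(link_util[link] for link in path)
        return max_util * len(path)
    # The path with the smallest product avoids both bottleneck links
    # and needlessly long routes.
    return min(paths, key=cost)
```

Note the trade-off the product encodes: a lightly loaded but longer path can beat a shorter path whose worst link is near saturation.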
  3. Fault and performance management systems in traditional carrier networks are based on rule-based diagnostics that correlate alarms and other markers to detect and localize faults and performance issues. As carriers move to Virtual Network Services, based on Network Function Virtualization and multi-cloud deployments, these traditional methods fail to deliver because of the intangibility of the constituent Virtual Network Functions and the increased complexity of the resulting architecture. In this paper, we propose a framework, called HYPER-VINES, that interfaces with the various management platforms involved and processes markers through a system of shallow and deep machine learning models to detect and localize manifested and impending fault and performance issues. Our experiments validate the functionality and feasibility of the framework in terms of accurate detection and localization of such issues and unambiguous prediction of impending ones. Simulations with real network fault datasets show the effectiveness of its architecture in large networks.
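The shallow-then-deep pipeline idea can be illustrated with a minimal, hypothetical sketch: a cheap anomaly detector flags suspicious marker vectors, and a localization step attributes the anomaly to a Virtual Network Function. The z-score stand-in and all names below are illustrative assumptions only, not the HYPER-VINES models:

```python
import numpy as np

def shallow_detect(markers, threshold=3.0):
    # Cheap first-tier filter: flag rows whose most extreme per-marker
    # z-score exceeds the threshold (a stand-in for a shallow ML model).
    z = (markers - markers.mean(axis=0)) / (markers.std(axis=0) + 1e-9)
    return np.abs(z).max(axis=1) > threshold

def localize(marker_vector, vnf_names):
    # Toy localization: attribute the anomaly to the VNF whose marker
    # deviates the most (a stand-in for a deeper localization model).
    return vnf_names[int(np.argmax(np.abs(marker_vector)))]
```

The design point the sketch conveys is the tiering itself: the inexpensive first pass keeps the expensive localization models off the common, healthy case.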