

Search for: All records

Award ID contains: 1736209


  1. Abstract

    We develop a decentralized colouring approach to diversify the nodes in a complex network. The key is the introduction of a local conflict index (LCI) that measures the colour conflicts arising at each node and can be computed efficiently using only local information. We demonstrate via both synthetic and real-world networks that the proposed approach significantly outperforms random colouring as measured by the size of the largest colour-induced connected component. Interestingly, for scale-free networks, further improvement of diversity can be achieved by tuning a degree-biasing weighting parameter in the LCI.

     
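The decentralized scheme in the abstract above can be sketched in a few lines. Everything below is illustrative: the LCI is assumed, for the sketch, to be the number of a node's same-colour neighbours, each weighted by its degree raised to a biasing exponent `beta` (the paper's exact LCI formula and weighting are not reproduced here).

```python
import random

def local_conflict_index(graph, colors, node, beta=0.0):
    # Assumed LCI for this sketch: count of same-colour neighbours,
    # each weighted by its degree raised to a biasing exponent beta
    # (beta = 0 gives the unweighted count).
    return sum(len(graph[v]) ** beta
               for v in graph[node] if colors[v] == colors[node])

def decentralized_coloring(graph, k, beta=0.0, rounds=50, seed=0):
    rng = random.Random(seed)
    colors = {v: rng.randrange(k) for v in graph}
    for _ in range(rounds):
        changed = False
        for v in graph:
            # Each node locally switches to the colour that minimizes
            # its own conflict index -- no global information needed.
            best = min(range(k),
                       key=lambda c: local_conflict_index(
                           graph, {**colors, v: c}, v, beta))
            if best != colors[v]:
                colors[v], changed = best, True
        if not changed:
            break
    return colors

# Toy example: a 4-cycle admits a conflict-free 2-colouring.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
result = decentralized_coloring(cycle, k=2)
conflicts = sum(local_conflict_index(cycle, result, v) for v in cycle)
```

Each node repeatedly picks the colour that minimizes its own conflict index using only neighbour information, which is what makes the scheme decentralized.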
  2. Free, publicly-accessible full text available September 11, 2024
  3. Free, publicly-accessible full text available September 8, 2024
  4. Free, publicly-accessible full text available July 12, 2024
  5. Free, publicly-accessible full text available June 1, 2024
  6. Continuous Integration (CI) practices encourage developers to frequently integrate code into a shared repository. Each integration is validated by an automatic build and test run so that errors are revealed as early as possible. When CI failures or integration errors are reported, existing techniques are insufficient to automatically locate the root causes, for two reasons. First, a CI failure may be triggered by faults in source code and/or build scripts, while current approaches consider only source code. Second, a tentative integration can fail because of build failures and/or test failures, while existing tools focus on test failures only. This paper presents UniLoc, the first unified technique to localize faults in both source code and build scripts given a CI failure log, without assuming the failure's location (source code or build scripts) or nature (a test failure or not). Adopting an information retrieval (IR) strategy, UniLoc locates buggy files by treating source code and build scripts as documents to search and build logs as search queries. However, instead of naïvely applying an off-the-shelf IR technique to these software artifacts, UniLoc applies various domain-specific heuristics to optimize the search queries, search space, and ranking formulas for more accurate fault localization. To evaluate UniLoc, we gathered 700 CI failure fixes in 72 open-source projects that are built with Gradle. UniLoc effectively located bugs, with an average MRR (Mean Reciprocal Rank) of 0.49, MAP (Mean Average Precision) of 0.36, and NDCG (Normalized Discounted Cumulative Gain) of 0.54, outperforming the state-of-the-art IR-based tools BLUiR and Locus. UniLoc has the potential to help developers diagnose root causes of CI failures more accurately and efficiently.
    Free, publicly-accessible full text available May 1, 2024
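The IR formulation at the core of UniLoc (documents = source files and build scripts, query = the failure log) can be sketched with a plain TF-IDF ranker. The file contents and failure log below are invented, and UniLoc's domain-specific query optimization and ranking heuristics are not reproduced; this shows only the baseline idea.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Split camelCase and punctuation so code identifiers match log terms.
    parts = []
    for w in re.findall(r"[A-Za-z]+", text):
        parts += re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", w)
    return [p.lower() for p in parts]

def rank_files(files, query):
    # files: {path: content}; returns paths sorted by TF-IDF match score
    # against the query (highest first).
    docs = {p: Counter(tokenize(c)) for p, c in files.items()}
    n = len(docs)
    df = Counter(t for d in docs.values() for t in d)
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    q = Counter(tokenize(query))

    def score(d):
        num = sum(q[t] * d[t] * idf.get(t, 0.0) ** 2 for t in q)
        norm = math.sqrt(sum((d[t] * idf[t]) ** 2 for t in d)) or 1.0
        return num / norm

    return sorted(docs, key=lambda p: score(docs[p]), reverse=True)

# Hypothetical project: a build script and two source files.
files = {
    "build.gradle": "dependencies { implementation 'com.google.guava:guava' }",
    "src/Parser.java": "class Parser { void parseConfig() { } }",
    "src/Main.java": "class Main { void run() { } }",
}
log = "FAILURE: Build failed. Could not resolve guava dependency"
ranked = rank_files(files, log)
```

UniLoc layers its heuristics on top of this baseline, optimizing the query drawn from the log, pruning the search space, and adjusting the ranking formulas.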
  7. In the United States and around the world, gun violence has become a long-standing public safety concern and a security threat due to violent gun-related crimes, injuries, and fatalities. Although legislators and lawmakers have attempted to mitigate its threats through legislation, research on gun violence confirms the need for a comprehensive approach to gun violence prevention. This entails addressing the problem in as many ways as possible, such as through legislation, new technological advancements, and the re-engineering of supply and administrative protocols, among others. This research focuses on the technological, supply, and administrative aspects: we propose a way of managing gun-related data efficiently from the point of manufacture/sale, as well as at points of transfer between secondary sellers, to improve criminal investigation processes. Making data more readily available, and with greater integrity, will facilitate successful investigations and prosecutions of gun crimes. Currently, there is no single, uniform platform for firearm manufacturers, dealers, and other stakeholders involved in firearm sales, dissemination, management, and investigation. With the help of Blockchain technology, gun registration, ownership, transfers, and, most importantly, investigations when crimes occur can all be managed efficiently, breaking the cycle of gun violence. The identification of guns, gun tracing, and the identification of gun owners/possessors rely on accuracy, integrity, and consistency in the related systems that support gun crime investigation processes. This study demonstrates how Blockchain technology, which uses a consensus-based approach to improve processes and transactions, can enhance these procedures. To the best of our knowledge, this is the first study to explore and demonstrate the utility of Blockchain for gun-related criminal investigations using a design science approach.
    Free, publicly-accessible full text available March 31, 2024
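The tamper-evidence property the abstract above relies on comes from hash chaining, which a few lines can illustrate. The record fields (serial number, events, parties) are hypothetical, and a real registry would add consensus among nodes and digital signatures; this is only a sketch of the ledger core.

```python
import hashlib
import json

def make_block(record, prev_hash):
    # Each block commits to its record AND the previous block's hash,
    # so altering any past entry invalidates every later hash.
    body = {"record": record, "prev_hash": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def append(chain, record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append(make_block(record, prev))

def verify(chain):
    # Recompute every hash and check each link to the previous block.
    prev = "0" * 64
    for b in chain:
        body = {"record": b["record"], "prev_hash": b["prev_hash"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if b["prev_hash"] != prev or b["hash"] != digest:
            return False
        prev = b["hash"]
    return True

# Hypothetical registry events for one firearm.
chain = []
append(chain, {"serial": "SN-001", "event": "manufactured", "by": "MakerCo"})
append(chain, {"serial": "SN-001", "event": "sold", "to": "Dealer A"})
append(chain, {"serial": "SN-001", "event": "transfer", "to": "Owner B"})
ok_before = verify(chain)
chain[1]["record"]["to"] = "Someone Else"   # tamper with a past sale
ok_after = verify(chain)
```

Because every block's hash covers the previous hash, an investigator can detect any retroactive edit to a sale or transfer record by re-verifying the chain.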
  8. Convolutional Neural Networks (CNNs) continue to revolutionize image recognition technology and are also being used in non-image fields such as cybersecurity. They are known to work as feature extractors, identifying patterns within large data sets, but when dealing with nonnatural data, what these features represent is not understood. Several class activation map (CAM) visualization tools exist that help explain CNN decisions on images, but their output is not intuitively comprehensible for nonnatural security data. Understanding what the extracted features represent should enable the data analyst and model architect to tailor a model that maximizes the extracted features while minimizing the computational parameters. In this paper we offer a new tool, Model-integrated Class Activation Maps (MiCAM), which allows the analyst to visually compare extracted feature intensities at the level of individual layers. We explore using this tool to analyse several data sets: first the MNIST handwriting data set, to gain a baseline understanding, and then two security data sets, process metrics from cloud-based application servers infected with malware and the CIC-IDS-2017 IP traffic set, and identify how re-ordering nonnatural security-related data affects feature extraction performance.

     
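The class activation map idea underlying MiCAM can be illustrated with NumPy: a CAM is a class-weighted sum of a convolutional layer's feature maps. The random activations and weights below are stand-ins, and MiCAM's per-layer, model-integrated comparison is not reproduced; this shows only the basic CAM computation.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    # feature_maps: (K, H, W) activations from one conv layer;
    # class_weights: (K,) weights tying each map to the target class.
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # -> (H, W)
    cam = np.maximum(cam, 0.0)        # keep only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()         # rescale to [0, 1] for display
    return cam

# Toy layer: 3 feature maps of size 4x4, with one strong activation.
rng = np.random.default_rng(0)
maps = rng.random((3, 4, 4)) * 0.1
maps[0, 0, 0] = 5.0                    # map 0 fires in the top-left corner
weights = np.array([1.0, 0.2, 0.2])    # the class attends mostly to map 0
cam = class_activation_map(maps, weights)
```

The weighted sum highlights the input regions most responsible for the class score; a tool in the spirit of MiCAM would compute and visually align such maps at every layer rather than only at the last one.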