-
Solomon, Latasha; Schwartz, Peter J. (Eds.). In recent years, computer vision has made significant strides in enabling machines to perform a wide range of tasks, from image classification and segmentation to image generation and video analysis. It is a rapidly evolving field that aims to enable machines to interpret and understand visual information from the environment. One key task in computer vision is image classification, where algorithms identify and categorize objects in images based on their visual features. Image classification has a wide range of applications, from image search and recommendation systems to autonomous driving and medical diagnosis. However, recent research has highlighted the presence of bias in image classification algorithms, particularly with respect to human-sensitive attributes such as gender, race, and ethnicity. For example, the label "computer programmer" is predicted more reliably for images of men than for images of women, and accuracy can differ between greyscale and color images. Such discrepancies arise from correlations the algorithm learns between objects and their surrounding context, a phenomenon known as contextual bias. This bias can result in inaccurate decisions, with potential consequences in areas such as hiring, healthcare, and security. In this paper, we conduct an empirical study of bias in the image classification domain with respect to the sensitive attribute of gender, using deep convolutional neural networks (CNNs) trained through transfer learning, and we mitigate bias within the image context using data augmentation to improve overall model performance. In addition, cross-dataset generalization experiments are conducted to evaluate model robustness across popular open-source image datasets.
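To make the setup concrete, here is a minimal sketch of transfer learning with bias-reducing data augmentation, assuming a PyTorch pipeline, a hypothetical `data/train` folder in ImageFolder layout, and illustrative augmentation choices; the paper's exact architecture, dataset, and augmentation policy are not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Augmentations intended to weaken contextual cues (illustrative choices).
train_tfms = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomGrayscale(p=0.1),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical dataset path; expects one subfolder per class label.
train_ds = datasets.ImageFolder("data/train", transform=train_tfms)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Transfer learning: freeze the pretrained backbone, retrain only the head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for x, y in train_dl:                # one epoch shown for brevity
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```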
-
Sensor-powered devices offer safe global connections, cloud scalability and flexibility, and new business value driven by data. The constraints that have historically obstructed major innovations in technology can be addressed by advancements in Artificial Intelligence (AI) and Machine Learning (ML), cloud and quantum computing, and the ubiquitous availability of data. Edge AI (Edge Artificial Intelligence) refers to the deployment of AI applications on edge devices near the data source rather than in a cloud computing environment. Although edge data has been used to make real-time inferences with predictive models, real-time machine learning has not yet been fully adopted. Real-time machine learning uses real-time data to learn on the go, which enables faster and more accurate real-time predictions and eliminates the need to store data, thereby reducing privacy concerns. In this article, we present the practical prospect of developing a physical threat detection system that uses real-time edge data from security cameras and sensors to improve the accuracy, efficiency, reliability, security, and privacy of the real-time inference model.
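As an illustration of learning on the go at the edge, the sketch below uses scikit-learn's incremental `partial_fit` API on a hypothetical fixed-size camera-frame stream; the feature extractor, labels, and threat classes are placeholders, not the system described in the article.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Placeholder feature extractor; a real system would derive features
# from the camera/sensor pipeline (e.g., motion or pose descriptors).
def extract_features(frame):
    return np.asarray(frame, dtype=float).ravel()[:128]

classes = np.array([0, 1])               # 0 = benign, 1 = physical threat
model = SGDClassifier(loss="log_loss")   # supports incremental updates

def on_new_frame(frame, label=None):
    """Infer on the device; learn in place whenever a label arrives."""
    x = extract_features(frame).reshape(1, -1)
    pred = model.predict(x)[0] if hasattr(model, "coef_") else None
    if label is not None:
        # partial_fit updates the model without storing past frames.
        model.partial_fit(x, [label], classes=classes)
    return pred
```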
-
Pham, Tien; Solomon, Latasha; Hohil, Myron E. (Eds.). Explainable Artificial Intelligence (XAI) is the capability of explaining the reasoning behind the choices made by a machine learning (ML) algorithm, which helps users understand and maintain the transparency of the algorithm's decision-making. Humans make thousands of decisions every day and can explain the reasons behind the choices they make; the same is not true of ML and AI systems. XAI was not widely researched until recently, but it has since become one of the most relevant topics in AI for trustworthy and transparent outcomes. XAI aims to provide maximum transparency for an ML algorithm by answering questions about how models arrived at their output. ML models paired with XAI can explain the rationale behind their results, reveal the strengths and weaknesses of the learning models, and indicate how the models will behave in the future. In this paper, we investigate XAI for algorithmic trustworthiness and transparency. We evaluate XAI on several example use cases using the SHAP (SHapley Additive exPlanations) library, visualizing the effect of features both individually and cumulatively in the prediction process.
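A minimal example of the kind of SHAP analysis described above, using the library's bundled adult-income dataset and an XGBoost model as a stand-in for the paper's own use cases:

```python
import shap
import xgboost
from sklearn.model_selection import train_test_split

# SHAP's bundled adult-income data stands in for any tabular use case.
X, y = shap.datasets.adult()
X_train, X_test, y_train, y_test = train_test_split(
    X, y.astype(int), random_state=0)

model = xgboost.XGBClassifier().fit(X_train, y_train)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Cumulative view: each feature's effect across the whole test set.
shap.summary_plot(shap_values, X_test)

# Individual view: feature contributions for a single prediction.
shap.force_plot(explainer.expected_value, shap_values[0, :],
                X_test.iloc[0, :], matplotlib=True)
```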
-
Cyber-threats are continually evolving and growing in number and complexity with the increasing connectivity of the Internet of Things (IoT). Existing cyber-defense tools do not seem to deter the number of successful cyber-attacks reported worldwide. If defense tools are not scarce, why does the cyber-chase trend favor bad actors? Although cyber-defense tools monitor and try to defuse intrusion attempts, research shows that their agility against evolving threats is far too slow. One reason is that many intrusion detection tools focus on the accuracy of anomaly alerts, assuming that pre-observed attacks and subsequent security patches are adequate; in practice, they are not. There is a need for techniques that go beyond intrusion accuracy against specific vulnerabilities to the prediction of cyber-defense performance for improved proactivity. This paper proposes a combination of cyber-attack projection and cyber-defense agility estimation to dynamically yet reliably forecast intrusion detection performance. Since cyber-security involves many unknown parameters and rapidly changing trends, we apply a machine learning (ML)-based hidden Markov model (HMM) to predict intrusion detection agility. HMMs are known for robustly modeling temporal relationships amid noise and with brief training, which corroborates our high prediction accuracy on three major open-source network intrusion detection systems: Zeek, OSSEC, and Suricata. Specifically, we present a novel approach that combines projection, prediction, and cyber-visualization to enable precise agility analysis of cyber defense. We also evaluate the performance of the developed approach using numerical results.
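As a rough sketch of HMM-based regime prediction over IDS output, the example below fits a two-state Gaussian HMM (with `hmmlearn`) to a synthetic stream of per-window alert rates; the two hidden states, the synthetic data, and the forecasting step are illustrative assumptions, not the paper's exact model.

```python
import numpy as np
from hmmlearn import hmm

# Synthetic per-window alert rates standing in for output from an IDS
# such as Zeek, OSSEC, or Suricata.
rng = np.random.default_rng(0)
alert_rates = np.concatenate([rng.normal(2, 0.5, 300),   # quiet regime
                              rng.normal(8, 1.5, 200)])  # attack burst
X = alert_rates.reshape(-1, 1)

# Two hidden states assumed: "defense keeping pace" vs. "defense lagging".
model = hmm.GaussianHMM(n_components=2, covariance_type="diag",
                        n_iter=100, random_state=0)
model.fit(X)

# Decode the most likely regime sequence, then use the learned transition
# matrix to forecast the next regime as a rough proxy for agility.
states = model.predict(X)
print("P(next hidden regime):", model.transmat_[states[-1]])
```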
-
Edge Computing (EC) has seen a continuous rise in popularity because it addresses the latency and communication issues that arise when edge devices transfer data to remote servers. EC achieves this by bringing the cloud closer to edge devices. Although EC does an excellent job of solving the latency and communication issues, it does not solve the privacy issues associated with users transferring personal data to a nearby edge server. Federated Learning (FL) is an approach introduced to solve the privacy issues associated with data transfers to distant servers. FL resolves this issue by bringing the code to the data, reversing the traditional approach of sending the data to remote servers. In FL, the data stays on the source device, and a Machine Learning (ML) model to be trained on the local data is brought to the end device instead. End devices train the ML model using local data and then send the model updates back to the server for aggregation. However, this process of asking arbitrary devices to train a model on their local data carries risks, such as a participant poisoning the model with malicious training data to produce bogus parameters. In this paper, an approach to mitigate data poisoning attacks in a federated learning setting is investigated. The application of the approach is highlighted, and its practicality and security are illustrated using numerical results.
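One widely used mitigation in this space is robust aggregation; the sketch below contrasts plain federated averaging with a coordinate-wise median on synthetic client updates. This is an illustrative defense, not necessarily the specific approach investigated in the paper.

```python
import numpy as np

def fed_avg(updates):
    """Plain federated averaging; vulnerable to poisoned updates."""
    return np.mean(updates, axis=0)

def median_aggregate(updates):
    """Coordinate-wise median, a common robustness baseline."""
    return np.median(updates, axis=0)

# Nine honest clients send similar updates; one attacker sends a
# poisoned update scaled to drag the global model off course.
rng = np.random.default_rng(42)
honest = [rng.normal(1.0, 0.1, size=4) for _ in range(9)]
poisoned = [np.full(4, -50.0)]
updates = np.stack(honest + poisoned)

print("FedAvg :", fed_avg(updates))           # skewed by the attacker
print("Median :", median_aggregate(updates))  # stays near the honest mean
```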
-
A data falsification attack in Vehicular Ad hoc Networks (VANETs) for the Internet of Vehicles (IoV) corrupts the data exchanged between nodes with false information. Data is a highly valuable asset from which many analyses and results can be drawn, but the privacy concerns raised by users have become the greatest hindrance to performing data analysis. In IoV, misbehavior detection can be performed by building a machine learning model from the basic safety message (BSM) datasets of vehicles. We propose a privacy-preserving misbehavior detection system for IoV using Federated Machine Learning. Vehicles in the VANET are given an initial, untrained model to train locally using their own data. The result is a collective model that can classify position falsification attacks in VANETs using the data generated by each vehicle, without ever sharing that data with a third party for analysis. In this paper, we compare the performance of attack detection models trained using federated and centralized approaches. The federated method trains the model on different kinds of position falsification attacks using local BSM data generated on each vehicle.
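A minimal sketch of one federated round for this setting: each simulated vehicle trains a local classifier on hypothetical BSM-derived features, and only model parameters reach the server for averaging. The features, labels, and parameter-averaging scheme are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_vehicle_data(n=200):
    # Hypothetical BSM-derived features (e.g., position/speed consistency);
    # label 1 marks a falsified message.
    X = rng.normal(0, 1, (n, 3))
    y = (X[:, 0] + rng.normal(0, 0.3, n) > 0).astype(int)
    return X, y

def local_update(X, y):
    """Train on-vehicle; share only parameters, never raw BSM data."""
    clf = LogisticRegression(max_iter=200).fit(X, y)
    return clf.coef_.ravel(), clf.intercept_

vehicles = [make_vehicle_data() for _ in range(5)]

# One federated round: the server averages the vehicles' parameters.
coefs, intercepts = zip(*(local_update(X, y) for X, y in vehicles))
global_coef = np.mean(coefs, axis=0)
global_intercept = np.mean(intercepts, axis=0)
print("Aggregated global model:", global_coef, global_intercept)
```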