Mobile wireless networks present several challenges for any learning system, due to uncertain and variable device movement, a decentralized network architecture, and constraints on network resources. In this work, we use deep reinforcement learning (DRL) to learn a scalable and generalizable forwarding strategy for such networks. We make the following contributions: i) we use hierarchical RL to design DRL packet agents rather than device agents, to capture the packet forwarding decisions that are made over time and improve training efficiency; ii) we use relational features to ensure generalizability of the learned forwarding strategy to a wide range of network dynamics and enable offline training; and iii) we incorporate both forwarding goals and network resource considerations into packet decision-making by designing a weighted DRL reward function. Our results show that our DRL agent often achieves a per-packet delivery delay similar to that of the optimal forwarding strategy and outperforms all other strategies, including state-of-the-art ones, even in scenarios on which the DRL agent was not trained.
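The abstract does not give the exact form of the weighted reward function, so the following is only an illustrative sketch of the idea it describes: a per-packet reward that weighs a forwarding goal (low delay) against a network-resource consideration (congestion at the chosen next hop). All names, weights, and the delivery bonus are assumptions, not the paper's values.

```python
def packet_reward(delivered: bool, hop_delay: float, queue_occupancy: float,
                  w_delay: float = 0.7, w_resource: float = 0.3) -> float:
    """Illustrative weighted reward for a DRL packet agent.

    Trades off forwarding progress against resource usage:
    - hop_delay penalizes slow forwarding decisions (forwarding goal),
    - queue_occupancy penalizes congesting a busy next hop (resource goal).
    Weights and the delivery bonus are hypothetical.
    """
    delivery_bonus = 10.0 if delivered else 0.0
    return delivery_bonus - w_delay * hop_delay - w_resource * queue_occupancy
```

A trainer would call this once per forwarding decision, so the discounted return captures decisions made over the packet's lifetime.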
Accurate Identification of IoT Devices in the Presence of Wireless Channel Dynamics
Identifying IoT devices is crucial for network monitoring, security enforcement, and inventory tracking. However, most existing identification methods rely on deep packet inspection, which raises privacy concerns and adds computational complexity. Moreover, existing works overlook the impact of wireless channel dynamics on the accuracy of layer-2 features, thereby limiting their effectiveness in real-world scenarios. In this work, we define and use the latency of specific probe-response packet exchanges, referred to as "device latency," as the main feature for device identification. Additionally, we reveal the critical impact of wireless channel dynamics on the accuracy of device identification based on device latency features. Specifically, this work introduces "accumulation score" as a novel approach to capturing fine-grained channel dynamics and their impact on device latency when training machine learning models. We implement the proposed methods and measure the accuracy and overhead of device identification in real-world scenarios. The results confirm that by incorporating the accumulation score for balanced data collection and training machine learning algorithms, we achieve an F1 score of over 97% for device identification, even amidst wireless channel dynamics, a significant improvement over the 75% F1 score achieved by disregarding the impact of channel dynamics on data collection and device latency.
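The paper's "device latency" feature is the latency of specific probe-response exchanges; the "accumulation score" that weights samples by channel dynamics is not specified here. As a minimal sketch under those assumptions, one might summarize raw latency samples into robust per-device statistics before training a classifier (the function name and chosen statistics are illustrative):

```python
import statistics

def device_latency_profile(latencies_ms):
    """Summarize probe-response 'device latency' samples into a fingerprint.

    A real pipeline would also balance data collection using an
    accumulation score that captures channel dynamics; here we only
    compute robust summary statistics (median and interquartile range).
    """
    quartiles = statistics.quantiles(latencies_ms, n=4)  # Q1, median, Q3
    return {
        "median": statistics.median(latencies_ms),
        "iqr": quartiles[2] - quartiles[0],
    }
```

The median resists outlier spikes from retransmissions, while the IQR captures how much channel variability widens the latency distribution.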
- Award ID(s):
- 2138633
- PAR ID:
- 10584468
- Publisher / Repository:
- IEEE
- Date Published:
- ISSN:
- 2832-1421
- ISBN:
- 979-8-3503-8800-8
- Page Range / eLocation ID:
- 1 to 8
- Format(s):
- Medium: X
- Location:
- Normandy, France
- Sponsoring Org:
- National Science Foundation
More Like this
-
Kajtoch, Łukasz (Ed.) This study presents an initial model for bark beetle identification, serving as a foundational step toward developing a fully functional and practical identification tool. Bark beetles are known for extensive damage to forests globally, as well as for uniform and homoplastic morphology which poses identification challenges. We utilize a MaxViT-based deep learning backbone, which applies local and global attention, to classify bark beetles down to the genus level from images containing multiple beetles. The methodology involves a process of image collection, preparation, and model training, leveraging pre-classified beetle species to ensure accuracy and reliability. The model's F1 score estimates of 0.99 and 1.0 indicate a strong ability to accurately classify genera in the collected data, including those previously unknown to the model. This makes it a valuable first step towards building a tool for applications in forest management and ecological research. While the current model distinguishes among 12 genera, further refinement and additional data will be necessary to achieve reliable species-level identification, which is particularly important for detecting new invasive species. Despite the controlled conditions of image collection and potential challenges in real-world application, this study provides the first model capable of identifying bark beetle genera, and by far the largest training set of images for any comparable insect group. We also designed a function that reports if a species appears to be unknown. Further research is suggested to enhance the model's generalization capabilities and scalability, emphasizing the integration of advanced machine learning techniques for improved species classification and the detection of invasive or undescribed species.
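The abstract mentions a function that reports when a specimen appears to be unknown, but not how it works. One common way to get this behavior, sketched here purely as an assumption, is to reject predictions whose top softmax probability falls below a confidence threshold (function name, genera, and threshold are illustrative):

```python
def classify_genus(probabilities, genera, threshold=0.8):
    """Return the predicted genus, or 'unknown' when the classifier's
    top softmax probability is below a confidence threshold.

    This is a hypothetical open-set rejection sketch, not the paper's
    actual method; the threshold value is illustrative.
    """
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    if probabilities[best] < threshold:
        return "unknown"
    return genera[best]
```
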
-
On-chip learning with the compute-in-memory (CIM) paradigm has become popular in machine learning hardware design in recent years. However, it is hard to achieve high on-chip learning accuracy due to the high nonlinearity in the weight update curve of emerging nonvolatile memory (eNVM) based analog synapse devices. Although digital synapse devices offer good learning accuracy, the row-by-row partial sum accumulation leads to high latency. In this paper, the methods to solve the aforementioned issues are presented with a device-to-algorithm level optimization. For analog synapses, novel hybrid precision synapses with good linearity and more advanced training algorithms are introduced to increase the on-chip learning accuracy. The latency issue for digital synapses can be solved by using a parallel partial sum read-out scheme. All these features are included in the recently released MLP + NeuroSimV3.0, which is an in-house developed device-to-system evaluation framework for neuro-inspired accelerators based on the CIM paradigm.
-
Radio Frequency (RF) device fingerprinting has been recognized as a potential technology for enabling automated wireless device identification and classification. However, it faces a key challenge due to the domain shift that could arise from variations in the channel conditions and environmental settings, potentially degrading the accuracy of RF-based device classification when testing and training data are collected in different domains. This paper introduces a novel solution that leverages contrastive learning to mitigate this domain shift problem. Contrastive learning, a state-of-the-art self-supervised learning approach from deep learning, learns a distance metric such that positive pairs are closer (i.e., more similar) in the learned metric space than negative pairs. When applied to RF fingerprinting, our model treats RF signals from the same transmission as positive pairs and those from different transmissions as negative pairs. Through experiments on wireless and wired RF datasets collected over several days, we demonstrate that our contrastive learning approach captures domain-invariant features, diminishing the effects of domain-specific variations. Our results show large and consistent improvements in accuracy (10.8% to 27.8%) over baseline models, thus underscoring the effectiveness of contrastive learning in improving device classification under domain shift.
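The abstract describes the core contrastive idea (pull same-transmission pairs together, push different-transmission pairs apart) without giving the exact loss. The following margin-based contrastive loss is a simplified stand-in for that objective, written on plain Python lists so the mechanics are visible; the margin value and function names are assumptions:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def contrastive_margin_loss(anchor, other, is_positive, margin=0.5):
    """Simplified contrastive objective for RF fingerprint embeddings.

    Positive pairs (same transmission) are penalized by their cosine
    distance; negative pairs (different transmissions) are penalized
    only when closer than `margin`. A sketch, not the paper's exact loss.
    """
    dist = 1.0 - cosine_sim(anchor, other)
    return dist if is_positive else max(0.0, margin - dist)
```

Minimizing this over many pairs drives embeddings toward the domain-invariant structure the paper reports: the loss is zero once positives coincide and negatives are pushed past the margin.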
-
Today, numerous machine learning (ML) applications offer continuous data processing and real-time data analytics at the edge of wireless networks. Distributed real-time ML solutions are highly susceptible to the so-called straggler effect caused by resource heterogeneity, which can be mitigated by various computation offloading mechanisms that severely impact communication efficiency, especially in large-scale scenarios. To reduce the communication overhead, we leverage device-to-device (D2D) connectivity, which enhances spectrum utilization and allows for efficient data exchange between proximate devices. In particular, we design a novel D2D-aided coded distributed learning method named D2D-CDL for efficient load balancing across devices. The proposed solution captures system dynamics, including data (time-varying learning model, irregular intensity of data arrivals), device (diverse computational resources and volume of training data), and deployment (different locations and D2D graph connectivity). To decrease the number of communication rounds, we derive an optimal compression rate, which minimizes the processing time. The resulting optimization problem provides suboptimal compression parameters that improve the total training time. Our proposed method is particularly beneficial for real-time collaborative applications, where users continuously generate training data.