Title: PMNet: Robust Pathloss Map Prediction via Supervised Learning
Pathloss prediction is an essential component of wireless network planning. While ray-tracing-based methods have been used successfully for many years, they require significant computational effort that may become prohibitive with increased network densification and/or the use of higher frequencies in 5G/B5G (beyond 5G) systems. In this paper, we propose and evaluate a data-driven, model-free pathloss prediction method dubbed PMNet. The method takes a supervised learning approach: it trains a neural network (NN) on a limited amount of ray tracing (or channel measurement) data together with map data, and then predicts pathloss with high accuracy at locations for which no ray tracing data are available. Our pathloss-map-prediction-oriented NN architecture, empowered by state-of-the-art computer vision techniques, outperforms previously proposed architectures (e.g., UNet, RadioUNet) in accuracy while showing generalization capability. Moreover, PMNet trained on a 4-fold smaller dataset surpasses the other baselines (trained on the 4-fold larger dataset), corroborating the potential of PMNet.
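A rough, non-authoritative sketch of the supervised setup described above: a generic encoder-decoder is trained to regress a per-pixel pathloss map from map-layer inputs, with ray tracing outputs as labels. The architecture, channel counts, and data below are illustrative stand-ins, not the actual PMNet design.

```python
# Toy encoder-decoder for pathloss map regression (NOT the PMNet architecture).
import torch
import torch.nn as nn

class TinyPathlossNet(nn.Module):
    """Maps stacked map layers (e.g., buildings + TX location) to a
    per-pixel pathloss map."""
    def __init__(self, in_ch=2):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = TinyPathlossNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
maps = torch.rand(8, 2, 64, 64)          # stand-in for map + TX-location inputs
pathloss_gt = torch.rand(8, 1, 64, 64)   # stand-in for ray tracing labels
for _ in range(10):                      # supervised regression loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(maps), pathloss_gt)
    loss.backward()
    opt.step()
```

Once trained, the same forward pass predicts pathloss maps at locations with no ray tracing data, which is the inference mode the abstract describes.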
Award ID(s):
2133655
NSF-PAR ID:
10480090
Publisher / Repository:
IEEE
Date Published:
Journal Name:
Proc. IEEE Globecom
Format(s):
Medium: X
Location:
Kuala Lumpur
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Background

    Identifying splice site regions is an important step in the genomic DNA sequencing pipelines of biomedical and pharmaceutical research. Within this research purview, efficient and accurate splice site detection is highly desirable, and a variety of computational models have been developed toward this end. Neural network architectures have recently been shown to outperform classical machine learning approaches for the task of splice site prediction. Despite these advances, there is still considerable potential for improvement, especially regarding model prediction accuracy and error rate.

    Results

    Given these deficits, we propose EnsembleSplice, an ensemble learning architecture that combines four (4) distinct convolutional neural network (CNN) models and outperforms existing splice site detection methods on the experimental evaluation metrics considered, including accuracy and error rate. We trained and tested a variety of ensembles made up of CNNs and DNNs using five-fold cross-validation to identify the model that performed best across the evaluation and diversity metrics. As a result, we developed our diverse and highly effective splice site (SS) detection model, which we evaluated using two (2) genomic Homo sapiens datasets and the Arabidopsis thaliana dataset. The results showed that, on one of the Homo sapiens datasets, EnsembleSplice achieved accuracies of 94.16% for acceptor splice sites and 95.97% for donor splice sites, with error rates on the same dataset of 5.84% for acceptor splice sites and 4.03% for donor splice sites.

    Conclusions

    Our five-fold cross-validation ensured that the prediction accuracies of our models are consistent. For reproducibility, all the datasets used, models generated, and results in our work are publicly available in our GitHub repository: https://github.com/OluwadareLab/EnsembleSplice
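    To make the ensembling idea concrete, the sketch below soft-votes several small 1-D CNNs over one-hot-encoded DNA windows. All details (filter counts, window length, number of members) are assumptions for illustration, not EnsembleSplice's actual configuration.

```python
# Toy CNN ensemble for splice site classification (illustrative only).
import torch
import torch.nn as nn

def make_cnn(n_filters=32):
    return nn.Sequential(
        nn.Conv1d(4, n_filters, kernel_size=7, padding=3), nn.ReLU(),
        nn.AdaptiveMaxPool1d(1), nn.Flatten(),
        nn.Linear(n_filters, 2),          # splice site vs. non-site
    )

class SpliceEnsemble(nn.Module):
    def __init__(self, n_models=4):
        super().__init__()
        self.members = nn.ModuleList([make_cnn() for _ in range(n_models)])

    def forward(self, x):                 # x: (batch, 4, seq_len) one-hot DNA
        # Soft voting: average the softmax probabilities of the sub-models.
        probs = [m(x).softmax(dim=-1) for m in self.members]
        return torch.stack(probs).mean(dim=0)

ensemble = SpliceEnsemble()
dna = torch.rand(16, 4, 60)               # stand-in for one-hot sequence windows
print(ensemble(dna).argmax(dim=-1))       # predicted class per window
```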

     
  2. Millimeter-wave (mmWave) communications have been regarded as one of the most promising solutions to deliver ultra-high data rates in wireless local-area networks. A significant barrier to delivering consistently high rate performance is the rapid variation in quality of mmWave links due to blockages and small changes in user locations. If link quality can be predicted in advance, proactive resource allocation techniques such as link-quality-aware scheduling can be used to mitigate this problem. In this paper, we propose a link quality prediction scheme based on knowledge of the environment. We use geometric analysis to identify the shadowed regions that separate LoS and NLoS scenarios, and build LoS and NLoS link-quality predictors based on an analytical model and a regression-based approach, respectively. For the more challenging NLoS case, we use a synthetic dataset generator with accurate ray tracing analysis to train a deep neural network (DNN) to learn the mapping between environment features and link quality. We then use the DNN to efficiently construct a map of link quality predictions within given environments. Extensive evaluations with additional synthetically generated scenarios show a very high prediction accuracy for our solution. We also experimentally verify the scheme by applying it to predict link quality in an actual 802.11ad environment, and the results show a close agreement between predicted values and measurements of link quality. 
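    As a hedged illustration of the regression-based NLoS predictor, the sketch below fits a small fully connected network mapping a vector of environment features to a link-quality value. The feature set, network size, and data are assumptions, not the paper's actual design.

```python
# Toy DNN regressor from environment features to link quality (illustrative).
import torch
import torch.nn as nn

n_features = 6                            # hypothetical features: TX-RX distance,
                                          # blocker count, reflector offsets, ...
predictor = nn.Sequential(
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),                     # predicted link quality (e.g., SNR in dB)
)

opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)
features = torch.rand(256, n_features)    # stand-in for synthetic ray tracing data
snr_db = torch.rand(256, 1) * 30          # stand-in labels
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(predictor(features), snr_db)
    loss.backward()
    opt.step()
```

    Sweeping such a predictor over a grid of receiver positions yields the map of link-quality predictions the abstract mentions.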
  3. Abstract

    Methods of explainable artificial intelligence (XAI) are used in geoscientific applications to gain insights into the decision-making strategy of neural networks (NNs), highlighting which features in the input contribute the most to a NN prediction. Here, we discuss our “lesson learned” that the task of attributing a prediction to the input does not have a single solution. Instead, the attribution results depend greatly on the considered baseline that the XAI method utilizes—a fact that has been overlooked in the geoscientific literature. The baseline is a reference point to which the prediction is compared so that the prediction can be understood. This baseline can be chosen by the user or is set by construction in the method’s algorithm—often without the user being aware of that choice. We highlight that different baselines can lead to different insights for different science questions and, thus, should be chosen accordingly. To illustrate the impact of the baseline, we use a large ensemble of historical and future climate simulations forced with the shared socioeconomic pathway 3-7.0 (SSP3-7.0) scenario and train a fully connected NN to predict the ensemble- and global-mean temperature (i.e., the forced global warming signal) given an annual temperature map from an individual ensemble member. We then use various XAI methods and different baselines to attribute the network predictions to the input. We show that attributions differ substantially when considering different baselines, because they correspond to answering different science questions. We conclude by discussing important implications and considerations about the use of baselines in XAI research.
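    The baseline dependence can be made concrete with a toy example: integrated gradients computed for the same model and input, but with two different baselines, generally yield different attributions. The model and data below are stand-ins, not the paper's trained network.

```python
# Integrated gradients with two different baselines (illustrative stand-in).
import torch
import torch.nn as nn

def integrated_gradients(model, x, baseline, steps=50):
    # Average gradients along the straight path from baseline to input,
    # then scale by (input - baseline).
    total_grad = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (x - baseline)).detach().requires_grad_(True)
        model(point).sum().backward()
        total_grad += point.grad
    return (x - baseline) * total_grad / steps

model = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 1))
temp_map = torch.rand(1, 100)             # stand-in for a flattened temperature map
attr_zeros = integrated_gradients(model, temp_map, torch.zeros_like(temp_map))
attr_mean = integrated_gradients(model, temp_map, torch.full_like(temp_map, 0.5))
print((attr_zeros - attr_mean).abs().mean())  # nonzero: baseline changes the answer
```

    Here the all-zeros baseline asks "why this prediction rather than zero input?", while the constant baseline asks "why this prediction rather than a mean state?", mirroring the paper's point that different baselines answer different science questions.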

    Significance Statement

    In recent years, methods of explainable artificial intelligence (XAI) have found great application in geoscientific applications, because they can be used to attribute the predictions of neural networks (NNs) to the input and interpret them physically. Here, we highlight that the attributions—and the physical interpretation—depend greatly on the choice of the baseline—a fact that has been overlooked in the geoscientific literature. We illustrate this dependence for a specific climate task, in which a NN is trained to predict the ensemble- and global-mean temperature (i.e., the forced global warming signal) given an annual temperature map from an individual ensemble member. We show that attributions differ substantially when considering different baselines, because they correspond to answering different science questions.

     
  4. To address the needs of emerging bandwidth-intensive applications in the 5G and beyond era, the millimeter-wave (mmWave) band, with its very large spectrum availability, has been recognized as a promising choice for future wireless communications. In particular, IEEE 802.11ad/ay, operating on a 60 GHz carrier frequency, is a highly anticipated wireless local area network (WLAN) technology for supporting ultra-high-rate data transmissions. In this paper, we describe additions to the ns-3 802.11ad simulator that include 3D obstacle specifications, line-of-sight calculations, and a sparse cluster-based channel model, which allow researchers to study complex mmWave Wi-Fi network deployments under more realistic conditions. We also study the performance accuracy and simulation efficiency of the implemented statistical channel model as compared to a deterministic ray-tracing-based channel model. Through extensive ns-3 simulations, the results show that the implemented channel model has the potential to achieve good accuracy in performance evaluation while improving simulation efficiency. We also provide a detailed parametric analysis of the statistical channel model, which yields insight on how to properly tune the model parameters to further improve performance accuracy.
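    As a sketch of the geometry behind such line-of-sight calculations (the snippet below is illustrative, not the ns-3 module's actual code), one can test whether the TX-RX segment intersects any axis-aligned 3D obstacle box using a standard slab test.

```python
# LoS test: does the TX-RX segment intersect an axis-aligned obstacle box?
def segment_hits_box(p0, p1, box_min, box_max):
    """Slab test for segment p0->p1 against the AABB [box_min, box_max]."""
    t_enter, t_exit = 0.0, 1.0
    for a in range(3):                    # x, y, z slabs
        d = p1[a] - p0[a]
        if abs(d) < 1e-12:                # segment parallel to this slab
            if p0[a] < box_min[a] or p0[a] > box_max[a]:
                return False
            continue
        t0, t1 = (box_min[a] - p0[a]) / d, (box_max[a] - p0[a]) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_enter, t_exit = max(t_enter, t0), min(t_exit, t1)
        if t_enter > t_exit:
            return False
    return True

def is_los(tx, rx, obstacles):
    """LoS holds if no obstacle box blocks the TX-RX segment."""
    return not any(segment_hits_box(tx, rx, lo, hi) for lo, hi in obstacles)

# A building between TX and RX blocks the link:
print(is_los((0, 0, 2), (20, 0, 2), [((8, -3, 0), (12, 3, 10))]))  # False
```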
  5. Given its demonstrated ability to analyze and reveal patterns underlying data, Deep Learning (DL) has been increasingly investigated to complement physics-based models in various aspects of smart manufacturing, such as machine condition monitoring and fault diagnosis, complex manufacturing process modeling, and quality inspection. However, successful implementation of DL techniques relies greatly on the amount, variety, and veracity of data for robust network training. Also, the distributions of the data used for network training and application should be identical to avoid the covariate shift problem, which degrades network performance in the application domain. As a promising solution to these challenges, Transfer Learning (TL) enables DL networks trained on a source domain and task to be applied to a separate target domain and task. This paper presents a domain adversarial TL approach based upon the concepts of generative adversarial networks. In this method, the optimizer seeks to minimize the loss (i.e., the regression or classification error) across the labeled training examples from the source domain while maximizing the loss of the domain classifier across the source and target datasets (i.e., maximizing the similarity of source and target features). The developed domain adversarial TL method has been implemented on a 1-D CNN backbone network and evaluated for prediction of tool wear propagation using NASA's milling dataset. Performance has been compared to other TL techniques, and the results indicate that domain adversarial TL can successfully allow DL models trained on certain scenarios to be applied to new target tasks.
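    The adversarial objective can be sketched with a gradient reversal layer, a standard trick for domain adversarial training: the optimizer minimizes the task loss on labeled source data, while reversed gradients from the domain classifier push the shared feature extractor toward domain-invariant features. The toy 1-D CNN and data below are illustrative, not the paper's backbone.

```python
# Domain adversarial training via gradient reversal (illustrative sketch).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad                       # flip gradients into the features

features = nn.Sequential(                  # toy 1-D CNN feature extractor
    nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
)
task_head = nn.Linear(16, 1)               # e.g., tool wear regression
domain_head = nn.Linear(16, 2)             # source vs. target classifier

params = (list(features.parameters()) + list(task_head.parameters())
          + list(domain_head.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

src = torch.rand(32, 1, 128)               # labeled source signals (stand-in)
src_y = torch.rand(32, 1)
tgt = torch.rand(32, 1, 128)               # unlabeled target signals (stand-in)

for _ in range(50):
    opt.zero_grad()
    f_src, f_tgt = features(src), features(tgt)
    task_loss = nn.functional.mse_loss(task_head(f_src), src_y)
    # The reversal makes minimizing this loss *maximize* domain confusion
    # with respect to the feature extractor's parameters.
    f_all = GradReverse.apply(torch.cat([f_src, f_tgt]))
    dom_labels = torch.cat([torch.zeros(32), torch.ones(32)]).long()
    dom_loss = nn.functional.cross_entropy(domain_head(f_all), dom_labels)
    (task_loss + dom_loss).backward()
    opt.step()
```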