

Title: Enhance Portable Radiograph for Fast and High Accurate COVID-19 Monitoring
This work aimed to assist physicians by improving their speed and diagnostic accuracy when interpreting portable CXRs, as well as monitoring the treatment process to see whether a patient is improving or deteriorating with treatment. These objectives are in especially high demand in the setting of the ongoing COVID-19 pandemic. Building on recent progress in artificial intelligence (AI), we introduce new deep learning frameworks to align and enhance the quality of portable CXRs so that they are more consistent and more closely match higher-quality conventional CXRs. These enhanced portable CXRs can then help doctors provide faster and more accurate diagnosis and treatment planning. The contributions of this work are four-fold. First, a new database of subject-paired radiographs is introduced: for each subject, we collected a pair of samples from both portable and conventional machines. Second, a new deep learning approach is presented to align the subject-pair dataset and obtain a pixel-pair dataset. Third, a new PairFlow approach, an end-to-end invertible transfer deep learning method, is presented to enhance the degraded quality of portable CXRs. Finally, the performance of the proposed system is evaluated by UAMS doctors in terms of both image quality and topological properties. This work was undertaken in collaboration with the Department of Radiology at the University of Arkansas for Medical Sciences (UAMS) to enhance portable/mobile COVID-19 CXRs, improving their speed and accuracy and aiding in urgent COVID-19 diagnosis, monitoring, and treatment.
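The abstract describes PairFlow only as an end-to-end invertible transfer deep learning method, without architectural details. As a purely illustrative, hedged sketch of what "invertible" means in a flow-style image model, the affine-coupling block below can be run exactly forward and backward; it is not the PairFlow network itself, and every layer size and name in it is an assumption for demonstration.

```python
# Minimal sketch of an invertible affine-coupling block, a common building
# block of flow-based image-to-image models. NOT the paper's PairFlow
# architecture; it only illustrates running an enhancement mapping forward
# (e.g. portable -> enhanced) and exactly inverting it.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        # Small conv net that predicts scale and shift for half the channels.
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x.chunk(2, dim=1)               # split channels in half
        log_s, t = self.net(x1).chunk(2, dim=1)  # predicted scale / shift
        y2 = x2 * torch.exp(log_s) + t           # transform second half only
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        y1, y2 = y.chunk(2, dim=1)
        log_s, t = self.net(y1).chunk(2, dim=1)
        x2 = (y2 - t) * torch.exp(-log_s)        # exact inverse of forward
        return torch.cat([y1, x2], dim=1)

if __name__ == "__main__":
    block = AffineCoupling(channels=4)
    x = torch.randn(1, 4, 64, 64)                # placeholder feature map
    recon = block.inverse(block(x))
    print(torch.allclose(x, recon, atol=1e-5))   # True: the block is invertible
```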
Award ID(s):
1946391
NSF-PAR ID:
10321623
Author(s) / Creator(s):
Date Published:
Journal Name:
Diagnostics
Volume:
11
Issue:
6
ISSN:
2075-4418
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Background

    Idiopathic pulmonary fibrosis (IPF) is a progressive, irreversible, and usually fatal lung disease of unknown cause that generally affects the elderly population. Early diagnosis of IPF is crucial for triaging patients’ treatment planning into anti‐fibrotic treatment or treatments for other causes of pulmonary fibrosis. However, the current IPF diagnosis workflow is complicated and time‐consuming: it involves collaborative efforts from radiologists, pathologists, and clinicians and is largely subject to inter‐observer variability.

    Purpose

    The purpose of this work is to develop a deep learning‐based automated system that can diagnose subjects with IPF among subjects with interstitial lung disease (ILD) using an axial chest computed tomography (CT) scan. This work can potentially enable timely diagnosis decisions and reduce inter‐observer variability.

    Methods

    Our dataset contains CT scans from 349 IPF patients and 529 non‐IPF ILD patients. We used 80% of the dataset for training and validation and held out the remaining 20% as the test set. We proposed a two‐stage model: in stage one, we built a multi‐scale, domain knowledge‐guided attention model (MSGA) that encouraged the model to focus on specific areas of interest to enhance model explainability, including both high‐ and medium‐resolution attention; in stage two, we collected the output from MSGA and constructed a random forest (RF) classifier for patient‐level diagnosis to further boost model accuracy. The RF classifier is used as the final decision stage because it is interpretable, computationally fast, and can handle correlated variables. Model utility was examined by (1) accuracy, represented by the area under the receiver operating characteristic curve (AUC) with standard deviation (SD), and (2) explainability, illustrated by visual examination of the estimated attention maps, which highlight the areas important for model diagnostics.
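Only the second stage (the random forest on MSGA outputs) is specified in enough detail to sketch here. The snippet below illustrates that stage alone, with random placeholder vectors standing in for MSGA attention features; the class counts and 80/20 split mirror the abstract, while the feature dimension and hyperparameters are assumptions.

```python
# Hedged sketch of stage two only: scan-level feature vectors (placeholders
# for MSGA attention outputs) feeding a random forest for patient-level
# IPF vs. non-IPF ILD classification. The MSGA network is not reproduced.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_features = 128                                   # assumed feature size
X = rng.normal(size=(349 + 529, n_features))       # placeholder MSGA features
y = np.concatenate([np.ones(349), np.zeros(529)]).astype(int)  # IPF vs. non-IPF ILD

# 80/20 split, mirroring the training/holdout protocol described above.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)
print("holdout AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
```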

    Results

    During the training and validation stage, we observed that when no guidance from domain knowledge was provided, the IPF diagnosis model reached acceptable performance (AUC±SD = 0.93±0.07) but lacked explainability; when only guided high‐ or medium‐resolution attention was included, the learned attention maps were not satisfactory; when both high‐ and medium‐resolution attention were included, under certain hyperparameter settings, the model reached the highest AUC among all experiments (AUC±SD = 0.99±0.01) and the estimated attention maps concentrated on the regions of interest for this task. The three best‐performing hyperparameter selections for MSGA were applied to the holdout test set and reached model performance comparable to that of the validation set.

    Conclusions

    Our results suggest that, for a task with only scan‐level labels available, MSGA+RF can utilize the population‐level domain knowledge to guide the training of the network, which increases both model accuracy and explainability.

     
  2. Hemanth, Jude (Ed.)
    Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) causes coronavirus disease 2019 (COVID-19). Imaging tests such as chest X-ray (CXR) and computed tomography (CT) can provide useful information to clinical staff for facilitating a diagnosis of COVID-19 in a more efficient and comprehensive manner. As a breakthrough of artificial intelligence (AI), deep learning has been applied to perform COVID-19 infection region segmentation and disease classification by analyzing CXR and CT data. However, prediction uncertainty of deep learning models for these tasks, which is very important to safety-critical applications like medical image processing, has not been comprehensively investigated. In this work, we propose a novel ensemble deep learning model through integrating bagging deep learning and model calibration to not only enhance segmentation performance, but also reduce prediction uncertainty. The proposed method has been validated on a large dataset that is associated with CXR image segmentation. Experimental results demonstrate that the proposed method can improve the segmentation performance, as well as decrease prediction uncertainty. 
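As an illustration of the general bagging-plus-uncertainty idea described above (not the authors' model), the sketch below averages the per-pixel probabilities of several placeholder segmentation networks and uses the across-model variance as a simple uncertainty map; the tiny networks, ensemble size, and input shape are all assumptions.

```python
# Illustrative sketch: ensemble K segmentation networks, average their
# per-pixel probabilities, and treat the across-model variance as a
# crude uncertainty estimate. Models here are untrained stand-ins.
import torch
import torch.nn as nn

def tiny_segmenter() -> nn.Module:
    # Stand-in for a full segmentation network such as a U-Net.
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.Conv2d(8, 1, 3, padding=1),
    )

ensemble = [tiny_segmenter() for _ in range(5)]    # would be trained on bootstrap samples
cxr = torch.randn(1, 1, 128, 128)                  # placeholder CXR tensor

with torch.no_grad():
    probs = torch.stack([torch.sigmoid(m(cxr)) for m in ensemble])  # (K, 1, 1, H, W)

mean_mask = probs.mean(dim=0)     # ensemble segmentation probability
uncertainty = probs.var(dim=0)    # higher variance = less confident pixels
print(mean_mask.shape, uncertainty.max().item())
```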
  3.
    Edge intelligence (EI) has received a lot of interest because it can reduce latency, increase efficiency, and preserve privacy. More significantly, as the Internet of Things (IoT) has proliferated, billions of portable and embedded devices have been interconnected, producing massive volumes of data on edge networks. Thus, there is an immediate need to push AI (artificial intelligence) breakthroughs within edge networks to achieve the full promise of edge data analytics. EI solutions have supported digital technology workloads and applications from the infrastructure level to edge networks; however, there are still many challenges with the heterogeneity of computational capabilities and the spread of information sources. We propose a novel event-driven deep-learning framework, called EDL-EI (event-driven deep learning for edge intelligence), built on a novel event model that defines events using correlation analysis across multiple sensors in real-world settings and that incorporates multi-sensor fusion techniques, a method for transforming sensor streams into images, and lightweight 2-dimensional convolutional neural network (CNN) models. To demonstrate the feasibility of the EDL-EI framework, we present an IoT-based prototype system that we developed with multiple sensors and edge devices. To verify the proposed framework, we conducted a case study on air-quality scenarios based on benchmark data provided by the U.S. Environmental Protection Agency for the most polluted cities in South Korea and China. We obtained high predictive accuracy (97.65% and 97.19%) from two deep-learning models on the cities’ air-quality patterns. Furthermore, the air-quality changes from 2019 to 2020 were analyzed to assess the effects of the COVID-19 pandemic lockdowns.
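The published EDL-EI code is not shown here; as a hedged sketch of the stream-to-image idea, the snippet below min-max scales a window of multi-sensor readings into a single-channel "image" and classifies it with a deliberately small 2-D CNN. The sensor count, window length, and number of event classes are placeholders.

```python
# Rough sketch (not the published EDL-EI code): turn a multi-sensor window
# into a normalized 2-D "image" and classify it with a lightweight CNN
# suitable for edge devices.
import torch
import torch.nn as nn

def window_to_image(window: torch.Tensor) -> torch.Tensor:
    # window: (sensors, timesteps) -> normalized (1, 1, sensors, timesteps)
    lo, hi = window.min(), window.max()
    img = (window - lo) / (hi - lo + 1e-8)
    return img.unsqueeze(0).unsqueeze(0)

cnn = nn.Sequential(                                 # deliberately small model
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
    nn.Flatten(), nn.Linear(8 * 4 * 4, 3),           # e.g. 3 air-quality event classes
)

window = torch.randn(6, 60)                          # 6 sensors x 60 timesteps (placeholder)
logits = cnn(window_to_image(window))
print(logits.shape)                                  # torch.Size([1, 3])
```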
  4.
    The COVID-19 pandemic has highlighted the importance of diagnosing and monitoring the disease as early and as accurately as possible. However, the reverse-transcription polymerase chain reaction (RT-PCR) test suffers from two issues: (1) a protracted turnaround time from sample collection to test result and (2) compromised test accuracy, as low as 67%, depending on when the test is administered and on how the samples are collected, handled, and delivered to the lab. Thus, we present ComputeCOVID19+, our computed tomography (CT)-based framework to improve the speed and accuracy of testing for COVID-19 (and its variants) via a deep learning-based network for CT image enhancement called DDnet. To demonstrate its speed and accuracy, we evaluate ComputeCOVID19+ across many sources of CT images and on many heterogeneous platforms, including multi-core CPUs, many-core GPUs, and even FPGAs. Our results show that ComputeCOVID19+ can significantly shorten the turnaround time from days to minutes and improve the testing accuracy to 91%.
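DDnet itself is not reproduced here; purely as an illustration of CNN-based CT image enhancement, the sketch below applies a generic residual denoiser to a placeholder CT slice. All layer sizes are assumptions.

```python
# Minimal sketch of CT image enhancement via a residual CNN; this is a
# generic denoiser, not the DDnet architecture used in ComputeCOVID19+.
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)   # predict a residual correction to the degraded slice

noisy_ct = torch.randn(1, 1, 256, 256)   # placeholder degraded CT slice
enhanced = ResidualDenoiser()(noisy_ct)
print(enhanced.shape)
```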
    An increasingly popular treatment for cancerous and non-cancerous masses is thermal ablation by radiofrequency Joule heating. Real-time monitoring of the thermal tissue ablation process is essential to maintain the reliability of the treatment technique. Common methods for monitoring the extent of ablation have proven to be accurate, but they are time-consuming and often require powerful computers, which makes the clinical ablation process more cumbersome and expensive given the time-dependent nature of the procedure. In this study, a machine learning (ML) approach is presented to reduce the time needed to calculate the progress of ablation while retaining the accuracy of conventional methods. Different hardware setups were used to perform the ablation while simultaneously collecting impedance data, and different ML algorithms were tested to predict the ablation depth in three dimensions from the collected data. It is shown that an optimal pairing of hardware setup and ML algorithm was able to control the ablation by estimating the lesion depth with an average error on the order of micrometers, while keeping the estimation time within 5.5 s on conventional x86-64 computing hardware.
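The study's specific hardware setups and selected algorithm are not detailed here; the hedged sketch below only illustrates the general idea of regressing lesion depth from impedance-derived features, using synthetic data and a gradient-boosted regressor standing in for whichever algorithm was actually chosen.

```python
# Hedged sketch of the general idea only: regress ablation lesion depth
# from impedance-derived features. Data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
impedance_feats = rng.normal(size=(500, 10))   # e.g. magnitude/phase at several frequencies
depth_mm = 2.0 + 0.3 * impedance_feats[:, 0] + rng.normal(scale=0.05, size=500)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(impedance_feats, depth_mm, random_state=1)
model = GradientBoostingRegressor(random_state=1).fit(X_tr, y_tr)
print("MAE (mm):", mean_absolute_error(y_te, model.predict(X_te)))
```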