

Title: Domain Adaptation for Ultrasound Beamforming
Ultrasound B-mode images are created from data obtained from each element in the transducer array in a process called beamforming. The goal of beamforming is to enhance signals from specified spatial locations while suppressing signals from all other locations. On clinical systems, beamforming is accomplished with the delay-and-sum (DAS) algorithm. DAS is efficient but fails in patients with high noise levels, so various adaptive beamformers have been proposed. Recently, deep learning methods have been developed for this task. With deep learning methods, beamforming is typically framed as a regression problem in which clean, ground-truth data is known and usually simulated. For in vivo data, however, it is extremely difficult to collect ground-truth information, and deep networks trained on simulated data underperform when applied to in vivo data due to the domain shift between simulated and in vivo data. In this work, we show how to correct for domain shift by learning deep network beamformers that leverage both simulated data and unlabeled in vivo data via a novel domain adaptation scheme. A challenge in our scenario is that domain shift exists for both the noisy input and the clean output. We address this challenge by extending cycle-consistent generative adversarial networks, leveraging maps between the synthetic simulation and real in vivo domains to ensure that the learned beamformers capture the distribution of both noisy and clean in vivo data. We obtain consistent in vivo image quality improvements over existing beamforming techniques when applying our approach to simulated anechoic cysts and in vivo liver data.
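Since DAS is the baseline throughout, a minimal sketch of conventional delay-and-sum beamforming for a single receive focus may help fix ideas. This is an illustrative sketch only: the array geometry, sampling parameters, and function names are assumptions, not taken from the paper, and real systems use dynamic (per-depth) receive focusing rather than a single focal point.

```python
import numpy as np

def das_beamform(channel_data, element_x, fs, c, focus):
    """Delay-and-sum one receive event for a single focal point.

    channel_data : (n_samples, n_elements) RF data, one column per element
    element_x    : (n_elements,) lateral element positions in meters
    fs           : sampling rate in Hz
    c            : speed of sound in m/s
    focus        : (x, z) focal point in meters
    """
    n_samples, n_elements = channel_data.shape
    fx, fz = focus
    # Two-way path: transmit depth plus element-to-focus return distance.
    dist = fz + np.sqrt((element_x - fx) ** 2 + fz ** 2)
    delays = dist / c * fs  # path lengths converted to fractional samples
    t = np.arange(n_samples)
    aligned = np.empty((n_samples, n_elements))
    for e in range(n_elements):
        # Shift each channel by its relative delay so echoes from the
        # focus line up across the aperture before summation.
        aligned[:, e] = np.interp(t + delays[e] - delays.min(), t,
                                  channel_data[:, e], left=0.0, right=0.0)
    return aligned.sum(axis=1)  # coherent sum across elements
```

Adaptive and deep-network beamformers replace or augment this fixed pipeline, typically operating on the delayed (aligned) channel data before the summation step.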
Award ID(s):
1750994
NSF-PAR ID:
10216938
Author(s) / Creator(s):
Date Published:
Journal Name:
MICCAI 2020: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020
Volume:
12262
Page Range / eLocation ID:
410-420
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Unsupervised denoising is a crucial challenge in real-world imaging applications. Unsupervised deep-learning methods have demonstrated impressive performance on benchmarks based on synthetic noise. However, no metrics are available to evaluate these methods in an unsupervised fashion. This is highly problematic for the many practical applications where ground-truth clean images are not available. In this work, we propose two novel metrics: the unsupervised mean squared error (MSE) and the unsupervised peak signal-to-noise ratio (PSNR), which are computed using only noisy data. We provide a theoretical analysis of these metrics, showing that they are asymptotically consistent estimators of the supervised MSE and PSNR. Controlled numerical experiments with synthetic noise confirm that they provide accurate approximations in practice. We validate our approach on real-world data from two imaging modalities: videos in raw format and transmission electron microscopy. Our results demonstrate that the proposed metrics enable unsupervised evaluation of denoising methods based exclusively on noisy data. 
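The abstract does not reproduce the estimators' exact form, but the core idea can be sketched: if a second noisy copy of the same clean image is available and its zero-mean noise is independent of the denoised output, the expected squared residual against that copy equals the true MSE plus the noise variance, so subtracting the variance debiases the estimate. The sketch below assumes paired noisy realizations and a known noise variance; these are illustrative assumptions, not necessarily the authors' exact construction.

```python
import numpy as np

def unsupervised_mse(denoised, second_noisy, noise_var):
    """Estimate MSE against the unavailable clean image.

    Uses E[(d - y2)^2] = E[(d - x)^2] + sigma^2, valid when
    y2 = x + n2 with zero-mean noise n2 independent of d.
    """
    return float(np.mean((denoised - second_noisy) ** 2) - noise_var)

def unsupervised_psnr(denoised, second_noisy, noise_var, peak=1.0):
    """Unsupervised PSNR derived from the unsupervised MSE."""
    umse = unsupervised_mse(denoised, second_noisy, noise_var)
    return 10.0 * np.log10(peak ** 2 / max(umse, 1e-12))
```

In practice the noise variance itself can be estimated from the data (e.g., from differences of independent noisy copies), which is what makes the evaluation fully unsupervised.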
  2. Abstract

    Background

    Deep neural networks (DNNs) that detect COVID-19 features in lung ultrasound B-mode images have primarily relied on either in vivo or simulated images as training data. However, in vivo images suffer from limited access to the manual labeling required for thousands of training examples, and simulated images can generalize poorly to in vivo images due to domain differences. We address these limitations and identify the best training strategy.

    Methods

    We investigated in vivo COVID-19 feature detection with DNNs trained on our carefully simulated datasets (40,000 images), publicly available in vivo datasets (174 images), in vivo datasets curated by our team (958 images), and a combination of simulated and internal or external in vivo datasets. Seven DNN training strategies were tested on in vivo B-mode images from COVID-19 patients.

    Results

    Here, we show that Dice similarity coefficients (DSCs) between ground truth and DNN predictions are maximized when simulated data are mixed with external in vivo data and tested on internal in vivo data (0.482 ± 0.211), compared with training on only simulated B-mode images (0.464 ± 0.230) or only external in vivo B-mode images (0.407 ± 0.177). DSCs improve further when a separate subset of the internal in vivo B-mode images is included in the training dataset: the highest DSC, reached with the fewest training epochs, is obtained by mixing simulated data with internal and external in vivo data during training, then testing on the held-out subset of the internal in vivo dataset (0.735 ± 0.187). A sketch of the DSC computation appears after this abstract.

    Conclusions

    DNNs trained with simulated and in vivo data are promising alternatives to training with only real or only simulated data when segmenting in vivo COVID-19 lung ultrasound features.
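    For reference, the Dice similarity coefficient reported above has the standard form DSC = 2|A∩B| / (|A| + |B|). A minimal sketch for binary masks (the function name and epsilon guard are illustrative):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # 2*|A ∩ B| / (|A| + |B|); eps avoids division by zero on empty masks.
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```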

  3. We evaluated training deep neural network (DNN) beamformers for the task of high-contrast imaging in the presence of reverberation clutter. Training data were generated using simulated hypoechoic cysts and a pseudo-nonlinear method for generating reverberation clutter. Performance was compared to standard delay-and-sum (DAS) beamforming on simulated hypoechoic cysts of a different size. For a hypoechoic cyst in the presence of reverberation clutter, when the intrinsic contrast ratio (CR) was -10 dB and -20 dB, the measured CR for DAS beamforming was -9.2±0.8 dB and -14.3±0.5 dB, respectively, while the measured CR for DNNs was -10.7±1.4 dB and -20.0±1.0 dB, respectively. For a hypoechoic cyst with -20 dB intrinsic CR, the contrast-to-noise ratio (CNR) was 3.4±0.3 dB for DAS and 4.3±0.3 dB for DNN beamforming. These results show that DNN beamforming extended the contrast ratio dynamic range (CRDR) by about 10 dB while also improving CNR.
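    Definitions of CR and CNR vary slightly across the beamforming literature; below is a minimal sketch of the common envelope-based dB forms consistent with the values quoted above (the ROI variable names are illustrative assumptions):

```python
import numpy as np

def contrast_metrics(env_cyst, env_bg):
    """Contrast ratio (CR) and contrast-to-noise ratio (CNR), in dB,
    from envelope samples inside the cyst and in the background."""
    mu_c, mu_b = env_cyst.mean(), env_bg.mean()
    var_c, var_b = env_cyst.var(), env_bg.var()
    cr = 20.0 * np.log10(mu_c / mu_b)  # more negative = darker cyst
    cnr = 20.0 * np.log10(abs(mu_c - mu_b) / np.sqrt(var_c + var_b))
    return cr, cnr
```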
  4. Complex biological tissues consist of numerous cells organized in a highly coordinated manner to carry out various biological functions. Segmenting a tissue into spatial and functional domains is therefore critically important for understanding and controlling those functions. Emerging spatial transcriptomic technologies allow simultaneous measurement of thousands of genes with precise spatial information, providing an unprecedented opportunity for dissecting biological tissues. However, how to utilize such noisy, sparse, and high-dimensional data for tissue segmentation remains a major challenge. Here, we develop a deep learning-based method, named SCAN-IT, that transforms the spatial domain identification problem into an image segmentation problem, with cells mimicking pixels and the expression values of genes within a cell representing the color channels. Specifically, SCAN-IT relies on geometric modeling, graph neural networks, and an informatics approach, DeepGraphInfomax. We demonstrate that SCAN-IT can handle datasets from a wide range of spatial transcriptomics techniques, including those with high spatial resolution but low gene coverage as well as those with low spatial resolution but high gene coverage. We show that SCAN-IT outperforms state-of-the-art methods on a benchmark dataset with ground-truth domain annotations.
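    The exact SCAN-IT pipeline is not spelled out in this abstract; below is a minimal sketch of the kind of cell graph such a method operates on, with cells as nodes, expression vectors as node features, and k-nearest-neighbor spatial edges. The neighborhood size and log-normalization are illustrative assumptions, not SCAN-IT's documented defaults.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_cell_graph(coords, expression, k=6):
    """Cells as nodes, expression as features, kNN spatial edges.

    coords     : (n_cells, 2) spatial coordinates
    expression : (n_cells, n_genes) raw count matrix
    Returns (edges, features): an (n_cells*k, 2) edge array and the
    log-normalized node feature matrix.
    """
    tree = cKDTree(coords)
    # Query k+1 neighbors because each cell is its own nearest neighbor.
    _, idx = tree.query(coords, k=k + 1)
    edges = np.array([(i, j) for i, row in enumerate(idx) for j in row[1:]])
    features = np.log1p(expression)  # tame the sparse, heavy-tailed counts
    return edges, features
```

A graph built this way can then be fed to a self-supervised graph encoder such as DeepGraphInfomax, whose node embeddings are clustered into spatial domains.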