
Title: EVALUATION OF U-NET FOR TIME-DOMAIN FULL WAVEFORM INVERSION IMPROVEMENT
Ultrasound computed tomography (USCT) is an advanced imaging technique used in structural health monitoring (SHM) and medical imaging due to its relatively low cost and rapid data acquisition. The time-domain full waveform inversion (TDFWI) method, an iterative inversion approach, has shown great promise in USCT. However, such an iterative process is time-consuming and computationally expensive, but it can be greatly accelerated by integrating an AI-based approach, such as a convolutional neural network (CNN). Once trained, the CNN model takes low-iteration TDFWI images as input and instantaneously predicts the material property distribution within the scanned region. Nevertheless, the quality of the reconstruction with the current CNN degrades as the complexity of the material distribution increases. Another challenge is the limited availability of experimental data and, in some cases, even of synthetic surrogate data. To alleviate these issues, this paper details a systematic study of how the reconstruction quality of a 2D CNN (U-Net) can be enhanced with limited training data. To achieve this, different augmentation schemes (flipping and mixing existing data) were implemented to increase the amount and complexity of the training datasets without generating a substantial number of new samples. The objective was to evaluate the enhancement effect of these augmentation techniques on the performance of the U-Net model at different FWI iterations. One thousand numerically built samples with acoustic material properties were used to construct multiple datasets from different FWI iterations. A parallelized, high-performance computing (HPC) based framework was created to rapidly generate the training data. The prediction results were compared against the ground-truth images using standard metrics, such as the structural similarity index measure (SSIM) and the mean squared error (MSE). The results show that the increased number of samples from augmentation improves the imaging of complex-shaped regions, even with low-iteration FWI training data.
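As a rough illustration of the augmentation schemes mentioned above (flipping and mixing existing samples), the sketch below operates on (FWI image, true model) pairs stored as NumPy arrays. The pairing, blending weights, and dataset layout are assumptions made for illustration, not the paper's exact procedure.

```python
import numpy as np

def flip_pair(fwi_image, true_model, axis):
    # Flip input and target together so they stay spatially consistent.
    return np.flip(fwi_image, axis=axis), np.flip(true_model, axis=axis)

def mix_pairs(pair_a, pair_b, alpha=0.5):
    # Blend two existing samples into a new, more complex material distribution.
    mixed_image = alpha * pair_a[0] + (1.0 - alpha) * pair_b[0]
    mixed_model = alpha * pair_a[1] + (1.0 - alpha) * pair_b[1]
    return mixed_image, mixed_model

def augment_dataset(pairs, rng):
    augmented = list(pairs)
    for img, model in pairs:
        for axis in (0, 1):                       # vertical and horizontal flips
            augmented.append(flip_pair(img, model, axis))
    for _ in range(len(pairs)):                   # mix randomly chosen sample pairs
        a, b = rng.choice(len(pairs), size=2, replace=False)
        augmented.append(mix_pairs(pairs[a], pairs[b], alpha=rng.uniform(0.3, 0.7)))
    return augmented

rng = np.random.default_rng(0)
toy_pairs = [(np.random.rand(64, 64), np.random.rand(64, 64)) for _ in range(4)]
print(len(augment_dataset(toy_pairs, rng)))       # 4 originals + 8 flips + 4 mixes
```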
Award ID(s):
2152765
PAR ID:
10534972
Author(s) / Creator(s):
Publisher / Repository:
Destech Publications, Inc.
Date Published:
ISBN:
9781605956930
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper, we present a framework to learn illumination patterns to improve the quality of signal recovery for coded diffraction imaging. We use an alternating minimization-based phase retrieval method with a fixed number of iterations as the iterative method. We represent the iterative phase retrieval method as an unrolled network with a fixed number of layers where each layer of the network corresponds to a single step of iteration, and we minimize the recovery error by optimizing over the illumination patterns. Since the number of iterations/layers is fixed, the recovery has a fixed computational cost. Extensive experimental results on a variety of datasets demonstrate that our proposed method significantly improves the quality of image reconstruction at a fixed computational cost with illumination patterns learned only using a small number of training images. 
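For context, here is a minimal NumPy sketch of one way such an unrolled alternating-minimization phase-retrieval loop can look for coded diffraction measurements with unit-modulus phase masks. The paper additionally learns the illumination patterns (masks) by backpropagating the recovery error through this fixed number of iterations, which is not shown here; all names and shapes are illustrative.

```python
import numpy as np

def forward(x, masks):
    # Coded diffraction: FFT of the element-wise masked signal, one row per mask.
    return np.fft.fft(masks * x[None, :], axis=1)

def pinv(z, masks):
    # Least-squares inverse of the masked-FFT operator (valid for unit-modulus masks).
    return (np.conj(masks) * np.fft.ifft(z, axis=1)).mean(axis=0)

def unrolled_recovery(y, masks, n_layers=20):
    # Each "layer" is one alternating-minimization step against the measured magnitudes y.
    x = pinv(y.astype(complex), masks)            # simple linear initialization
    for _ in range(n_layers):
        z = forward(x, masks)
        z = y * np.exp(1j * np.angle(z))          # keep measured magnitudes, current phases
        x = pinv(z, masks)                        # least-squares signal update
    return x

rng = np.random.default_rng(0)
n, k = 128, 8
x_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
masks = np.exp(2j * np.pi * rng.random((k, n)))   # random phase illumination patterns
y = np.abs(forward(x_true, masks))
x_hat = unrolled_recovery(y, masks)               # recovers x_true up to a global phase
```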
  2. SUMMARY Non-invasive subsurface imaging using full waveform inversion (FWI) has the potential to fundamentally change near-surface (<30 m) site characterization by enabling the recovery of high-resolution (metre-scale) 2-D/3-D maps of subsurface elastic material properties. Yet, FWI results are quite sensitive to their starting model due to their dependence on local-search optimization techniques and inversion non-uniqueness. Starting model dependence is particularly problematic for near-surface FWI due to the complexity of the recorded seismic wavefield (e.g. dominant surface waves intermixed with body waves) and the potential for significant spatial variability over short distances. In response, convolutional neural networks (CNNs) are investigated as a potential tool for developing starting models for near-surface 2-D elastic FWI. Specifically, 100 000 subsurface models were generated to be representative of a classic near-surface geophysics problem; namely, imaging a two-layer, undulating, soil-over-bedrock interface. A CNN has been developed from these synthetic models that is capable of transforming an experimental wavefield acquired using a seismic source located at the centre of a linear array of 24 closely spaced surface sensors directly into a robust starting model for FWI. The CNN approach was able to produce 2-D starting models with seismic image misfits that were significantly less than the misfits from other common starting model approaches, and in many cases even less than the misfits obtained by FWI with inferior starting models. The ability of the CNN to generalize outside its two-layered training set was assessed using a more complex, three-layered, soil-over-bedrock formation. While the predictive ability of the CNN was slightly reduced for this more complex case, it was still able to achieve seismic image and waveform misfits that were comparable to other commonly used starting models, despite not being trained on any three-layered models. As such, CNNs show great potential as tools for rapidly developing robust, site-specific starting models for near-surface elastic FWI. 
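As a loose illustration of the idea of mapping a recorded wavefield directly to a starting model, here is a minimal PyTorch sketch. The input shape (24 receivers by 1000 time samples), the output grid, and the layer sizes are placeholders, not the architecture used in this work.

```python
import torch
import torch.nn as nn

class WavefieldToModel(nn.Module):
    # Hypothetical mapping: wavefield (24 receivers x 1000 samples) -> 48 x 96 velocity grid.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 6 * 250, 48 * 96))

    def forward(self, wavefield):            # wavefield: (batch, 1, 24, 1000)
        features = self.encoder(wavefield)   # -> (batch, 32, 6, 250)
        return self.head(features).view(-1, 48, 96)

model = WavefieldToModel()
starting_model = model(torch.randn(2, 1, 24, 1000))   # -> (2, 48, 96)
```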
  3. Data fusion techniques have gained special interest in remote sensing due to the available capabilities to obtain measurements from the same scene using different instruments with varied resolution domains. In particular, multispectral (MS) and hyperspectral (HS) imaging fusion is used to generate high spatial and spectral images (HSEI). Deep learning data fusion models based on Long Short Term Memory (LSTM) and Convolutional Neural Networks (CNN) have been developed to achieve such a task. In this work, we present a Multi-Level Propagation Learning Network (MLPLN) based on an LSTM model that can be trained with variable data sizes in order to achieve the fusion process. Moreover, the MLPLN provides an intrinsic data augmentation feature that reduces the required number of training samples. The proposed model generates an HSEI by fusing a high-spatial-resolution MS image and a low-spatial-resolution HS image. The performance of the model is studied and compared to existing CNN and LSTM approaches by evaluating the quality of the fused image using the structural similarity metric (SSIM). The results show that an increase in the SSIM is still obtained while reducing the number of training samples used to train the MLPLN model.
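A small sketch of the kind of SSIM-based comparison described here, using scikit-image and averaging over the bands of a hypothetical fused hyperspectral cube; the array sizes and the band-averaging choice are assumptions, not this work's exact evaluation protocol.

```python
import numpy as np
from skimage.metrics import structural_similarity

# Hypothetical fused and reference cubes of shape (rows, cols, bands) in [0, 1].
fused = np.random.rand(64, 64, 32)
reference = np.random.rand(64, 64, 32)

# Per-band SSIM against the reference, then averaged to a single fusion-quality score.
ssim_per_band = [
    structural_similarity(reference[:, :, b], fused[:, :, b], data_range=1.0)
    for b in range(reference.shape[-1])
]
print("mean SSIM:", float(np.mean(ssim_per_band)))
```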
  4. Abstract Ultrasound computed tomography (USCT) shows great promise in nondestructive evaluation and medical imaging due to its ability to quickly scan and collect data from a region of interest. However, existing approaches are a tradeoff between the accuracy of the prediction and the speed at which the data can be analyzed, and processing the collected data into a meaningful image requires both time and computational resources. We propose to develop convolutional neural networks (CNNs) to accelerate and enhance the inversion results to reveal underlying structures or abnormalities that may be located within the region of interest. For training, the ultrasonic signals were first processed using the full waveform inversion (FWI) technique for only a single iteration; the resulting image and the corresponding true model were used as the input and output, respectively. The proposed machine learning approach is based on implementing two-dimensional CNNs to find an approximate solution to the inverse problem of a partial differential equation-based model reconstruction. To alleviate the time-consuming and computationally intensive data generation process, a high-performance computing-based framework has been developed to generate the training data in parallel. At the inference stage, the acquired signals will be first processed by FWI for a single iteration; then the resulting image will be processed by a pre-trained CNN to instantaneously generate the final output image. The results showed that once trained, the CNNs can quickly generate the predicted wave speed distributions with significantly enhanced speed and accuracy. 
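The image-to-image step described here (single-iteration FWI image in, wave-speed map out) can be sketched with a small encoder-decoder in PyTorch; the channel counts, depth, and 64 x 64 grid are illustrative rather than the network actually used.

```python
import torch
import torch.nn as nn

class MiniUNet(nn.Module):
    # Tiny encoder-decoder with one skip connection, mapping an image to a same-size map.
    def __init__(self):
        super().__init__()
        self.down1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.down2 = nn.Sequential(nn.MaxPool2d(2),
                                   nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.out = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 1))

    def forward(self, x):
        d1 = self.down1(x)                        # full-resolution features
        d2 = self.down2(d1)                       # half-resolution features
        u = self.up(d2)                           # back to full resolution
        return self.out(torch.cat([u, d1], 1))    # skip connection, then 1x1 output

net = MiniUNet()
pred_wave_speed = net(torch.randn(1, 1, 64, 64))  # -> (1, 1, 64, 64)
```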
  5.
    This paper presents a policy-driven sequential image augmentation approach for image-related tasks. Our approach applies a sequence of image transformations (e.g., translation, rotation) over a training image, one transformation at a time, with the augmented image from the previous time step treated as the input for the next transformation. This sequential data augmentation substantially improves sample diversity, leading to improved test performance, especially for data-hungry models (e.g., deep neural networks). However, the search for the optimal transformation of each image at each time step of the sequence has high complexity due to its combinatorial nature. To address this challenge, we formulate the search task as a sequential decision process and introduce a deep policy network that learns to produce transformations based on image content. We also develop an iterative algorithm to jointly train a classifier and the policy network in the reinforcement learning setting. The immediate reward of a potential transformation is defined to encourage transformations producing hard samples for the current classifier. At each iteration, we employ the policy network to augment the training dataset, train a classifier with the augmented data, and train the policy network with the aid of the classifier. We apply the above approach to both public image classification benchmarks and a newly collected image dataset for material recognition. Comparisons to alternative augmentation approaches show that our policy-driven approach achieves comparable or improved classification performance while using significantly fewer augmented images. The code is available at https://github.com/Paul-LiPu/rl_autoaug.
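A toy sketch of the sequential augmentation loop described above, with hand-written transformations and a uniformly random stand-in for the learned policy; in the paper the transformation at each step is chosen by a trained policy network and rewarded for producing hard samples for the current classifier.

```python
import numpy as np

# Illustrative transformation pool; the actual pool and parameters differ in the paper.
TRANSFORMS = {
    "flip_lr":  lambda img: img[:, ::-1],
    "flip_ud":  lambda img: img[::-1, :],
    "rot90":    lambda img: np.rot90(img),
    "identity": lambda img: img,
}

def augment_sequentially(image, policy, n_steps=3):
    # Apply one transformation per step; each step sees the previous step's output.
    for _ in range(n_steps):
        action = policy(image)
        image = TRANSFORMS[action](image)
    return image

rng = np.random.default_rng(0)

def random_policy(image):
    # Stand-in for the learned policy network: pick a transformation uniformly at random.
    return rng.choice(list(TRANSFORMS))

augmented = augment_sequentially(np.random.rand(32, 32), random_policy)
```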