Title: Reconstruction of Sparsely Sampled Seismic Data via Residual U-Net
Reconstruction of sparsely sampled seismic data is critical for maintaining the quality of seismic images when significant numbers of shots and receivers are missing. We present a reconstruction method in the shot-receiver-time (SRT) domain, based on a residual U-Net machine learning architecture, for seismic data acquired in a sparse 2-D acquisition, and name it SRT2D-ResU-Net. The SRT domain retains a high level of seismic signal connectivity, which is likely the main data feature that reconstruction algorithms rely on. We develop an "in situ training and prediction" workflow by dividing the acquisition area into two nonoverlapping subareas: a training subarea for establishing the network model using regularly sampled data, and a testing subarea for reconstructing the sparsely sampled data using the trained model. To establish a reference base for analyzing how data features change over the study area and for quantifying the reconstructed seismic data, we devise baseline references using a tiny portion of the field data. The baselines are appropriately spaced and excluded from the training and reconstruction processes. Results on a field marine data set show that SRT2D-ResU-Net effectively learns the features of the seismic data during training, and the average correlation between the reconstructed missing traces and the true answers exceeds 85%.
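As a rough illustration of the residual U-Net idea behind SRT2D-ResU-Net, the sketch below assembles a small two-level residual U-Net in PyTorch that maps 2-D patches with zero-filled missing traces to infilled patches. The module name `TinyResUNet`, the layer widths, and the depth are illustrative assumptions, not the published network configuration.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Two 3x3 convolutions with a skip (residual) connection."""
    def __init__(self, ch_in, ch_out):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(ch_in, ch_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch_out, ch_out, 3, padding=1),
        )
        # 1x1 projection so the residual addition matches channel counts
        self.skip = nn.Conv2d(ch_in, ch_out, 1) if ch_in != ch_out else nn.Identity()

    def forward(self, x):
        return torch.relu(self.conv(x) + self.skip(x))

class TinyResUNet(nn.Module):
    """Two-level encoder/decoder built from residual blocks; sizes are illustrative."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc1 = ResBlock(1, ch)
        self.enc2 = ResBlock(ch, 2 * ch)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(2 * ch, ch, 2, stride=2)
        self.dec1 = ResBlock(2 * ch, ch)   # concatenated skip -> 2*ch input channels
        self.out = nn.Conv2d(ch, 1, 1)     # predict the infilled patch

    def forward(self, x):
        e1 = self.enc1(x)                  # full-resolution features
        e2 = self.enc2(self.pool(e1))      # half-resolution features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.out(d1)                # same spatial size as the input

# Example: a batch of 2-D patches with missing traces zero-filled.
patches = torch.zeros(4, 1, 64, 64)
reconstructed = TinyResUNet()(patches)     # -> shape (4, 1, 64, 64)
```

In the workflow described above, such a network would be fit on patches drawn from the regularly sampled training subarea and then applied to reconstruct the sparsely sampled testing subarea.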
Award ID(s):
1832197
PAR ID:
10296119
Journal Name:
IEEE Geoscience and Remote Sensing Letters
ISSN:
1545-598X
Page Range / eLocation ID:
1 to 5
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We present a new method to obtain dynamic body forces at virtual interfaces that reconstruct shear-wave motions induced by a source outside a truncated computational domain. Specifically, a partial differential equation (PDE)-constrained optimization method is used to minimize the misfit between motions measured at a limited number of sensors on the ground surface and their counterparts reconstructed from the optimized forces. Numerical results show that the optimized forces accurately reconstruct the targeted ground motions on the surface and in the interior of the domain. The proposed optimization framework yields a particular force vector among the other valid solutions allowed by the domain reduction method (DRM). For this optimized, or inverted, force vector, the reconstructed wave field is identical to its reference counterpart in the domain of interest but may differ from the reference in the exterior domain. The inverted solution is nevertheless valid, and we introduce a simple post-process that modifies the solution to obtain an alternative force vector corresponding to the reference wave field. We also study the sensor spacing needed to accurately reconstruct the wave responses for a given dominant frequency of interest. The presented method is applicable for any incident angle of an incoming wave and is effective for any material heterogeneity and layering geometry of the reduced domain. The inversion requires information only on the wave speeds and dimensions of the reduced domain; it does not need any information on the geophysical profile of an enlarged domain or on a seismic source outside the reduced domain. Thus, the computational cost of the method remains modest even though it yields a high-fidelity reconstruction of the wave response in the reduced domain, allowing ground and structural responses to be studied and predicted using real seismic measurements.
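At its core, the inversion above minimizes a misfit between recorded surface motions and motions simulated from candidate interface forces. The sketch below shows that least-squares idea with a purely algebraic stand-in: a random matrix `G` plays the role of the wave solver mapping DRM interface forces to sensor records, and the Tikhonov weight, dimensions, and noise level are illustrative assumptions, not the paper's PDE-constrained formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in forward map: n_obs sensor samples from n_force unknown interface forces.
n_obs, n_force = 300, 60
G = rng.standard_normal((n_obs, n_force))                # surrogate wave solver
f_true = rng.standard_normal(n_force)                    # "reference" interface forces
d_obs = G @ f_true + 0.01 * rng.standard_normal(n_obs)   # noisy surface records

# Tikhonov-regularized least squares: argmin_f ||G f - d||^2 + alpha ||f||^2
alpha = 1e-3
f_inv = np.linalg.solve(G.T @ G + alpha * np.eye(n_force), G.T @ d_obs)

# The inverted forces need not equal f_true, but they should reproduce the
# observed motions closely.
print("relative data misfit:", np.linalg.norm(G @ f_inv - d_obs) / np.linalg.norm(d_obs))
```

As the abstract notes, several force vectors can reproduce the same motions in the domain of interest; the regularization here simply selects one of them, echoing the role of the post-processing step described above.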
  2. Purpose: Magnetic resonance imaging (MRI) enables non-invasive assessment of brain abnormalities during early-life development. Permanent-magnet scanners operating in the neonatal intensive care unit (NICU) facilitate MRI of sick infants but have long scan times due to lower signal-to-noise ratios (SNR) and limited receive coils. This work accelerates in-NICU MRI with diffusion probabilistic generative models by developing a training pipeline that accounts for these challenges. Methods: We establish a novel training dataset of clinical, 1 Tesla neonatal MR images in collaboration with Aspect Imaging and Sha'are Zedek Medical Center. We propose a pipeline to handle the low quantity and SNR of our real-world dataset by (1) modifying existing network architectures to support varying resolutions; (2) training a single model on all data with learned class embedding vectors; (3) applying self-supervised denoising before training; and (4) reconstructing by averaging posterior samples. Retrospective under-sampling experiments, accounting for signal decay, evaluated each item of the proposed methodology. A clinical reader study with practicing pediatric neuroradiologists evaluated the proposed images reconstructed from under-sampled data. Results: Combining all data, denoising pre-training, and averaging posterior samples yields quantitative improvements in reconstruction. The generative model decouples the learned prior from the measurement model and functions at two acceleration rates without re-training. The reader study suggests that the proposed images reconstructed from under-sampled data are adequate for clinical use. Conclusion: Diffusion probabilistic generative models, applied with the proposed pipeline for handling challenging real-world datasets, could reduce the scan time of in-NICU neonatal MRI.
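Step (4) of the pipeline, reconstructing by averaging posterior samples, amounts to drawing several candidate reconstructions from the trained diffusion model and averaging them. The sketch below shows only that averaging step; `sample_posterior` is a stand-in that adds noise so the script runs end to end, not a real diffusion sampler, and the image size and sample count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_posterior(y_undersampled):
    """Stand-in for one posterior draw from a trained diffusion model.
    A real implementation would run the reverse diffusion process
    conditioned on the under-sampled measurements."""
    return y_undersampled + 0.05 * rng.standard_normal(y_undersampled.shape)

# Zero-filled reconstruction of an under-sampled acquisition (placeholder data).
y_zero_filled = rng.standard_normal((128, 128))

# Draw several posterior samples and average them, trading sample diversity
# for lower variance in the final reconstruction.
n_samples = 8
recon = np.mean([sample_posterior(y_zero_filled) for _ in range(n_samples)], axis=0)
print(recon.shape)  # (128, 128)
```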
  3. Purpose: To introduce a novel deep model-based architecture (DMBA), SPICER, that uses pairs of noisy and undersampled k-space measurements of the same object to jointly train a model for MRI reconstruction and automatic coil sensitivity estimation. Methods: SPICER consists of two modules that simultaneously reconstruct accurate MR images and estimate high-quality coil sensitivity maps (CSMs). The first module, the CSM estimation module, uses a convolutional neural network (CNN) to estimate CSMs from the raw measurements. The second module, the DMBA-based MRI reconstruction module, forms reconstructed images from the input measurements and the estimated CSMs using both the physical measurement model and a learned CNN prior. With the benefit of our self-supervised learning strategy, SPICER can be efficiently trained without any fully sampled reference data. Results: We validate SPICER on both open-access datasets and experimentally collected data, showing that it achieves state-of-the-art performance in highly accelerated data acquisition settings. Our results also highlight the importance of the different modules of SPICER, including the DMBA, the CSM estimation, and the SPICER training loss, for the final performance of the method. Moreover, SPICER estimates better CSMs than pre-estimation methods, especially when the ACS data are limited. Conclusion: Despite being trained on noisy undersampled data, SPICER reconstructs high-quality images and CSMs in highly undersampled settings, outperforming other self-supervised learning methods and matching the performance of the well-known E2E-VarNet trained on fully sampled ground-truth data.
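The "physical measurement model" used by the DMBA reconstruction module is, in multicoil MRI, the usual sequence of coil-sensitivity weighting, Fourier transform, and k-space sampling. The sketch below spells that operator out with random placeholder CSMs, a random mask, and arbitrary sizes; it is generic background, not SPICER's learned CSM estimates or its training loss.

```python
import numpy as np

rng = np.random.default_rng(2)
n_coils, ny, nx = 4, 64, 64

image = rng.standard_normal((ny, nx)) + 1j * rng.standard_normal((ny, nx))
csms = rng.standard_normal((n_coils, ny, nx)) + 1j * rng.standard_normal((n_coils, ny, nx))
mask = rng.random((ny, nx)) < 0.25                     # ~4x undersampling pattern

def forward(x, csms, mask):
    """Standard multicoil measurement model: k = M F (S x) for each coil."""
    coil_images = csms * x                             # apply coil sensitivities
    kspace = np.fft.fft2(coil_images, norm="ortho")    # per-coil Fourier transform
    return mask * kspace                               # keep only sampled locations

measurements = forward(image, csms, mask)
print(measurements.shape)   # (4, 64, 64), zero where k-space is not sampled
```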
  4. Purpose: To develop a strategy for training a physics-guided MRI reconstruction neural network without a database of fully sampled data sets. Methods: Self-supervised learning via data undersampling (SSDU) for physics-guided deep learning reconstruction partitions the available measurements into two disjoint sets: one is used in the data consistency (DC) units of the unrolled network, and the other defines the loss for training. The proposed training without fully sampled data is compared with fully supervised training on ground-truth data, as well as with conventional compressed-sensing and parallel imaging methods, using the publicly available fastMRI knee database. The same physics-guided neural network is used for both the proposed SSDU and supervised training. SSDU training is also applied to prospectively two-fold accelerated high-resolution brain data sets at different acceleration rates and compared with parallel imaging. Results: Results on five different knee sequences at an acceleration rate of 4 show that the proposed self-supervised approach performs comparably to supervised learning while significantly outperforming conventional compressed-sensing and parallel imaging, as characterized by quantitative metrics and a clinical reader study. Results on the prospectively subsampled brain data sets, for which supervised learning cannot be used owing to the lack of ground-truth references, show that the proposed self-supervised approach successfully performs reconstruction at high acceleration rates (4, 6, and 8). Image readings indicate improved visual reconstruction quality with the proposed approach compared with parallel imaging at the acquisition acceleration. Conclusion: The proposed SSDU approach allows training of physics-guided deep learning MRI reconstruction without fully sampled data, while achieving results comparable to supervised deep learning MRI trained on fully sampled data.
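The partitioning step at the heart of SSDU can be made concrete as follows: the acquired k-space locations Omega are split into two disjoint sets, Theta for the data-consistency units and Lambda for the training loss. In the sketch below, the sampling pattern, the 40% split ratio, and the uniform random selection are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
ny, nx = 64, 64

# Acquired sampling pattern Omega (placeholder: random ~25% of k-space).
omega = rng.random((ny, nx)) < 0.25

# Split Omega into two disjoint sets: Theta feeds the data-consistency units,
# Lambda defines the training loss (the split ratio is an assumption).
rho = 0.4                                    # fraction of Omega assigned to Lambda
in_lambda = (rng.random((ny, nx)) < rho) & omega
theta = omega & ~in_lambda                   # Theta = Omega \ Lambda
lam = in_lambda

assert not np.any(theta & lam)               # disjoint by construction
assert np.array_equal(theta | lam, omega)    # together they cover Omega
print(theta.sum(), lam.sum(), omega.sum())
```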
  5. Graph federated learning is essential for training over large graph datasets while protecting data privacy: each client stores a subset of the graph data locally, while the server collects the local gradients and broadcasts only the aggregated gradients. Recent studies reveal that a malicious attacker can steal private image data from the gradient exchange of neural networks during federated learning. However, the vulnerability of graph data and graph neural networks (GNNs) under such attacks, i.e., reconstructing both node features and graph structure from gradients, remains largely under-explored. To fill this gap, this paper studies whether private data can be reconstructed from leaked gradients in both node classification and graph classification tasks and proposes a novel attack named Graph Leakage from Gradients (GLG). Two widely used GNN frameworks are analyzed, namely GCN and GraphSAGE, and the effects of different model settings on reconstruction are discussed in detail. Theoretical analysis and empirical validation demonstrate that, by leveraging the unique properties of graph data and GNNs, GLG achieves more accurate reconstruction of both node features and graph structure from gradients.
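The sketch below illustrates the generic gradient-matching recipe that attacks of this kind build on: the attacker optimizes a dummy input until its gradients reproduce the leaked ones. It uses a toy fully connected classifier rather than a GCN or GraphSAGE, assumes the label is known, and is not the GLG attack itself; the model sizes and optimizer settings are arbitrary.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny stand-in model; GLG targets GCN/GraphSAGE, but the gradient-matching
# idea is the same: recover inputs whose gradients reproduce the leaked ones.
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))
loss_fn = nn.CrossEntropyLoss()

# "Private" client data and the gradients the server would observe.
x_true = torch.randn(1, 8)
y_true = torch.tensor([1])
leaked = torch.autograd.grad(loss_fn(model(x_true), y_true), model.parameters())

# Attacker optimizes a dummy input so its gradients match the leaked gradients
# (the label is assumed known here; in practice it can often be inferred).
x_hat = torch.randn(1, 8, requires_grad=True)
opt = torch.optim.Adam([x_hat], lr=0.1)

for _ in range(300):
    opt.zero_grad()
    grads = torch.autograd.grad(loss_fn(model(x_hat), y_true),
                                model.parameters(), create_graph=True)
    match = sum(((g - l) ** 2).sum() for g, l in zip(grads, leaked))
    match.backward()
    opt.step()

# Distance between the recovered and private inputs; ideally small after optimization.
print(torch.norm(x_hat.detach() - x_true))
```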