Title: Reduced Order Modeling Inversion of Monostatic Data in a Multi-scattering Environment
Data-driven reduced order models (ROMs) have recently emerged as an efficient tool for the solution of inverse scattering problems, with applications to seismic and sonar imaging. One requirement of this approach is that it uses the full square multiple-input/multiple-output (MIMO) matrix-valued transfer function as the data for multidimensional problems. Synthetic aperture radar (SAR), however, is limited to the single-input/single-output (SISO) measurements corresponding to the diagonal of the matrix transfer function. Here we present a ROM-based Lippmann-Schwinger approach that overcomes this drawback. The ROMs are constructed to match the data for each source-receiver pair separately, and these are used to construct internal solutions for the corresponding source using only the data-driven Gramian. The efficiency of the proposed approach is demonstrated on 2D and 2.5D (3D propagation and 2D reflectors) numerical examples. The new algorithm not only suppresses multiple echoes seen in Born imaging but also takes advantage of their illumination of some back sides of the reflectors, improving the quality of their mapping.
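To make the data-driven Gramian concrete, the following is a minimal sketch, assuming a standard wave-equation model with symmetrized time sampling, of how snapshot inner products can be assembled from SISO transfer samples alone and how orthogonalized background snapshots can then be rescaled to approximate internal solutions. This reflects the general ROM construction from the literature rather than the authors' implementation; all function and variable names are illustrative.

```python
# Hedged sketch, not the authors' code. Assumes SISO transfer samples
# f[k] = b^T cos(k*tau*sqrt(A)) b from a wave-equation model, so that
# snapshot inner products follow from the cosine product identity.
import numpy as np

def data_gramian(f, n):
    # Gramian of the (inaccessible) true snapshots u_k = cos(k*tau*sqrt(A)) b,
    # computed from data only: (u_i, u_j) = (f[|i-j|] + f[i+j]) / 2.
    # Requires len(f) >= 2*n - 1.
    M = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            M[i, j] = 0.5 * (f[abs(i - j)] + f[i + j])
    return M

def internal_solutions(U0, M0, M):
    # U0: background snapshots (columns), computed in a known medium;
    # M0 = U0^T U0 is their Gramian; M is the data-driven Gramian of the
    # unknown medium. Orthogonalize U0, then rescale by chol(M) so the
    # result reproduces the measured inner products.
    L0 = np.linalg.cholesky(M0)
    L = np.linalg.cholesky(M)
    V0 = U0 @ np.linalg.inv(L0).T  # orthonormal background basis
    return V0 @ L.T                # approximate internal snapshots
```

In the monostatic setting this construction would be repeated independently for each source-receiver pair, with the resulting internal solutions then entering a Lippmann-Schwinger system that is linear in the unknown reflectivity.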
Award ID(s):
2110773 2008441
PAR ID:
10523656
Author(s) / Creator(s):
; ;
Publisher / Repository:
SIAM
Date Published:
Journal Name:
SIAM Journal on Imaging Sciences
Volume:
17
Issue:
1
ISSN:
1936-4954
Page Range / eLocation ID:
334 to 350
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract Data-driven reduced order models (ROMs) have recently emerged as a powerful tool for the solution of inverse scattering problems. The main drawback of this approach is that it was limited to measurement arrays with reciprocally collocated transmitters and receivers, that is, square symmetric matrix (data) transfer functions. To relax this limitation, we use our previous work Druskin et al (2021 Inverse Problems 37 075003), where the ROMs were combined with the Lippmann–Schwinger integral equation to produce a direct nonlinear inversion method. In this work we extend this approach to more general transfer functions, including those that are non-symmetric, e.g., obtained by adding only receivers or sources. The ROM is constructed based on the symmetric subset of the data and is used to construct all internal solutions. The remaining receivers are then used directly in the Lippmann–Schwinger equation. We demonstrate the new approach on a number of 1D and 2D examples with non-reciprocal arrays, including a single-input/multiple-output inverse problem, where the data are given by just a single-row matrix transfer function. This allows us to approach the flexibility of the Born approximation in terms of acceptable measurement arrays while significantly improving the quality of the inversion compared to the latter for strongly nonlinear scattering effects.
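To illustrate why internal solutions linearize the problem, here is a minimal sketch of a Lippmann-Schwinger least-squares step for a non-symmetric array; the discretization, Tikhonov regularization, and all names are assumptions, not the authors' formulation.

```python
# Hedged sketch: once approximate internal solutions u_s are available,
# the Lippmann-Schwinger equation becomes linear in the unknown
# reflectivity q, even for receivers outside the symmetric subarray.
import numpy as np

def lippmann_schwinger_solve(G_rec, U_int, D_res, reg=1e-3):
    # G_rec: (n_rec, n_grid) background Green's functions at receivers
    # U_int: (n_src, n_grid) data-driven internal solutions per source
    # D_res: (n_rec, n_src) measured-minus-background data residual
    # Model: D_res[r, s] ~ sum_x G_rec[r, x] * q[x] * U_int[s, x]
    # (grid quadrature weights absorbed into q for brevity).
    n_rec, n_grid = G_rec.shape
    n_src = U_int.shape[0]
    A = np.empty((n_rec * n_src, n_grid))
    for s in range(n_src):
        A[s * n_rec:(s + 1) * n_rec, :] = G_rec * U_int[s]  # row-wise product
    b = D_res.T.reshape(-1)  # source-major ordering, matching A's blocks
    # Tikhonov-regularized normal equations for q
    q = np.linalg.solve(A.T @ A + reg * np.eye(n_grid), A.T @ b)
    return q
```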
  2. This dataset is associated with a manuscript on river plumes and idealized coastal corners with first author Michael M. Whitney. The dataset includes source code, compilation files, and routines to generate input files for the Regional Ocean Modeling System (ROMS) runs used in this study. ROMS output files in NetCDF format are generated by executing the compiled ROMS code with the input files. The dataset also includes MATLAB routines and data files for the analysis of model results and generation of figures in the manuscript. The following zip files are included:
ROMS_v783_Yan_code.zip [ROMS source code branch used in this study]
coastalcorner_ROMS_compilation.zip [files to compile ROMS source code and run-specific Fortran-90 built code]
coastalcorner_ROMS_input_generate_MATLAB.zip [ROMS ASCII input file and MATLAB routines to generate ROMS NetCDF input files for runs]
coastalcorner_MATLAB_output_analysis.zip [MATLAB data files with selected ROMS output fields and custom analysis routines and datafiles in MATLAB formats used in this study]
coastalcorner_MATLAB_figures.zip [custom MATLAB routine for manuscript figure generation and MATLAB data files with all data fields included in figures]
coastalcorner_tif_figures.zip [TIF image files of each figure in manuscript]
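As a hedged convenience sketch (not part of the dataset), a ROMS NetCDF output file could be inspected in Python as below; the filename and the variable names 'zeta', 'salt', and 'ocean_time' follow common ROMS conventions and are not confirmed names from this archive.

```python
# Hedged sketch: inspect a ROMS NetCDF output file. Filename and
# variable names are typical ROMS conventions, assumed here.
from netCDF4 import Dataset

with Dataset("coastalcorner_his.nc") as nc:  # hypothetical filename
    print(nc.variables.keys())               # list stored fields
    zeta = nc.variables["zeta"][:]            # free-surface elevation
    salt = nc.variables["salt"][:]            # salinity (plume tracer)
    time = nc.variables["ocean_time"][:]      # seconds since reference
    print(zeta.shape, salt.shape, time.shape)
```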
  3. Abstract
Background: Due to intrinsic differences in data formatting, data structure, and underlying semantic information, the integration of imaging data with clinical data can be non-trivial. Optimal integration requires robust data fusion, that is, the process of integrating multiple data sources to produce more useful information than captured by individual data sources. Here, we introduce the concept of fusion quality for deep learning problems involving imaging and clinical data. We first provide a general theoretical framework and numerical validation of our technique. To demonstrate real-world applicability, we then apply our technique to optimize the fusion of CT imaging and hepatic blood markers to estimate portal venous hypertension, which is linked to prognosis in patients with cirrhosis of the liver.
Purpose: To develop a method for measuring optimal data fusion quality in deep learning problems utilizing both imaging data and clinical data.
Methods: Our approach is based on modeling the fully connected layer (FCL) of a convolutional neural network (CNN) as a potential function, whose distribution takes the form of the classical Gibbs measure. The features of the FCL are then modeled as random variables governed by state functions, which are interpreted as the different data sources to be fused. The probability density of each source, relative to the probability density of the FCL, represents a quantitative measure of source bias. To minimize this source bias and optimize CNN performance, we implement a vector-growing encoding scheme called positional encoding, where low-dimensional clinical data are transcribed into a rich feature space that complements high-dimensional imaging features. We first provide a numerical validation of our approach based on simulated Gaussian processes. We then applied our approach to patient data, where we optimized the fusion of CT images with blood markers to predict portal venous hypertension in patients with cirrhosis of the liver. This patient study was based on a modified ResNet-152 model that incorporates both images and blood markers as input. These two data sources were processed in parallel, fused into a single FCL, and optimized based on our fusion quality framework.
Results: Numerical validation of our approach confirmed that the probability density function of a fused feature space converges to a source-specific probability density function when source data are improperly fused. Our numerical results demonstrate that this phenomenon can be quantified as a measure of fusion quality. On patient data, the fused model consisting of both imaging data and positionally encoded blood markers at the theoretically optimal fusion quality metric achieved an AUC of 0.74 and an accuracy of 0.71. This model was statistically better than the imaging-only model (AUC = 0.60; accuracy = 0.62), the blood-marker-only model (AUC = 0.58; accuracy = 0.60), and a variety of purposely sub-optimized fusion models (AUC = 0.61–0.70; accuracy = 0.58–0.69).
Conclusions: We introduced the concept of data fusion quality for multi-source deep learning problems involving both imaging and clinical data. We provided a theoretical framework, numerical validation, and a real-world application in abdominal radiology. Our data suggest that CT imaging and hepatic blood markers provide complementary diagnostic information when appropriately fused.
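A minimal sketch of the positional-encoding fusion idea follows, assuming a sinusoidal encoding and illustrative layer sizes; this is not the authors' ResNet-152 implementation.

```python
# Hedged sketch: lift a low-dimensional clinical vector into a richer
# feature space via sinusoidal positional encoding, then fuse it with
# CNN image features in a single shared FCL. Sizes are illustrative.
import torch
import torch.nn as nn

def positional_encode(x, n_freq=8):
    # x: (batch, n_clinical). Each scalar expands into 2*n_freq
    # sinusoidal features, analogous to transformer positional encoding.
    freqs = 2.0 ** torch.arange(n_freq, dtype=x.dtype, device=x.device)
    ang = x.unsqueeze(-1) * freqs              # (batch, n_clinical, n_freq)
    enc = torch.cat([torch.sin(ang), torch.cos(ang)], dim=-1)
    return enc.flatten(start_dim=1)            # (batch, n_clinical*2*n_freq)

class FusionHead(nn.Module):
    def __init__(self, n_img=2048, n_clinical=5, n_freq=8, n_classes=2):
        super().__init__()
        self.n_freq = n_freq
        self.fcl = nn.Linear(n_img + n_clinical * 2 * n_freq, n_classes)

    def forward(self, img_feat, clinical):
        # img_feat: (batch, n_img) pooled CNN features; clinical: raw markers.
        enc = positional_encode(clinical, self.n_freq)
        return self.fcl(torch.cat([img_feat, enc], dim=1))  # fused FCL
```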
  4. We consider a distributed function computation problem in which parties observing noisy versions of a remote source facilitate the computation of a function of their observations at a fusion center through public communication. The distributed function computation is subject to constraints, including not only reliability and storage but also privacy and secrecy. Specifically, 1) the remote source should remain private from an eavesdropper and the fusion center, measured in terms of the information leaked about the remote source; 2) the function computed should remain secret from the eavesdropper, measured in terms of the information leaked about the arguments of the function, to ensure secrecy regardless of the exact function used. We derive the exact rate regions for lossless and lossy single-function computation and illustrate the lossy single-function computation rate region for an information bottleneck example, in which the optimal auxiliary random variables are characterized for binary input symmetric output channels. We extend the approach to lossless and lossy asynchronous multiple-function computation with joint secrecy and privacy constraints, for which we characterize inner and outer bounds on the rate regions that differ only in the imposed Markov chain conditions.
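As a hedged sketch of how such constraints are typically formalized (notation illustrative, not copied from the paper), with remote source X^n, observations Y^n, public message W, eavesdropper side information Z^n, and function estimate F̂:

```latex
% Illustrative constraint structure only; the rates and leakage
% measures in the paper may differ in detail.
\begin{align*}
  &\text{(reliability)} && \Pr\!\big[\hat{F} \neq f(X^n, Y^n)\big] \le \epsilon \\
  &\text{(storage)}     && \tfrac{1}{n}\log\lvert\mathcal{W}\rvert \le R + \epsilon \\
  &\text{(privacy)}     && \tfrac{1}{n}\, I\big(X^n;\, W, Z^n\big) \le L_{\mathrm{priv}} + \epsilon \\
  &\text{(secrecy)}     && \tfrac{1}{n}\, I\big(X^n, Y^n;\, W \mid Z^n\big) \le L_{\mathrm{sec}} + \epsilon
\end{align*}
```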
  5. Deep learning algorithms have been moderately successful in diagnosing diseases by analyzing medical images, especially in neuroimaging, which is rich in annotated data. Transfer learning methods have demonstrated strong performance in tackling such problems: they utilize and transfer knowledge learned from a source domain to a target domain even when the dataset is small. There are multiple approaches to transfer learning that result in a range of performance estimates in diagnosis, detection, and classification of clinical problems. Therefore, in this paper, we reviewed transfer learning approaches, their design attributes, and their applications to neuroimaging problems. We reviewed two main literature databases and included the most relevant studies using predefined inclusion criteria. Among the 50 reviewed studies, more than half are on transfer learning for Alzheimer's disease. Brain mapping and brain tumor detection were the second and third most discussed research problems, respectively. The most common source dataset for transfer learning was ImageNet, which is not a neuroimaging dataset. This suggests that the majority of studies preferred pre-trained models instead of training their own model on a neuroimaging dataset. Although about one third of the studies designed their own architecture, most studies used existing Convolutional Neural Network architectures. Magnetic Resonance Imaging was the most common imaging modality. In almost all studies, transfer learning contributed to better performance in the diagnosis, classification, and segmentation of different neuroimaging diseases and problems than methods without transfer learning. Among the different transfer learning approaches, fine-tuning all convolutional and fully-connected layers, as well as freezing the convolutional layers while fine-tuning only the fully-connected layers, demonstrated superior performance in terms of accuracy. These recent transfer learning approaches not only show great performance but also require fewer computational resources and less time.
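As a minimal sketch of the two fine-tuning strategies the review compares, using torchvision's ResNet-18 as an illustrative backbone (not a model from any reviewed study):

```python
# Hedged sketch: ImageNet-pretrained backbone adapted to a new task.
# freeze_conv=True freezes convolutional layers and fine-tunes only the
# fully-connected head; freeze_conv=False fine-tunes all layers.
import torch.nn as nn
from torchvision import models

def build_transfer_model(n_classes, freeze_conv=True):
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    if freeze_conv:
        for p in model.parameters():
            p.requires_grad = False   # freeze the pretrained layers
    # Replace the head; its parameters are trainable in both strategies.
    model.fc = nn.Linear(model.fc.in_features, n_classes)
    return model

# Strategy 1 (fine-tune everything): build_transfer_model(2, freeze_conv=False)
# Strategy 2 (freeze conv, tune FC):  build_transfer_model(2, freeze_conv=True)
```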