

Title: State Estimation—The Role of Reduced Models
The exploration of complex physical or technological processes usually requires exploiting available information from different sources: (i) physical laws, often represented as a family of parameter-dependent partial differential equations, and (ii) data provided by measurement devices or sensors. The number of sensors is typically limited, and data acquisition may be expensive and in some cases even harmful. This article reviews some recent developments for this “small-data” scenario where inversion is strongly aggravated by the typically large parametric dimensionality. The proposed concepts may be viewed as exploring alternatives to Bayesian inversion in favor of more deterministic accuracy quantification related to the required computational complexity. We discuss optimality criteria which delineate intrinsic information limits, and highlight the role of reduced models for developing efficient computational strategies. In particular, the need to adapt the reduced models—not to a specific (possibly noisy) data set but rather to the sensor system—is a central theme. This, in turn, is facilitated by exploiting geometric perspectives based on proper stable variational formulations of the continuous model.
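To make the recovery-map viewpoint concrete, the following is a minimal numerical sketch of a PBDW-type affine estimate in a toy finite-dimensional Hilbert space: a reduced (background) space V_n carries the model information, the sensor functionals span a measurement space W_m, and the estimate combines a least-squares background fit with a correction confined to the measurement space. The dimensions, the random bases V and W, and the synthetic "true" state are assumptions made for illustration only; they are not taken from the article.

```python
# Minimal sketch of a PBDW-type affine recovery map in R^N with the Euclidean
# inner product.  The reduced space V_n, the measurement space W_m, and the
# synthetic true state are hypothetical toy choices, not the article's setup.
import numpy as np

rng = np.random.default_rng(0)
N, n, m = 200, 5, 12                     # ambient, reduced, and sensor dimensions (m >= n)

# Orthonormal bases: columns of V span the reduced (background) model,
# rows of W are Riesz representers of the m sensor functionals.
V = np.linalg.qr(rng.standard_normal((N, n)))[0]
W = np.linalg.qr(rng.standard_normal((N, m)))[0].T      # shape (m, N), W @ W.T = I

u_true = V @ rng.standard_normal(n) + 0.05 * rng.standard_normal(N)  # model + model error
y = W @ u_true                                                        # noise-free sensor data

# Background component: least-squares fit of the projected reduced basis to the data.
c, *_ = np.linalg.lstsq(W @ V, y, rcond=None)
v_star = V @ c
# Correction confined to the measurement space so the estimate reproduces the data.
u_hat = v_star + W.T @ (y - W @ v_star)

print("data misfit :", np.linalg.norm(W @ u_hat - y))   # ~0 by construction
print("state error :", np.linalg.norm(u_hat - u_true))
```

The data misfit vanishes by construction, while the state error reflects how well the reduced space and the sensor system together capture the unknown state; this is the kind of trade-off the optimality criteria mentioned above are meant to quantify.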
Award ID(s):
1720297
NSF-PAR ID:
10333094
Author(s) / Creator(s):
; ;
Editor(s):
Rebollo, Tomás C.; Donat, Rosa; Higueras, Inmaculada
Date Published:
Journal Name:
SEMA SIMAI Springer series
Volume:
1
ISSN:
2199-3041
Page Range / eLocation ID:
57-77
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Predicting the process of porosity-based ductile damage in polycrystalline metallic materials is an essential practical topic. Ductile damage and its precursors are represented by extreme values in stress and material state quantities, the spatial probability density function (PDF) of which is highly non-Gaussian with strong fat tails. Traditional deterministic forecasts utilizing sophisticated continuum-based physical models generally fall short in representing the statistics of structural evolution during material deformation. Computational tools which do represent complex structural evolution are typically expensive. The inevitable model error and the lack of uncertainty quantification may also induce significant forecast biases, especially in predicting the extreme events associated with ductile damage. In this paper, a data-driven statistical reduced-order modeling framework is developed to provide a probabilistic forecast of the deformation process of a polycrystal aggregate leading to porosity-based ductile damage with uncertainty quantification. The framework starts with computing the time evolution of the leading few moments of specific state variables from the spatiotemporal solution of full-field polycrystal simulations. Then a sparse model identification algorithm based on causation entropy, including essential physical constraints, is utilized to discover the governing equations of these moments. An approximate solution of the time evolution of the PDF is obtained from the predicted moments exploiting the maximum entropy principle. Numerical experiments based on polycrystal realizations of a representative body-centered cubic (BCC) tantalum demonstrate the skill of the reduced-order model in characterizing the time evolution of the non-Gaussian PDF of the von Mises stress and quantifying the probability of extreme events. The learning process also reveals that the mean stress is not simply an additive forcing to drive the higher-order moments and extreme events. Instead, it interacts with the latter in a strongly nonlinear and multiplicative fashion. In addition, the calibrated moment equations provide a reasonably accurate forecast when applied to realizations outside the training data set, indicating the robustness of the model and its skill for extrapolation. Finally, an information-based measurement is employed to quantitatively justify that the leading four moments are sufficient to characterize the crucial highly non-Gaussian features throughout the entire deformation history considered.
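As a purely illustrative companion to the moment-to-PDF step described above, the sketch below recovers a maximum-entropy density on a bounded interval from its leading four raw moments by minimizing the convex dual of the entropy problem. The toy sample, quadrature grid, and optimizer settings are assumptions of this sketch and are unrelated to the paper's polycrystal data or its causation-entropy model identification.

```python
# Hedged sketch: maximum-entropy reconstruction of a 1D PDF from its first
# four raw moments.  The moments come from a synthetic skewed toy sample,
# not from the polycrystal simulations discussed above.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
sample = rng.gamma(2.0, 1.0, 50_000)                      # toy non-Gaussian data
mu = np.array([np.mean(sample**k) for k in range(1, 5)])  # target raw moments

x = np.linspace(0.0, 15.0, 3001)                          # quadrature grid over the support
dx = x[1] - x[0]
feats = np.vstack([x**k for k in range(1, 5)])            # features x, x^2, x^3, x^4

def dual_and_grad(lam):
    """Convex dual of the max-entropy problem; its minimizer gives p ~ exp(sum_k lam_k x^k)."""
    expo = lam @ feats
    shift = expo.max()                                    # guard against overflow
    q = np.exp(expo - shift)
    Z = q.sum() * dx
    p = q / Z                                             # current density on the grid
    value = np.log(Z) + shift - lam @ mu
    grad = feats @ p * dx - mu                            # E_p[x^k] - mu_k
    return value, grad

res = minimize(dual_and_grad, np.zeros(4), jac=True, method="L-BFGS-B")
p = np.exp(res.x @ feats - (res.x @ feats).max())
p /= p.sum() * dx                                         # normalized max-entropy density
print("recovered moments:", [round(float((p * x**k).sum() * dx), 3) for k in range(1, 5)])
print("target moments   :", np.round(mu, 3))
```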
  2. Abstract. Atmospheric inverse modeling describes the process of estimating greenhouse gas fluxes or air pollution emissions at the Earth's surface using observations of these gases collected in the atmosphere. The launch of new satellites, the expansion of surface observation networks, and a desire for more detailed maps of surface fluxes have yielded numerous computational and statistical challenges for standard inverse modeling frameworks that were often originally designed with much smaller data sets in mind. In this article, we discuss computationally efficient methods for large-scale atmospheric inverse modeling and focus on addressing some of the main computational and practical challenges. We develop generalized hybrid projection methods, which are iterative methods for solving large-scale inverse problems, and specifically we focus on the case of estimating surface fluxes. These algorithms confer several advantages. They are efficient, in part because they converge quickly, they exploit efficient matrix–vector multiplications, and they do not require inversion of any matrices. These methods are also robust because they can accurately reconstruct surface fluxes, they are automatic since regularization or covariance matrix parameters and stopping criteria can be determined as part of the iterative algorithm, and they are flexible because they can be paired with many different types of atmospheric models. We demonstrate the benefits of generalized hybrid methods with a case study from NASA's Orbiting Carbon Observatory 2 (OCO-2) satellite. We then address the more challenging problem of solving the inverse model when the mean of the surface fluxes is not known a priori; we do so by reformulating the problem, thereby extending the applicability of hybrid projection methods to include hierarchical priors. We further show that by exploiting mathematical relations provided by the generalized hybrid method, we can efficiently calculate an approximate posterior variance, thereby providing uncertainty information. 
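As a rough illustration of the project-then-regularize idea behind hybrid projection methods, the sketch below runs a few steps of Golub-Kahan bidiagonalization, which touches the forward operator only through matrix-vector products, and then applies Tikhonov regularization to the small projected problem. The toy blurring operator, noise level, and fixed regularization parameter are assumptions of this sketch; the generalized hybrid algorithms described above additionally determine regularization parameters and stopping criteria automatically as part of the iteration.

```python
# Hedged sketch of a hybrid projection solver: Golub-Kahan bidiagonalization
# (matrix-vector products with A and A.T only) followed by Tikhonov
# regularization of the small projected problem.  A, b, k, and lam are toy
# placeholders, not the paper's generalized hybrid method.
import numpy as np

def hybrid_gk(A, b, k=30, lam=1e-2):
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k))
    alpha = np.zeros(k); beta = np.zeros(k + 1)
    beta[0] = np.linalg.norm(b); U[:, 0] = b / beta[0]
    for j in range(k):                                 # Golub-Kahan recurrence
        v = A.T @ U[:, j] - (beta[j] * V[:, j - 1] if j > 0 else 0.0)
        alpha[j] = np.linalg.norm(v); V[:, j] = v / alpha[j]
        u = A @ V[:, j] - alpha[j] * U[:, j]
        beta[j + 1] = np.linalg.norm(u); U[:, j + 1] = u / beta[j + 1]
    B = np.zeros((k + 1, k))                           # projected bidiagonal operator
    B[np.arange(k), np.arange(k)] = alpha
    B[np.arange(1, k + 1), np.arange(k)] = beta[1:]
    rhs = np.zeros(k + 1); rhs[0] = beta[0]
    # Regularize only the small (k+1)-by-k projected least-squares problem.
    y = np.linalg.solve(B.T @ B + lam * np.eye(k), B.T @ rhs)
    return V @ y                                       # lift back to the full space

# Toy ill-posed problem: Gaussian blurring operator and noisy data.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 400)
A = np.exp(-80.0 * (t[:, None] - t[None, :]) ** 2)
x_true = np.sin(2 * np.pi * t) ** 2
b = A @ x_true + 1e-3 * rng.standard_normal(t.size)
print("relative error:", np.linalg.norm(hybrid_gk(A, b) - x_true) / np.linalg.norm(x_true))
```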
  3. Abstract

    Parameters in climate models are usually calibrated manually, exploiting only small subsets of the available data. This precludes both optimal calibration and quantification of uncertainties. Traditional Bayesian calibration methods that allow uncertainty quantification are too expensive for climate models; they are also not robust in the presence of internal climate variability. For example, Markov chain Monte Carlo (MCMC) methods typically require a very large number of model runs and are sensitive to internal variability noise, rendering them infeasible for climate models. Here we demonstrate an approach to model calibration and uncertainty quantification that requires only a relatively small number of model runs and can accommodate internal climate variability. The approach consists of three stages: (a) a calibration stage uses variants of ensemble Kalman inversion to calibrate a model by minimizing mismatches between model and data statistics; (b) an emulation stage emulates the parameter-to-data map with Gaussian processes (GP), using the model runs in the calibration stage for training; (c) a sampling stage approximates the Bayesian posterior distributions by sampling the GP emulator with MCMC. We demonstrate the feasibility and computational efficiency of this calibrate-emulate-sample (CES) approach in a perfect-model setting. Using an idealized general circulation model, we estimate parameters in a simple convection scheme from synthetic data generated with the model. The CES approach generates probability distributions of the parameters that are good approximations of the Bayesian posteriors, at a fraction of the computational cost usually required to obtain them. Sampling from this approximate posterior allows the generation of climate predictions with quantified parametric uncertainties.
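For orientation, the fragment below implements a plain ensemble Kalman inversion loop of the kind used in the calibration stage, applied to a toy two-parameter forward map. The forward model, noise covariance, ensemble size, and iteration count are invented for illustration; the emulation and sampling stages of CES are not reproduced here.

```python
# Hedged sketch: basic ensemble Kalman inversion (EKI) on a toy two-parameter
# forward map.  G, Gamma, the ensemble size, and the iteration count are
# illustrative assumptions, not the paper's convection-scheme setup.
import numpy as np

rng = np.random.default_rng(4)

def G(theta):
    """Toy nonlinear parameter-to-data map standing in for model statistics."""
    a, b = theta
    return np.array([a + b**2, a * b, np.sin(a) + b])

theta_true = np.array([1.2, 0.7])
Gamma = 1e-3 * np.eye(3)                              # observational noise covariance
y = G(theta_true) + rng.multivariate_normal(np.zeros(3), Gamma)

J = 50
ensemble = rng.normal(0.0, 1.0, size=(J, 2))          # prior parameter ensemble

for _ in range(20):
    Gs = np.array([G(th) for th in ensemble])         # forward evaluations
    th_mean, G_mean = ensemble.mean(0), Gs.mean(0)
    C_tg = (ensemble - th_mean).T @ (Gs - G_mean) / J # parameter-data cross-covariance
    C_gg = (Gs - G_mean).T @ (Gs - G_mean) / J        # data covariance
    K = C_tg @ np.linalg.inv(C_gg + Gamma)            # Kalman-type gain
    y_pert = y + rng.multivariate_normal(np.zeros(3), Gamma, size=J)  # perturbed observations
    ensemble = ensemble + (y_pert - Gs) @ K.T         # update every ensemble member

print("EKI mean estimate:", ensemble.mean(0), " truth:", theta_true)
```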

     
  4. BACKGROUND Optical sensing devices measure the rich physical properties of an incident light beam, such as its power, polarization state, spectrum, and intensity distribution. Most conventional sensors, such as power meters, polarimeters, spectrometers, and cameras, are monofunctional and bulky. For example, classical Fourier-transform infrared spectrometers and polarimeters, which characterize the optical spectrum in the infrared and the polarization state of light, respectively, can occupy a considerable portion of an optical table. Over the past decade, the development of integrated sensing solutions by using miniaturized devices together with advanced machine-learning algorithms has accelerated rapidly, and optical sensing research has evolved into a highly interdisciplinary field that encompasses devices and materials engineering, condensed matter physics, and machine learning. To this end, future optical sensing technologies will benefit from innovations in device architecture, discoveries of new quantum materials, demonstrations of previously uncharacterized optical and optoelectronic phenomena, and rapid advances in the development of tailored machine-learning algorithms.
ADVANCES Recently, a number of sensing and imaging demonstrations have emerged that differ substantially from conventional sensing schemes in the way that optical information is detected. A typical example is computational spectroscopy. In this new paradigm, a compact spectrometer first collectively captures the comprehensive spectral information of an incident light beam using multiple elements or a single element under different operational states and generates a high-dimensional photoresponse vector. An advanced algorithm then interprets the vector to achieve reconstruction of the spectrum. This scheme shifts the physical complexity of conventional grating- or interference-based spectrometers to computation. Moreover, many of the recent developments go well beyond optical spectroscopy, and we discuss them within a common framework, dubbed “geometric deep optical sensing.” The term “geometric” is intended to emphasize that in this sensing scheme, the physical properties of an unknown light beam and the corresponding photoresponses can be regarded as points in two respective high-dimensional vector spaces and that the sensing process can be considered to be a mapping from one vector space to the other. The mapping can be linear, nonlinear, or highly entangled; for the latter two cases, deep artificial neural networks represent a natural choice for the encoding and/or decoding processes, from which the term “deep” is derived. In addition to this classical geometric view, the quantum geometry of Bloch electrons in Hilbert space, such as Berry curvature and quantum metrics, is essential for the determination of the polarization-dependent photoresponses in some optical sensors. In this Review, we first present a general perspective of this sensing scheme from the viewpoint of information theory, in which the photoresponse measurement and the extraction of light properties are deemed as information-encoding and -decoding processes, respectively. We then discuss demonstrations in which a reconfigurable sensor (or an array thereof), enabled by device reconfigurability and the implementation of neural networks, can detect the power, polarization state, wavelength, and spatial features of an incident light beam.
OUTLOOK As increasingly more computing resources become available, optical sensing is becoming more computational, with device reconfigurability playing a key role. On the one hand, advanced algorithms, including deep neural networks, will enable effective decoding of high-dimensional photoresponse vectors, which reduces the physical complexity of sensors. Therefore, it will be important to integrate memory cells near or within sensors to enable efficient processing and interpretation of a large amount of photoresponse data. On the other hand, analog computation based on neural networks can be performed with an array of reconfigurable devices, which enables direct multiplexing of sensing and computing functions. We anticipate that these two directions will become the engineering frontier of future deep sensing research. On the scientific frontier, exploring quantum geometric and topological properties of new quantum materials in both linear and nonlinear light-matter interactions will enrich the information-encoding pathways for deep optical sensing. In addition, deep sensing schemes will continue to benefit from the latest developments in machine learning. Future highly compact, multifunctional, reconfigurable, and intelligent sensors and imagers will find applications in medical imaging, environmental monitoring, infrared astronomy, and many other areas of our daily lives, especially in the mobile domain and the internet of things.
Figure: Schematic of deep optical sensing. The n-dimensional unknown information (w) is encoded into an m-dimensional photoresponse vector (x) by a reconfigurable sensor (or an array thereof), from which w′ is reconstructed by a trained neural network (n′ = n and w′ ≈ w). Alternatively, x may be directly deciphered to capture certain properties of w. Here, w, x, and w′ can be regarded as points in their respective high-dimensional vector spaces ℛ^n, ℛ^m, and ℛ^{n′}.
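For the linear limit of the encoding/decoding picture described above, the sketch below reconstructs a spectrum w from a photoresponse vector x = R w + noise with a ridge-regularized least-squares decoder. The responsivity matrix R, the two-peak spectrum, and the noise level are synthetic stand-ins; in the nonlinear or entangled regimes a trained neural network would take the place of the linear decoder.

```python
# Hedged sketch of the linear limit of the encoding/decoding scheme: a spectrum
# w is encoded into a photoresponse vector x = R @ w + noise and decoded with a
# ridge-regularized inverse.  R, w, and the noise level are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(5)
n, m = 100, 32                                        # spectral bins, detector states
wl = np.linspace(0.0, 1.0, n)

# Toy responsivity matrix: each detector state has a broad, shifted passband.
centers = np.linspace(0.1, 0.9, m)
R = np.exp(-((wl[None, :] - centers[:, None]) / 0.08) ** 2)

# Toy two-peak spectrum and the resulting noisy photoresponse vector.
w = np.exp(-((wl - 0.35) / 0.02) ** 2) + 0.6 * np.exp(-((wl - 0.60) / 0.03) ** 2)
x = R @ w + 1e-3 * rng.standard_normal(m)

# Linear decoder: ridge regularization tames the ill-conditioned inversion.
lam = 1e-3
w_rec = np.linalg.solve(R.T @ R + lam * np.eye(n), R.T @ x)
print("relative reconstruction error:", np.linalg.norm(w_rec - w) / np.linalg.norm(w))
```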
  5. We present a new method to obtain dynamic body force at virtual interfaces to reconstruct shear wave motions induced by a source outside a truncated computational domain. Specifically, a partial differential equation (PDE)-constrained optimization method is used to minimize the misfit between measured motions at a limited number of sensors on the ground surface and their counterparts reconstructed from optimized forces. Numerical results show that the optimized forces accurately reconstruct the targeted ground motions on the surface and in the interior of the domain. The proposed optimization framework yields a particular force vector among other valid solutions allowed by the domain reduction method (DRM). Per this optimized or inverted force vector, the reconstructed wave field is identical to its reference counterpart in the domain of interest but may differ from the reference one in the exterior domain. However, we remark that the inverted solution is valid and introduce a simple post-process that can modify the solution to achieve an alternative force vector corresponding to the reference wave field. We also study the sensor spacing needed to accurately reconstruct the wave responses for a given dominant frequency of interest. We remark that the presented method is omnidirectionally applicable in terms of the incident angle of an incoming wave and is effective for any given material heterogeneity and geometry of layering of a reduced domain. The presented inversion method requires information on the wave speeds and dimensions of only a reduced domain. Namely, it does not need any information on the geophysical profile of an enlarged domain or a seismic source profile outside a reduced domain. Thus, the computational cost of the method is compact even though it leads to the high-fidelity reconstruction of the wave response in the reduced domain, allowing for studying and predicting ground and structural responses using real seismic measurements.
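To illustrate the kind of misfit gradient on which such a PDE-constrained formulation relies, the sketch below treats the discretized governing equations as a generic linear system A u = B f, samples u at a few sensor rows S, and evaluates the gradient of the sensor-data misfit with one state solve and one adjoint solve. The matrices, sensor locations, and plain gradient-descent loop are toy assumptions, not the paper's elastodynamic DRM setup.

```python
# Hedged sketch: adjoint-based gradient of a sensor-data misfit for a generic
# linear "state equation" A u = B f.  A, B, S, and the descent step are toy
# placeholders for the discretized wave problem and the DRM interface forces.
import numpy as np

rng = np.random.default_rng(6)
n_dof, n_force, n_sens = 300, 40, 10
A = np.eye(n_dof) + 0.01 * rng.standard_normal((n_dof, n_dof))   # toy PDE operator
B = 0.1 * rng.standard_normal((n_dof, n_force))                  # maps forces to loads
S = np.zeros((n_sens, n_dof))                                    # sparse sensor sampling
S[np.arange(n_sens), rng.choice(n_dof, n_sens, replace=False)] = 1.0

f_ref = rng.standard_normal(n_force)
d = S @ np.linalg.solve(A, B @ f_ref)            # "measured" surface motions

def misfit_and_grad(f):
    u = np.linalg.solve(A, B @ f)                # state (forward) solve
    r = S @ u - d                                # residual at the sensors
    p = np.linalg.solve(A.T, S.T @ r)            # adjoint solve
    return 0.5 * r @ r, B.T @ p                  # objective and its exact gradient

f = np.zeros(n_force)
for _ in range(300):                             # plain gradient descent, illustration only
    J, g = misfit_and_grad(f)
    f -= 0.5 * g
print("final misfit:", J)
# Note: with fewer sensors than force unknowns, many force vectors reproduce the
# same sensor data, mirroring the non-uniqueness discussed in the abstract above.
```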