

Title: A Nonlinear Rank Regression Method for Ensemble Kalman Filter Data Assimilation
Abstract

It is possible to describe many variants of ensemble Kalman filters without loss of generality as the impact of a single observation on a single state variable. For most ensemble algorithms commonly applied to Earth system models, the computation of increments for the observation variable ensemble can be treated as a separate step from computing increments for the state variable ensemble. The state variable increments are normally computed from the observation increments by linear regression using the prior bivariate ensemble of the state and observation variable. Here, a new method that replaces the standard regression with a regression using the bivariate rank statistics is described. This rank regression is expected to be most effective when the relation between a state variable and an observation is nonlinear. The performance of standard versus rank regression is compared for both linear and nonlinear forward operators (also known as observation operators) using a low-order model. Rank regression in combination with a rank histogram filter in observation space produces better analyses than standard regression for cases with nonlinear forward operators and relatively large analysis error. Standard regression, in combination with either a rank histogram filter or an ensemble Kalman filter in observation space, produces the best results in other situations.
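The abstract contrasts standard linear regression of observation increments with regression on bivariate rank statistics. The Python sketch below illustrates that contrast for a single observed variable and a single state variable; the rank construction (argsort-based ranks, piecewise-linear interpolation back to physical values), the function names, and the stand-in observation increments are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch: standard ensemble regression vs. a rank-space variant.
import numpy as np

def standard_regression_increments(x_prior, y_prior, y_increments):
    """Standard ensemble regression: state increment = cov(x, y) / var(y) * obs increment."""
    beta = np.cov(x_prior, y_prior)[0, 1] / np.var(y_prior, ddof=1)
    return beta * y_increments

def rank_regression_increments(x_prior, y_prior, y_increments):
    """Illustrative rank-space variant: regress state ranks on observation ranks,
    shift each member's rank by the regressed amount, and map the updated rank
    back to a state value by interpolation on the sorted prior ensemble."""
    n = len(x_prior)
    rx = np.argsort(np.argsort(x_prior)).astype(float)  # ranks (0..n-1) of state members
    ry = np.argsort(np.argsort(y_prior)).astype(float)  # ranks of observation-space members
    beta_rank = np.cov(rx, ry)[0, 1] / np.var(ry, ddof=1)

    # Express observation increments as rank increments by locating the updated
    # observation values within the sorted prior observation ensemble.
    y_sorted = np.sort(y_prior)
    ry_updated = np.interp(y_prior + y_increments, y_sorted, np.arange(n, dtype=float))
    rank_increments = ry_updated - ry

    # Regress in rank space, then map updated state ranks back to physical values.
    rx_updated = np.clip(rx + beta_rank * rank_increments, 0, n - 1)
    x_updated = np.interp(rx_updated, np.arange(n, dtype=float), np.sort(x_prior))
    return x_updated - x_prior

# Toy example with a nonlinear forward operator y = x**2.
rng = np.random.default_rng(0)
x = rng.normal(1.0, 0.5, size=40)
y = x ** 2
dy = rng.normal(0.0, 0.05, size=40)  # stand-in observation-space increments
print(standard_regression_increments(x, y, dy)[:3])
print(rank_regression_increments(x, y, dy)[:3])
```

With a monotone but nonlinear forward operator like y = x**2, a single linear slope fit to (x, y) is a compromise across the ensemble, whereas the rank-space slope reflects the monotone relation directly; this is the regime in which the abstract expects rank regression to help.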

 
NSF-PAR ID: 10120001
Author(s) / Creator(s):
Publisher / Repository: American Meteorological Society
Date Published:
Journal Name: Monthly Weather Review
Volume: 147
Issue: 8
ISSN: 0027-0644
Page Range / eLocation ID: p. 2847-2860
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Traditional ensemble Kalman filter data assimilation methods make implicit assumptions of Gaussianity and linearity that are strongly violated by many important Earth system applications. For instance, bounded quantities like the amount of a tracer and sea ice fractional coverage cannot be accurately represented by a Gaussian that is unbounded by definition. Nonlinear relations between observations and model state variables abound. Examples include the relation between a remotely sensed radiance and the column of atmospheric temperatures, or the relation between cloud amount and water vapor quantity. Part I of this paper described a very general data assimilation framework for computing observation increments for non-Gaussian prior distributions and likelihoods. These methods can respect bounds and other non-Gaussian aspects of observed variables. However, these benefits can be lost when observation increments are used to update state variables using the linear regression that is part of standard ensemble Kalman filter algorithms. Here, regression of observation increments is performed in a space where variables are transformed by the probit and probability integral transforms, a specific type of Gaussian anamorphosis. This method can enforce appropriate bounds for all quantities and deal much more effectively with nonlinear relations between observations and state variables. Important enhancements like localization and inflation can be performed in the transformed space. Results are provided for idealized bivariate distributions and for cycling assimilation in a low-order dynamical system. Implications for improved data assimilation across Earth system applications are discussed.
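As a rough illustration of the probit and probability-integral-transform idea mentioned above, the sketch below maps a bounded ensemble to standard-normal space with an empirical PIT followed by the probit, applies a stand-in adjustment there, and inverts by interpolation. The empirical-CDF handling, the helper names, and the stand-in adjustment are assumptions; this is not the paper's full framework.

```python
# Minimal sketch of a probit/PIT transform (a form of Gaussian anamorphosis).
import numpy as np
from scipy.stats import norm

def to_probit_space(ensemble):
    """Map ensemble members to standard-normal space via an empirical
    probability integral transform followed by the probit (inverse normal CDF).
    Ties and bounds are handled only crudely here."""
    n = len(ensemble)
    ranks = np.argsort(np.argsort(ensemble))
    quantiles = (ranks + 0.5) / n          # empirical CDF values in (0, 1)
    return norm.ppf(quantiles)

def from_probit_space(z, ensemble):
    """Invert the transform by interpolating updated probit values back onto
    the sorted original ensemble (a piecewise-linear quantile map)."""
    n = len(ensemble)
    z_grid = norm.ppf((np.arange(n) + 0.5) / n)
    return np.interp(z, z_grid, np.sort(ensemble))

# A bounded quantity (e.g., a nonnegative tracer amount): increments are applied
# in the unbounded transformed space, and the inverse map respects the bound.
rng = np.random.default_rng(1)
tracer = rng.gamma(shape=2.0, scale=0.3, size=50)
z = to_probit_space(tracer)
z_updated = 0.8 * z - 0.2                  # stand-in for increments computed in probit space
tracer_updated = from_probit_space(z_updated, tracer)
print(tracer_updated.min() >= 0.0)         # bound preserved after the inverse transform
```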

     
    more » « less
  2. Abstract

    Linear transformations are widely used in data assimilation for covariance modeling, for reducing dimensionality (such as averaging dense observations to form "superobs"), and for managing sampling error in ensemble data assimilation. Here we describe a linear transformation that is optimal in the sense that, in the transformed space, the state variables and observations have uncorrelated errors and the gain matrix in the update step is diagonal. We conjecture, and provide numerical evidence, that the transformation is the best possible to precede covariance localization in an ensemble Kalman filter. A central feature of this transformation in the update step is a set of scalars, which we term canonical observation operators (COOs), that relate pairs of transformed observations and state variables and rank-order those pairs by their influence in the update. We show for an idealized problem that sample-based estimates of the COOs, in conjunction with covariance localization for the sample covariance, can approximate the true values well, but a practical implementation of the transformation for high-dimensional applications remains a subject for future research. The COOs also completely describe important properties of the update step, such as observation-state mutual information, signal-to-noise ratio, and degrees of freedom for signal, and so give new insights, including relations among reduced-rank approximations to variational schemes, particle-filter weight degeneracy, and the local ensemble transform Kalman filter.
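The construction described here resembles standard canonical-correlation machinery: whiten the state space and the (innovation) observation space, then diagonalize the whitened cross-covariance so that each transformed observation couples to a single transformed state direction through one scalar. The sketch below follows that textbook recipe on synthetic Gaussian ensembles; it is an assumption that this mirrors the paper's transformation, and the toy operator H, dimensions, and error covariance are illustrative.

```python
# Minimal sketch of scalar observation-state couplings via whitening + SVD.
import numpy as np

def inv_sqrt(A):
    """Inverse symmetric square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

rng = np.random.default_rng(2)
n_ens, n_state, n_obs = 200, 8, 4
X = rng.normal(size=(n_state, n_ens))
X -= X.mean(axis=1, keepdims=True)                 # state ensemble perturbations
H = rng.normal(size=(n_obs, n_state))              # illustrative linear forward operator
Y = H @ X                                          # observation-space perturbations
R = np.diag(rng.uniform(0.5, 1.5, size=n_obs))     # observation-error covariance

Pxx = X @ X.T / (n_ens - 1)
Pxy = X @ Y.T / (n_ens - 1)
Pyy = Y @ Y.T / (n_ens - 1)

# Whiten state and innovation spaces, then SVD the whitened cross-covariance:
# each singular value couples one transformed observation to one transformed
# state direction, and the gain in those coordinates is diagonal.
Wx = inv_sqrt(Pxx)
Wy = inv_sqrt(Pyy + R)
U, s, Vt = np.linalg.svd(Wx @ Pxy @ Wy)

print("scalar couplings, ordered by influence on the update:", s)
```

The ordering of the singular values gives the rank-ordering by influence that the abstract describes; whether these scalars coincide exactly with the paper's COOs is not claimed here.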

     
  3. Abstract

    We consider Bayesian inference for large-scale inverse problems, where computational challenges arise from the need for repeated evaluations of an expensive forward model. This renders most Markov chain Monte Carlo approaches infeasible, since they typically require O(10^4) model runs, or more. Moreover, the forward model is often given as a black box or is impractical to differentiate. Therefore derivative-free algorithms are highly desirable. We propose a framework, which is built on Kalman methodology, to efficiently perform Bayesian inference in such inverse problems. The basic method is based on an approximation of the filtering distribution of a novel mean-field dynamical system, into which the inverse problem is embedded as an observation operator. Theoretical properties are established for linear inverse problems, demonstrating that the desired Bayesian posterior is given by the steady state of the law of the filtering distribution of the mean-field dynamical system, and proving exponential convergence to it. This suggests that, for nonlinear problems which are close to Gaussian, sequentially computing this law provides the basis for efficient iterative methods to approximate the Bayesian posterior. Ensemble methods are applied to obtain interacting particle system approximations of the filtering distribution of the mean-field model; and practical strategies to further reduce the computational and memory cost of the methodology are presented, including low-rank approximation and a bi-fidelity approach. The effectiveness of the framework is demonstrated in several numerical experiments, including proof-of-concept linear/nonlinear examples and two large-scale applications: learning of permeability parameters in subsurface flow; and learning subgrid-scale parameters in a global climate model. Moreover, the stochastic ensemble Kalman filter and various ensemble square-root Kalman filters are all employed and are compared numerically. The results demonstrate that the proposed method, based on exponential convergence to the filtering distribution of a mean-field dynamical system, is competitive with pre-existing Kalman-based methods for inverse problems.
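A minimal sketch of the derivative-free, Kalman-based pattern this abstract describes, using the standard stochastic ensemble Kalman inversion iteration rather than the paper's mean-field construction; the toy forward model, noise level, and iteration count are illustrative assumptions.

```python
# Minimal sketch of (stochastic) ensemble Kalman inversion for an inverse problem:
# repeated black-box forward evaluations plus a Kalman-type update, no derivatives.
import numpy as np

def forward_model(theta):
    """Black-box forward map G(theta); a simple nonlinear stand-in."""
    return np.array([theta[0] ** 2 + theta[1], np.sin(theta[0]) * theta[1]])

rng = np.random.default_rng(3)
theta_true = np.array([1.2, -0.5])
Gamma = 0.05 * np.eye(2)                                   # observation-noise covariance
y = forward_model(theta_true) + rng.multivariate_normal(np.zeros(2), Gamma)

n_ens = 100
theta = rng.normal(0.0, 1.0, size=(n_ens, 2))              # prior parameter ensemble

for _ in range(20):
    G = np.array([forward_model(t) for t in theta])        # expensive forward evaluations
    dth = theta - theta.mean(axis=0)
    dG = G - G.mean(axis=0)
    C_tG = dth.T @ dG / (n_ens - 1)                        # parameter/output cross-covariance
    C_GG = dG.T @ dG / (n_ens - 1)                         # output covariance
    K = C_tG @ np.linalg.inv(C_GG + Gamma)                 # Kalman-type gain
    noise = rng.multivariate_normal(np.zeros(2), Gamma, size=n_ens)
    theta = theta + (y + noise - G) @ K.T                  # perturbed-observation update

print("ensemble mean estimate:", theta.mean(axis=0), "truth:", theta_true)
```

In a large-scale setting the forward evaluations inside the loop dominate the cost, which is why the low-rank and bi-fidelity strategies mentioned in the abstract matter.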
  4. Abstract

    Iterative ensemble filters and smoothers are now commonly used for geophysical models. Some of these methods rely on a factorization of the observation likelihood function to sample from a posterior density through a set of “tempered” transitions to ensemble members. For Gaussian‐based data assimilation methods, tangent linear versions of nonlinear operators can be relinearized between iterations, thus leading to a solution that is less biased than a single‐step approach. This study adopts similar iterative strategies for a localized particle filter (PF) that relies on the estimation of moments to adjust unobserved variables based on importance weights. This approach builds off a “regularization” of the local PF, which forces weights to be more uniform through heuristic means. The regularization then leads to an adaptive tempering, which can also be combined with filter updates from parametric methods, such as ensemble Kalman filters. The role of iterations is analyzed by deriving the localized posterior probability density assumed by current local PF formulations and then examining how single‐step and tempered PFs sample from this density. From experiments performed with a low‐dimensional nonlinear system, the iterative and hybrid strategies show the largest benefits in observation‐sparse regimes, where only a few particles contain high likelihoods and prior errors are non‐Gaussian. This regime mimics specific applications in numerical weather prediction, where small ensemble sizes, unresolved model error, and highly nonlinear dynamics lead to prior uncertainty that is larger than measurement uncertainty.
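A minimal sketch of the tempering idea referenced here: the likelihood is split into fractional factors and applied over several resampling steps rather than in a single update. The fixed tempering factors, the resampling jitter, and the scalar setup are illustrative assumptions; the paper's local PF additionally localizes the update, adjusts moments, and adapts the tempering.

```python
# Minimal sketch of likelihood tempering in a particle filter.
import numpy as np

rng = np.random.default_rng(4)
n_particles = 500
particles = rng.normal(0.0, 1.0, size=n_particles)   # a non-Gaussian prior could be used here
obs, obs_err = 1.5, 0.3

def log_likelihood(x):
    """Gaussian observation log-likelihood for a directly observed scalar."""
    return -0.5 * ((obs - x) / obs_err) ** 2

n_steps = 4
betas = np.full(n_steps, 1.0 / n_steps)               # fixed fractional factors (sum to 1)
for beta in betas:
    logw = beta * log_likelihood(particles)            # fractional-power likelihood weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)
    particles = particles[idx] + rng.normal(0.0, 0.05, size=n_particles)  # resample + jitter

print("tempered posterior mean:", particles.mean())
```

Because each step uses only a fraction of the likelihood, the weights at each step are more uniform than in a single-step update, which is the mechanism the abstract connects to better behavior in observation-sparse, non-Gaussian regimes.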

     
  5. Abstract

    Particle filters avoid parametric estimates for Bayesian posterior densities, which alleviates Gaussian assumptions in nonlinear regimes. These methods, however, are more sensitive to sampling errors than Gaussian-based techniques such as ensemble Kalman filters. A recent study by the authors introduced an iterative strategy for particle filters that match posterior moments, in which iterations improve the filter's ability to draw samples from non-Gaussian posterior densities. The iterations follow from a factorization of particle weights, providing a natural framework for combining particle filters with alternative filters to mitigate the impact of sampling errors. The current study introduces a novel approach to forming an adaptive hybrid data assimilation methodology, exploiting the theoretical strengths of nonparametric and parametric filters. At each data assimilation cycle, the iterative particle filter performs a sequence of updates while the prior sample distribution is non-Gaussian, then an ensemble Kalman filter provides the final adjustment when Gaussian distributions for marginal quantities are detected. The method employs the Shapiro–Wilk test, which has outstanding power for detecting departures from normality, to determine when to make the transition between filter algorithms. Experiments using low-dimensional models demonstrate that the approach has significant value, especially for nonhomogeneous observation networks and unknown model process errors. Moreover, hybrid factors are extended to consider marginals of more than one collocated variable using a test for multivariate normality. Findings from this study motivate the use of the proposed method for geophysical problems characterized by diverse observation networks and various dynamic instabilities, such as numerical weather prediction models.

    Significance Statement

    Data assimilation statistically processes observation errors and model forecast errors to provide optimal initial conditions for the forecast, playing a critical role in numerical weather forecasting. The ensemble Kalman filter, which has been widely adopted and developed in many operational centers, assumes Gaussianity of the prior distribution and solves a linear system of equations, leading to bias in strongly nonlinear regimes. On the other hand, particle filters avoid many of those assumptions but are sensitive to sampling errors and are computationally expensive. We propose an adaptive hybrid strategy that combines their advantages and minimizes the disadvantages of the two methods. The hybrid particle filter–ensemble Kalman filter is achieved with the Shapiro–Wilk test, which detects the Gaussianity of the ensemble members and determines the timing of the transition between these filter updates. Demonstrations in this study show that the proposed method is advantageous when observations are heterogeneous and when the model has an unknown bias. Furthermore, by extending the statistical hypothesis test to a test for multivariate normality, we consider marginals of more than one collocated variable. These results encourage further testing on real geophysical problems characterized by various dynamic instabilities, such as real numerical weather prediction models.
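A minimal sketch of the adaptive hybrid decision described above, assuming a scalar, directly observed variable: fractional particle-filter steps are taken while the Shapiro–Wilk test (scipy.stats.shapiro) rejects normality, and a perturbed-observation EnKF step then absorbs the remaining likelihood fraction once the marginal looks Gaussian. The step sizes, significance level, jitter, and helper names are illustrative, not the paper's configuration.

```python
# Minimal sketch of a Shapiro-Wilk-gated hybrid particle filter / EnKF update.
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(5)
n_ens = 100
ensemble = np.abs(rng.normal(0.5, 1.0, size=n_ens))      # deliberately non-Gaussian prior
obs, obs_err = 1.0, 0.4

def pf_tempered_step(x, beta):
    """One fractional-likelihood particle update: weight, resample, small jitter."""
    logw = -0.5 * beta * ((obs - x) / obs_err) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(len(x), size=len(x), p=w)
    return x[idx] + rng.normal(0.0, 0.02, size=len(x))

def enkf_step(x, err):
    """Perturbed-observation EnKF update for a directly observed scalar."""
    var = np.var(x, ddof=1)
    gain = var / (var + err ** 2)
    return x + gain * (obs + rng.normal(0.0, err, size=len(x)) - x)

beta_used, alpha = 0.0, 0.05
for _ in range(5):                                         # at most a few PF iterations
    _, p_value = shapiro(ensemble)
    if p_value > alpha or beta_used >= 1.0:
        break                                              # marginal looks Gaussian (or likelihood spent)
    step = min(0.25, 1.0 - beta_used)
    ensemble = pf_tempered_step(ensemble, step)
    beta_used += step

# Final parametric adjustment: the unused likelihood fraction (1 - beta_used) is
# applied as an EnKF update with correspondingly inflated observation error.
remaining = 1.0 - beta_used
if remaining > 1e-8:
    ensemble = enkf_step(ensemble, obs_err / np.sqrt(remaining))
print("posterior mean:", ensemble.mean())
```

Inflating the observation-error standard deviation by 1/sqrt(remaining fraction) is the Gaussian equivalent of raising the likelihood to that fractional power, so the particle and Kalman stages together apply the full observation exactly once.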