Title: What is the Lagrangian for Nonlinear Filtering?
Duality between estimation and optimal control is a problem of rich historical significance. The first duality principle appears in the seminal paper of Kalman-Bucy, where the problem of minimum variance estimation is shown to be dual to a linear quadratic (LQ) optimal control problem. Duality offers a constructive proof technique to derive the Kalman filter equation from the optimal control solution. This paper generalizes the classical duality result of Kalman-Bucy to the nonlinear filter: the state evolves as a continuous-time Markov process and the observation is a nonlinear function of the state corrupted by additive Gaussian noise. A dual process is introduced as a backward stochastic differential equation (BSDE). The process is used to transform the problem of minimum variance estimation into an optimal control problem. Its solution is obtained from an application of the maximum principle and subsequently used to derive the equation of the nonlinear filter. The classical duality result of Kalman-Bucy is shown to be a special case.
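For reference, the classical linear-Gaussian duality that the paper generalizes can be summarized as follows; the notation below is generic and chosen only for illustration, and need not match the paper's.

% Classical minimum-variance duality (illustrative notation).
% Signal and observation models:
%   dX_t = A X_t dt + dB_t,   dZ_t = H X_t dt + dW_t,
% with E[dB_t dB_t^T] = Q dt, E[dW_t dW_t^T] = R dt, and X_0 ~ N(m_0, Sigma_0).
% To estimate the linear functional f^T X_T from the observation path, consider
% estimators of the form S_T(u) = y_0^T m_0 - \int_0^T u_t^T dZ_t, where the
% dual (backward-in-time) state y_t is driven by the control u:
\[
  -\frac{\mathrm{d}y_t}{\mathrm{d}t} \;=\; A^{\top} y_t + H^{\top} u_t,
  \qquad y_T = f .
\]
% A direct calculation shows that the mean-squared estimation error equals the
% linear quadratic cost
\[
  J(u) \;=\; y_0^{\top}\,\Sigma_0\, y_0
  \;+\; \int_0^T \bigl( y_t^{\top} Q\, y_t + u_t^{\top} R\, u_t \bigr)\,\mathrm{d}t ,
\]
% so minimizing J over u yields the minimum-variance estimate, and the optimal
% control recovers the Kalman-Bucy gain through the associated Riccati equation.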
Award ID(s):
1761622
PAR ID:
10176463
Author(s) / Creator(s):
Date Published:
Journal Name:
2019 IEEE 58th Conference on Decision and Control (CDC)
Page Range / eLocation ID:
1607 to 1614
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We consider the scenario in which a continuous-time Gauss-Markov process is estimated by the Kalman-Bucy filter over a Gaussian channel (sensor) with a variable sensor gain. The problem of scheduling the sensor gain over a finite time interval to minimize the weighted sum of the data rate (the mutual information between the sensor output and the underlying Gauss-Markov process) and the distortion (the mean-square estimation error) is formulated as an optimal control problem. A necessary optimality condition for a scheduled sensor gain is derived based on Pontryagin's minimum principle. For a scalar problem, we show that an optimal sensor gain control is of bang-bang type, except for the possibility of taking an intermediate value when there exists a stationary point on the switching surface in the phase space of the canonical dynamics. Furthermore, we show that the number of switches is at most two and that the time instants at which the optimal gain must be switched can be computed from the analytical solutions to the canonical equations.
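As a rough illustration of the computation behind the item above, the sketch below integrates the scalar Riccati equation for a candidate gain schedule and evaluates a weighted rate-plus-distortion cost; the scalar model, the parameter values, and the use of Duncan's identity for the mutual-information term are assumptions made here, not details of the paper.

# Illustrative sketch (assumed scalar model, not the paper's exact formulation):
# signal dx = a x dt + sqrt(q) dB, sensor dy = g(t) x dt + sqrt(v) dW.
# For a given gain schedule g(t), the Kalman-Bucy error variance P(t) obeys the
# Riccati ODE  dP/dt = 2 a P + q - g^2 P^2 / v.  The cost below weighs a
# mutual-information term (via Duncan's identity, I = (1/(2v)) \int g^2 P dt)
# against the distortion \int P dt.
import numpy as np

a, q, v = -0.2, 1.0, 0.5       # assumed model parameters
P0, T, dt = 1.0, 10.0, 1e-3
lam = 2.0                      # weight on the data-rate term

def cost(gain_schedule):
    """Integrate the Riccati ODE for a gain schedule g(t) and return
    lam * (mutual information) + (integrated mean-square error)."""
    P, rate, distortion = P0, 0.0, 0.0
    for t in np.arange(0.0, T, dt):
        g = gain_schedule(t)
        rate += 0.5 * g**2 * P / v * dt
        distortion += P * dt
        P += (2*a*P + q - g**2 * P**2 / v) * dt   # forward Euler step
    return lam * rate + distortion

# Example: a bang-bang schedule that switches the gain on halfway through.
bang_bang = lambda t: 0.0 if t < 5.0 else 1.0
print(cost(bang_bang))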
  2. Duality of control and estimation allows mapping recent advances in data-guided control to the estimation setup. This paper formalizes and utilizes such a mapping to consider learning the optimal (steady-state) Kalman gain when the process and measurement noise statistics are unknown. Specifically, building on the duality between synthesizing optimal control and estimation gains, the filter design problem is formalized as direct policy learning. In this direction, the duality is used to extend existing theoretical guarantees of direct policy updates for the Linear Quadratic Regulator (LQR) to establish global convergence of the Gradient Descent (GD) algorithm for the estimation problem, while addressing subtle differences between the two synthesis problems. Subsequently, a Stochastic Gradient Descent (SGD) approach is adopted to learn the optimal Kalman gain without knowledge of the noise covariances. The results are illustrated via several numerical examples.
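As a minimal numerical sketch of the "direct policy learning" viewpoint described above, one can treat the steady-state filter gain as the decision variable and descend the resulting mean-square error; the discrete-time model, the matrices, and the finite-difference gradient below are illustrative assumptions, not the paper's algorithm (the paper establishes convergence of GD and adopts SGD when the noise covariances are unknown).

# Sketch: treat the steady-state filter gain L as a policy and minimize the
# steady-state mean-square error J(L) by gradient descent.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)            # process noise covariance (assumed known here)
R = np.array([[0.5]])          # measurement noise covariance

def J(L):
    """Trace of the steady-state error covariance for the filter
    xhat_{k+1} = A xhat_k + L (y_k - C xhat_k)."""
    F = A - L @ C              # error dynamics matrix (must be stable)
    Sigma = solve_discrete_lyapunov(F, Q + L @ R @ L.T)
    return np.trace(Sigma)

L = np.zeros((2, 1))           # initial gain
step, eps = 0.05, 1e-6
for _ in range(200):           # plain finite-difference gradient descent
    grad = np.zeros_like(L)
    for i in range(L.shape[0]):
        E = np.zeros_like(L); E[i, 0] = eps
        grad[i, 0] = (J(L + E) - J(L - E)) / (2 * eps)
    L -= step * grad
print("learned gain:\n", L, "\ncost:", J(L))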
  3. This paper investigates the problem of persistent monitoring, where a finite set of mobile agents persistently visits a finite set of targets in a multi-dimensional environment. The agents must estimate the targets' internal states, and the goal is to minimize the mean squared estimation error over time. The internal states of the targets evolve with linear stochastic dynamics, so the optimal estimator is a Kalman-Bucy filter. We constrain the trajectories of the agents to be periodic and represented by a truncated Fourier series. Taking advantage of the periodic nature of this solution, we define the infinite-horizon version of the problem and exploit the property that the mean squared estimation error converges to a limit cycle. We present a technique to compute, online, the gradient of the steady-state mean squared estimation error of the targets' states with respect to the parameters defining the trajectories, and we use a gradient descent scheme to obtain locally optimal movement schedules. This scheme allows us to address the infinite-horizon problem with only a small number of parameters to optimize.
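The sketch below illustrates the two ingredients described above, a truncated Fourier parameterization of a periodic trajectory and gradient descent over its coefficients; the single-agent, single-target scalar model, the distance-dependent measurement noise, and all numerical values are assumptions made only for this illustration.

# Illustrative sketch only: one agent, one scalar target, with measurement
# noise assumed to grow with the agent-target distance.
import numpy as np

T, dt, K = 2*np.pi, 2e-2, 3                 # period, time step, Fourier order

def trajectory(theta, t):
    """Periodic 1-D agent position from truncated Fourier coefficients."""
    a, b = theta[:K], theta[K:]
    k = np.arange(1, K + 1)
    return float(np.sum(a*np.sin(k*t) + b*np.cos(k*t)))

def avg_error(theta, target_pos=1.0, a_sig=-0.1, q=1.0):
    """Kalman-Bucy error variance averaged over one period after a burn-in
    toward the limit cycle (assumed scalar target dynamics)."""
    P, acc = 1.0, 0.0
    for t in np.arange(0.0, 5*T, dt):
        d = trajectory(theta, t) - target_pos
        v = 0.1 + d*d                        # assumed distance-dependent noise
        P += (2*a_sig*P + q - P*P/v) * dt    # scalar Riccati equation
        if t >= 4*T:
            acc += P * dt
    return acc / T

theta = 0.1 * np.ones(2*K)
step, eps = 0.05, 1e-4
for _ in range(20):                          # finite-difference gradient descent
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta); e[i] = eps
        g[i] = (avg_error(theta + e) - avg_error(theta - e)) / (2*eps)
    theta -= step * g
print("optimized Fourier coefficients:", theta)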
  4. This paper presents a convergence analysis of a novel data-driven feedback control algorithm designed for generating online controls from partial, noisy observational data. The algorithm comprises a particle filter-enabled state estimation component, which estimates the controlled system's state from indirect observations, together with an efficient stochastic maximum principle-type optimal control solver. By integrating weak convergence techniques for the particle filter with convergence analysis for the stochastic maximum principle control solver, we derive a weak convergence result for the optimization procedure in the search for optimal data-driven feedback control. Numerical experiments are performed to validate the theoretical findings.
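For readers unfamiliar with the estimation component mentioned above, a bare-bones bootstrap particle filter looks roughly as follows; the scalar model, the observations, and the multinomial resampling are generic textbook choices, not the specific algorithm or the control coupling of the paper.

# Generic bootstrap particle filter sketch (textbook form).
import numpy as np

rng = np.random.default_rng(0)
N = 500                                  # number of particles

def propagate(x):                        # assumed transition: x' = 0.9 x + noise
    return 0.9 * x + 0.5 * rng.standard_normal(x.shape)

def likelihood(y, x, r=0.2):             # assumed observation: y = x + N(0, r)
    return np.exp(-0.5 * (y - x)**2 / r)

particles = rng.standard_normal(N)
for y in [0.3, 0.1, -0.2, 0.4]:          # a stream of made-up observations
    particles = propagate(particles)
    w = likelihood(y, particles)
    w /= w.sum()
    estimate = np.dot(w, particles)      # conditional-mean estimate for the controller
    idx = rng.choice(N, size=N, p=w)     # multinomial resampling
    particles = particles[idx]
    print(estimate)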
  5. This paper contributes to the emerging viewpoint that governing equations for dynamic state estimation, conditioned on the history of noisy measurements, can be viewed as gradient flow on the manifold of joint probability density functions with respect to suitable metrics. Herein, we focus on the Wonham filter, where the prior dynamics are given by a continuous-time Markov chain on a finite state space and the measurement model consists of noisy observations of a (possibly nonlinear) function of the state. We establish that the posterior flow given by the Wonham filter can be viewed as the small time-step limit of proximal recursions of certain functionals on the probability simplex. The results of this paper extend our earlier work, where similar proximal recursions were derived for the Kalman-Bucy filter.
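For concreteness, the Wonham filter referenced above can be written in its standard form as follows; any notational differences from the paper are incidental.

% Wonham filter (standard form). X_t is a continuous-time Markov chain on
% {1, ..., n} with generator (rate matrix) Lambda, observed through
%   dZ_t = h(X_t) dt + sigma dW_t .
% With pi_t the posterior probability vector and H = diag(h(1), ..., h(n)),
\[
  \mathrm{d}\pi_t \;=\; \Lambda^{\top}\pi_t\,\mathrm{d}t
  \;+\; \frac{1}{\sigma^{2}}\,
  \bigl(H - (h^{\top}\pi_t)\,I\bigr)\,\pi_t\,
  \bigl(\mathrm{d}Z_t - (h^{\top}\pi_t)\,\mathrm{d}t\bigr) .
\]
% The paper relates this posterior flow to the small time-step limit of
% proximal recursions of suitable functionals on the probability simplex.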