
Title: Peak Estimation for Uncertain and Switched Systems
Abstract:
Peak estimation bounds extreme values of a function of state along trajectories of a dynamical system. This paper focuses on extending peak estimation to continuous and discrete settings with time-independent and time-dependent uncertainty. Techniques from optimal control are used to incorporate uncertainty into an existing occupation measure-based peak estimation framework, which includes special consideration for handling switching-type (polytopic) uncertainties. The resulting infinite-dimensional linear programs can be solved approximately with Linear Matrix Inequalities arising from the moment-SOS hierarchy.
Authors:
Award ID(s):
1808381, 1646121, 2038493
Publication Date:
NSF-PAR ID:
10349423
Journal Name:
60th IEEE Conf. Decision and Control
Page Range or eLocation-ID:
3222–3228
Sponsoring Org:
National Science Foundation
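
The abstract above describes certified upper bounds on the peak value, obtained from occupation-measure linear programs relaxed through the moment-SOS hierarchy; reproducing that machinery requires SOS/SDP tooling. The sketch below is only a hypothetical, complementary illustration: it samples trajectories of a small switched linear system (polytopic uncertainty realized at two vertices) and records the empirical peak of a state function, which gives a lower bound that the paper's convex relaxations would bound from above. The dynamics, sets, and parameters are invented for the example and are not taken from the paper.

```python
# A minimal, hypothetical sketch (not the paper's method): Monte Carlo sampling of a
# switched linear system dx/dt = A_k x, with k(t) switching at random among polytope
# vertices. Sampling only gives a LOWER bound on the true peak of p(x(t)); the paper's
# occupation-measure / moment-SOS programs produce certified UPPER bounds.
import numpy as np

rng = np.random.default_rng(0)

# Two vertex dynamics of a polytopic (switching) uncertainty set -- illustrative values.
A1 = np.array([[-0.1, 1.0], [-1.0, -0.1]])
A2 = np.array([[-0.1, 2.0], [-0.5, -0.1]])
vertices = [A1, A2]

def p(x):
    """State function whose peak we want to bound, e.g. squared distance from origin."""
    return float(x @ x)

def sample_peak(x0, T=10.0, dt=1e-2, dwell=0.5):
    """Simulate one random switching signal (piecewise-constant, dwell time `dwell`)
    with forward Euler and return the largest value of p(x(t)) seen along the way."""
    x, peak, t_switch, A = x0.copy(), p(x0), 0.0, vertices[0]
    for step in range(int(T / dt)):
        t = step * dt
        if t >= t_switch:                      # pick a new vertex dynamics
            A = vertices[rng.integers(len(vertices))]
            t_switch = t + dwell
        x = x + dt * (A @ x)                   # Euler step of dx/dt = A x
        peak = max(peak, p(x))
    return peak

# Initial conditions sampled from a small set X0 around (1, 0).
peaks = [sample_peak(np.array([1.0, 0.0]) + 0.1 * rng.standard_normal(2))
         for _ in range(200)]
print(f"empirical peak of p(x(t)) over samples: {max(peaks):.3f}  (lower bound only)")
```

In practice, sampled peaks like these would be compared against the SOS upper bounds at successive relaxation orders to gauge how tight the bounds are.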
More Like this
  1. In this thesis we propose novel estimation techniques for localization and planning problems, which are key challenges in long-term autonomy. Our methods take inspiration from non-parametric estimation and use tools such as kernel density estimation, non-linear least-squares optimization, binary masking, and random sampling. We show that these methods, by avoiding explicit parametric models, outperform existing methods that rely on them. Despite the seeming differences between localization and planning, we demonstrate in this thesis that the problems share core structural similarities. When real or simulation-sampled measurements are expensive, noisy, or high-variance, non-parametric estimation techniques give higher-quality results in less time. We first address two localization problems. To permit localization with a set of ad hoc-placed radios, we propose an ultra-wideband (UWB) graph-realization system to localize the radios. Our system achieves high accuracy and robustness by using kernel density estimation for measurement probability densities, by explicitly modeling antenna delays, and by optimizing this combination with a non-linear least-squares formulation. Next, to support robotic navigation, we present a flexible system for simultaneous localization and mapping (SLAM) that combines elements from both traditional dense metric SLAM and topological SLAM, using a binary "masking function" to focus attention. This masking function controls which lidar scans are available for loop closures. We provide several masking functions based on approximate topological class detectors. We then examine planning problems in the final chapter and in the appendix. To plan with uncertainty around multiple dynamic agents, we describe Monte-Carlo Policy-Tree Decision Making (MCPTDM), a framework for efficiently computing policies in partially observable, stochastic, continuous problems. MCPTDM composes a sequence of simpler closed-loop policies and uses marginal action costs and particle repetition to improve cost estimates and sample efficiency by reducing variance. Finally, in the appendix we explore Learned Similarity Monte-Carlo Planning (LSMCP), where we seek to enhance the sample efficiency of partially observable Monte Carlo tree search-based planning by taking advantage of similarities in the final outcomes of similar states and actions. We train a multilayer perceptron to learn a similarity function, which we then use to enhance value estimates during planning. Collectively, we show in this thesis that non-parametric methods promote long-term autonomy by reducing error and increasing robustness across multiple domains.
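
As a toy illustration of two ingredients mentioned in this thesis summary (kernel density estimation of measurement errors and a non-linear least-squares formulation with an explicit delay term), the sketch below localizes a tag from noisy UWB-style ranges. The anchor layout, noise model, and single shared delay are assumptions made for the example; this is not the thesis's graph-realization system.

```python
# Hypothetical toy example: non-linear least-squares localization with an unknown
# constant ranging delay, plus a KDE of the range-error distribution.
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
delay = 0.4                                      # unknown constant bias (antenna delay)

ranges = np.linalg.norm(anchors - true_pos, axis=1) + delay + 0.05 * rng.standard_normal(4)

def residuals(theta):
    """theta = (x, y, delay); residual = predicted range - measured range."""
    pos, d = theta[:2], theta[2]
    return np.linalg.norm(anchors - pos, axis=1) + d - ranges

fit = least_squares(residuals, x0=[5.0, 5.0, 0.0])
print("estimated position:", fit.x[:2], " estimated delay:", fit.x[2])

# Non-parametric (KDE) model of the measurement-error distribution from repeated ranging.
errors = 0.05 * rng.standard_normal(500) + 0.01 * rng.standard_normal(500) ** 3
density = gaussian_kde(errors)
print("KDE density at zero error:", float(density(0.0)))
```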
  2. Galaxy cluster masses, rich with cosmological information, can be estimated from internal dark matter (DM) velocity dispersions, which in turn can be observationally inferred from satellite galaxy velocities. However, galaxies are biased tracers of the DM, and the bias can vary over host halo and galaxy properties as well as time. We precisely calibrate the velocity bias, $b_v$ – defined as the ratio of galaxy and DM velocity dispersions – as a function of redshift, host halo mass, and galaxy stellar mass threshold ($M_{\rm \star, sat}$), for massive haloes ($M_{\rm 200c} > 10^{13.5} \, {\rm M}_\odot$) from five cosmological simulations: IllustrisTNG, Magneticum, Bahamas + Macsis, The Three Hundred Project, and MultiDark Planck-2. We first compare scaling relations for galaxy and DM velocity dispersion across simulations; the former is estimated using a new ensemble velocity likelihood method that is unbiased for low galaxy counts per halo, while the latter uses a local linear regression. The simulations show consistent trends of $b_v$ increasing with $M_{\rm 200c}$ and decreasing with redshift and $M_{\rm \star, sat}$. The ensemble-estimated theoretical uncertainty in $b_v$ is 2–3 per cent, but becomes percent-level when considering only the three highest-resolution simulations. We update the mass–richness normalization for an SDSS redMaPPer cluster sample, and find our improved $b_v$ estimates reduce the normalization uncertainty from 22 to 8 per cent, demonstrating that dynamical mass estimation is competitive with weak-lensing mass estimation. We discuss necessary steps for further improving this precision. Our estimates for $b_v(M_{\rm 200c}, M_{\rm \star, sat}, z)$ are made publicly available.
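
A minimal sketch of the velocity-bias definition above, $b_v = \sigma_{\rm gal} / \sigma_{\rm DM}$, on mock Gaussian velocities. The naive per-halo sample standard deviation used here is biased at low satellite counts, which is exactly the issue the paper's ensemble velocity likelihood addresses; all numbers are illustrative, not simulation data.

```python
# Toy sketch: recover the velocity bias b_v = sigma_gal / sigma_DM from mock velocities.
import numpy as np

rng = np.random.default_rng(2)
sigma_dm = 900.0                     # km/s, illustrative DM velocity dispersion
true_bv = 1.05                       # illustrative velocity bias

n_halos, n_gal = 200, 15
dm_velocities = sigma_dm * rng.standard_normal((n_halos, 5000))
gal_velocities = true_bv * sigma_dm * rng.standard_normal((n_halos, n_gal))

sigma_dm_hat = dm_velocities.std(axis=1, ddof=1)
sigma_gal_hat = gal_velocities.std(axis=1, ddof=1)
bv_hat = sigma_gal_hat / sigma_dm_hat

print(f"recovered b_v = {bv_hat.mean():.3f} +/- {bv_hat.std() / np.sqrt(n_halos):.3f}"
      f"  (input {true_bv})")
```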
  3. Simultaneous real-time monitoring of measurement and parameter gross errors poses a great challenge to distribution system state estimation due to the typically low measurement redundancy. This paper presents a gross error analysis framework that employs μPMUs to decouple the error analysis of measurements and parameters. When a recent measurement scan from SCADA RTUs and smart meters is available, gross error analysis of measurements is performed as a post-processing step of the non-linear DSSE (NLSE). In between scans of SCADA and AMI measurements, a linear state estimator (LSE) using μPMU measurements and linearized SCADA and AMI measurements is used to detect parameter data changes caused by the operation of Volt/Var controls. For every execution of the LSE, the variance of the unsynchronized measurements is updated according to the uncertainty introduced by load dynamics, which are modeled as an Ornstein–Uhlenbeck random process. Updating the variance of unsynchronized measurements avoids false error detections and captures the declining trustworthiness of outdated or obsolete data. When new SCADA and AMI measurements arrive, the LSE provides added redundancy to the NLSE through synthetic measurements. The presented framework was tested on a 13-bus test system. Test results highlight that the LSE and NLSE processes successfully work together to analyze bad data for both measurements and parameters.
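
A hypothetical sketch of one piece of the framework described above: inflating the variance of a stale, unsynchronized SCADA/AMI measurement between scans under an Ornstein–Uhlenbeck load model. The mean-reversion rate, diffusion strength, and meter variance are invented for illustration and do not come from the paper.

```python
# Variance inflation of a stale measurement, assuming the underlying load follows an
# Ornstein-Uhlenbeck process started from the value observed at the last scan.
import numpy as np

def stale_measurement_variance(t_since_scan, meter_var, theta=0.2, sigma=50.0):
    """Measurement variance after t_since_scan seconds.

    meter_var : intrinsic meter noise variance (e.g. kW^2)
    theta     : OU mean-reversion rate of the load (1/s)
    sigma     : OU diffusion strength of the load (kW / sqrt(s))

    The conditional variance of an OU process started from a known value grows as
    (sigma^2 / (2*theta)) * (1 - exp(-2*theta*t)), saturating at its stationary value.
    """
    ou_var = sigma**2 / (2.0 * theta) * (1.0 - np.exp(-2.0 * theta * t_since_scan))
    return meter_var + ou_var

for t in (0.0, 10.0, 60.0, 300.0):
    print(f"t = {t:5.0f} s  ->  variance = {stale_measurement_variance(t, meter_var=4.0):.1f}")
```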
  4. We propose a learning-based robust predictive control algorithm that compensates for significant uncertainty in the dynamics for a class of discrete-time systems that are nominally linear with an additive nonlinear component. Such systems commonly model the nonlinear effects of an unknown environment on a nominal system. We optimize over a class of nonlinear feedback policies inspired by certainty equivalent "estimate-and-cancel" control laws pioneered in classical adaptive control to achieve significant performance improvements in the presence of uncertainties of large magnitude, a setting in which existing learning-based predictive control algorithms often struggle to guarantee safety. In contrast to previous work in robust adaptive MPC, our approach allows us to take advantage of structure (i.e., the numerical predictions) in the a priori unknown dynamics learned online through function approximation. Our approach also extends typical nonlinear adaptive control methods to systems with state and input constraints even when we cannot directly cancel the additive uncertain function from the dynamics. Moreover, we apply contemporary statistical estimation techniques to certify the system’s safety through persistent constraint satisfaction with high probability. Finally, we show in simulation that our method can accommodate more significant unknown dynamics terms than existing methods.
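
The abstract above builds on certainty-equivalent "estimate-and-cancel" control. The sketch below shows that idea in its most basic, unconstrained form on an invented system: an additive nonlinearity is fit online by ridge regression on hand-picked features and subtracted through the input channel. It omits the paper's key contributions (state and input constraints, robust predictive control, and high-probability safety certification), so it should be read only as a toy under those assumptions.

```python
# Generic, hypothetical estimate-and-cancel toy: x+ = A x + B u + g(x), with g learned
# online by ridge regression on features and cancelled via u = K x - B^+ g_hat(x).
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[-5.0, -4.0]])                      # stabilizing feedback for (A, B), chosen by hand

def g(x):                                         # unknown additive nonlinearity (ground truth)
    return np.array([0.0, 0.05 * np.sin(3.0 * x[0])])

def features(x):                                  # hand-picked regression features
    return np.array([1.0, x[0], x[1], x[0]**2, np.sin(3.0 * x[0])])

X_feat, Y = [], []
theta = np.zeros((5, 2))                          # feature weights for g_hat
x = np.array([1.0, 0.0])
for t in range(200):
    g_hat = features(x) @ theta
    u = K @ x - np.linalg.pinv(B) @ g_hat         # estimate-and-cancel control law
    x_next = A @ x + (B @ u).ravel() + g(x)
    X_feat.append(features(x))
    Y.append(x_next - A @ x - (B @ u).ravel())    # observed additive residual
    F, Yarr = np.array(X_feat), np.array(Y)
    theta = np.linalg.solve(F.T @ F + 1e-3 * np.eye(5), F.T @ Yarr)  # ridge update
    x = x_next
print("final state:", x, " (should be near the origin if cancellation works)")
```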
  5. Accurate estimation of forest biomass is important for scientists and policymakers interested in carbon accounting, nutrient cycling, and forest resilience. Estimates often rely on the allometry of trees; however, limited datasets, uncertainty in model form, and unaccounted-for sources of variation warrant a re-examination of allometric relationships using modern statistical techniques. We asked the following questions: (1) Is there among-stand variation in allometric relationships? (2) Is there nonlinearity in allometric relationships? (3) Can among-stand variation or nonlinearities in allometric equations be attributed to differences in stand age? (4) What are the implications for biomass estimation? To answer these questions, we synthesized a dataset of small trees from six different studies in the White Mountains of New Hampshire. We compared the performance of generalized additive models (GAMs) and linear models and found that GAMs consistently outperform linear models. The best-fitting model indicates that allometries vary among both stands and species and contain subtle nonlinearities which are themselves variable by species. Using a planned-contrasts analysis, we were able to attribute some of the observed among-stand heterogeneity to differences in stand age. However, variability in these results points to additional sources of stand-level heterogeneity which, if identified, could improve the accuracy of live-tree biomass estimation.
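
A toy stand-in for the model comparison described above: a linear log-log allometry versus a smoothing spline that captures mild nonlinearity, fit to synthetic data. The spline is only a proxy for the GAMs used in the study, the in-sample error comparison ignores the study's proper model selection, and none of the numbers refer to the White Mountains dataset.

```python
# Synthetic comparison: straight-line allometry in log-log space vs. a smoothing spline.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(4)
diameter = rng.uniform(1.0, 10.0, 300)                         # cm, synthetic small trees
log_d = np.log(diameter)
# Synthetic "true" allometry with a subtle nonlinear bend plus noise in log space.
log_biomass = -2.0 + 2.4 * log_d + 0.05 * log_d**2 + 0.1 * rng.standard_normal(300)

order = np.argsort(log_d)
xs, ys = log_d[order], log_biomass[order]

slope, intercept = np.polyfit(xs, ys, 1)                       # linear log-log allometry
lin_resid = np.sum((ys - (intercept + slope * xs))**2)

spline = UnivariateSpline(xs, ys, k=3, s=len(xs) * 0.01)       # smooth nonlinear fit
spl_resid = np.sum((ys - spline(xs))**2)

print(f"linear log-log SSE: {lin_resid:.2f}   spline SSE: {spl_resid:.2f}")
```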