
Title: Three-dimensional shear driven turbulence with noise at the boundary
Abstract:
We consider the incompressible 3D Navier–Stokes equations subject to a shear induced by noisy movement of part of the boundary. The effect of the noise is quantified by upper bounds on the first two moments of the dissipation rate. The expected value estimate is consistent with the Kolmogorov dissipation law, recovering an upper bound as in (Doering and Constantin 1992 Phys. Rev. Lett. 69 1648) for the deterministic case. The movement of the boundary is given by an Ornstein–Uhlenbeck process; a potential for over-dissipation is noted if the Ornstein–Uhlenbeck process were replaced by the Wiener process.
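The distinction the abstract draws between Ornstein–Uhlenbeck and Wiener boundary forcing can be seen in a simple Monte Carlo sketch (illustrative only; the parameter values theta, sigma, and the time horizon are assumptions, not taken from the paper). The mean-reverting drift keeps the OU process's variance bounded near sigma^2/(2*theta), whereas the Wiener process's variance grows linearly in time, which is the mechanism behind the noted potential for over-dissipation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not from the paper): relaxation rate
# theta, noise amplitude sigma, time step dt, horizon T.
theta, sigma, dt, T = 1.0, 0.5, 1e-3, 10.0
n_steps = int(T / dt)
n_paths = 2000

ou = np.zeros(n_paths)
wiener = np.zeros(n_paths)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    ou += -theta * ou * dt + sigma * dW   # OU: mean-reverting drift
    wiener += sigma * dW                  # Wiener: pure noise, no reversion

# OU variance saturates near sigma**2 / (2 * theta) = 0.125,
# while the Wiener variance grows like sigma**2 * T = 2.5.
print(ou.var(), wiener.var())
```

The unbounded growth of the Wiener case is what makes bounded-variance OU forcing the natural model for a physically driven boundary.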
Authors:
Award ID(s):
1855417
Publication Date:
NSF-PAR ID:
10329553
Journal Name:
Nonlinearity
Volume:
34
Issue:
7
Page Range or eLocation-ID:
4764 to 4786
ISSN:
0951-7715
Sponsoring Org:
National Science Foundation
More Like this
  1. We consider particles obeying Langevin dynamics while being at known positions and having known velocities at the two end-points of a given interval. Their motion in phase space can be modeled as an Ornstein–Uhlenbeck process conditioned at the two end-points—a generalization of the Brownian bridge. Using standard ideas from stochastic optimal control we construct a stochastic differential equation (SDE) that generates such a bridge that agrees with the statistics of the conditioned process, as a degenerate diffusion. Higher order linear diffusions are also considered. In general, a time-varying drift is sufficient to modify the prior SDE and meet the end-point conditions. When the drift is obtained by solving a suitable differential Lyapunov equation, the SDE models correctly the statistics of the bridge. These types of models are relevant in controlling and modeling distribution of particles and the interpolation of density functions.
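The simplest instance of the conditioned process described above is the classical Brownian bridge, where the time-varying drift can be written in closed form: dX_t = (b - X_t)/(T - t) dt + dW_t steers every path from a at t = 0 to b at t = T. A minimal sketch (endpoint values and step size are illustrative assumptions; the abstract's degenerate phase-space and higher-order cases require the Lyapunov-equation drift instead):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative endpoint conditions: pin paths at a (t = 0) and b (t = T).
a, b, T = 0.0, 1.0, 1.0
dt = 1e-3
n_steps = int(T / dt)
n_paths = 500

x = np.full(n_paths, a)
for k in range(n_steps - 1):          # stop one step early: drift blows up at t = T
    t = k * dt
    drift = (b - x) / (T - t)         # time-varying drift steering toward b
    x += drift * dt + rng.normal(0.0, np.sqrt(dt), n_paths)

# All paths end (numerically) near the prescribed endpoint b.
print(x.mean(), x.std())
```

Replacing this explicit drift with one obtained from a differential Lyapunov equation is what lets the construction match the full second-order statistics of the conditioned Ornstein–Uhlenbeck process rather than just its endpoints.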
  2. The Bayesian formulation of inverse problems is attractive for three primary reasons: it provides a clear modelling framework; it allows for principled learning of hyperparameters; and it can provide uncertainty quantification. The posterior distribution may in principle be sampled by means of MCMC or SMC methods, but for many problems it is computationally infeasible to do so. In this situation maximum a posteriori (MAP) estimators are often sought. Whilst these are relatively cheap to compute, and have an attractive variational formulation, a key drawback is their lack of invariance under change of parameterization; it is important to study MAP estimators, however, because they provide a link with classical optimization approaches to inverse problems and the Bayesian link may be used to improve upon classical optimization approaches. The lack of invariance of MAP estimators under change of parameterization is a particularly significant issue when hierarchical priors are employed to learn hyperparameters. In this paper we study the effect of the choice of parameterization on MAP estimators when a conditionally Gaussian hierarchical prior distribution is employed. Specifically we consider the centred parameterization, the natural parameterization in which the unknown state is solved for directly, and the noncentred parameterization, which works with a whitened Gaussian as the unknown state variable, and arises naturally when considering dimension-robust MCMC algorithms; MAP estimation is well-defined in the nonparametric setting only for the noncentred parameterization. However, we show that MAP estimates based on the noncentred parameterization are not consistent as estimators of hyperparameters; conversely, we show that limits of finite-dimensional centred MAP estimators are consistent as the dimension tends to infinity.
We also consider empirical Bayesian hyperparameter estimation, show consistency of these estimates, and demonstrate that they are more robust with respect to noise than centred MAP estimates. An underpinning concept throughout is that hyperparameters may only be recovered up to measure equivalence, a well-known phenomenon in the context of the Ornstein–Uhlenbeck process. The applicability of the results is demonstrated concretely with the study of hierarchical Whittle–Matérn and ARD priors.
  3. Simultaneous real-time monitoring of measurement and parameter gross errors poses a great challenge to distribution system state estimation due to usually low measurement redundancy. This paper presents a gross error analysis framework, employing μPMUs to decouple the error analysis of measurements and parameters. When a recent measurement scan from SCADA RTUs and smart meters is available, gross error analysis of measurements is performed as a post-processing step of non-linear DSSE (NLSE). In between scans of SCADA and AMI measurements, a linear state estimator (LSE) using μPMU measurements and linearized SCADA and AMI measurements is used to detect parameter data changes caused by the operation of Volt/Var controls. For every execution of the LSE, the variance of the unsynchronized measurements is updated according to the uncertainty introduced by load dynamics, which are modeled as an Ornstein–Uhlenbeck random process. The update of variance of unsynchronized measurements can avoid the wrong detection of errors and can model the trustworthiness of outdated or obsolete data. When new SCADA and AMI measurements arrive, the LSE provides added redundancy to the NLSE through synthetic measurements. The presented framework was tested on a 13-bus test system. Test results highlight that the LSE and NLSE processes successfully work together to analyze bad data for both measurements and parameters.
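The variance update for stale measurements described above follows directly from the Ornstein–Uhlenbeck transition variance. A hedged sketch (the function name, parameter values, and the exact update rule are illustrative assumptions, not the paper's implementation): a measurement of age t keeps its meter variance scaled by exp(-2*theta*t) and picks up the load process's contribution, so its total variance relaxes toward the stationary value sigma^2/(2*theta):

```python
import numpy as np

def inflated_variance(var0, theta, sigma, age):
    """Variance assigned to a measurement of age `age` when the underlying
    load follows the OU process dX = -theta * X dt + sigma dW.
    (Illustrative model: names and values are assumptions, not from the paper.)"""
    decay = np.exp(-2.0 * theta * age)
    return var0 * decay + (sigma**2 / (2.0 * theta)) * (1.0 - decay)

# A fresh reading keeps its meter variance; a stale one relaxes toward the
# stationary variance sigma**2 / (2 * theta) of the load process.
print(inflated_variance(1e-4, 0.1, 0.05, 0.0))    # fresh reading
print(inflated_variance(1e-4, 0.1, 0.05, 300.0))  # stale reading
```

Down-weighting obsolete data this way (rather than discarding it) is what lets the linear estimator keep using unsynchronized measurements without flagging normal load drift as a gross error.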
  4. ABSTRACT Comparative phylogenetic studies of adaptation are uncommon in biomechanics and physiology. Such studies require data collection from many species, a challenge when this is experimentally intensive. Moreover, researchers struggle to employ the most biologically appropriate phylogenetic tools for identifying adaptive evolution. Here, we detail an established but greatly underutilized phylogenetic comparative framework – the Ornstein–Uhlenbeck process – that explicitly models long-term adaptation. We discuss challenges in implementing and interpreting the model, and we outline potential solutions. We demonstrate use of the model through studying the evolution of thermal physiology in treefrogs. Frogs of the family Hylidae have twice colonized the temperate zone from the tropics, and such colonization likely involved a fundamental change in physiology due to colder and more seasonal temperatures. However, which traits changed to allow colonization is unclear. We measured cold tolerance and characterized thermal performance curves in jumping for 12 species of treefrogs distributed from the Neotropics to temperate North America. We then conducted phylogenetic comparative analyses to examine how tolerances and performance curves evolved and to test whether that evolution was adaptive. We found that tolerance to low temperatures increased with the transition to the temperate zone. In contrast, jumping well at colder temperatures was unrelated to biogeography and thus did not adapt during dispersal. Overall, our study shows how comparative phylogenetic methods can be leveraged in biomechanics and physiology to test the evolutionary drivers of variation among species.
  5. In this paper, we propose a delta-hedging strategy for a long memory stochastic volatility model (LMSV). This is a model in which the volatility is driven by a fractional Ornstein–Uhlenbeck process with long-memory parameter H. We compute the so-called hedging bias, i.e. the difference between the Black–Scholes Delta and the LMSV Delta as a function of H, and we determine when a European-type option is over-hedged or under-hedged.