Abstract: In virtual reality (VR), established perception–action relationships break down because of conflicting and ambiguous sensorimotor inputs, inducing underestimation of walking velocity. Here, we explore the effects of realigning perceptual sensory experiences with physical movements via augmented feedback on the estimation of virtual speed. We hypothesized that providing feedback about speed would lead to concurrent perceptual improvements and that these alterations would persist once the speedometer was removed. Ten young adults used immersive VR to view a virtual hallway translating at a series of fixed speeds. Participants were tasked with matching their walking speed on a self-paced treadmill to the optic flow in the environment. Information about walking speed accuracy was provided during augmented feedback trials via a real-time speedometer. We measured the resulting walking velocity errors as well as kinematic gait parameters. We found that concordance between the virtual environment speed and gait speed was higher when augmented feedback was provided during the trial. Furthermore, we observed retention effects beyond the intervention period, demonstrated by smaller errors in speed perception accuracy and stronger concordance between perceived and actual speeds. Together, these results highlight a potential role for augmented feedback in guiding gait strategies that deviate from predefined internal models of locomotion.
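The abstract does not spell out how speed-matching error was quantified; below is a minimal sketch of one plausible way to score a treadmill speed trace against the fixed optic-flow speed of a trial. The function names, the 100 Hz sampling, and the error definitions are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' method): per-trial speed-matching error
# between a measured treadmill speed trace and the optic-flow target speed.
import numpy as np

def speed_matching_error(walking_speed, target_speed):
    """Return mean signed and mean absolute error (m/s) between the sampled
    gait speed and the fixed optic-flow speed of the virtual hallway."""
    walking_speed = np.asarray(walking_speed, dtype=float)
    signed = walking_speed - target_speed      # > 0 means walking too fast
    return signed.mean(), np.abs(signed).mean()

# Example: a simulated belt-speed trace sampled at 100 Hz for 30 s.
rng = np.random.default_rng(0)
target = 1.2                                   # optic-flow speed (m/s), assumed
belt = 1.2 + 0.1 * rng.normal(size=3000)       # simulated belt-speed samples
mean_signed, mean_abs = speed_matching_error(belt, target)
print(f"mean signed error {mean_signed:+.3f} m/s, mean |error| {mean_abs:.3f} m/s")
```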
An Exploration of Parameter Duality in Statistical Inference
Abstract: Well-known debates among statistical inferential paradigms emerge from conflicting views on the notion of probability. One dominant view understands probability as a representation of sampling variability; another prominent view understands probability as a measure of belief. The former generally describes model parameters as fixed values, in contrast to the latter. We propose that there are actually two versions of a parameter within both paradigms: a fixed unknown value that generated the data and a random version that describes the uncertainty in estimating that unknown value. An inferential approach based on confidence distributions (CDs) deciphers these seemingly conflicting perspectives on parameters and probabilities.
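Assuming CDs here refers to confidence distributions, the sketch below illustrates the duality the abstract describes for the simplest case, a normal mean with known variance: the parameter has a fixed value that generated the data, while the CD, as a function of the sample, carries distributional uncertainty about it. The model and numbers are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch: a confidence distribution (CD) for a normal mean
# with known sigma. theta0 is the fixed unknown that generated the data;
# the CD is a sample-dependent cdf on the parameter space.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
theta0, sigma, n = 2.0, 1.0, 25            # fixed "true" value and known sd (assumed)
x = rng.normal(theta0, sigma, size=n)      # observed sample
xbar, se = x.mean(), sigma / np.sqrt(n)

def cd(theta):
    """CD(theta) = Phi((theta - xbar) / se), a cdf over candidate theta values."""
    return stats.norm.cdf((theta - xbar) / se)

# Any interval carrying CD-probability 0.95 is a 95% confidence interval:
lo, hi = xbar + se * stats.norm.ppf([0.025, 0.975])
print(f"95% CI from the CD: ({lo:.3f}, {hi:.3f}); CD mass = {cd(hi) - cd(lo):.3f}")
```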
- PAR ID: 10510721
- Publisher / Repository: Cambridge University Press
- Date Published:
- Journal Name: Philosophy of Science
- ISSN: 0031-8248
- Page Range / eLocation ID: 1 to 10
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
This paper investigates the use of model-free reinforcement learning to compute the optimal value in two-player stochastic games with parity objectives. In this setting, two decision makers, player Min and player Max, compete on a finite game arena (a stochastic game graph with unknown but fixed probability distributions) to minimize and maximize, respectively, the probability of satisfying a parity objective. We give a reduction from stochastic parity games to a family of stochastic reachability games with a parameter ε, such that the value of a stochastic parity game equals the limit of the values of the corresponding simple stochastic games as the parameter ε tends to 0. Since this reduction does not require knowledge of the probabilistic transition structure of the underlying game arena, model-free reinforcement learning algorithms, such as minimax Q-learning, can be used to approximate the value and mutual best-response strategies for both players in the underlying stochastic parity game. We also present a streamlined reduction from 1 1/2-player parity games to reachability games that avoids recourse to nondeterminism. Finally, we report on the experimental evaluations of both reductions.
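For flavor, here is a rough sketch of tabular, model-free value learning on a tiny turn-based stochastic reachability game, the kind of building block the reduction targets. The toy arena, its transition probabilities, and all hyperparameters are invented; this simplifies to a turn-based arena and is not the authors' implementation.

```python
# Minimal sketch: Q-learning on a turn-based stochastic reachability game.
# Max owns "max" states, Min owns "min" states; the learned value is the
# probability of reaching TARGET under mutual best responses.
import random

random.seed(0)
TARGET, SINK = "goal", "sink"
OWNER = {"s0": "max", "s1": "min"}            # state -> controlling player
P = {                                          # (state, action) -> [(next, prob)]
    ("s0", "a"): [("s1", 1.0)],
    ("s0", "b"): [(TARGET, 0.4), (SINK, 0.6)],
    ("s1", "a"): [(TARGET, 0.7), (SINK, 0.3)],
    ("s1", "b"): [(SINK, 1.0)],
}
ACTIONS = ["a", "b"]
Q = {(s, a): 0.0 for s in OWNER for a in ACTIONS}

def value(s):
    if s == TARGET: return 1.0
    if s == SINK:   return 0.0
    best = max if OWNER[s] == "max" else min
    return best(Q[(s, a)] for a in ACTIONS)

def step(s, a):
    nexts, probs = zip(*P[(s, a)])
    return random.choices(nexts, weights=probs)[0]

alpha, eps = 0.1, 0.2
for _ in range(20000):
    s = "s0"
    while s not in (TARGET, SINK):
        a = random.choice(ACTIONS) if random.random() < eps else \
            (max if OWNER[s] == "max" else min)(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = step(s, a)
        Q[(s, a)] += alpha * (value(s2) - Q[(s, a)])   # undiscounted reachability value
        s = s2

print("estimated value of s0:", round(value("s0"), 3))  # exact value of this toy arena: 0.4
```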
The Bayesian HDI+ROPE decision rule is an increasingly common approach to testing null parameter values. The decision procedure involves a comparison between a posterior highest density interval (HDI) and a pre-specified region of practical equivalence (ROPE). One then accepts or rejects the null parameter value depending on the overlap (or lack thereof) between these intervals. Here we demonstrate, both theoretically and through examples, that this procedure is logically incoherent. Because the HDI is not transformation invariant, the ultimate inferential decision depends on statistically arbitrary and scientifically irrelevant properties of the statistical model. The incoherence arises from a common confusion between probability density and probability proper. The HDI+ROPE procedure relies on characterizing posterior densities as opposed to being based directly on probability. We conclude with recommendations for alternative Bayesian testing procedures that do not exhibit this pathology and provide a "quick fix" in the form of quantile intervals.
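A small simulation makes the invariance point concrete: the HDI of a monotone transformation of a parameter is not the transformation of the HDI, whereas equal-tailed quantile intervals map through consistently. The posterior samples and the exp transformation below are an assumed example, not taken from the paper.

```python
# Illustrative sketch: HDIs are not invariant under monotone reparameterization;
# equal-tailed quantile intervals are (up to Monte Carlo error).
import numpy as np

rng = np.random.default_rng(1)
theta = rng.normal(0.5, 0.6, size=100_000)     # stand-in posterior draws of theta
phi = np.exp(theta)                            # same parameter on another scale

def hdi(samples, mass=0.95):
    """Shortest interval containing `mass` of the sorted samples."""
    x = np.sort(samples)
    k = int(np.floor(mass * len(x)))
    widths = x[k:] - x[:len(x) - k]
    i = np.argmin(widths)
    return x[i], x[i + k]

def quantile_interval(samples, mass=0.95):
    return tuple(np.quantile(samples, [(1 - mass) / 2, 1 - (1 - mass) / 2]))

print("exp(HDI of theta):      ", tuple(np.exp(hdi(theta))))
print("HDI of phi = exp(theta):", hdi(phi))                  # does not match the line above
print("exp(QI of theta):       ", tuple(np.exp(quantile_interval(theta))))
print("QI of phi:              ", quantile_interval(phi))    # matches, up to MC error
```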
Abstract: Quantum state discrimination is a central problem in quantum measurement theory, with applications spanning from quantum communication to computation. Typical measurement paradigms for state discrimination involve a minimum probability of error or unambiguous discrimination with a minimum probability of inconclusive results. Alternatively, an optimal inconclusive measurement, a non-projective measurement, achieves minimal error for a given inconclusive probability. This more general measurement encompasses the standard measurement paradigms for state discrimination and provides a much more powerful tool for quantum information and communication. Here, we experimentally demonstrate the optimal inconclusive measurement for the discrimination of binary coherent states using linear optics and single-photon detection. Our demonstration uses coherent displacement operations based on interference, single-photon detection, and fast feedback to prepare the optimal feedback policy for the optimal non-projective quantum measurement with high fidelity. This generalized measurement allows us to transition among standard measurement paradigms in an optimal way from minimum error to unambiguous measurements for binary coherent states. As a particular case, we use this general measurement to implement the optimal minimum error measurement for phase-coherent states, which is the optimal modulation for communications under the average power constraint. Moreover, we propose a hybrid measurement that leverages the binary optimal inconclusive measurement in conjunction with sequential, unambiguous state elimination to realize higher dimensional inconclusive measurements of coherent states.
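For orientation, the following back-of-the-envelope sketch compares two textbook error probabilities for the binary coherent states |+α⟩ and |−α⟩ with equal priors: the Helstrom (quantum minimum-error) bound and an idealized exact-nulling displacement (Kennedy-type) receiver with perfect photon detection. This is standard theory, not a model of the reported experiment.

```python
# Sketch: minimum-error discrimination of |+a> vs |-a> with equal priors.
import numpy as np

def helstrom_error(alpha):
    """Helstrom bound: overlap |<a|-a>|^2 = exp(-4|a|^2)."""
    overlap_sq = np.exp(-4 * np.abs(alpha) ** 2)
    return 0.5 * (1 - np.sqrt(1 - overlap_sq))

def kennedy_error(alpha):
    """Ideal exact-nulling displacement receiver: displace so |-a> -> vacuum,
    then detect click / no-click. The only error is |+a> displaced to |2a>
    giving no click, with probability exp(-4|a|^2)."""
    return 0.5 * np.exp(-4 * np.abs(alpha) ** 2)

for amp in (0.3, 0.5, 1.0):
    print(f"|alpha| = {amp:.1f}:  Helstrom {helstrom_error(amp):.4f}   "
          f"Kennedy {kennedy_error(amp):.4f}")
```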
Estimating and quantifying uncertainty in unknown system parameters from limited data remains a challenging inverse problem in a variety of real-world applications. While many approaches focus on estimating constant parameters, a subset of these problems includes time-varying parameters with unknown evolution models that often cannot be directly observed. This work develops a systematic particle filtering approach that reframes the idea behind artificial parameter evolution to estimate time-varying parameters in nonstationary inverse problems arising from deterministic dynamical systems. Focusing on systems modeled by ordinary differential equations, we present two particle filter algorithms for time-varying parameter estimation: one that relies on a fixed value for the noise variance of a parameter random walk; another that employs online estimation of the parameter evolution noise variance along with the time-varying parameter of interest. Several computed examples demonstrate the capability of the proposed algorithms in estimating time-varying parameters with different underlying functional forms and different relationships with the system states (i.e. additive vs. multiplicative).
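A minimal sketch of the first variant described, a bootstrap particle filter in which the unknown time-varying parameter is appended to the state and evolved by an artificial random walk with a fixed noise variance. The scalar ODE, noise levels, and step sizes below are assumptions made up for illustration; this is not the paper's algorithm or examples.

```python
# Sketch: bootstrap particle filter over the augmented state (x, theta) for
# dx/dt = -theta(t) * x, with noisy observations of x and a fixed-variance
# random walk as the artificial evolution model for theta.
import numpy as np

rng = np.random.default_rng(2)
dt, T, Np = 0.05, 300, 2000
sig_obs, sig_theta = 0.02, 0.02            # observation noise sd, parameter-walk sd

# Simulate "truth" with a slowly drifting rate theta(t).
t = dt * np.arange(T)
theta_true = 0.3 * np.sin(0.5 * t)
x_true = np.empty(T)
x_true[0] = 2.0
for k in range(1, T):
    x_true[k] = x_true[k - 1] - dt * theta_true[k - 1] * x_true[k - 1]
y = x_true + sig_obs * rng.normal(size=T)  # noisy observations of x

# Particle filter.
xp = 2.0 + 0.05 * rng.normal(size=Np)      # state particles
tp = 0.1 * rng.normal(size=Np)             # parameter particles
theta_est = np.empty(T)
theta_est[0] = tp.mean()
for k in range(1, T):
    xp = xp - dt * tp * xp                              # Euler step of the ODE
    tp = tp + sig_theta * rng.normal(size=Np)           # artificial random walk
    logw = -0.5 * ((y[k] - xp) / sig_obs) ** 2          # Gaussian likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(Np, size=Np, p=w)                  # multinomial resampling
    xp, tp = xp[idx], tp[idx]
    theta_est[k] = tp.mean()

err = np.sqrt(np.mean((theta_est[T // 2:] - theta_true[T // 2:]) ** 2))
print("RMS error of the theta estimate over the second half of the run: %.3f" % err)
```

The second variant in the abstract would additionally estimate sig_theta online rather than fixing it; that extension is omitted here.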