Title: Hitting probabilities for fast stochastic search
Abstract: Many physical phenomena are modeled as stochastic searchers looking for targets. In these models, the probability that a searcher finds a particular target, its so-called hitting probability, is often of considerable interest. In this work we determine hitting probabilities for stochastic search processes conditioned on being faster than a random short time. Such times have been used to model stochastic resetting or stochastic inactivation. These results apply to any search process, diffusive or otherwise, whose unconditional short-time behavior can be adequately approximated, which we characterize for broad classes of stochastic search. We illustrate these results in several examples and show that the conditional hitting probabilities depend predominantly on the relative geodesic lengths between the initial position of the searcher and the targets. Finally, we apply these results to a canonical evidence accumulation model for decision making.
Award ID(s): 2325258, 1944574
PAR ID: 10630057
Author(s) / Creator(s):
Publisher / Repository: IOP
Date Published:
Journal Name: Journal of Physics A: Mathematical and Theoretical
Volume: 57
Issue: 30
ISSN: 1751-8113
Page Range / eLocation ID: 305003
Format(s): Medium: X
Sponsoring Org: National Science Foundation
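As a concrete illustration of the quantity described in the abstract above, the following sketch estimates, by brute-force Monte Carlo rather than by the paper's short-time asymptotics, the hitting probability of a one-dimensional Brownian searcher on [0, L] conditioned on being absorbed before an independent exponential time. The interval, diffusivity, starting point, and rate are illustrative assumptions, not values from the paper.

```python
# A minimal Monte Carlo sketch (not the paper's short-time asymptotics): a
# Brownian searcher on [0, L] starts at x0 with absorbing targets at both ends,
# and we compare the unconditional probability of hitting the target at 0 with
# the probability conditioned on being absorbed before an independent
# exponential "deadline".  All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def hitting_probs(x0=0.3, L=1.0, D=1.0, rate=50.0, n_paths=5000, dt=1e-4):
    hit_left_uncond = 0   # absorbed at 0, no deadline imposed
    hit_left_fast = 0     # absorbed at 0 before the deadline
    hit_any_fast = 0      # absorbed at either end before the deadline
    for _ in range(n_paths):
        deadline = rng.exponential(1.0 / rate)   # Exp(rate) time
        x, t = x0, 0.0
        while 0.0 < x < L:
            x += np.sqrt(2.0 * D * dt) * rng.standard_normal()
            t += dt
        left = x <= 0.0
        hit_left_uncond += left
        if t <= deadline:
            hit_any_fast += 1
            hit_left_fast += left
    p_uncond = hit_left_uncond / n_paths            # ~ 1 - x0/L for Brownian motion
    p_fast = hit_left_fast / max(hit_any_fast, 1)   # conditional hitting probability
    return p_uncond, p_fast

print(hitting_probs())
```

For large rates the conditional probability should concentrate on the target nearer to x0 (geodesic distances 0.3 versus 0.7 in this toy setup), consistent with the geodesic-length dependence described in the abstract.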
More Like this
  1. Abstract A variety of systems in physics, chemistry, biology, and psychology are modeled in terms of diffusing ‘searchers’ looking for ‘targets’. Examples range from gene regulation, to cell sensing, to human decision-making. A commonly studied statistic in these models is the so-called hitting probability for each target, which is the probability that a given single searcher finds that particular target. However, the decisive event in many systems is not the arrival of a given single searcher to a target, but rather the arrival of the fastest searcher to a target out of many searchers. In this paper, we study the probability that the fastest diffusive searcher hits a given target in the many searcher limit, which we call the extreme hitting probability. We first prove an upper bound for the decay of the probability that the searcher finds a target other than the closest target. This upper bound applies in very general settings and depends only on the relative distances to the targets. Furthermore, we find the exact asymptotics of the extreme hitting probabilities in terms of the short-time distribution of when a single searcher hits a target. These results show that the fastest searcher always hits the closest target in the many searcher limit. While this fact is intuitive in light of recent results on the time it takes the fastest searcher to find a target, our results give rigorous, quantitative estimates for the extreme hitting probabilities. We illustrate our results in several examples and numerical solutions. 
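A brute-force simulation sketch of the statement above, under assumed parameters: N independent Brownian searchers start at x0 on [0, L] with absorbing targets at both ends, and each trial records whether the fastest searcher is absorbed at the closer target. This is only an illustration, not the paper's asymptotic analysis.

```python
# Extreme hitting probability by direct simulation: does the fastest of N
# independent Brownian searchers hit the closer target (here x = 0, since
# x0 < L / 2)?  Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def fastest_hits_closer(x0=0.3, L=1.0, D=1.0, N=10, dt=1e-4, n_trials=200):
    wins = 0
    for _ in range(n_trials):
        x = np.full(N, x0)
        while True:
            x += np.sqrt(2.0 * D * dt) * rng.standard_normal(N)
            hit = (x <= 0.0) | (x >= L)
            if hit.any():
                winner = np.argmax(hit)        # first absorbed searcher (ties broken arbitrarily)
                wins += int(x[winner] <= 0.0)  # did it hit the closer target?
                break
    return wins / n_trials

for N in (1, 10, 100):
    print(N, fastest_hits_closer(N=N))
```

As N grows, the printed fraction should approach 1, in line with the claim that the fastest searcher hits the closest target in the many-searcher limit.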
  2. A reversible diffusion process is initialized at position x0 and run until it first hits any of several targets. What is the probability that it terminates at a particular target? We propose a computationally efficient approach for estimating this probability, focused on those situations in which it takes a long time to hit any target. In these cases, direct simulation of the hitting probabilities becomes prohibitively expensive. On the other hand, if the timescales are sufficiently long, then the system will essentially “forget” its initial condition before it encounters a target. In these cases the hitting probabilities can be accurately approximated using only local simulations around each target, obviating the need for direct simulations. In empirical tests, we find that these local estimates can be computed in the same time it would take to compute a single direct simulation, but that they achieve an accuracy that would require thousands of direct simulation runs. 
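For orientation, the sketch below implements only the direct-simulation baseline that such local estimators are compared against (the local method itself is not reproduced): Euler-Maruyama paths of an overdamped reversible diffusion are run until they enter one of several target balls. The double-well potential, target locations, and parameters are illustrative assumptions.

```python
# Direct-simulation baseline for hitting probabilities of a reversible
# (overdamped) diffusion dX = -grad V(X) dt + sqrt(2 / beta) dW: run each path
# until it enters one of several target balls and count which target it hits.
import numpy as np

rng = np.random.default_rng(2)

def grad_V(x):
    # V(x, y) = (x^2 - 1)^2 + y^2 has wells near (-1, 0) and (1, 0)
    return np.array([4.0 * x[0] * (x[0]**2 - 1.0), 2.0 * x[1]])

targets = np.array([[-1.0, 0.0], [1.0, 0.0]])   # target-ball centers (assumed)
radius = 0.2

def hitting_probabilities(x0, beta=3.0, dt=1e-3, n_paths=500):
    counts = np.zeros(len(targets))
    for _ in range(n_paths):
        x = np.array(x0, dtype=float)
        while True:
            x += -grad_V(x) * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal(2)
            d = np.linalg.norm(targets - x, axis=1)
            if (d < radius).any():
                counts[np.argmin(d)] += 1     # record which target was hit first
                break
    return counts / n_paths

print(hitting_probabilities([0.2, 0.5]))
```

In a genuinely metastable regime each path takes a very long time to reach any target, which is what makes this baseline prohibitively expensive and motivates the local approach described above.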
  3. We consider a certain class of nonlinear maps that preserve the probability simplex, i.e., stochastic maps, that are inspired by the DeGroot-Friedkin model of belief/opinion propagation over influence networks. The corresponding dynamical models describe the evolution of the probability distribution of interacting species. Such models, where the probability transition mechanism depends nonlinearly on the current state, are often referred to as nonlinear Markov chains. In this paper we develop stability results and study the behavior of representative opinion models. The stability certificates are based on the contractivity of the nonlinear evolution in the l1-metric. We apply the theory to two types of opinion models, where the adaptation of the transition probabilities to the current state is exponential and linear, respectively; both of these can display a wide range of behaviors. We discuss continuous-time and other generalizations.
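A toy sketch, not one of the models analyzed in the paper: a nonlinear Markov chain x_{k+1} = x_k P(x_k) in which the row-stochastic kernel adapts exponentially to the current distribution, together with the l1 distance between two trajectories as a numerical stand-in for the contraction-based stability certificate. The weight matrix and adaptation strength are assumed.

```python
# A toy nonlinear Markov chain: x_{k+1} = x_k P(x_k), where the row-stochastic
# kernel P(x) reweights a fixed positive influence matrix W exponentially in
# the current distribution.  The l1 distance between two trajectories is
# printed as a numerical contraction check.
import numpy as np

rng = np.random.default_rng(3)
n = 4
W = rng.random((n, n)) + 0.1     # assumed positive influence weights
c = 0.5                          # assumed strength of the exponential adaptation

def P(x):
    K = W * np.exp(c * x)[None, :]            # column j reweighted by exp(c * x_j)
    return K / K.sum(axis=1, keepdims=True)   # renormalize rows to be stochastic

def step(x):
    return x @ P(x)                           # one step of the nonlinear chain

x = np.array([0.7, 0.1, 0.1, 0.1])            # two different initial distributions
y = np.array([0.1, 0.1, 0.1, 0.7])
for k in range(10):
    print(k, np.abs(x - y).sum())             # l1 distance should shrink geometrically
    x, y = step(x), step(y)
```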
  4. Quantifying the impact of parametric and model-form uncertainty on the predictions of stochastic models is a key challenge in many applications. Previous work has shown that the relative entropy rate is an effective tool for deriving path-space uncertainty quantification (UQ) bounds on ergodic averages. In this work we identify appropriate information-theoretic objects for a wider range of quantities of interest on path-space, such as hitting times and exponentially discounted observables, and develop the corresponding UQ bounds. In addition, our method yields tighter UQ bounds, even in cases where previous relative-entropy-based methods also apply, e.g., for ergodic averages. We illustrate these results with examples from option pricing, non-reversible diffusion processes, stochastic control, semi-Markov queueing models, and expectations and distributions of hitting times.
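One information-theoretic inequality behind this type of bound is the Gibbs (Donsker-Varadhan) bound E_Q[f] - E_P[f] <= inf_{c>0} (1/c) ( log E_P[exp(c (f - E_P[f]))] + R(Q||P) ). The sketch below checks it numerically for two one-dimensional Gaussians, where the relative entropy R(Q||P) is known in closed form; the path-space quantities treated in the paper (hitting times, discounted observables) are not reproduced, and the observable is an arbitrary choice.

```python
# Numerical check of the Gibbs / Donsker-Varadhan bound
#   E_Q[f] - E_P[f] <= inf_{c > 0} (1/c) * ( log E_P[exp(c (f - E_P[f]))] + R(Q||P) )
# for two 1-D Gaussians, where R(Q||P) is known exactly.  The observable f and
# all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)

mu_p, mu_q, sigma = 0.0, 0.4, 1.0
R = (mu_q - mu_p)**2 / (2.0 * sigma**2)       # KL(Q || P), equal-variance Gaussians

f = lambda x: np.sin(x) + 0.5 * x             # quantity of interest (assumed)

xs_p = rng.normal(mu_p, sigma, 200_000)       # samples from the baseline model P
xs_q = rng.normal(mu_q, sigma, 200_000)       # samples from the alternative model Q

Ep_f = f(xs_p).mean()
true_gap = f(xs_q).mean() - Ep_f              # model-form bias to be bounded

def uq_bound(c):
    return (np.log(np.mean(np.exp(c * (f(xs_p) - Ep_f)))) + R) / c

bound = min(uq_bound(c) for c in np.linspace(0.05, 5.0, 200))
print(f"true gap {true_gap:.4f} <= bound {bound:.4f}")
```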
  5. Cover times quantify the speed of exhaustive search. In this work, we approximate the moments of cover times of a wide range of stochastic search processes in d-dimensional continuous space and on an arbitrary discrete network under frequent stochastic resetting. These approximations apply to a large class of resetting time distributions and search processes including diffusion, run-and-tumble particles, and Markov jump processes. We illustrate these results in several examples; in the case of diffusive search, we show that the errors of our approximations vanish exponentially fast. Finally, we derive a criterion for when endowing a discrete state search process with minimal stochastic resetting reduces the mean cover time. 
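A simulation sketch under assumed choices (a ten-node path graph, resetting to the starting endpoint; the paper's moment approximations and criterion are not reproduced): the mean cover time of a simple random walk with and without geometric resetting.

```python
# Mean cover time of a simple random walk on an assumed ten-node path graph,
# with and without geometric resetting to the starting endpoint.
import numpy as np

rng = np.random.default_rng(5)

n = 10
neighbors = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}

def mean_cover_time(p_reset=0.0, start=0, n_trials=2000):
    times = []
    for _ in range(n_trials):
        visited = {start}
        x, t = start, 0
        while len(visited) < n:
            t += 1
            if rng.random() < p_reset:
                x = start                      # stochastic reset to the start node
            else:
                x = rng.choice(neighbors[x])   # unbiased nearest-neighbor step
            visited.add(x)
        times.append(t)
    return np.mean(times)

print("no resetting  :", mean_cover_time(0.0))
print("p_reset = 0.02:", mean_cover_time(0.02))
```

On this particular geometry, resetting to an endpoint tends to slow coverage down, which is exactly the kind of case a criterion for beneficial resetting must distinguish from settings where minimal resetting helps.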