Title: Extreme hitting probabilities for diffusion
Abstract: A variety of systems in physics, chemistry, biology, and psychology are modeled in terms of diffusing ‘searchers’ looking for ‘targets’. Examples range from gene regulation, to cell sensing, to human decision-making. A commonly studied statistic in these models is the so-called hitting probability for each target, which is the probability that a given single searcher finds that particular target. However, the decisive event in many systems is not the arrival of a given single searcher to a target, but rather the arrival of the fastest searcher to a target out of many searchers. In this paper, we study the probability that the fastest diffusive searcher hits a given target in the many searcher limit, which we call the extreme hitting probability. We first prove an upper bound for the decay of the probability that the searcher finds a target other than the closest target. This upper bound applies in very general settings and depends only on the relative distances to the targets. Furthermore, we find the exact asymptotics of the extreme hitting probabilities in terms of the short-time distribution of when a single searcher hits a target. These results show that the fastest searcher always hits the closest target in the many searcher limit. While this fact is intuitive in light of recent results on the time it takes the fastest searcher to find a target, our results give rigorous, quantitative estimates for the extreme hitting probabilities. We illustrate our results in several examples and numerical solutions.
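To make the quantity concrete, the following sketch estimates an extreme hitting probability by brute-force Monte Carlo for one-dimensional Brownian searchers on an interval with absorbing targets at both ends. All parameter choices (interval length L, starting point x0, diffusivity D, time step dt, numbers of searchers and trials) are illustrative assumptions, not values from the paper, and the paper's asymptotic formulas are not reproduced here.

```python
# Minimal Monte Carlo sketch (illustrative assumptions, not the paper's setup):
# N independent Brownian searchers start at x0 on the interval (0, L) with
# absorbing targets at 0 and L.  Each trial records which target the fastest
# searcher hits; the empirical fraction estimates the extreme hitting probability.
import numpy as np

def fastest_target(N, x0, L, D, dt, rng):
    """Simulate N searchers in parallel; return 0 (left) or 1 (right) for the
    target hit by the searcher with the smallest hitting time."""
    x = np.full(N, x0)
    t_hit = np.full(N, np.inf)
    target = np.zeros(N, dtype=int)
    active = np.ones(N, dtype=bool)
    sigma = np.sqrt(2.0 * D * dt)
    t = 0.0
    while active.any():
        t += dt
        x[active] += sigma * rng.standard_normal(active.sum())
        exited = active & ((x <= 0.0) | (x >= L))
        t_hit[exited] = t
        target[exited] = (x[exited] >= L).astype(int)
        active &= ~exited
    return target[np.argmin(t_hit)]

def extreme_hitting_prob(N, trials=100, x0=0.3, L=1.0, D=1.0, dt=1e-4, seed=0):
    """Estimate the probability that the fastest of N searchers hits the left
    target, which is the closer one when x0 < L / 2."""
    rng = np.random.default_rng(seed)
    wins = sum(fastest_target(N, x0, L, D, dt, rng) == 0 for _ in range(trials))
    return wins / trials

if __name__ == "__main__":
    # As N grows the estimate should approach 1: the fastest searcher hits the
    # closest target, consistent with the asymptotics described in the abstract.
    for N in (1, 10, 100):
        print(N, extreme_hitting_prob(N))
```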
Award ID(s):
1944574; 1814832
PAR ID:
10421754
Author(s) / Creator(s):
Date Published:
Journal Name:
Journal of Physics A: Mathematical and Theoretical
Volume:
55
Issue:
34
ISSN:
1751-8113
Page Range / eLocation ID:
345002
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: Many physical phenomena are modeled as stochastic searchers looking for targets. In these models, the probability that a searcher finds a particular target, its so-called hitting probability, is often of considerable interest. In this work we determine hitting probabilities for stochastic search processes conditioned on being faster than a random short time. Such times have been used to model stochastic resetting or stochastic inactivation. These results apply to any search process, diffusive or otherwise, whose unconditional short-time behavior can be adequately approximated, which we characterize for broad classes of stochastic search. We illustrate these results in several examples and show that the conditional hitting probabilities depend predominantly on the relative geodesic lengths between the initial position of the searcher and the targets. Finally, we apply these results to a canonical evidence accumulation model for decision making.
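As a minimal illustration of conditioning on a random short time, the sketch below estimates the probability that a one-dimensional Brownian searcher hits the left target given that its hitting time beats an independent exponential 'inactivation' clock; all parameters (x0, L, D, dt, and the rates r) are assumptions made for the sketch, not quantities from the paper.

```python
# Illustrative sketch: hitting probabilities conditioned on being faster than an
# independent exponential time S ~ Exp(r), a simple model of inactivation or
# resetting.  Parameters (x0, L, D, dt, r) are assumptions, not paper values.
import numpy as np

def hit_time_and_target(x0, L, D, dt, rng):
    """Euler-Maruyama walk of one Brownian searcher until it exits (0, L)."""
    x, t, sigma = x0, 0.0, np.sqrt(2.0 * D * dt)
    while 0.0 < x < L:
        x += sigma * rng.standard_normal()
        t += dt
    return t, (0 if x <= 0.0 else 1)        # 0 = left target, 1 = right target

def conditional_hit_prob_left(r, samples=2000, x0=0.3, L=1.0, D=1.0,
                              dt=1e-3, seed=1):
    """Estimate P(hit left target | hitting time < an Exp(r) inactivation time)."""
    rng = np.random.default_rng(seed)
    kept = kept_left = 0
    for _ in range(samples):
        t, target = hit_time_and_target(x0, L, D, dt, rng)
        if t < rng.exponential(1.0 / r):    # keep only searches that beat the clock
            kept += 1
            kept_left += (target == 0)
    return kept_left / max(kept, 1)

# Larger rates r mean shorter conditioning times, so the conditional probability
# of the nearer (left) target should rise toward 1.
for r in (1.0, 10.0, 100.0):
    print(r, conditional_hit_prob_left(r))
```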
  2. A reversible diffusion process is initialized at position x0 and run until it first hits any of several targets. What is the probability that it terminates at a particular target? We propose a computationally efficient approach for estimating this probability, focused on those situations in which it takes a long time to hit any target. In these cases, direct simulation of the hitting probabilities becomes prohibitively expensive. On the other hand, if the timescales are sufficiently long, then the system will essentially “forget” its initial condition before it encounters a target. In these cases the hitting probabilities can be accurately approximated using only local simulations around each target, obviating the need for direct simulations. In empirical tests, we find that these local estimates can be computed in the same time it would take to compute a single direct simulation, but that they achieve an accuracy that would require thousands of direct simulation runs. 
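The authors' local estimator is not reproduced here; the sketch below shows only the direct-simulation baseline it is compared against, for an overdamped Langevin diffusion in a double-well potential with two small target intervals. The potential, target radii, temperature, and all other parameters are illustrative assumptions.

```python
# Direct-simulation baseline only (not the authors' local estimator): overdamped
# Langevin dynamics  dX = -V'(X) dt + sqrt(2 / beta) dW  in the double-well
# potential V(x) = (x^2 - 1)^2, run until X first enters a small interval around
# one of the two wells.  All parameters are illustrative assumptions.
import numpy as np

def grad_V(x):
    """Gradient of the double-well potential V(x) = (x^2 - 1)^2."""
    return 4.0 * x * (x * x - 1.0)

def first_target(x0, beta=3.0, dt=1e-3, radius=0.1, rng=None):
    """Return 0 if the diffusion first enters the target around -1, else 1."""
    rng = rng if rng is not None else np.random.default_rng()
    x, noise = x0, np.sqrt(2.0 * dt / beta)
    while True:
        if abs(x + 1.0) < radius:
            return 0
        if abs(x - 1.0) < radius:
            return 1
        x += -grad_V(x) * dt + noise * rng.standard_normal()

def hitting_probabilities(x0=0.2, runs=500, seed=2):
    """Crude direct estimate of the two hitting probabilities from x0.  In the
    strongly metastable regime (large beta, x0 deep inside a well) this becomes
    prohibitively slow, which is the regime the local method targets."""
    rng = np.random.default_rng(seed)
    right = sum(first_target(x0, rng=rng) for _ in range(runs))
    return 1.0 - right / runs, right / runs

print(hitting_probabilities())   # (P[left well hit first], P[right well hit first])
```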
  3. The ever-increasing complexity of numerical models and associated computational demands have challenged classical reliability analysis methods. Surrogate model-based reliability analysis techniques, and in particular those using kriging meta-models, have gained considerable attention recently for their ability to achieve high accuracy and computational efficiency. However, existing stopping criteria, which are used to terminate the training of surrogate models, do not directly relate to the error in estimated failure probabilities. This limitation can lead to high computational demands because of unnecessary calls to costly performance functions (e.g., involving finite element models) or potentially inaccurate estimates of failure probability due to premature termination of the training process. Here, we propose the error-based stopping criterion (ESC) to address these limitations. First, it is shown that the total number of wrong-sign estimates of the performance function for candidate design samples by kriging, S, follows a Poisson binomial distribution. This finding is subsequently used to estimate the lower and upper bounds of S for a given confidence level for the sets of candidate design samples classified by kriging as safe and unsafe. An upper bound on the error of the estimated failure probability is then derived from the probabilistic properties of the Poisson binomial distribution. The proposed upper bound is implemented in the kriging-based reliability analysis method as the stopping criterion. The efficiency and robustness of ESC are investigated here using five benchmark reliability analysis problems. Results indicate that the proposed method achieves the set accuracy target and substantially reduces the computational demand, in some cases by over 50%.
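A simplified numerical illustration of the key ingredient, using assumed (synthetic) kriging predictions rather than the paper's ESC formula: if the kriging predictive mean and standard deviation at candidate sample i are mu_i and s_i, the wrong-sign probability is Phi(-|mu_i|/s_i), and the number of wrong-sign classifications follows a Poisson binomial distribution whose upper quantile can be estimated by Monte Carlo.

```python
# Simplified illustration (not the paper's ESC formula): with kriging predictive
# mean mu_i and standard deviation s_i at candidate sample i, the wrong-sign
# probability is p_i = Phi(-|mu_i| / s_i), and the number of wrong-sign
# classifications follows a Poisson binomial distribution over the p_i.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Synthetic kriging predictions at 2,000 candidate samples (assumed values).
mu = rng.normal(1.5, 1.0, size=2_000)       # predicted performance function
s = rng.uniform(0.05, 0.5, size=2_000)      # kriging predictive standard deviation

p_wrong = norm.cdf(-np.abs(mu) / s)         # per-sample misclassification prob.

def poisson_binomial_upper(p, level=0.95, draws=5_000, rng=rng):
    """Monte Carlo upper quantile of a Poisson binomial count with probs p."""
    counts = (rng.random((draws, p.size)) < p).sum(axis=1)
    return int(np.quantile(counts, level))

n_unsafe = int((mu < 0).sum())              # samples kriging classifies as failed
S_upper = poisson_binomial_upper(p_wrong)   # 95% bound on the wrong-sign count

p_f_hat = n_unsafe / mu.size
# Each misclassified sample shifts the failure count by one, so the estimate can
# be off by at most S_upper / n at this confidence level (a crude bound; the
# paper's ESC derives a sharper, set-wise bound).
print(f"estimated P_f = {p_f_hat:.4f}, error bound <= {S_upper / mu.size:.4f}")
```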
  4. The contextual bandit is a classic multi-armed bandit setting in which side information (i.e., context) is available before arm selection. A standard assumption is that exact contexts are perfectly known prior to arm selection and only a single feedback signal is returned. In this work, we focus on multi-feedback bandit learning with probabilistic contexts, where a bundle of contexts is revealed to the agent along with their corresponding probabilities at the beginning of each round. This models scenarios in which contexts are drawn from the probability output of a neural network and the reward function is jointly determined by multiple feedback signals. We propose a kernelized learning algorithm based on the upper confidence bound to choose the optimal arm in a reproducing kernel Hilbert space for each context bundle. Moreover, we theoretically establish an upper bound on the cumulative regret with respect to an oracle that knows the optimal arm given probabilistic contexts, and show that the bound grows sublinearly with time. Our simulation on machine learning model recommendation further validates the sub-linearity of our cumulative regret and demonstrates that our algorithm outperforms the approach that selects arms based on the most probable context.
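A much-simplified sketch in the spirit of that approach, not the paper's algorithm or its regret analysis: each arm keeps a Gaussian-process-style kernel model of reward versus context, and at every round the arm with the largest probability-weighted upper confidence bound over the revealed context bundle is selected. The RBF kernel, the exploration weight beta, and the toy usage example are all assumptions.

```python
# Much-simplified sketch of UCB over probabilistic context bundles (not the
# paper's kernelized algorithm): each arm keeps a GP-style kernel model of
# reward versus context and is scored by its UCB averaged over the bundle.
import numpy as np

def rbf(a, b, ell=0.5):
    """RBF kernel matrix between the rows of a and the rows of b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

class ArmModel:
    def __init__(self, noise=0.1):
        self.X, self.y, self.noise = [], [], noise

    def update(self, context, reward):
        self.X.append(context); self.y.append(reward)

    def ucb(self, contexts, beta=2.0):
        """Posterior mean + beta * std at each context (zero-mean GP prior)."""
        if not self.X:
            return np.full(len(contexts), np.inf)   # unplayed arm: force exploration
        X, y, C = np.array(self.X), np.array(self.y), np.asarray(contexts)
        K = rbf(X, X) + self.noise ** 2 * np.eye(len(X))
        Ks = rbf(C, X)
        mean = Ks @ np.linalg.solve(K, y)
        var = 1.0 - np.einsum('ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)
        return mean + beta * np.sqrt(np.clip(var, 0.0, None))

def choose_arm(arms, contexts, probs):
    """Pick the arm with the largest probability-weighted UCB over the bundle."""
    return int(np.argmax([np.dot(probs, arm.ucb(contexts)) for arm in arms]))

# Example round (toy numbers): 3 arms, a bundle of two 2-D contexts revealed
# together with their probabilities; the realized context's reward is fed back.
arms = [ArmModel() for _ in range(3)]
contexts = np.array([[0.1, 0.9], [0.8, 0.2]])
probs = np.array([0.7, 0.3])
a = choose_arm(arms, contexts, probs)
arms[a].update(contexts[0], reward=1.0)
```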
  5. Banerjee, Arindam; Fukumizu, Kenji (Eds.)
    Couplings play a central role in the analysis of Markov chain Monte Carlo algorithms and appear increasingly often in the algorithms themselves, e.g. in convergence diagnostics, parallelization, and variance reduction techniques. Existing couplings of the Metropolis-Hastings algorithm handle the proposal and acceptance steps separately and fall short of the upper bound on one-step meeting probabilities given by the coupling inequality. This paper introduces maximal couplings which achieve this bound while retaining the practical advantages of current methods. We consider the properties of these couplings and examine their behavior on a selection of numerical examples. 
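For reference, the generic maximal coupling of two densities, the construction that attains the coupling-inequality bound mentioned above, can be sampled with a short rejection step. The sketch below is that textbook construction, not the paper's Metropolis-Hastings-specific couplings, and the Gaussian example is an assumed illustration.

```python
# Generic maximal coupling of two densities p and q (a textbook building block,
# not the paper's Metropolis-Hastings-specific construction).  Returns (X, Y)
# with X ~ p, Y ~ q, and P(X == Y) = 1 - TV(p, q), the coupling-inequality bound.
import numpy as np

def maximal_coupling(sample_p, logpdf_p, sample_q, logpdf_q, rng):
    X = sample_p(rng)
    if np.log(rng.uniform()) + logpdf_p(X) <= logpdf_q(X):
        return X, X                      # the two draws meet
    while True:                          # otherwise draw Y from the residual of q
        Y = sample_q(rng)
        if np.log(rng.uniform()) + logpdf_q(Y) > logpdf_p(Y):
            return X, Y

# Assumed illustration with two unit-variance Gaussians: the meeting frequency
# should approach 1 - TV(N(0,1), N(1,1)) = 2 * Phi(-1/2) ≈ 0.617.
rng = np.random.default_rng(4)
logpdf = lambda m: (lambda x: -0.5 * (x - m) ** 2)   # shared constants cancel
sample = lambda m: (lambda rng: rng.normal(m, 1.0))
pairs = [maximal_coupling(sample(0.0), logpdf(0.0), sample(1.0), logpdf(1.0), rng)
         for _ in range(20_000)]
print("P(X == Y) ≈", np.mean([x == y for x, y in pairs]))
```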