

Title: Extreme hitting probabilities for diffusion
Abstract A variety of systems in physics, chemistry, biology, and psychology are modeled in terms of diffusing ‘searchers’ looking for ‘targets’. Examples range from gene regulation, to cell sensing, to human decision-making. A commonly studied statistic in these models is the so-called hitting probability for each target, which is the probability that a given single searcher finds that particular target. However, the decisive event in many systems is not the arrival of a given single searcher to a target, but rather the arrival of the fastest searcher to a target out of many searchers. In this paper, we study the probability that the fastest diffusive searcher hits a given target in the many searcher limit, which we call the extreme hitting probability. We first prove an upper bound for the decay of the probability that the searcher finds a target other than the closest target. This upper bound applies in very general settings and depends only on the relative distances to the targets. Furthermore, we find the exact asymptotics of the extreme hitting probabilities in terms of the short-time distribution of when a single searcher hits a target. These results show that the fastest searcher always hits the closest target in the many searcher limit. While this fact is intuitive in light of recent results on the time it takes the fastest searcher to find a target, our results give rigorous, quantitative estimates for the extreme hitting probabilities. We illustrate our results in several examples and numerical solutions.
Award ID(s): 1944574, 1814832
NSF-PAR ID: 10421754
Journal Name: Journal of Physics A: Mathematical and Theoretical
Volume: 55
Issue: 34
ISSN: 1751-8113
Page Range / eLocation ID: 345002
Sponsoring Org: National Science Foundation
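As a quick illustration of the abstract above, the following Python sketch places N independent Brownian searchers on an interval with absorbing targets at both ends and estimates the extreme hitting probability, i.e. the probability that the fastest of the N searchers hits the closer target. The geometry, diffusivity, step size, and trial counts are illustrative assumptions, not values from the paper.

```python
# Monte Carlo sketch: N independent Brownian searchers on [0, L] with
# absorbing targets at x = 0 and x = L, started at x0. All parameters
# below are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

def first_exit(x0, L, D=1.0, dt=1e-3, max_steps=10**6):
    """Euler-Maruyama walk until the path leaves (0, L); returns (time, target)."""
    x = x0
    for step in range(1, max_steps + 1):
        x += np.sqrt(2.0 * D * dt) * rng.standard_normal()
        if x <= 0.0:
            return step * dt, 0          # hit the target at x = 0
        if x >= L:
            return step * dt, 1          # hit the target at x = L
    return np.inf, -1                    # safety cap (should essentially never trigger)

def extreme_hit_prob(N, x0=0.3, L=1.0, trials=200):
    """Fraction of trials in which the fastest of N searchers hits the target at 0."""
    wins = 0
    for _ in range(trials):
        results = [first_exit(x0, L) for _ in range(N)]
        _, target = min(results)         # the fastest searcher decides the outcome
        wins += (target == 0)
    return wins / trials

if __name__ == "__main__":
    # Single-searcher hitting probability of the target at 0 is (L - x0)/L = 0.7.
    print("single searcher, exact :", 0.7)
    for N in (1, 4, 16, 64):
        print(f"N = {N:3d} searchers, MC :", extreme_hit_prob(N))
```

As N grows, the Monte Carlo estimate should approach 1, since x0 = 0.3 is closer to the target at 0, in line with the result that the fastest searcher hits the closest target in the many-searcher limit.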
More Like this
  1. A reversible diffusion process is initialized at position x0 and run until it first hits any of several targets. What is the probability that it terminates at a particular target? We propose a computationally efficient approach for estimating this probability, focused on those situations in which it takes a long time to hit any target. In these cases, direct simulation of the hitting probabilities becomes prohibitively expensive. On the other hand, if the timescales are sufficiently long, then the system will essentially “forget” its initial condition before it encounters a target. In these cases the hitting probabilities can be accurately approximated using only local simulations around each target, obviating the need for direct simulations. In empirical tests, we find that these local estimates can be computed in the same time it would take to compute a single direct simulation, but that they achieve an accuracy that would require thousands of direct simulation runs. 
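For reference, here is a hedged sketch of the direct-simulation baseline mentioned above (the paper's local-estimate method is not reproduced): a reversible overdamped Langevin diffusion dX = -grad V(X) dt + sqrt(2/beta) dW is run from x0 until it enters one of several small target balls, and hitting probabilities are estimated as absorption frequencies. The potential, target locations, and parameters are illustrative assumptions.

```python
# Hedged sketch of the direct-simulation baseline for hitting probabilities of
# a reversible diffusion with several targets. Potential, targets, and
# parameters are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(1)

def grad_V(x):
    return x                                  # V(x) = |x|^2 / 2, a simple confining potential

targets = [np.array([1.5, 0.0]), np.array([-1.5, 0.0]), np.array([0.0, 1.5])]
radius = 0.2                                  # radius of each target ball

def run_once(x0, beta=2.0, dt=5e-3, max_steps=10**6):
    x = np.array(x0, dtype=float)
    for _ in range(max_steps):
        x += -grad_V(x) * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal(2)
        for k, c in enumerate(targets):
            if np.linalg.norm(x - c) < radius:
                return k                      # absorbed at target k
    return -1                                 # no target hit within the step budget

def hitting_probs(x0, runs=200):
    hits = np.zeros(len(targets))
    for _ in range(runs):
        k = run_once(x0)
        if k >= 0:
            hits[k] += 1
    return hits / runs

if __name__ == "__main__":
    print("estimated hitting probabilities from x0 = (0, -1):", hitting_probs([0.0, -1.0]))
```

Even in this toy setting the runtime is dominated by the long excursions before absorption, which is exactly the regime in which the local estimates described above pay off.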
  2. Dynamic network topology can pose important challenges to communication and control protocols in networks of autonomous vehicles. For instance, maintaining connectivity is a key challenge in unmanned aerial vehicle (UAV) networks. However, tracking and computational resources of the observer module might not be sufficient for constant monitoring of all surrounding nodes in large-scale networks. In this paper, we propose an optimal measurement policy for network topology monitoring under constrained resources. To this end, we formulate the localization of multiple objects in terms of linear networked systems and solve it using Kalman filtering with intermittent observations. The proposed policy includes two sequential steps. We first find optimal measurement attempt probabilities for each target using numerical optimization methods to allocate the limited resources among the targets. The optimal resource allocation follows a waterfall-like solution that assigns more resources to targets with lower measurement success probability. This provides a 10% to 60% gain in prediction accuracy. The second step is finding optimal on-off patterns for measurement attempts for each target over time. We show that a regular measurement pattern that evenly distributes resources over time outperforms the two extreme cases of using all measurement resources either at the beginning or at the end of the measurement cycle. Our proof is based on characterizing the fixed-point solution of the error covariance matrix for regular patterns. Extensive simulation results confirm the optimality of the most alternating pattern with up to 10-fold prediction improvement for different scenarios. These two guidelines define a general policy for target tracking under constrained resources, with applications to network topology prediction of autonomous systems.
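The following minimal sketch illustrates the error-covariance recursion behind Kalman filtering with intermittent observations, using illustrative system matrices rather than the paper's model: a binary sequence gamma_k marks which steps receive a successful measurement, and the time-averaged error covariance trace is compared for a regular on-off pattern versus front-loaded and back-loaded schedules.

```python
# Sketch of the error-covariance recursion for Kalman filtering with
# intermittent observations. gamma[k] = 1 means a measurement of the target is
# attempted and succeeds at step k. System matrices and noise levels are
# illustrative assumptions, not the paper's model.
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])    # constant-velocity target model
C = np.array([[1.0, 0.0]])                # position-only measurement
Q = 0.1 * np.eye(2)                       # process noise covariance
R = np.array([[1.0]])                     # measurement noise covariance

def avg_error_trace(gammas, P0=10.0 * np.eye(2)):
    """Time-averaged trace of the error covariance over the measurement cycle."""
    P = P0.copy()
    traces = []
    for g in gammas:
        P = A @ P @ A.T + Q                               # time update
        if g:                                             # measurement update, if any
            K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
            P = (np.eye(2) - K @ C) @ P
        traces.append(np.trace(P))
    return float(np.mean(traces))

horizon, budget = 20, 10                                   # 10 measurements in 20 steps
patterns = {
    "regular (on-off)": [1, 0] * (horizon // 2),
    "front-loaded":     [1] * budget + [0] * (horizon - budget),
    "back-loaded":      [0] * (horizon - budget) + [1] * budget,
}
for name, g in patterns.items():
    print(f"{name:17s} average error trace: {avg_error_trace(g):.3f}")
```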
  3. Summary

    The upper bounds on the coverage probabilities of the confidence regions based on blockwise empirical likelihood and non-standard expansive empirical likelihood methods for time series data are investigated via studying the probability of violating the convex hull constraint. The large sample bounds are derived on the basis of the pivotal limit of the blockwise empirical log-likelihood ratio obtained under fixed-b asymptotics, which has recently been shown to provide a more accurate approximation to the finite sample distribution than the conventional χ²-approximation. Our theoretical and numerical findings suggest that both the finite sample and the large sample upper bounds for coverage probabilities are strictly less than 1 and the blockwise empirical likelihood confidence region can exhibit serious undercoverage when the dimension of moment conditions is moderate or large, the time series dependence is strongly positive, or the block size is large relative to the sample size. A similar finite sample coverage problem occurs for non-standard expansive empirical likelihood. To alleviate the coverage bound problem, we propose to penalize both empirical likelihood methods by relaxing the convex hull constraint. Numerical simulations and data illustrations demonstrate the effectiveness of our proposed remedies in terms of delivering confidence sets with more accurate coverage. Some technical details and additional simulation results are included in the on-line supplemental material.
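A hedged illustration of the convex hull constraint discussed above (not the authors' implementation): for a scalar mean, the blockwise empirical likelihood ratio exists only when the block means of the centred observations do not all share one sign, and the probability of violating this constraint can be estimated by simulation. The AR(1) model, sample size, and block sizes below are assumptions chosen for illustration.

```python
# Hedged illustration of the convex hull constraint in blockwise empirical
# likelihood: 0 lies in the convex hull of the block means iff the block means
# do not all share one sign (scalar case). Model and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(2)

def ar1_sample(n, rho=0.7):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.standard_normal()
    return x

def violates_convex_hull(x, block_size, mu0=0.0):
    """True if 0 lies outside the convex hull of the block means of (x - mu0)."""
    n_blocks = len(x) // block_size
    means = (x[: n_blocks * block_size] - mu0).reshape(n_blocks, block_size).mean(axis=1)
    return means.min() > 0 or means.max() < 0        # all block means on one side of 0

def violation_prob(n=200, block_size=40, reps=2000):
    return np.mean([violates_convex_hull(ar1_sample(n), block_size) for _ in range(reps)])

if __name__ == "__main__":
    for b in (10, 20, 40, 66):
        print(f"block size {b:3d}: estimated violation probability {violation_prob(block_size=b):.3f}")
```

In this toy setting the violation probability grows as the block size becomes large relative to the sample size, in line with the coverage issue described above.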

     
  4. The ever-increasing complexity of numerical models and associated computational demands have challenged classical reliability analysis methods. Surrogate model-based reliability analysis techniques, and in particular those using kriging meta-models, have gained considerable attention recently for their ability to achieve high accuracy and computational efficiency. However, existing stopping criteria, which are used to terminate the training of surrogate models, do not directly relate to the error in estimated failure probabilities. This limitation can lead to high computational demands because of unnecessary calls to costly performance functions (e.g., involving finite element models) or potentially inaccurate estimates of failure probability due to premature termination of the training process. Here, we propose the error-based stopping criterion (ESC) to address these limitations. First, it is shown that the total number of wrong-sign estimations of the performance function for candidate design samples by kriging, S, follows a Poisson binomial distribution. This finding is subsequently used to estimate the lower and upper bounds of S for a given confidence level for sets of candidate design samples classified by kriging as safe and unsafe. An upper bound on the error of the estimated failure probability is subsequently derived according to the probabilistic properties of the Poisson binomial distribution. The proposed upper bound is implemented in the kriging-based reliability analysis method as the stopping criterion. The efficiency and robustness of ESC are investigated here using five benchmark reliability analysis problems. Results indicate that the proposed method achieves the set accuracy target and substantially reduces the computational demand, in some cases by over 50%.
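A hedged sketch of the Poisson binomial ingredient described above (placeholder kriging outputs, not the authors' code): with predictive means mu_i and standard deviations sigma_i, each candidate sample is wrongly sign-classified with probability p_i = Phi(-|mu_i|/sigma_i), the wrong-sign count S is Poisson binomial, and its upper quantile gives a confidence-level bound.

```python
# Hedged sketch: wrong-sign probabilities from a Gaussian predictor, the exact
# Poisson binomial pmf of the wrong-sign count S, and its upper quantile.
# The mu/sigma values are made-up placeholders, not outputs of a real kriging model.
import numpy as np
from scipy.stats import norm

def wrong_sign_probs(mu, sigma):
    """p_i = P(the predicted sign of the performance function is wrong)."""
    return norm.cdf(-np.abs(mu) / sigma)

def poisson_binomial_pmf(p):
    """Exact pmf of S = sum of independent Bernoulli(p_i), by repeated convolution."""
    pmf = np.array([1.0])
    for pi in p:
        pmf = np.convolve(pmf, [1.0 - pi, pi])
    return pmf

def upper_quantile(pmf, confidence=0.95):
    """Smallest s with P(S <= s) >= confidence."""
    return int(np.searchsorted(np.cumsum(pmf), confidence))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    mu = rng.normal(0.0, 1.0, size=500)           # placeholder kriging predictive means
    sigma = rng.uniform(0.05, 0.5, size=500)      # placeholder kriging predictive std devs
    p = wrong_sign_probs(mu, sigma)
    s_upper = upper_quantile(poisson_binomial_pmf(p))
    print(f"expected wrong-sign count      : {p.sum():.2f}")
    print(f"95% upper bound on S           : {s_upper}")
    print(f"error bound on failure estimate: {s_upper / len(p):.4f}")
```

Training would stop once such a bound falls below the analyst's error tolerance, which is, roughly, the role the stopping criterion plays in the method described above.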
  5. Abstract: We consider a Hayden & Preskill-like setup for both maximally chaotic and sub-maximally chaotic quantum field theories. We act on the vacuum with an operator in a Rindler-like wedge R and transfer a small subregion I of R to the other wedge. The chaotic scrambling dynamics of the QFT Rindler time evolution reveals the information in the other wedge. The holographic dual of this process involves a particle excitation falling into the bulk and crossing into the entanglement wedge of the complement to r = R\I . With the goal of studying the locality of the emergent holographic theory, we compute various quantum information measures on the boundary that tell us when the particle has entered this entanglement wedge. In a maximally chaotic theory, these measures indicate a sharp transition where the particle enters the wedge exactly when the insertion is null separated from the quantum extremal surface for r . For sub-maximally chaotic theories, we find a smoothed crossover at a delayed time given in terms of the smaller Lyapunov exponent and dependent on the time-smearing scale of the probe excitation. The information quantities that we consider include the full vacuum modular energy of R\I as well as the fidelity between the state with the particle and the state without. Along the way, we find a new explicit formula for the modular Hamiltonian of two intervals in an arbitrary 1+1 dimensional CFT to leading order in the small cross-ratio limit. We also give an explicit calculation of the Regge limit of the modular flowed chaos correlator and find examples which do not saturate the modular chaos bound. Finally, we discuss the extent to which our results reveal properties of the target of the probe excitation as a “stringy quantum extremal surface” or simply quantify the probe itself, thus giving a new approach to studying the notion of longitudinal string spreading.