We explore properties of the family sizes arising in a linear birth process with immigration (BI). In particular, we study the correlation of the number of families observed during consecutive disjoint intervals of time.
Motivated by the challenge of sampling Gibbs measures with nonconvex potentials, we study a continuum birth–death dynamics. We improve results in previous works.
- NSF-PAR ID: 10465343
- Publisher / Repository: IOP Publishing
- Date Published:
- Journal Name: Nonlinearity
- Volume: 36
- Issue: 11
- ISSN: 0951-7715
- Page Range / eLocation ID: p. 5731-5772
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract Let S(a,b) be the number of families observed in (a,b). We study the expected sample variance and its asymptotics for p consecutive sequential samples $$S_p =(S(t_0,t_1),\dots , S(t_{p-1},t_p))$$, for $$0=t_0<t_1<\dots <t_p$$. By conditioning on the sizes of the samples, we provide a connection between $$S_p$$ and p sequential samples of sizes $$n_1,n_2,\dots ,n_p$$, drawn from a single run of a Chinese Restaurant Process. Properties of the latter were studied in da Silva et al. (Bernoulli 29:1166–1194, 2023. https://doi.org/10.3150/22-BEJ1494). We show how the continuous-time framework makes asymptotic calculations easier than its discrete-time counterpart. As an application, for a specific choice of $$t_1,t_2,\dots , t_p$$ in which the lengths of the intervals are logarithmically equal, we revisit Fisher's 1943 multi-sampling problem and give another explanation of what Fisher's model could have meant in the world of sequential samples drawn from a BI process.
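The Chinese Restaurant Process referenced above is easy to simulate. The sketch below (an illustrative toy, not code from the paper) tracks the number of occupied tables after each arrival, the discrete-time analogue of the number of observed families; the function name and parameters are my own:

```python
import random

def crp_table_counts(n, theta=1.0, seed=0):
    """Simulate a Chinese Restaurant Process with concentration theta.

    Returns counts[i] = number of occupied tables after i+1 arrivals.
    """
    rng = random.Random(seed)
    tables = []   # tables[k] = number of customers seated at table k
    counts = []
    for i in range(n):
        # Customer i+1 opens a new table with probability theta / (i + theta),
        # otherwise joins an existing table with probability proportional
        # to its current size.
        if rng.random() < theta / (i + theta):
            tables.append(1)
        else:
            r = rng.random() * i
            acc = 0.0
            for k, size in enumerate(tables):
                acc += size
                if r < acc:
                    tables[k] += 1
                    break
        counts.append(len(tables))
    return counts

counts = crp_table_counts(1000, theta=1.0)
# The number of tables grows roughly like theta * log(n).
```

Sampling p sequential blocks of sizes n_1, ..., n_p from one run of this process is the discrete object the abstract connects to the continuous-time samples S_p.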
Abstract Quantum computing is a rapidly growing field with the potential to change how we solve previously intractable problems. Emerging hardware is approaching a complexity that requires increasingly sophisticated programming and control. Scaffold is an older quantum programming language originally designed for resource estimation on far-future, large quantum machines, and ScaffCC is the corresponding LLVM-based compiler. For the first time, we provide a full and complete overview of the language itself, the compiler, and its pass structure. While previous works, Abhari et al (2015 Parallel Comput. 45 2–17) and Abhari et al (2012 Scaffold: quantum programming language https://cs.princeton.edu/research/techreps/TR-934-12), have piecemeal descriptions of different portions of this toolchain, we provide a fuller and more complete description in this paper. We also introduce updates to ScaffCC, including conditional measurement and multidimensional qubit arrays designed to keep in step with modern quantum assembly languages, as well as an alternate toolchain targeted at maintaining correctness and low resource counts for noisy intermediate-scale quantum (NISQ) machines, and compatibility with current versions of LLVM and Clang. Our goal is to provide the research community with a functional LLVM framework for quantum program analysis, optimization, and generation of executable code.
Abstract We study nonlinear optimization problems with a stochastic objective and deterministic equality and inequality constraints, which emerge in numerous applications including finance, manufacturing, power systems and, recently, deep neural networks. We propose an active-set stochastic sequential quadratic programming (StoSQP) algorithm that uses a differentiable exact augmented Lagrangian as the merit function. The algorithm adaptively selects the penalty parameters of the augmented Lagrangian and performs a stochastic line search to decide the stepsize. Global convergence is established: for any initialization, the KKT residuals converge to zero almost surely. Our algorithm and analysis further develop the prior work of Na et al. (Math Program, 2022. https://doi.org/10.1007/s10107-022-01846-z). Specifically, we allow nonlinear inequality constraints without requiring the strict complementarity condition; refine some of the designs in Na et al. (2022), such as the feasibility error condition and the monotonically increasing sample size; strengthen the global convergence guarantee; and improve the sample complexity on the objective Hessian. We demonstrate the performance of the designed algorithm on a subset of nonlinear problems from the CUTEst test set and on constrained logistic regression problems.
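As a rough illustration of the sequential-quadratic-programming machinery the abstract describes, the toy sketch below runs damped Newton-KKT steps with noisy gradients on a two-variable equality-constrained problem. It is a hedged simplification, not the authors' StoSQP algorithm: it omits inequality constraints, the augmented Lagrangian merit function, the adaptive penalty parameters, and the stochastic line search, using a fixed identity Hessian model and a decaying stepsize instead.

```python
import numpy as np

# Toy stochastic SQP iteration on:  min E[f(x; xi)]  s.t.  c(x) = 0,
# with f(x) = ||x - b||^2 / 2 (gradient observed with additive noise)
# and a single linear equality constraint c(x) = x[0] + x[1] - 1.
rng = np.random.default_rng(0)
b = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])           # constraint Jacobian (constant here)

def stoch_grad(x, sigma=0.1):
    """Noisy observation of the objective gradient."""
    return (x - b) + sigma * rng.standard_normal(2)

def constraint(x):
    return np.array([x[0] + x[1] - 1.0])

x = np.zeros(2)
for k in range(2000):
    g = stoch_grad(x)
    c = constraint(x)
    # Newton-KKT system for the SQP step d and multiplier lam:
    #   [ I   A^T ] [ d   ]   [ -g ]
    #   [ A   0   ] [ lam ] = [ -c ]
    K = np.block([[np.eye(2), A.T], [A, np.zeros((1, 1))]])
    sol = np.linalg.solve(K, np.concatenate([-g, -c]))
    d = sol[:2]
    x = x + d / (1 + 0.05 * k)       # decaying stepsize damps the noise

# The KKT point of this toy problem is x* = (0, 1), the projection of b
# onto the constraint set; the iterates settle near it.
```

The decaying stepsize plays the role that the stochastic line search plays in the paper: it shrinks the noise contribution so the KKT residual can vanish almost surely.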
Abstract It has been recently established in David and Mayboroda (Approximation of Green functions and domains with uniformly rectifiable boundaries of all dimensions. arXiv:2010.09793) that on uniformly rectifiable sets the Green function is almost affine in the weak sense, and moreover, in some scenarios such Green function estimates are equivalent to the uniform rectifiability of a set. The present paper tackles a strong analogue of these results, starting with the "flagship" degenerate operators on sets with lower-dimensional boundaries. We consider the elliptic operators $$L_{\beta ,\gamma } =- {\text {div}}D^{d+1+\gamma -n} \nabla $$ associated to a domain $$\Omega \subset {\mathbb {R}}^n$$ with a uniformly rectifiable boundary $$\Gamma $$ of dimension $$d < n-1$$, and the now usual distance to the boundary $$D = D_\beta $$ given by $$D_\beta (X)^{-\beta } = \int _{\Gamma } |X-y|^{-d-\beta } d\sigma (y)$$ for $$X \in \Omega $$, where $$\beta >0$$ and $$\gamma \in (-1,1)$$. In this paper we show that the Green function G for $$L_{\beta ,\gamma }$$, with pole at infinity, is well approximated by multiples of $$D^{1-\gamma }$$, in the sense that the function $$\big | D\nabla \big (\ln \big ( \frac{G}{D^{1-\gamma }} \big )\big )\big |^2$$ satisfies a Carleson measure estimate on $$\Omega $$. We underline that the strong and the weak results are different in nature and, of course, at the level of the proofs: the latter extensively used compactness arguments, while the present paper relies on some intricate integration by parts and the properties of the "magical" distance function from David et al. (Duke Math J, to appear).
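To get a feel for the regularized distance $$D_\beta$$ defined above, the sketch below evaluates it numerically in the simplest lower-dimensional setting: Gamma = the x-axis in R^2, so d = 1 and n = 2. For this flat boundary, D_beta is an exact constant multiple of the Euclidean distance to Gamma, which is the cleanest instance of the "well approximated by multiples of the distance" theme. This is an illustrative numerical check of the definition, not code from the paper:

```python
import numpy as np

def D_beta(height, beta):
    """Approximate D_beta at the point X = (0, height) above the x-axis.

    Uses the abstract's definition
        D_beta(X)^(-beta) = integral over Gamma of |X - y|^(-d-beta) dsigma(y)
    with Gamma = {(t, 0)} (d = 1), via trapezoidal quadrature on a graded
    grid t = sinh(u) that resolves the peak near t = 0 and reaches |t| ~ 1e6.
    """
    u = np.linspace(-15.0, 15.0, 200001)
    t = np.sinh(u)
    f = (t**2 + height**2) ** (-(1.0 + beta) / 2.0)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))
    return integral ** (-1.0 / beta)

# For the flat boundary the ratio D_beta(X) / dist(X, Gamma) is a constant
# depending only on beta (a Beta-function factor), so these agree:
beta = 0.5
ratios = [D_beta(h, beta) / h for h in (0.5, 1.0, 2.0)]
```

For a general uniformly rectifiable Gamma, D_beta is no longer a constant multiple of the distance, and quantifying the deviation is precisely where the Carleson measure estimates enter.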
Abstract To quantitatively convert measured upper mantle seismic wave speeds into temperature, density, composition, and corresponding uncertainty, we introduce the Whole-rock Interpretative Seismic Toolbox For Ultramafic Lithologies (WISTFUL). WISTFUL is underpinned by a database of 4,485 ultramafic whole-rock compositions, their calculated mineral modes, elastic moduli, and seismic wave speeds over a range of pressure (P) and temperature (T) (P = 0.5–6 GPa, T = 200–1,600°C) using the Gibbs free energy minimization routine Perple_X. These data are interpreted with a toolbox of MATLAB® functions, scripts, and three general user interfaces: WISTFUL_relations, which plots relationships between calculated parameters and/or composition; WISTFUL_geotherms, which calculates seismic wave speeds along geotherms; and WISTFUL_inversion, which inverts seismic wave speeds for best-fit temperature, composition, and density. To evaluate our methodology and quantify the forward calculation error, we estimate two dominant sources of uncertainty: (a) the predicted mineral modes and compositions, and (b) the elastic properties and mixing equations. To constrain the first source of uncertainty, we compiled 122 well-studied ultramafic xenoliths with known whole-rock compositions, mineral modes, and estimated P–T conditions. We compared the observed mineral modes with modes predicted using five different thermodynamic solid solution models. The Holland et al. (2018, https://doi.org/10.1093/petrology/egy048) solution models best reproduce phase assemblages (∼12 vol.% phase root-mean-square error [RMSE]) and estimated wave speeds. To assess the second source of uncertainty, we compared wave speed measurements of 40 ultramafic rocks with calculated wave speeds, finding excellent agreement (V_P RMSE = 0.11 km/s). WISTFUL easily analyzes seismic datasets, integrates into modeling, and acts as an educational tool.
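The inversion step described above (seismic wave speed to best-fit temperature) can be illustrated with a minimal grid search. The forward model below is hypothetical and only sketches the best-fit machinery; WISTFUL itself interpolates Perple_X-derived tables of mineral modes and elastic moduli and also inverts for composition and density:

```python
import numpy as np

def vp_forward(T_celsius):
    """Hypothetical mantle Vp (km/s) decreasing linearly with temperature.

    This linear relation is a stand-in for the thermodynamically
    calculated wave speeds in the WISTFUL database.
    """
    return 8.4 - 5e-4 * T_celsius

def invert_vp(vp_obs, T_grid):
    """Return the grid temperature whose predicted Vp best fits vp_obs
    (smallest absolute misfit, i.e., a 1-D grid search)."""
    misfit = np.abs(vp_forward(T_grid) - vp_obs)
    return T_grid[np.argmin(misfit)]

T_grid = np.linspace(200.0, 1600.0, 1401)   # 1 degC steps over the P-T range
T_best = invert_vp(8.0, T_grid)             # recovers T = 800 degC here
```

In the real toolbox the misfit is an RMSE over several observed wave speeds and the search runs over temperature and composition jointly, but the structure is the same: forward-calculate, compare, take the best-fitting model.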