The free multiplicative Brownian motion
 Award ID(s):
 2055340
 NSFPAR ID:
 10372851
 Publisher / Repository:
 Springer Science + Business Media
 Date Published:
 Journal Name:
 Probability Theory and Related Fields
 Volume:
 184
 Issue:
 1-2
 ISSN:
 0178-8051
 Page Range / eLocation ID:
 p. 209-273
 Format(s):
 Medium: X
 Sponsoring Org:
 National Science Foundation
More Like this

Abstract Ramanujan's congruences assert that $p(\ell n+\delta_{\ell})\equiv 0\pmod{\ell}$ for $\ell\in\{5,7,11\}$, where $0<\delta_{\ell}<\ell$ satisfies $24\delta_{\ell}\equiv 1\pmod{\ell}$. By proving Subbarao's conjecture, Radu showed that there are no such congruences when it comes to parity: there are infinitely many odd (resp. even) partition numbers in every arithmetic progression. For primes $\ell\ge 5$, we give a new proof of the conclusion that there are infinitely many $m$ for which $p(\ell m+\delta_{\ell})$ is odd. This proof uses a generalization, due to the second author and Ramsey, of a result of Mazur in his classic paper on the Eisenstein ideal. We also refine a classical criterion of Sturm for modular form congruences, which allows us to show that the smallest such $m$ satisfies $m<(\ell^{2}-1)/24$, representing a significant improvement to the previous bound.
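As a quick numerical illustration (my own sketch, not part of the paper), the partition numbers can be generated with Euler's pentagonal-number recurrence and checked against the three Ramanujan congruences:

```python
def partitions(N):
    """p(0), ..., p(N) via Euler's pentagonal-number recurrence:
    p(n) = sum_{k>=1} (-1)^{k+1} [p(n - k(3k-1)/2) + p(n - k(3k+1)/2)]."""
    p = [0] * (N + 1)
    p[0] = 1
    for n in range(1, N + 1):
        total, k = 0, 1
        while True:
            g1 = k * (3 * k - 1) // 2  # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > n:
                break
            sign = 1 if k % 2 == 1 else -1
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

p = partitions(600)
# p(5n+4) ≡ 0 (mod 5), p(7n+5) ≡ 0 (mod 7), p(11n+6) ≡ 0 (mod 11)
for ell, delta in [(5, 4), (7, 5), (11, 6)]:
    assert all(p[ell * n + delta] % ell == 0 for n in range(50))
```

Incidentally, $p(4)=5$ is already odd, so for $\ell =5$ the smallest odd value in the progression occurs at $m=0$, consistent with the bound $m<(\ell^2-1)/24=1$.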
Abstract The numerical analysis of stochastic parabolic partial differential equations of the form
$$du + A(u)\,dt = f\,dt + g\,dW$$
is surveyed, where $A$ is a nonlinear partial differential operator and $W$ a Brownian motion. This manuscript unifies much of the theory developed over the last decade into a cohesive framework which integrates techniques for the approximation of deterministic partial differential equations with methods for the approximation of stochastic ordinary differential equations. The manuscript is intended to be accessible to audiences versed in either of these disciplines, and examples are presented to illustrate the applicability of the theory.
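To make the object concrete, here is a minimal sketch (my own toy discretization, not one of the survey's schemes) of an explicit Euler-Maruyama finite-difference method for the stochastic heat equation, i.e. the case $A(u)=-u_{xx}$ with additive space-time white noise and Dirichlet boundary conditions:

```python
import math
import random

def heat_spde_em(J=32, T=0.1, dt=1e-4, sigma=0.5, seed=0):
    """One sample path of du = u_xx dt + sigma dW on [0,1], u(0)=u(1)=0,
    by explicit Euler-Maruyama on a (J+1)-point grid (stability: dt <= h^2/2)."""
    rng = random.Random(seed)
    h = 1.0 / J
    u = [math.sin(math.pi * j * h) for j in range(J + 1)]  # initial datum
    u[0] = u[J] = 0.0                                      # exact Dirichlet BCs
    for _ in range(int(T / dt)):
        new = u[:]
        for j in range(1, J):
            lap = (u[j - 1] - 2 * u[j] + u[j + 1]) / h**2
            # discretized space-time white noise: variance dt/h per grid cell
            dW = rng.gauss(0.0, math.sqrt(dt / h))
            new[j] = u[j] + lap * dt + sigma * dW
        u = new
    return u
```

Setting `sigma = 0` recovers the deterministic heat equation, so $u(1/2,T)\approx e^{-\pi^2 T}$, which gives a cheap consistency check on the scheme.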
Abstract Motivated by the Rudnick-Sarnak theorem, we study the limiting distribution of smoothed local correlations of the form
$$\sum_{j_1, j_2, \ldots, j_n} f\bigl(N(\theta_{j_2}-\theta_{j_1}), N(\theta_{j_3}-\theta_{j_1}), \ldots, N(\theta_{j_n}-\theta_{j_1})\bigr)$$
for the Circular Unitary Ensemble of random matrices, for sufficiently smooth test functions $f$.
Abstract Let $(h_I)$ denote the standard Haar system on [0, 1], indexed by $I\in\mathcal{D}$, the set of dyadic intervals, and let $h_I\otimes h_J$ denote the tensor product $(s,t)\mapsto h_I(s)h_J(t)$, $I,J\in\mathcal{D}$. We consider a class of two-parameter function spaces which are completions of the linear span $\mathcal{V}(\delta^2)$ of $h_I\otimes h_J$, $I,J\in\mathcal{D}$. This class contains all the spaces of the form $X(Y)$, where $X$ and $Y$ are either the Lebesgue spaces $L^p[0,1]$ or the Hardy spaces $H^p[0,1]$, $1\le p<\infty$. We say that $D:X(Y)\rightarrow X(Y)$ is a Haar multiplier if $D(h_I\otimes h_J)=d_{I,J}\,h_I\otimes h_J$, where $d_{I,J}\in\mathbb{R}$, and ask which more elementary operators factor through $D$.
A decisive role is played by the Capon projection $\mathcal{C}:\mathcal{V}(\delta^2)\rightarrow\mathcal{V}(\delta^2)$, given by $\mathcal{C}\,h_I\otimes h_J = h_I\otimes h_J$ if $|I|\le |J|$ and $\mathcal{C}\,h_I\otimes h_J = 0$ if $|I|>|J|$, as our main result highlights: Given any bounded Haar multiplier $D:X(Y)\rightarrow X(Y)$, there exist $\lambda,\mu\in\mathbb{R}$ such that
$$\lambda\mathcal{C}+\mu(\mathrm{Id}-\mathcal{C})\ \text{approximately 1-projectionally factors through}\ D,$$
i.e., for all $\eta>0$ there exist bounded operators $A$, $B$ so that $AB$ is the identity operator $\mathrm{Id}$, $\Vert A\Vert\cdot\Vert B\Vert = 1$, and $\Vert\lambda\mathcal{C}+\mu(\mathrm{Id}-\mathcal{C})-ADB\Vert<\eta$. Additionally, if $\mathcal{C}$ is unbounded on $X(Y)$, then $\lambda=\mu$, and then $\lambda\,\mathrm{Id}$ either factors through $D$ or $\mathrm{Id}-D$.
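Both operators act diagonally on the tensor Haar coefficients, so a coefficient-level sketch is easy to write down (my own encoding, not from the paper: a dyadic interval $I$ is stored as a pair $(j,k)$ with $I=[k2^{-j},(k+1)2^{-j})$, so $|I|=2^{-j}$):

```python
def haar_multiplier(d, coeffs):
    """Apply D(h_I ⊗ h_J) = d(I, J) · h_I ⊗ h_J coefficient-wise;
    coeffs maps a pair of dyadic intervals (I, J) to a real coefficient."""
    return {(I, J): d(I, J) * c for (I, J), c in coeffs.items()}

def capon(coeffs):
    """Capon projection C: keep h_I ⊗ h_J iff |I| <= |J|, i.e. iff j_I >= j_J."""
    return haar_multiplier(lambda I, J: 1.0 if I[0] >= J[0] else 0.0, coeffs)

x = {((1, 0), (0, 0)): 2.0,  # |I| = 1/2 <= |J| = 1   -> kept by C
     ((0, 0), (2, 3)): 5.0}  # |I| = 1   >  |J| = 1/4 -> annihilated by C
assert capon(capon(x)) == capon(x)  # C is idempotent, as a projection must be
```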
Abstract The softmax policy gradient (PG) method, which performs gradient ascent under softmax policy parameterization, is arguably one of the de facto implementations of policy optimization in modern reinforcement learning. For $\gamma$-discounted infinite-horizon tabular Markov decision processes (MDPs), remarkable progress has recently been achieved towards establishing global convergence of softmax PG methods in finding a near-optimal policy. However, prior results fall short of delineating clear dependencies of convergence rates on salient parameters such as the cardinality of the state space $\mathcal{S}$ and the effective horizon $\frac{1}{1-\gamma}$, both of which could be excessively large. In this paper, we deliver a pessimistic message regarding the iteration complexity of softmax PG methods, despite assuming access to exact gradient computation. Specifically, we demonstrate that the softmax PG method with stepsize $\eta$ can take
$$\frac{1}{\eta}\,|\mathcal{S}|^{2^{\Omega\big(\frac{1}{1-\gamma}\big)}}\ \text{iterations}$$
to converge, even in the presence of a benign policy initialization and an initial state distribution amenable to exploration (so that the distribution mismatch coefficient is not exceedingly large). This is accomplished by characterizing the algorithmic dynamics over a carefully constructed MDP containing only three actions. Our exponential lower bound hints at the necessity of carefully adjusting update rules or enforcing proper regularization in accelerating PG methods.
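For contrast with the lower bound, exact softmax PG is easy to state in code. The sketch below (a one-state toy problem of my own, not the paper's carefully constructed three-action MDP) runs gradient ascent on $J(\theta)=\sum_a \pi_\theta(a)\,r(a)$, whose exact gradient is $\partial J/\partial\theta_a = \pi_\theta(a)\,(r(a) - J(\theta))$:

```python
import math

def softmax(theta):
    m = max(theta)                     # subtract max for numerical stability
    exps = [math.exp(t - m) for t in theta]
    z = sum(exps)
    return [e / z for e in exps]

def softmax_pg(r, eta=0.1, iters=20000):
    """Exact softmax policy gradient ascent on a single-state MDP (a bandit)."""
    theta = [0.0] * len(r)             # uniform (benign) initialization
    for _ in range(iters):
        pi = softmax(theta)
        J = sum(p * ri for p, ri in zip(pi, r))
        # exact gradient: dJ/dtheta_a = pi(a) * (r(a) - J)
        theta = [t + eta * p * (ri - J) for t, p, ri in zip(theta, pi, r)]
    return softmax(theta)

pi = softmax_pg([1.0, 0.5, 0.0])       # three actions; action 0 is optimal
```

On this benign one-state problem the policy concentrates on the optimal action; the paper's point is that on adversarially constructed multi-state MDPs the same update can stall for exponentially many iterations.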