The degree of localization of the Harper-Hofstadter model is shown to display a striking periodic dependence on phase degrees of freedom, which can in turn depend on the nature of the boundary conditions, reminiscent of the Aharonov-Bohm effect. When implemented in a finite ring-shaped lattice, this phase dependence can be utilized as a fundamentally different principle for precision sensing of rotation and magnetic fields, based on localization rather than on interferometry.
This content will become publicly available on February 5, 2026
Extremal principle for the Harper-Hofstadter model
We present an optimization principle for the Harper-Hofstadter model that naturally yields the critical value λ=2 for the Harper parameter. We provide proofs for this principle and its corollaries. We demonstrate its application to a continuum model, where it yields the physical parameters for criticality.
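The criticality at λ=2 can be checked numerically with the standard Aubry-André/Harper tight-binding chain (a sketch for illustration, not the paper's optimization method; the matrix size and incommensurate flux β are our choices). Below λ=2 the eigenstates are extended and the mean inverse participation ratio (IPR) scales as ~1/N; above λ=2 they are localized and the IPR is of order one.

```python
import numpy as np

def harper_hamiltonian(N, lam, beta=(np.sqrt(5) - 1) / 2, phi=0.0):
    """Aubry-Andre/Harper chain: unit hopping, quasiperiodic onsite potential."""
    H = np.zeros((N, N))
    n = np.arange(N)
    H[n, n] = lam * np.cos(2 * np.pi * beta * n + phi)  # onsite term
    H[n[:-1], n[1:]] = 1.0  # hopping t = 1 (criticality then sits at lam = 2t)
    H[n[1:], n[:-1]] = 1.0
    return H

def mean_ipr(lam, N=610):
    """Mean inverse participation ratio over all eigenstates."""
    _, vecs = np.linalg.eigh(harper_hamiltonian(N, lam))
    return float(np.mean(np.sum(np.abs(vecs) ** 4, axis=0)))

# Extended phase (lam < 2): IPR ~ 1/N.  Localized phase (lam > 2): IPR ~ O(1).
```

Sweeping λ through 2 and plotting `mean_ipr(lam)` makes the transition visible as a sharp rise in the IPR near the critical value.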
- Award ID(s):
- 2309025
- PAR ID:
- 10601194
- Publisher / Repository:
- American Physical Society
- Date Published:
- Journal Name:
- Physical Review B
- Volume:
- 111
- Issue:
- 7
- ISSN:
- 2469-9950
- Page Range / eLocation ID:
- 075405
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Deep neural networks are often seen as different from other model classes by defying conventional notions of generalization. Popular examples of anomalous generalization behaviour include benign overfitting, double descent, and the success of overparametrization. We argue that these phenomena are not distinct to neural networks, or particularly mysterious. Moreover, this generalization behaviour can be intuitively understood, and rigorously characterized using long-standing generalization frameworks such as PAC-Bayes and countable hypothesis bounds. We present soft inductive biases as a key unifying principle in explaining these phenomena: rather than restricting the hypothesis space to avoid overfitting, embrace a flexible hypothesis space, with a soft preference for simpler solutions that are consistent with the data. This principle can be encoded in many model classes, and thus deep learning is not as mysterious or different from other model classes as it might seem. However, we also highlight how deep learning is relatively distinct in other ways, such as its ability for representation learning, phenomena such as mode connectivity, and its relative universality.
-
In this paper we revive a special, less common variational principle in analytical mechanics (Hertz's principle of least curvature) to develop a novel variational analogue of Euler's equations for the dynamics of an ideal fluid. The new variational formulation is fundamentally different from those formulations based on Hamilton's principle of least action. Using this new variational formulation, we generalize the century-old problem of the flow over a two-dimensional body; we develop a variational closure condition that is, unlike the Kutta condition, derived from first principles. The developed variational principle reduces to the classical Kutta-Zhukovsky condition in the special case of a sharp-edged airfoil, which challenges the accepted wisdom about the Kutta condition being a manifestation of viscous effects. Rather, we found that it represents conservation of momentum. Moreover, the developed variational principle provides, for the first time, a theoretical model for lift over smooth shapes without sharp edges where the Kutta condition is not applicable. We discuss how this fundamental divergence from current theory can explain discrepancies in computational studies and experiments with superfluids.
-
Abstract For $$V\sim \alpha \log \log T$$ with $$0<\alpha <2$$, we prove $$\begin{align*} & \frac{1}{T}\textrm{meas}\{t\in [T,2T]: \log|\zeta(1/2+ \textrm{i} t)|>V\}\ll \frac{1}{\sqrt{\log\log T}} e^{-V^{2}/\log\log T}. \end{align*}$$This improves prior results of Soundararajan and of Harper on the large deviations of Selberg's Central Limit Theorem in that range, without the use of the Riemann hypothesis. The result implies the sharp upper bound for the fractional moments of the Riemann zeta function proved by Heap, Radziwiłł, and Soundararajan. It also shows a new upper bound for the maximum of the zeta function on short intervals of length $$(\log T)^{\theta }$$, $$0<\theta <3$$, that is expected to be sharp for $$\theta> 0$$. Finally, it yields a sharp upper bound (to order one) for the moments on short intervals, below and above the freezing transition. The proof is an adaptation of the recursive scheme introduced by Bourgade, Radziwiłł, and one of the authors to prove fine asymptotics for the maximum on intervals of length $$1$$.
-
We consider scenarios where a very accurate (often small) predictive model using restricted features is available when training a full-featured (often larger) model. This restricted model may be thought of as "side-information", and can come either from an auxiliary dataset or from the same dataset by forcing the restriction. How can the restricted model be useful to the full model? To answer this, we introduce a methodology called Induced Model Matching (IMM). IMM aligns the context-restricted, or induced, version of the large model with the restricted model. We relate IMM to approaches such as noising, which is implicit in addressing the problem, and reverse knowledge distillation from weak teachers, which is explicit but does not exploit restriction being the nature of the weakness. We show that these prior methods can be thought of as approximations to IMM and can be problematic in terms of consistency. Experimentally, we first motivate IMM using logistic regression as a toy example. We then explore it in language modeling, the application that initially inspired it, and demonstrate it on both LSTM and transformer full models, using bigrams as restricted models. We lastly give a simple RL example, which shows that POMDP policies can help learn better MDP policies. The IMM principle is thus generally applicable in common scenarios where restricted data is cheaper to collect or restricted models are easier to learn.
