It is believed that Euclidean Yang–Mills theories behave like the massless Gaussian free field (GFF) at short distances. This makes it impossible to define the main observables for these theories, the Wilson loop observables, in dimensions higher than two, because line integrals of the GFF do not exist in such dimensions. Taking forward a proposal of Charalambous and Gross, this article shows that it is possible to define Euclidean Yang–Mills theories on the 3D unit torus as ‘random distributional gauge orbits’, provided that they indeed behave like the GFF in a certain sense. One of the main technical tools is the existence of the Yang–Mills heat flow on the 3D torus starting from GFF-like initial data, which is established in a companion paper. A key consequence of this construction is that, under the GFF assumption, one can define a notion of ‘regularized Wilson loop observables’ for Euclidean Yang–Mills theories on the 3D unit torus.
Determinism
This article is about deterministic models: what they are, why they are useful, and what their limitations are. First, the article emphasizes that determinism is a property of models, not of physical systems. Whether a model is deterministic or not depends on how one defines its inputs and behavior. To define behavior, one has to define an observer. The article compares and contrasts two classes of ways to define an observer, one based on the notion of “state” and another that defines the observables more flexibly. The notion of “state” is shown to be problematic, leading to nondeterminism that is avoided when the observables are defined differently. The article then examines determinism in models of the physical world. In what may surprise many readers, it shows that Newtonian physics admits nondeterminism and that quantum physics may be interpreted as a deterministic model. Moreover, it shows that both relativity and quantum physics undermine the notion of “state” and therefore require more flexible ways of defining observables. Finally, the article reviews results showing that sufficiently rich sets of deterministic models are incomplete; specifically, nondeterminism is inescapable in any system of models rich enough to encompass Newton’s laws.
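The claim that Newtonian physics admits nondeterminism can be made concrete with a standard textbook case, Norton’s dome (offered here purely as an illustration; it is not necessarily the example the article itself uses). A unit point mass at rest at the apex of a dome shaped so that the net tangential force at radial distance r is √r obeys:

```latex
% Norton's dome: unit mass at rest at the apex, net tangential force \sqrt{r}
\ddot{r} = \sqrt{r}, \qquad r(0) = \dot{r}(0) = 0.
% Besides r(t) \equiv 0, for every T \ge 0 the following is also a solution:
r(t) =
\begin{cases}
  0, & t \le T, \\[4pt]
  \dfrac{(t - T)^{4}}{144}, & t > T,
\end{cases}
% since for t > T one checks \ddot{r} = \tfrac{(t-T)^{2}}{12} = \sqrt{r}.
```

The same initial data thus admit infinitely many futures, one for each spontaneous start time T, so the model is nondeterministic even though it satisfies Newton’s laws.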
Award ID(s): 1836601
NSF-PAR ID: 10311564
Journal Name: ACM Transactions on Embedded Computing Systems
Volume: 20
Issue: 5
ISSN: 1539-9087
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this

Abstract 
Actors have become widespread in programming languages and programming frameworks focused on parallel and distributed computing. While actors provide a more disciplined model for concurrency than threads, their interactions, if not constrained, admit nondeterminism. As a consequence, actor programs may exhibit unintended behaviors and are less amenable to rigorous testing. We show that nondeterminism can be handled in a number of ways, surveying dataflow dialects, process networks, synchronous-reactive models, and discrete-event models. These existing approaches, however, tend to require centralized control, pose challenges to modular system design, or introduce a single point of failure. We describe “reactors,” a new coordination model that combines ideas from several of the aforementioned approaches to enable determinism while preserving much of the style of actors. Reactors promote modularity and allow for distributed execution. By using a logical model of time that can be associated with physical time, reactors also admit control over timing.

Many programming languages and programming frameworks focus on parallel and distributed computing. Several frameworks are based on actors, which provide a more disciplined model for concurrency than threads. The interactions between actors, however, if not constrained, admit nondeterminism. As a consequence, actor programs may exhibit unintended behaviors and are less amenable to rigorous testing. We show that nondeterminism can be handled in a number of ways, surveying dataflow dialects, process networks, synchronous-reactive models, and discrete-event models. These existing approaches, however, tend to require centralized control, pose challenges to modular system design, or introduce a single point of failure. We describe “reactors,” a new coordination model that combines ideas from several of these approaches to enable determinism while preserving much of the style of actors. Reactors promote modularity and allow for distributed execution. By using a logical model of time that can be associated with physical time, reactors also provide control over timing. Reactors also expose parallelism that can be exploited on multicore machines and in distributed configurations without compromising determinacy.
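The determinism described in these abstracts rests on processing events in logical-time order rather than arrival order. A minimal sketch of that one idea, not of the reactor model’s actual API (the `Scheduler`, `sensor`, and `actuator` names are invented for illustration):

```python
import heapq
from typing import Callable, List, Tuple

class Scheduler:
    """Executes reactions in logical-time order, so runs are deterministic."""

    def __init__(self) -> None:
        # Heap of (logical_time, sequence_number, reaction); the sequence
        # number breaks ties so simultaneous events run in scheduling order.
        self._queue: List[Tuple[int, int, Callable[[int], None]]] = []
        self._seq = 0

    def schedule(self, time: int, reaction: Callable[[int], None]) -> None:
        heapq.heappush(self._queue, (time, self._seq, reaction))
        self._seq += 1

    def run(self) -> None:
        while self._queue:
            time, _, reaction = heapq.heappop(self._queue)
            reaction(time)

log = []
sched = Scheduler()

def sensor(t: int) -> None:
    # Produces a value and triggers a downstream reaction at the same
    # logical time, mimicking a causally connected reaction chain.
    log.append(("sensor", t))
    sched.schedule(t, actuator)

def actuator(t: int) -> None:
    log.append(("actuator", t))

# Events scheduled out of order still execute in logical-time order.
sched.schedule(20, sensor)
sched.schedule(10, sensor)
sched.run()
# log is now [("sensor", 10), ("actuator", 10), ("sensor", 20), ("actuator", 20)]
```

However events arrive, every run yields the same `log`, which is the determinacy property the abstracts contrast with unconstrained actor interleavings.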

Abstract Image-like data from quantum systems promises to offer greater insight into the physics of correlated quantum matter. However, the traditional framework of condensed matter physics lacks principled approaches for analyzing such data. Machine learning models are a powerful theoretical tool for analyzing image-like data, including many-body snapshots from quantum simulators. Recently, they have successfully distinguished between simulated snapshots that are indistinguishable from one- and two-point correlation functions. Thus far, the complexity of these models has inhibited new physical insights from such approaches. Here, we develop a set of nonlinearities for use in a neural network architecture that discovers features in the data which are directly interpretable in terms of physical observables. Applied to simulated snapshots produced by two candidate theories approximating the doped Fermi-Hubbard model, we find that the key distinguishing features are fourth-order spin-charge correlators. Our approach lends itself well to the construction of simple, versatile, end-to-end interpretable architectures, thus paving the way for new physical insights from machine learning studies of experimental and numerical data.
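To make “fourth-order spin-charge correlator” concrete, here is a toy estimate of such a quantity from snapshot data. The random arrays and the `four_point` helper are invented for illustration; this is not the paper’s architecture or dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "snapshots": spin (+/-1) and charge occupation (0/1) on a 4x4
# lattice, 100 shots, standing in for quantum-simulator measurements.
spin = rng.choice([-1, 1], size=(100, 4, 4))
charge = rng.integers(0, 2, size=(100, 4, 4))

def four_point(spin: np.ndarray, charge: np.ndarray, d: tuple) -> float:
    """Estimate the translation-averaged fourth-order correlator
    <s(i) s(i+d) n(i) n(i+d)> over all snapshots and lattice sites,
    using periodic shifts for the displacement d = (dy, dx)."""
    s_shift = np.roll(spin, shift=d, axis=(1, 2))
    n_shift = np.roll(charge, shift=d, axis=(1, 2))
    return float(np.mean(spin * s_shift * charge * n_shift))

c = four_point(spin, charge, (1, 0))  # nearest-neighbor displacement
```

Since each factor is bounded by 1 in magnitude, the estimate `c` always lies in [-1, 1]; for uncorrelated random data it sits near zero, whereas the abstract’s point is that such fourth-order quantities differ sharply between the two candidate theories.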

Dynamical black-hole scenarios have been developed in loop quantum gravity in various ways, combining results from minisuperspace and midisuperspace models. In the past, the underlying geometry of spacetime has often been expressed in terms of line elements with metric components that differ from the classical solutions of general relativity, motivated by modified equations of motion and constraints. However, recent results have shown by explicit calculations that most of these constructions violate general covariance and slicing independence. The proposed line elements and black-hole models are therefore ruled out. The only known possibility to escape this verdict is to derive not only modified metric components but also a new spacetime structure which is covariant in a generalized sense. Formally, such a derivation is made available by an analysis of the constraints of canonical gravity, which generate deformations of hypersurfaces in spacetime, or generalized versions if the constraints are consistently modified. A generic consequence of consistent modifications in effective theories suggested by loop quantum gravity is signature change at high density. Signature change is an important ingredient in long-term models of black holes that aim to determine what might happen after a black hole has evaporated. Because this effect changes the causal structure of spacetime, it has crucial implications for black-hole models that have been missed in several older constructions, for instance in models based on bouncing black-hole interiors. Such models are ruled out by signature change even if their underlying spacetimes are made consistent using generalized covariance. The causal nature of signature change brings in a new internal consistency condition, given by the requirement of deterministic behavior at low curvature. Even a causally disconnected interior transition, opening back up into the former exterior as some kind of astrophysical white hole, is then ruled out.
New versions consistent with both generalized covariance and low-curvature determinism are introduced here, showing a remarkable similarity with models developed in other approaches, such as the final-state proposal or the no-transition principle obtained from the gauge–gravity correspondence.