Title: Toward a mechanistic explanation of phenotypic evolution: The need for a theory of theory integration
Award ID(s): 1656284
PAR ID: 10087241
Author(s) / Creator(s):
Date Published:
Journal Name: Journal of Experimental Zoology Part B: Molecular and Developmental Evolution
Volume: 330
Issue: 1
ISSN: 1552-5007
Page Range / eLocation ID: 5 to 14
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Compositionality is at the core of programming languages research and has become an important goal toward scalable verification of large systems. Despite that, there is no compositional account of linearizability, the gold standard of correctness for concurrent objects. In this article, we develop a compositional semantics for linearizable concurrent objects. We start by showcasing a common issue, independent of linearizability, in the construction of compositional models of concurrent computation: interaction with the neutral element for composition can lead to emergent behaviors, a hindrance to compositionality. Category theory provides a solution to this issue in the form of the Karoubi envelope. Surprisingly, and this is the main discovery of our work, this abstract construction is deeply related to linearizability and leads to a novel formulation of it. Notably, the new formulation relies neither on atomicity nor directly on happens-before ordering, and it is only possible because of compositionality, revealing that linearizability and compositionality are intrinsically related. We use this new, compositional understanding of linearizability to revisit much of its theory, providing novel, simple, algebraic proofs of the locality property and of an analogue of the equivalence with observational refinement. We show that our techniques can be used in practice by connecting our semantics with a simple program logic that is nonetheless sound with respect to this generalized linearizability.
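For readers unfamiliar with the property at stake, here is a minimal Python sketch of the classical, atomicity-based definition of linearizability (a history is linearizable if some total order of its operations respects real-time order and is legal for the sequential specification), which the article's compositional formulation avoids relying on directly. The register spec and the two-operation history below are illustrative inventions, not taken from the article.

```python
from itertools import permutations

# A completed operation on a shared register: (name, arg, result, invoke_t, respond_t).
# Classical linearizability: there exists a total order of the operations that
# (1) respects real-time order (A before B whenever A's response precedes B's
# invocation) and (2) is legal for the sequential specification.

def is_legal(sequence):
    """Sequential spec of a read/write register starting at 0."""
    value = 0
    for name, arg, result, _, _ in sequence:
        if name == "write":
            value = arg
        elif name == "read" and result != value:
            return False
    return True

def respects_real_time(sequence):
    """No op may be ordered after an op that responded before it was invoked."""
    for i, a in enumerate(sequence):
        for b in sequence[i + 1:]:
            if b[4] < a[3]:  # b responded before a was even invoked
                return False
    return True

def is_linearizable(history):
    """Brute-force search for a linearization (exponential; illustration only)."""
    return any(respects_real_time(p) and is_legal(list(p))
               for p in permutations(history))

# A write(1) overlapping a read that returns 1: linearizable, since the read
# can be ordered after the write without violating real-time order.
history = [("write", 1, None, 0.0, 2.0), ("read", None, 1, 1.0, 3.0)]
print(is_linearizable(history))  # True
```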
  2. In this paper we revive a special, less common variational principle in analytical mechanics, Hertz's principle of least curvature, to develop a novel variational analogue of Euler's equations for the dynamics of an ideal fluid. The new variational formulation is fundamentally different from formulations based on Hamilton's principle of least action. Using it, we generalize the century-old problem of flow over a two-dimensional body and develop a variational closure condition that, unlike the Kutta condition, is derived from first principles. The developed variational principle reduces to the classical Kutta–Zhukovsky condition in the special case of a sharp-edged airfoil, which challenges the accepted wisdom that the Kutta condition is a manifestation of viscous effects; rather, we find that it represents conservation of momentum. Moreover, the variational principle provides, for the first time, a theoretical model for lift over smooth shapes without sharp edges, where the Kutta condition is not applicable. We discuss how this fundamental divergence from current theory can explain discrepancies in computational studies and experiments with superfluids.
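As context for the closure-condition discussion, the short numerical sketch below works through the classical Kutta–Zhukovsky route that the variational principle recovers for sharp-edged airfoils: the Kutta condition fixes the circulation of a thin flat plate in potential flow, and lift then follows from the Kutta–Zhukovsky theorem L' = rho * U * Gamma. All numerical values are illustrative assumptions, not taken from the paper.

```python
import math

# Classical thin-airfoil calculation for a flat plate at angle of attack.
# Illustrative freestream values; not from the paper.
rho = 1.225                  # air density, kg/m^3
U = 30.0                     # freestream speed, m/s
chord = 1.0                  # chord length, m
alpha = math.radians(5.0)    # angle of attack

# Kutta condition for a flat plate in potential flow fixes the circulation:
gamma = math.pi * chord * U * math.sin(alpha)

# Kutta-Zhukovsky theorem: lift per unit span from the circulation.
lift = rho * U * gamma
cl = lift / (0.5 * rho * U**2 * chord)   # equals 2*pi*sin(alpha)

print(f"circulation = {gamma:.2f} m^2/s, L' = {lift:.1f} N/m, Cl = {cl:.3f}")
```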
  3. We consider a model of selective prediction, where the prediction algorithm is given a data sequence in an online fashion and asked to predict a pre-specified statistic of the upcoming data points. The algorithm is allowed to choose when to make the prediction as well as the length of the prediction window, possibly depending on the observations so far. We prove that, even without any distributional assumption on the input data stream, a large family of statistics can be estimated to non-trivial accuracy. To give one concrete example, suppose that we are given access to an arbitrary binary sequence x_1, ..., x_n of length n. Our goal is to accurately predict the average observation, and we are allowed to choose the window over which the prediction is made: for some t < n and m < n - t, after seeing t observations we predict the average of x_{t+1}, ..., x_{t+m}. This particular problem was first studied in [Dru13] and referred to as the "density prediction game". We show that the expected squared error of our prediction can be bounded by O(1/log n) and prove a matching lower bound, which resolves an open question raised in [Dru13]. This result holds for any sequence, provided the sequence is not adaptive to when the prediction is made or to the predicted value; the expectation of the error is with respect to the randomness of the prediction algorithm. Our results apply to more general statistics of a sequence of observations, and we highlight several open directions for future work.
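To make the setup concrete, the sketch below implements one simple randomized strategy in the spirit of the density prediction game: pick a dyadic window length at random, watch that many observations, then predict their average for the next window of the same length. This is an illustrative strategy with invented helper names, not the paper's actual algorithm.

```python
import math
import random

def predict_density(stream, n):
    """Watch a randomly sized prefix window, predict its mean for the next window."""
    k_max = max(1, int(math.log2(n)) - 1)
    k = random.randrange(k_max)                    # random dyadic scale
    t = 2 ** k                                     # watch x_1 ... x_t
    watched = [next(stream) for _ in range(t)]
    guess = sum(watched) / t                       # prediction for x_{t+1} ... x_{2t}
    actual = sum(next(stream) for _ in range(t)) / t
    return guess, actual

random.seed(0)
n = 1024
bits = iter([random.randint(0, 1) for _ in range(n)])  # an arbitrary binary sequence
guess, actual = predict_density(bits, n)
print(f"predicted {guess:.3f}, actual {actual:.3f}")
```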
  4. How quickly can a given class of concepts be learned from examples? It is common to measure the performance of a supervised machine learning algorithm by plotting its "learning curve", that is, the decay of the error rate as a function of the number of training examples. However, the classical theoretical framework for understanding learnability, the PAC model of Vapnik-Chervonenkis and Valiant, does not explain the behavior of learning curves: the distribution-free PAC model can only bound the upper envelope of the learning curves over all possible data distributions. This does not match the practice of machine learning, where the data source is typically fixed in any given scenario, while the learner may choose the number of training examples based on factors such as computational resources and desired accuracy. In this paper, we study an alternative learning model that better captures these practical aspects of machine learning but still gives rise to a complete theory of the learnable in the spirit of the PAC model. More precisely, we consider the problem of universal learning, which aims to understand the performance of learning algorithms on every data distribution, but without requiring uniformity over the distribution. The main result of this paper is a remarkable trichotomy: there are only three possible rates of universal learning. More precisely, we show that the learning curves of any given concept class decay at either an exponential, a linear, or an arbitrarily slow rate. Moreover, each of these cases is completely characterized by appropriate combinatorial parameters, and we exhibit optimal learning algorithms that achieve the best possible rate in each case. For concreteness, we consider in this paper only the realizable case, though analogous results are expected to extend to more general learning scenarios.
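The following small simulation illustrates the fixed-distribution view of learning curves described above: for the class of threshold classifiers on [0,1] under a fixed uniform distribution (realizable case), the empirical error of a consistent learner decays roughly like 1/n, the linear rate in the trichotomy. This is an illustrative experiment, not the paper's construction; the target concept and class are assumptions chosen for the demo.

```python
import random

def threshold_error(n, trials=2000):
    """Mean error of a consistent learner for thresholds; target is x >= 0.5."""
    total = 0.0
    for _ in range(trials):
        xs = sorted(random.random() for _ in range(n))
        # A consistent learner may place its threshold at the leftmost positive
        # example; its error is the probability mass between 0.5 and that guess.
        pos = [x for x in xs if x >= 0.5]
        guess = pos[0] if pos else 1.0
        total += guess - 0.5
    return total / trials

random.seed(1)
for n in (10, 20, 40, 80, 160):
    # The error roughly halves each time n doubles: a ~1/n learning curve.
    print(f"n = {n:4d}  error ~ {threshold_error(n):.4f}")
```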