We present … The development proceeds by first introducing the …
- Award ID(s): 1901381
- NSF-PAR ID: 10526179
- Publisher / Repository: ACM
- Date Published:
- Journal Name: Proceedings of the ACM on Programming Languages
- Volume: 8
- Issue: POPL
- ISSN: 2475-1421
- Page Range / eLocation ID: 273 to 301
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
We present calf, a cost-aware logical framework for studying quantitative aspects of functional programs. Taking inspiration from recent work that reconstructs traditional aspects of programming languages in terms of a modal account of phase distinctions, we argue that the cost structure of programs motivates a phase distinction between intension and extension. Armed with this technology, we contribute a synthetic account of cost structure as a computational effect in which cost-aware programs enjoy an internal noninterference property: input/output behavior cannot depend on cost. As a full-spectrum dependent type theory, calf presents a unified language for programming and specification of both cost and behavior that can be integrated smoothly with existing mathematical libraries available in type theoretic proof assistants. We evaluate calf as a general framework for cost analysis by implementing two fundamental techniques for algorithm analysis: the method of recurrence relations and the physicist's method for amortized analysis. We deploy these techniques on a variety of case studies: we prove a tight, closed bound for Euclid's algorithm, verify the amortized complexity of batched queues, and derive tight, closed bounds for the sequential and parallel complexity of merge sort, all fully mechanized in the Agda proof assistant. Lastly, we substantiate the soundness of quantitative reasoning in calf by means of a model construction.
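The core idea of treating cost as a computational effect can be pictured outside calf itself. Below is a minimal Haskell sketch (not calf's actual interface; the `step` name and the writer-monad encoding are illustrative assumptions, and the code relies on the mtl package) that charges one unit of cost per recursive call of Euclid's algorithm. As in calf's noninterference property, the returned value cannot depend on the accumulated cost, here by construction of the writer monad.

```haskell
-- A minimal sketch, not calf itself: cost as a computational effect
-- via a writer monad, in the spirit of calf's cost-charging `step`.
import Control.Monad.Writer (Writer, tell, runWriter)
import Data.Monoid (Sum (..))

-- Charge one unit of abstract cost (illustrative stand-in for calf's step).
step :: Writer (Sum Int) ()
step = tell (Sum 1)

-- Euclid's algorithm, charging one unit per recursive call (one mod each).
gcdCost :: Int -> Int -> Writer (Sum Int) Int
gcdCost a 0 = pure a
gcdCost a b = step >> gcdCost b (a `mod` b)

main :: IO ()
main = do
  -- Consecutive Fibonacci numbers realize Euclid's worst case.
  let (g, Sum c) = runWriter (gcdCost 34 21)
  putStrLn ("gcd = " ++ show g ++ ", cost = " ++ show c)  -- gcd = 1, cost = 7
```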
-
This article presents a type-based analysis for deriving upper bounds on the expected execution cost of probabilistic programs. The analysis is naturally compositional, parametric in the cost model, and supports higher-order functions and inductive data types. The derived bounds are multivariate polynomials that are functions of data structures. Bound inference is enabled by local type rules that reduce type inference to linear constraint solving. The type system is based on the potential method of amortized analysis and extends automatic amortized resource analysis (AARA) for deterministic programs. A main innovation is that bounds can contain symbolic probabilities, which may appear in data structures and function arguments. Another contribution is a novel soundness proof that establishes the correctness of the derived bounds with respect to a distribution-based operational cost semantics that also includes nontrivial diverging behavior. For cost models like time, derived bounds imply termination with probability one. To highlight the novel ideas, the presentation focuses on linear potential and a core language. However, the analysis is implemented as an extension of Resource Aware ML and supports polynomial bounds and user-defined data structures. The effectiveness of the technique is evaluated by analyzing the sample complexity of discrete distributions and with a novel average-case estimation for deterministic programs that combines expected cost analysis with statistical methods.
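To make "expected execution cost" concrete, here is a small Haskell sketch (independent of the paper's AARA implementation; `coin` and `expCostFilter` are hypothetical names) that computes the exact expected cost of a toy probabilistic list traversal. This is the quantity the paper's type system bounds statically, with the probability appearing symbolically in the bound.

```haskell
-- A minimal sketch, not the paper's analysis: directly computing the
-- expected cost that AARA-style typing would bound statically.

-- A finite distribution: (outcome, probability) pairs summing to 1.
type Dist a = [(a, Double)]

-- Biased coin: True with probability p.
coin :: Double -> Dist Bool
coin p = [(True, p), (False, 1 - p)]

-- Expected cost of a probabilistic filter: keeping an element costs
-- 2 ticks (traverse + store), dropping it costs 1 tick (traverse only).
expCostFilter :: Double -> [a] -> Double
expCostFilter p xs = sum [ prob * tick b | _ <- xs, (b, prob) <- coin p ]
  where tick True  = 2
        tick False = 1

-- An AARA-style derivation with a symbolic probability would give the
-- matching linear bound (1 + p) * length xs:
--   expCostFilter 0.5 [1..10] == 15.0
```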
-
We present metalanguages for developing synthetic cost-aware denotational semantics of programming languages. Extending recent advances by Niu et al. in cost and behavioral verification in dependent type theory, we define two successively more expressive metalanguages for studying cost-aware metatheory. We construct synthetic denotational models of the simply-typed lambda calculus and Modernized Algol, a language with first-order store and while loops, and show that they satisfy a cost-aware generalization of the classic Plotkin-type computational adequacy theorem. Moreover, by developing our proofs in a synthetic language of phase-separated constructions of intension and extension, our results easily restrict to the corresponding extensional theorems. Consequently, our work provides a positive answer to the conjecture raised in op. cit. and contributes a framework for cost-aware programming, verification, and metatheory.
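As rough intuition for a cost-aware denotational semantics and the adequacy theorem it supports, the following Haskell sketch (not the paper's metalanguage; the encoding is an illustrative assumption) denotes each term of a tiny lambda calculus as a (cost, value) pair, charging one unit per beta-step. Adequacy, in the paper's sense, relates such a denoted cost to the operational step count.

```haskell
-- A minimal sketch, not the paper's construction: a cost-aware
-- denotational semantics for a tiny lambda-calculus fragment.
data Tm = Var Int | Lam Tm | App Tm Tm | Lit Int

-- Denotations pair a cost with a value; functions return costed results.
data Val = VInt Int | VFun (Val -> (Int, Val))

-- eval env t = (cost, value); App charges 1, mirroring one beta-step.
-- (The VFun match is partial; kept so for brevity in this sketch.)
eval :: [Val] -> Tm -> (Int, Val)
eval env (Var i)   = (0, env !! i)
eval _   (Lit n)   = (0, VInt n)
eval env (Lam b)   = (0, VFun (\v -> eval (v : env) b))
eval env (App f a) =
  let (cf, VFun g) = eval env f
      (ca, va)     = eval env a
      (cb, vb)     = g va
  in (cf + ca + 1 + cb, vb)

-- Example: eval [] (App (Lam (Var 0)) (Lit 3)) == (1, VInt 3),
-- matching the single beta-step taken operationally.
```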
-
We present a probabilistic extension of the action language $\mathcal{BC}$+. Just like $\mathcal{BC}$+ is defined as a high-level notation of answer set programs for describing transition systems, the proposed language, which we call p$\mathcal{BC}$+, is defined as a high-level notation of LP$^{\text{MLN}}$ programs, a probabilistic extension of answer set programs. We show how probabilistic reasoning about transition systems, such as prediction, postdiction, and planning problems, as well as probabilistic diagnosis for dynamic domains, can be modeled in p$\mathcal{BC}$+ and computed using an implementation of LP$^{\text{MLN}}$.
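The kind of query p$\mathcal{BC}$+ answers declaratively, such as prediction, can be pictured operationally. The Haskell sketch below (not an LP$^{\text{MLN}}$ implementation; the `Trans` and `predict` names are hypothetical, and Data.Map comes from the containers package) pushes a state distribution through a sequence of probabilistic actions.

```haskell
-- A minimal sketch, not pBC+/LPMLN: prediction over a probabilistic
-- transition system, the kind of query pBC+ expresses declaratively.
import qualified Data.Map as M

type Dist s    = M.Map s Double           -- distribution over states
type Trans s a = s -> a -> [(s, Double)]  -- probabilistic action effects

-- Push a distribution through one action.
step :: Ord s => Trans s a -> a -> Dist s -> Dist s
step tr act d =
  M.fromListWith (+)
    [ (s', p * q) | (s, p) <- M.toList d, (s', q) <- tr s act ]

-- Prediction: distribution over states after executing a plan.
predict :: Ord s => Trans s a -> [a] -> Dist s -> Dist s
predict tr plan d0 = foldl (flip (step tr)) d0 plan

-- Example: a switch that toggles correctly with probability 0.9.
data Sw = On | Off deriving (Eq, Ord, Show)

toggle :: Trans Sw ()
toggle On  () = [(Off, 0.9), (On, 0.1)]
toggle Off () = [(On, 0.9), (Off, 0.1)]

-- predict toggle [(), ()] (M.fromList [(Off, 1.0)])
--   ==> fromList [(Off, 0.82), (On, 0.18)]
```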
-
Numerous quantum algorithms require the use of quantum error correction to overcome the intrinsic unreliability of physical qubits. However, quantum error correction imposes a unique performance bottleneck, known as T-complexity, that can make an implementation of an algorithm as a quantum program run more slowly than on idealized hardware. In this work, we identify that programming abstractions for control flow, such as the quantum if-statement, can introduce polynomial increases in the T-complexity of a program. If not mitigated, this slowdown can diminish the computational advantage of a quantum algorithm.

To enable reasoning about the costs of control flow, we present a cost model that a developer can use to accurately analyze the T-complexity of a program under quantum error correction and pinpoint the sources of slowdown. To enable the mitigation of these costs, we present a set of program-level optimizations that a developer can use to rewrite a program to reduce its T-complexity, predict the T-complexity of the optimized program using the cost model, and then compile it to an efficient circuit via a straightforward strategy.

We implement the program-level optimizations in Spire, an extension of the Tower quantum compiler. Using a set of 11 benchmark programs that use control flow, we empirically show that the cost model is accurate, and that Spire's optimizations recover programs that are asymptotically efficient, meaning their runtime T-complexity under error correction is equal to their time complexity on idealized hardware.

Our results show that optimizing a program before it is compiled to a circuit can yield better results than compiling the program to an inefficient circuit and then invoking a quantum circuit optimizer found in prior work. For our benchmarks, only 2 of 8 tested quantum circuit optimizers recover circuits with asymptotically efficient T-complexity. Compared to these 2 optimizers, Spire uses 54×–2400× less compile time.
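The source of the slowdown can be pictured with a toy cost model. The Haskell sketch below (not Spire's actual cost model; the figure of 7 extra T gates per added control is an illustrative assumption, roughly a Toffoli-style decomposition) treats a quantum if-statement as adding one control to every gate in its body, so nested control flow inflates a program's T-count.

```haskell
-- A minimal sketch, not Spire's cost model: a toy T-complexity model
-- in which a quantum if-statement makes its whole body controlled.
data QProg
  = Gate Int          -- primitive gate with a base T-count
  | Seq QProg QProg   -- sequential composition
  | QIf QProg QProg   -- quantum if: both branches execute, controlled

-- tCost c p: T-count of p when nested under c controls.
-- Illustrative assumption: each control on a gate adds 7 T gates.
tCost :: Int -> QProg -> Int
tCost c (Gate t)  = t + 7 * c
tCost c (Seq p q) = tCost c p + tCost c q
tCost c (QIf p q) = tCost (c + 1) p + tCost (c + 1) q

-- Each level of nesting adds a constant per gate, so n nested ifs over
-- a body of n gates pays O(n^2) T overhead: the polynomial slowdown
-- the paper identifies and Spire's optimizations aim to remove.
example :: Int
example = tCost 0 (QIf (Gate 1) (Gate 1))  -- (1+7) + (1+7) == 16
```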