We introduce Flux, which shows how logical refinements can work hand in glove with Rust's ownership mechanisms to yield ergonomic type-based verification of low-level pointer-manipulating programs. First, we design a novel refined type system for Rust that indexes mutable locations, with pure (immutable) values that can appear in refinements, and then exploits Rust's ownership mechanisms to abstract sub-structural reasoning about locations within Rust's polymorphic type constructors, while supporting strong updates. We formalize the crucial dependency upon Rust's strong aliasing guarantees by exploiting the Stacked Borrows aliasing model to prove that well-borrowed evaluations of well-typed programs do not get stuck. Second, we implement our type system in Flux, a plug-in to the Rust compiler that exploits the factoring of complex invariants into types and refinements to efficiently synthesize loop annotations (including complex quantified invariants describing the contents of containers) via liquid inference. Third, we evaluate Flux with a benchmark suite of vector-manipulating programs and parts of a previously verified secure sandboxing library to demonstrate the advantages of refinement types over program logics as implemented in the state-of-the-art Prusti verifier. While Prusti's more expressive program logic can, in general, verify deep functional correctness specifications, for the lightweight but ubiquitous and important verification use cases covered by our benchmarks, liquid typing makes verification ergonomic by slashing specification lines by a factor of two, verification time by an order of magnitude, and annotation overhead from up to 24% of code size (average 14%) to nothing at all.
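As a flavor of the approach, here is a minimal sketch of a Flux-style strong update. The refinement signature is quoted in a comment (the syntax follows the paper and may differ from the current tool), so the snippet compiles as plain Rust:

```rust
// A minimal sketch, assuming Flux-style refinement annotations; the spec
// is quoted in comments so the snippet compiles with plain cargo.

// Flux-style spec: `x` is a strongly updatable reference indexed by the
// integer `n`, and after the call its index becomes `n + 1`:
//
//   #[flux::sig(fn(x: &strg i32[@n]) ensures x: i32[n + 1])]
fn incr(x: &mut i32) {
    *x += 1;
}

fn main() {
    let mut v = 41;
    incr(&mut v);
    assert_eq!(v, 42); // implied statically by the refined signature
}
```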
This content will become publicly available on January 7, 2026
Generic Refinement Types
We present Generic Refinement Types: a way to write modular higher-order specifications that abstract invariants over function contracts, while preserving automatic SMT-decidable verification. We show how generic refinements let us write a variety of modular higher-order specifications, including specifications for Rust's traits which abstract over the concrete refinements that hold for different trait implementations. We formalize generic refinements in a core calculus and show how to synthesize the generic instantiations algorithmically at usage sites via a combination of syntactic unification and constraint solving. We give semantics to generic refinements via the intuition that they correspond to ghost parameters, and we formalize this intuition via a type-preserving translation into the polymorphic contract calculus to establish the soundness of generic refinements. Finally, we evaluate generic refinements by implementing them in Flux and using it for two case studies. First, we show how generic refinements let us write modular specifications for Rust's vector indexing API that let us statically verify the bounds safety of a variety of vector-manipulating benchmarks from the literature. Second, we use generic refinements to refine Rust's Diesel ORM library to track the semantics of the database queries issued by client applications, and hence statically enforce data-dependent access-control policies in several database-backed web applications.
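To make the idea concrete, here is a hypothetical rendering of a generically refined indexing function. The refinement parameter `p` and the signature in the comment are invented to convey the idea; they are not the paper's concrete Flux syntax:

```rust
// A hypothetical sketch of generic refinements for vector indexing. The
// refinement parameter `p` (an integer predicate) abstracts over whatever
// invariant a caller has established for the index:
//
//   #[flux::sig(fn<p: int -> bool>(v: &Vec<i32>[@n], i: usize{p(i) && i < n}) -> i32)]
//
// Callers instantiate `p` at usage sites, so the bounds obligation
// `i < n` is discharged statically.
fn get_refined(v: &[i32], i: usize) -> i32 {
    // The dynamic check below stands in for the statically discharged one.
    v[i]
}

fn main() {
    let v = vec![10, 20, 30];
    assert_eq!(get_refined(&v, 2), 30);
}
```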
- PAR ID: 10589196
- Publisher / Repository: ACM
- Date Published:
- Journal Name: Proceedings of the ACM on Programming Languages
- Volume: 9
- Issue: POPL
- ISSN: 2475-1421
- Page Range / eLocation ID: 1446 to 1474
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Gradually typed languages are designed to support both dynamically typed and statically typed programming styles while preserving the benefits of each. Sound gradually typed languages dynamically check types at runtime at the boundary between statically typed and dynamically typed modules. However, there is much disagreement in the gradual typing literature over how to enforce complex types such as tuples, lists, functions and objects. In this paper, we propose a new perspective on the design of runtime gradual type enforcement: runtime type casts exist precisely to ensure the correctness of certain type-based refactorings and optimizations. For instance, for simple types, a language designer might desire that beta-eta equality is valid. We show that this perspective is useful by demonstrating that a cast semantics can be derived from beta-eta equality. We do this by providing an axiomatic account of program equivalence in a gradual cast calculus in a logic we call gradual type theory (GTT). Based on Levy's call-by-push-value, GTT allows us to axiomatize both call-by-value and call-by-name gradual languages. We then show that we can derive the behavior of casts for simple types from the corresponding eta equality principle and the assumption that the language satisfies a property called graduality, also known as the dynamic gradual guarantee. Since we can derive the semantics from the assumption of eta equality, we also receive a useful contrapositive: any observably different cast semantics that satisfies graduality must violate the eta equality. We show the consistency and applicability of our axiomatic theory by proving that a contract-based implementation using the lazy cast semantics gives a logical relations model of our type theory, where equivalence in GTT implies contextual equivalence of the programs. Since GTT also axiomatizes the dynamic gradual guarantee, our model also establishes this central theorem of gradual typing. The model is parameterized by the implementation of the dynamic types, and so gives a family of implementations that validate type-based optimization and the gradual guarantee.
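The boundary checks at issue can be pictured with an ordinary runtime downcast. The following Rust analogy is ours, not the paper's GTT calculus: a "dynamic" value is cast at a statically typed boundary, with cast failure playing the role of a trapped type error:

```rust
// A loose analogy in Rust (not the paper's calculus): the boundary between
// dynamically and statically typed code is mediated by a runtime cast.
use std::any::Any;

/// A "dynamically typed" value: its type is only known at runtime.
type Dyn = Box<dyn Any>;

/// Cast at the static/dynamic boundary; `Err` models a gradual type error.
fn cast_to_i64(v: Dyn) -> Result<i64, Dyn> {
    v.downcast::<i64>().map(|b| *b)
}

fn main() {
    let ok: Dyn = Box::new(7i64);
    let bad: Dyn = Box::new("seven");
    assert_eq!(cast_to_i64(ok).ok(), Some(7));
    assert!(cast_to_i64(bad).is_err()); // the cast traps, as graduality requires
}
```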
Automated verification can ensure that a web page satisfies accessibility, usability, and design properties regardless of the end user's device, preferences, and assistive technologies. However, state-of-the-art verification tools for layout properties do not scale to large pages because they rely on whole-page analyses and must reason about the entire page using the complex semantics of the browser layout algorithm. This paper introduces and formalizes modular layout proofs. A modular layout proof splits a monolithic verification problem into smaller verification problems, one for each component of a web page. Each component specification can use rely/guarantee-style preconditions to make it verifiable independently of the rest of the page, enabling reuse across multiple pages. Modular layout proofs scale verification to pages an order of magnitude larger than those supported by previous approaches. We prototyped these techniques in a new proof assistant, Troika. In Troika, a proof author partitions a page into components and writes specifications for them. Troika then verifies the specifications, and uses those specifications to verify whole-page properties. Troika also enables the proof author to verify different component specifications with different verification tools, leveraging the strengths of each. In a case study, we use Troika to verify a large web page and demonstrate a speed-up of 13--1469x over existing tools, taking verification time from hours to seconds. We develop a systematic approach to writing Troika proofs and demonstrate it on 8 proofs of properties from prior work to show that modular layout proofs are short, easy to write, and provide benefits over existing tools.
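The rely/guarantee flavor of component specifications can be sketched as follows; the `ComponentSpec` type and the height-bound check are invented for illustration and are not Troika's actual specification language:

```rust
// A sketch of rely/guarantee-style component specs, in the spirit of
// modular layout proofs (names and fields are invented).

/// A component contract: if the page grants at least `min_width` pixels
/// (the rely), the component guarantees its height is at most `max_height`.
struct ComponentSpec {
    name: &'static str,
    min_width: u32,  // precondition on the enclosing context
    max_height: u32, // guarantee usable by the whole-page proof
}

/// Whole-page check that composes component guarantees instead of
/// re-analyzing each component against full browser layout semantics.
fn page_height_bound(page_width: u32, specs: &[ComponentSpec]) -> Option<u32> {
    // Discharge each component's rely using only page-level facts.
    if let Some(s) = specs.iter().find(|s| page_width < s.min_width) {
        eprintln!("rely not met for component `{}`", s.name);
        return None;
    }
    // For vertically stacked components, the page height is bounded by
    // the sum of the individual guarantees.
    Some(specs.iter().map(|s| s.max_height).sum())
}

fn main() {
    let specs = [
        ComponentSpec { name: "header", min_width: 320, max_height: 80 },
        ComponentSpec { name: "article", min_width: 320, max_height: 2000 },
    ];
    assert_eq!(page_height_bound(1024, &specs), Some(2080));
}
```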
Conditional literals are an expressive Answer Set Programming language construct supported by the solver clingo. Their semantics are currently defined by a translation to infinitary propositional logic; however, we develop an alternative characterization with the SM operator which does not rely on grounding. This allows us to reason about the behavior of a broad class of clingo programs/encodings containing conditional literals, without referring to a particular input/instance of an encoding. We formalize the intuition that conditional literals behave as nested implications, and prove the equivalence of our semantics to those implemented by clingo.
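For example, a conditional literal `p(X) : q(X)` occurring in a rule body is satisfied when `p` holds for every `X` satisfying `q`; schematically, in our rendering rather than the paper's notation:

```latex
% Body occurrence of a conditional literal, read as a nested implication
% (our rendering, not the paper's notation):
\[
  p(X) : q(X) \quad\leadsto\quad \forall X \,\bigl(q(X) \rightarrow p(X)\bigr)
\]
```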
Probabilistic programs often trade accuracy for efficiency, and thus may, with a small probability, return an incorrect result. It is important to obtain precise bounds for the probability of these errors, but existing verification approaches have limitations that lead to error probability bounds that are excessively coarse, or only apply to first-order programs. In this paper we present Eris, a higher-order separation logic for proving error probability bounds for probabilistic programs written in an expressive higher-order language. Our key novelty is the introduction of error credits, a separation logic resource that tracks an upper bound on the probability that a program returns an erroneous result. By representing error bounds as a resource, we recover the benefits of separation logic, including compositionality, modularity, and dependency between errors and program terms, allowing for more precise specifications. Moreover, we enable novel reasoning principles such as expectation-preserving error composition, amortized error reasoning, and error induction. We illustrate the advantages of our approach by proving amortized error bounds on a range of examples, including collision probabilities in hash functions, which allow us to write more modular specifications for data structures that use them as clients. We also use our logic to prove correctness and almost-sure termination of rejection sampling algorithms. All of our results have been mechanized in the Coq proof assistant using the Iris separation logic framework and the Coquelicot real analysis library.
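The resource reading of error bounds rests on credits splitting and combining additively: sequencing two fragments whose error probabilities are bounded by ε₁ and ε₂ yields a bound of ε₁ + ε₂, a union bound internalized as a resource. Writing err(ε) for ownership of an error budget ε (our notation, not necessarily the paper's), the composition principle reads:

```latex
% Error credits split and combine additively (schematic; the notation
% err(eps) for an error budget is ours):
\[
  \mathsf{err}(\varepsilon_1) \ast \mathsf{err}(\varepsilon_2)
  \;\dashv\vdash\;
  \mathsf{err}(\varepsilon_1 + \varepsilon_2)
\]
```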