Strictly serializable (linearizable) services appear to execute transactions (operations) sequentially, in an order consistent with real time. This restricts a transaction's (operation's) possible return values and, in turn, simplifies application programming. In exchange, strictly serializable (linearizable) services perform worse than those with weaker consistency, but switching to weakly consistent services can break applications. This work introduces two new consistency models that ease this trade-off: regular sequential serializability (RSS) and regular sequential consistency (RSC). They are just as strong for applications: we prove that any application invariant that holds when using a strictly serializable (linearizable) service also holds when using an RSS (RSC) service. Yet they relax the constraints on services, allowing new, better-performing designs. To demonstrate this, we design, implement, and evaluate variants of two systems, Spanner and Gryff, relaxing their consistency to RSS and RSC, respectively. The new variants achieve better read-only transaction and read tail latency than their counterparts.
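To make the real-time ordering constraint concrete, here is a minimal, brute-force linearizability check for a single register. It is only an illustration of the guarantee described in the abstract, not code from the paper or its artifact; the `Op` record and the permutation search are assumptions introduced purely for this sketch.

```python
# Illustrative sketch only: checks whether a history of register operations
# is linearizable, i.e., explainable by some total order consistent with
# real time. Not the paper's artifact; names and structure are assumptions.
from itertools import permutations
from dataclasses import dataclass

@dataclass
class Op:
    kind: str        # "write" or "read"
    value: int       # value written, or value the read returned
    invoked: float   # real-time invocation timestamp
    returned: float  # real-time response timestamp

def respects_real_time(order):
    # If a completes before b is invoked, a must precede b in the order.
    for i, a in enumerate(order):
        for b in order[i + 1:]:
            if b.returned < a.invoked:
                return False
    return True

def explains_reads(order, initial=0):
    # Replaying the order sequentially must reproduce every read's return value.
    state = initial
    for op in order:
        if op.kind == "write":
            state = op.value
        elif op.value != state:
            return False
    return True

def is_linearizable(history):
    # Brute force: some total order must respect real time and explain all reads.
    return any(respects_real_time(p) and explains_reads(p)
               for p in permutations(history))

# Example: the read overlaps the write, so returning either 0 or 1 is allowed.
history = [Op("write", 1, invoked=0.0, returned=2.0),
           Op("read", 1, invoked=1.0, returned=3.0)]
print(is_linearizable(history))  # True
```

The `respects_real_time` check is the constraint the abstract refers to: once an operation has returned, no later-invoked operation may be ordered before it, which in turn restricts the values reads may return.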
Another construction of edge-regular graphs with regular cliques
- Award ID(s): 1649807
- PAR ID: 10093891
- Date Published:
- Journal Name: Discrete Mathematics
- ISSN: 0012-365X
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation