

This content will become publicly available on August 2, 2025

Title: Testing Randomness by Matching Pennies
In the game of Matching Pennies, Alice and Bob each hold a penny, and at every tick of the clock they simultaneously display the head or the tail sides of their coins. If they both display the same side, then Alice wins Bob's penny; if they display different sides, then Bob wins Alice's penny. To avoid giving the opponent a chance to win, both players seem to have nothing else to do but to randomly play heads and tails with equal frequencies. However, while not losing in this game is easy, not missing an opportunity to win is not. Randomizing your own moves can be made easy. Recognizing when the opponent's moves are not random can be arbitrarily hard. The notion of randomness is central in game theory, but it is usually taken for granted. The notion of outsmarting is not central in game theory, but it is central in the practice of gaming. We pursue the idea that these two notions can be usefully viewed as two sides of the same coin. The resulting analysis suggests that the methods for strategizing in gaming and security, and for randomizing in computation, can be leveraged against each other. 2010 Mathematics Subject Classification. 03D32, 91A26, 68Q32.
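The abstract's central contrast — uniform random play is safe, while exploiting a non-random opponent is where the winning happens — can be illustrated with a minimal simulation. This sketch is not from the paper; the strategy names and the 70% bias are invented for illustration.

```python
import random

def play_round(alice_heads: bool, bob_heads: bool) -> int:
    """Alice's payoff in Matching Pennies: +1 if the sides match, -1 otherwise."""
    return 1 if alice_heads == bob_heads else -1

def average_payoff(alice_strategy, bob_strategy, rounds: int, seed: int) -> float:
    """Alice's empirical per-round payoff over many independent rounds."""
    rng = random.Random(seed)
    total = sum(play_round(alice_strategy(rng), bob_strategy(rng))
                for _ in range(rounds))
    return total / rounds

# Illustrative strategies (not from the paper):
uniform = lambda rng: rng.random() < 0.5   # safe: heads with probability 1/2
biased = lambda rng: rng.random() < 0.7    # exploitable: heads 70% of the time
always_heads = lambda rng: True            # Alice's best response to that bias

# Against a uniform opponent, every strategy earns ~0 per round ...
print(average_payoff(always_heads, uniform, 100_000, seed=1))
# ... but against the biased Bob, always playing heads earns ~0.4 per round.
print(average_payoff(always_heads, biased, 100_000, seed=1))
```

The hard part the abstract points at is the converse: deciding from observed play whether the opponent is biased at all, which can be arbitrarily difficult.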
Award ID(s):
1662487
PAR ID:
10566753
Author(s) / Creator(s):
Publisher / Repository:
Mathematical Reviews and Zentralblatt MATH
Date Published:
Journal Name:
Sarajevo Journal of Mathematics
Volume:
20
Issue:
1
ISSN:
1840-0655
Page Range / eLocation ID:
25 to 45
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Boosting engagement with educational software has been promoted as a means of improving student performance. Various engagement factors have been explored, including choice, personalization, badges, bonuses, and competition. We examine two promising and relatively understudied manipulations from the realm of gambling: the near-win effect and anticipation. The near-win effect occurs when an individual comes close to achieving a goal, e.g., getting two cherries and a lemon in a slot machine. Anticipation refers to the build-up of suspense as an outcome is revealed, e.g., revealing cherry-cherry-lemon in that order drives expectations of winning more than revealing lemon-cherry-cherry. Gambling psychologists have long studied how near-wins affect engagement in pure-chance games, but it is difficult to do the same in an educational context where outcomes are based on skill. In this paper, we manipulate the display of outcomes in a manner that allows us to introduce artificial near-wins largely independent of a student's performance. In a study involving thousands of students using an online math tutor, we examine how this manipulation affects a behavioral measure of engagement: whether or not a student repeats a lesson. We find a near-win effect on engagement when the 'win' indicates to the student that they have attained critical competence on a lesson, i.e., the competence that allows them to continue to the next lesson. Nonetheless, when we experimentally induce near-wins in a randomized controlled trial, we do not obtain a reliable effect of the near-win. We discuss this mismatch of results in terms of the role of anticipation in making near-wins effective. We conclude by describing manipulations that might increase the effect of near-wins on engagement.
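The key device in the abstract — a display layer that injects artificial near-wins largely independently of actual performance — can be sketched as follows. This is a hypothetical mapping invented for illustration; the symbol names, probabilities, and function signature are not from the study.

```python
import random

SYMBOLS = ["cherry", "lemon", "bell"]

def display_outcome(passed: bool, rng: random.Random, near_win_prob: float = 0.5):
    """Map a pass/fail lesson outcome to a slot-machine-style display.

    Hypothetical sketch, not the study's actual implementation. A pass is
    always shown as three matching symbols. A fail is shown, with
    probability `near_win_prob`, as an artificial near-win: two matching
    symbols revealed first and a mismatched third revealed last, so the
    display (and its anticipation) is decoupled from actual performance.
    """
    win_symbol = rng.choice(SYMBOLS)
    if passed:
        return [win_symbol] * 3
    other = rng.choice([s for s in SYMBOLS if s != win_symbol])
    if rng.random() < near_win_prob:
        return [win_symbol, win_symbol, other]   # near-win: the miss comes last
    return [other, win_symbol, other]            # ordinary, unambiguous loss

rng = random.Random(0)
print(display_outcome(True, rng))                          # three matching symbols
print(display_outcome(False, rng, near_win_prob=1.0))      # two matches, then a miss
```

Revealing the mismatched symbol last is what ties the near-win to anticipation, the second manipulation the abstract discusses.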
  2. Guruswami, Venkatesan (Ed.)
    We present novel lower bounds in the Merlin-Arthur (MA) communication model and the related annotated streaming or stream verification model. The MA communication model extends the classical communication model by introducing an all-powerful but untrusted player, Merlin, who knows the inputs of the usual players, Alice and Bob, and attempts to convince them about the output. We focus on the online MA (OMA) model where Alice and Merlin each send a single message to Bob, who needs to catch Merlin if he is dishonest and announce the correct output otherwise. Most known functions have OMA protocols with total communication significantly smaller than what would be needed without Merlin. In this work, we introduce the notion of non-trivial-OMA complexity of a function. This is the minimum total communication required when we restrict ourselves to only non-trivial protocols where Alice sends Bob fewer bits than what she would have sent without Merlin. We exhibit the first explicit functions that have this complexity superlinear - even exponential - in their classical one-way complexity: this means the trivial protocol, where Merlin communicates nothing and Alice and Bob compute the function on their own, is exponentially better than any non-trivial protocol in terms of total communication. These OMA lower bounds also translate to the annotated streaming model, the MA analogue of single-pass data streaming. We show large separations between the classical streaming complexity and the non-trivial annotated streaming complexity (for the analogous notion in this setting) of fundamental problems such as counting distinct items, as well as of graph problems such as connectivity and k-connectivity in a certain edge update model called the support graph turnstile model that we introduce here. 
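To make the claimed savings concrete, here is a sketch of the classic online-MA protocol for the INDEX problem (Alice holds a bit string x, Bob holds an index i), drawn from the annotated-streaming literature rather than from this paper. It assumes Alice and Bob share randomness hidden from Merlin. Alice fingerprints ~√n blocks of x, Merlin sends the one block containing position i, and Bob checks the fingerprint: total communication Õ(√n), versus Θ(n) for classical one-way communication.

```python
import math, random

P = (1 << 61) - 1  # large prime modulus for the fingerprints

def fingerprint(bits, r):
    """Polynomial fingerprint sum(b_j * r^j) mod P of a bit block."""
    acc, power = 0, 1
    for b in bits:
        acc = (acc + b * power) % P
        power = (power * r) % P
    return acc

def split_blocks(x):
    k = math.isqrt(len(x))  # block size ~ sqrt(n)
    return [x[j:j + k] for j in range(0, len(x), k)]

def alice_message(x, r):
    # ~sqrt(n) fingerprints of O(log n) bits each: total ~sqrt(n) log n bits.
    return [fingerprint(block, r) for block in split_blocks(x)]

def merlin_message(x, i):
    blocks = split_blocks(x)
    return blocks[i // len(blocks[0])]  # the (possibly falsified) block with x[i]

def bob_decide(i, n, alice_fps, merlin_block, r):
    k = math.isqrt(n)
    if fingerprint(merlin_block, r) != alice_fps[i // k]:
        return "reject"  # Merlin caught lying about the block's contents
    return merlin_block[i % k]

# Honest run on a toy input (n = 9, blocks of size 3).
x = [1, 0, 1, 1, 0, 1, 0, 0, 1]
r = random.Random(7).randrange(1, P)
print(bob_decide(4, len(x), alice_message(x, r), merlin_message(x, 4), r))  # x[4] == 0
```

A Merlin who alters any bit of the block shifts the fingerprint by a nonzero power of r mod P and is rejected. The paper's point is that for some functions no such non-trivial protocol beats the trivial one where Merlin stays silent.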
  3. This paper studies language-based opacity enforcement in a two-player, zero-sum game on a graph. In this game, player 1 (P1) wins if he can achieve a secret temporal goal described by the language of a finite automaton, no matter what strategy the opponent player 2 (P2) selects. In addition, P1 aims to win while making its goal opaque to a passive observer with imperfect information. However, P2 colludes with the observer to reveal P1's secret whenever P2 cannot prevent P1 from achieving its goal, and therefore, opacity must be enforced against P2. We show that a winning and opacity-enforcing strategy for P1 can be computed by reducing the problem to solving a reachability game augmented with the observer's belief states. Furthermore, if such a strategy does not exist, winning for P1 must entail the price of revealing his secret to the observer. We demonstrate our game-theoretic solution of opacity-enforcement control through a small illustrative example and in a robot motion planning problem. 
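The reduction described above bottoms out in solving a reachability game, which is done by the standard backward fixed-point (attractor) computation. The toy version below omits the observer's belief-state augmentation from the paper; the graph and node names are invented for illustration.

```python
def reachability_winning_region(nodes_p1, nodes_p2, edges, targets):
    """P1's winning region in a turn-based two-player reachability game.

    A node is winning for P1 if it is a target, if P1 moves there and
    *some* successor is winning, or if P2 moves there and *all*
    successors are winning (so P2 cannot avoid the target).
    """
    succ = {v: [] for v in nodes_p1 | nodes_p2}
    for u, v in edges:
        succ[u].append(v)
    winning = set(targets)
    changed = True
    while changed:  # iterate to a fixed point
        changed = False
        for v in (nodes_p1 | nodes_p2) - winning:
            succs = succ[v]
            if v in nodes_p1 and any(s in winning for s in succs):
                winning.add(v); changed = True
            elif v in nodes_p2 and succs and all(s in winning for s in succs):
                winning.add(v); changed = True
    return winning

# Toy game: P1 owns a and c, P2 owns b, and P1 wants to reach d.
p1, p2 = {"a", "c"}, {"b"}
edges = [("a", "b"), ("b", "c"), ("b", "d"), ("c", "d")]
print(reachability_winning_region(p1, p2, edges, {"d"}))
```

Here both of b's successors lead to the target, so P2's node b is winning for P1 and the whole game is won from every node. In the paper's construction the nodes would additionally carry the observer's belief, so that P1's strategy stays winning *and* opaque.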
  4. We introduce a graph Ramsey game called Ramsey, Paper, Scissors. This game has two players, Proposer and Decider. Starting from an empty graph on n vertices, on each turn Proposer proposes a potential edge and Decider simultaneously decides (without knowing Proposer's choice) whether to add it to the graph. Proposer cannot propose an edge which would create a triangle in the graph. The game ends when Proposer has no legal moves remaining, and Proposer wins if the final graph has independence number at least s. We prove that a threshold phenomenon exists for this game by exhibiting randomized strategies for both players that are optimal up to constants. Namely, there exist constants 0 < A < B such that (under optimal play) Proposer wins with high probability if s is below the threshold determined by A, while Decider wins with high probability if s is above the threshold determined by B. This threshold exceeds the lower bound coming from the off-diagonal Ramsey number r(3, s).
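The rules of the game are simple enough to simulate directly. The sketch below plays Ramsey, Paper, Scissors with the simplest randomized strategies — Proposer proposes a uniformly random legal edge, Decider flips a fair coin — which are for illustration only, not the optimal strategies constructed in the paper.

```python
import itertools, random

def creates_triangle(edges, u, v):
    """Would adding edge (u, v) close a triangle in the accepted graph?"""
    return any({u, w} in edges and {v, w} in edges
               for w in set().union(*edges)) if edges else False

def play_rps(n, seed):
    """One play-through with uniform proposals and a fair-coin Decider."""
    rng = random.Random(seed)
    accepted = []  # accepted edges, as frozensets {u, v}
    remaining = [frozenset(p) for p in itertools.combinations(range(n), 2)]
    while True:
        legal = [e for e in remaining
                 if not creates_triangle(accepted, *tuple(e))]
        if not legal:           # Proposer has no legal moves: game over
            return accepted
        e = rng.choice(legal)   # Proposer: uniform legal proposal
        remaining.remove(e)     # each edge may be proposed only once
        if rng.random() < 0.5:  # Decider: fair coin, chosen blindly
            accepted.append(e)

def independence_number(n, edges):
    """Brute-force independence number; fine for small n."""
    for k in range(n, 0, -1):
        for s in itertools.combinations(range(n), k):
            if not any(e <= frozenset(s) for e in edges):
                return k
    return 0

graph = play_rps(6, seed=3)
print(len(graph), independence_number(6, graph))
```

The final graph is always triangle-free by construction, so with n = 6 its independence number is at least 3 (since r(3, 3) = 6); Proposer's goal is to force it higher, Decider's to keep it low.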
  5. We experimentally demonstrate that we can detect correlated errors in a twin-field quantum key distribution (TFQKD) system by using a technique that is related to self-consistent tomography. We implement a TFQKD system based on a fiber-Sagnac loop, in which Alice and Bob encode information in the phase of weak coherent states that propagate in opposite directions around the loop. These states interfere as they exit the loop and are detected by a third party, Charlie, who reports the results of their measurements to Alice and Bob. We find that it is possible for Alice and Bob to detect correlated state-preparation and measurement errors while trusting only their own individual states, and without trusting Charlie’s measurements. 
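The interference at the heart of the scheme can be captured by an idealized (lossless, noiseless) model: when the two counter-propagating weak pulses recombine at the loop's beamsplitter, which of Charlie's detectors clicks depends only on the phase difference Alice and Bob encoded. This is a textbook interference sketch for illustration, not the paper's error-detection technique, and the detector labels are invented.

```python
import math

def detector_probs(phi_a, phi_b):
    """Ideal two-path interference: probability that a click lands on
    detector D0 vs. D1, as a function of the encoded phase difference.
    Idealized model only: no loss, no noise, no state-preparation error."""
    delta = phi_a - phi_b
    p0 = math.cos(delta / 2) ** 2
    return p0, 1.0 - p0

# Matching phases interfere constructively into D0 ...
print(detector_probs(0.0, 0.0))        # (1.0, 0.0)
# ... a pi phase difference sends the click to D1 ...
print(detector_probs(math.pi, 0.0))
# ... and a pi/2 difference gives an inconclusive 50/50 split.
print(detector_probs(math.pi / 2, 0.0))
```

The correlated state-preparation and measurement errors the experiment detects show up as systematic deviations from this ideal curve, which is what makes them visible to Alice and Bob without trusting Charlie.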