Title: Quantum and Classical Bayesian Agents
We describe a general approach to modeling rational decision-making agents who adopt either quantum or classical mechanics based on the Quantum Bayesian (QBist) approach to quantum theory. With the additional ingredient of a scheme by which the properties of one agent may influence another, we arrive at a flexible framework for treating multiple interacting quantum and classical Bayesian agents. We present simulations in several settings to illustrate our construction: quantum and classical agents receiving signals from an exogenous source, two interacting classical agents, two interacting quantum agents, and interactions between classical and quantum agents. A consistent treatment of multiple interacting users of quantum theory may allow us to properly interpret existing multi-agent protocols and could suggest new approaches in other areas such as quantum algorithm design.
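To make the simplest setting concrete, here is a minimal sketch of a single classical Bayesian agent updating its belief on binary signals from an exogenous source. The signal model, probabilities, and names are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical signal model (not from the paper): the exogenous source is one
# of two types and emits a binary signal whose bias depends on that type.
p_signal = {"A": 0.8, "B": 0.3}   # assumed P(signal = 1 | source type)
true_type = "A"

belief_A = 0.5                    # agent's prior P(type = A)
for _ in range(20):
    s = rng.random() < p_signal[true_type]              # observe one signal
    like_A = p_signal["A"] if s else 1 - p_signal["A"]
    like_B = p_signal["B"] if s else 1 - p_signal["B"]
    # Bayes' rule: reweight the prior by the likelihood of the observed signal
    belief_A = like_A * belief_A / (like_A * belief_A + like_B * (1 - belief_A))

print(f"posterior P(type = A) after 20 signals: {belief_A:.3f}")
```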
Award ID(s): 1818914, 2116246
NSF-PAR ID: 10339349
Author(s) / Creator(s):
Date Published:
Journal Name: Quantum
Volume: 6
ISSN: 2521-327X
Page Range / eLocation ID: 713
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Modeling is a significant piece of the puzzle in achieving safety certificates for distributed IoT and cyber-physical systems. From smart home devices to connected and autonomous vehicles, modeling challenges such as dynamic membership of participants and complex interaction patterns span application domains. Modeling multiple interacting vehicles can become unwieldy and impractical as vehicles change relative positions and lanes. In this paper, we present an egocentric abstraction for succinctly modeling local interactions among an arbitrary number of agents around an ego agent. These models abstract away the detailed behavior of the other agents and ignore agents that are present but physically distant. We show that this approach can capture interesting scenarios considered in the responsibility-sensitive safety (RSS) framework for autonomous vehicles. As an illustration of how the framework can be useful for analysis, we prove the safety of several highway driving scenarios using egocentric models. The proof technique also brings to the forefront the power of a classical verification approach, namely inductive invariant assertions. We discuss possible generalizations of the analysis to other scenarios and applications.
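As a rough illustration of the kind of egocentric safety check described here, the sketch below abstracts other vehicles to their gap and speed relative to the ego and tests an RSS-style longitudinal invariant. The `Nearby` abstraction and all parameter values are assumptions for illustration, not the paper's model.

```python
from dataclasses import dataclass

@dataclass
class Nearby:
    gap: float    # free space (m) between the ego and this leading vehicle
    speed: float  # the leading vehicle's speed (m/s)

def rss_safe_gap(v_rear: float, v_front: float, rho: float = 1.0,
                 a_max: float = 3.0, b_min: float = 4.0, b_max: float = 8.0) -> float:
    """RSS longitudinal safe distance: rear car may accelerate at up to a_max
    for reaction time rho, then brakes at b_min; front car brakes at up to b_max."""
    v_resp = v_rear + rho * a_max
    d = (v_rear * rho + 0.5 * a_max * rho**2
         + v_resp**2 / (2 * b_min) - v_front**2 / (2 * b_max))
    return max(d, 0.0)

def invariant(ego_speed: float, others: list) -> bool:
    # Candidate inductive invariant: every tracked leading vehicle is at
    # least the RSS-safe distance ahead of the ego.
    return all(o.gap >= rss_safe_gap(ego_speed, o.speed) for o in others)

print(invariant(20.0, [Nearby(gap=80.0, speed=20.0)]))  # True: 80 m > ~62.6 m safe gap
```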
  2. Systems engineering processes coordinate the efforts of many individuals to design a complex system. However, the goals of the involved individuals do not necessarily align with the system-level goals. Everyone, including managers, systems engineers, subsystem engineers, component designers, and contractors, is self-interested. It is not currently understood how this discrepancy between organizational and personal goals affects the outcome of complex systems engineering processes. To answer this question, we need a systems engineering theory that accounts for human behavior. Such a theory can ideally be expressed as a dynamic hierarchical network game of incomplete information. The nodes of this network represent individual agents, and the edges represent the transfer of information and incentives. All agents decide independently on how much effort they should devote to a delegated task by maximizing their expected utility; the expectation is over their beliefs about the actions of all other individuals and the moves of nature. An essential component of such a model is the quality function, defined as the map between an agent's effort and the quality of their job outcome. In the economics literature, the quality function is assumed to be a linear function of effort with additive Gaussian noise. This simplistic assumption ignores two critical factors relevant to systems engineering: (1) the complexity of the design task, and (2) the problem-solving skills of the agent. Systems engineers establish their beliefs about these two factors through years of job experience. In this paper, we encode these beliefs in clear mathematical statements about the form of the quality function. Our approach proceeds in two steps: (1) we construct a generative stochastic model of the delegated task, and (2) we develop a reduced-order representation suitable for use in a more extensive game-theoretic model of a systems engineering process. Focusing on the early design stages of a systems engineering process, we model the design task as a function maximization problem and, thus, we associate the systems engineer's beliefs about the complexity of the task with their beliefs about the complexity of the function being maximized. Furthermore, we associate an agent's problem-solving skills with the strategy they use to solve the underlying function maximization problem. We identify two agent types: "naïve" (follows a random search strategy) and "skillful" (follows a Bayesian global optimization strategy). Through an extensive simulation study, we show that the assumption of the linear quality function is only valid for small effort levels. In general, the quality function is an increasing, concave function with derivative and curvature that depend on the problem's complexity and the agent's skills.
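The shape claim above (quality increasing and concave in effort) is easy to reproduce for the "naïve" agent type. The sketch below is a minimal stand-in assuming a smooth random objective built from random Fourier modes and pure random search; it is not the paper's generative model, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed stand-in for the delegated task: maximize an unknown smooth function
# on [0, 1], built here from a handful of random Fourier modes.
def random_task(n_modes: int = 8):
    a = rng.normal(size=n_modes)
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_modes)
    k = np.arange(1, n_modes + 1)
    return lambda x: np.sum(a * np.sin(k * np.pi * x[..., None] + b), axis=-1)

# "Naive" agent: pure random search. Effort = number of function evaluations;
# quality = best value found. Averaging over task instances traces out an
# increasing, concave quality-vs-effort curve.
for effort in (1, 2, 5, 10, 20, 50, 100):
    best = [random_task()(rng.uniform(0.0, 1.0, size=effort)).max()
            for _ in range(200)]
    print(f"effort {effort:4d}: mean quality {np.mean(best):.3f}")
```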
  3. Quantum algorithms are touted as a way around some classically intractable problems such as the simulation of quantum mechanics. At the end of all quantum algorithms is a quantum measurement whereby classical data is extracted and utilized. In fact, many of the modern hybrid quantum-classical approaches are essentially quantum measurements of states with short quantum circuit descriptions. Here, we compare and examine three methods of extracting the time-dependent one-particle probability density from a quantum simulation: direct Z-measurement, Bayesian phase estimation, and harmonic inversion. We have tested these methods in the context of the potential inversion problem of time-dependent density functional theory. Our test results suggest that direct measurement is the preferable method. We also highlight areas where the other two methods may be useful and report on tests using Rigetti's quantum virtual device. This study provides a starting point for imminent applications of quantum computing.
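As a toy version of the first method, direct Z-measurement, the sketch below samples Z-basis outcomes from a discretized wavepacket and histograms them into an estimate of the one-particle probability density. The grid, state, and shot count are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed encoding: 8 grid points (3 qubits in a first-quantized register)
# carry the wavefunction's amplitudes; a Z-basis measurement returns grid
# point i with probability |psi_i|^2.
x = np.linspace(-1.0, 1.0, 8)
psi = np.exp(-x**2 / 0.1)               # unnormalized Gaussian wavepacket
psi /= np.linalg.norm(psi)

shots = 4096
outcomes = rng.choice(len(x), size=shots, p=np.abs(psi) ** 2)
density = np.bincount(outcomes, minlength=len(x)) / shots

for xi, est, exact in zip(x, density, np.abs(psi) ** 2):
    print(f"x = {xi:+.2f}   measured {est:.3f}   exact {exact:.3f}")
```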
  4. Public goods are often either over-consumed in the absence of regulatory mechanisms or remain completely unused, as in the Covid-19 pandemic, where social distancing constraints are enforced to limit the number of people who can share public spaces. In this work, we address this gap through market-based mechanisms designed to efficiently allocate capacity-constrained public goods. To design these mechanisms, we leverage the theory of Fisher markets, wherein each agent in the economy is endowed with an artificial currency budget that they can spend on public goods. While Fisher markets provide a strong methodological backbone for modeling resource allocation problems, their applicability is limited to settings involving two types of constraints: the budgets of individual buyers and the capacities of goods. Thus, we introduce a modified Fisher market in which each individual may have additional physical constraints, characterize its solution properties, and establish the existence of a market equilibrium. Furthermore, to account for the additional constraints, we introduce a social convex optimization problem in which we perturb the budgets of agents so that the KKT conditions of the perturbed social problem establish the equilibrium prices. Finally, to compute the budget perturbations, we present a fixed-point scheme and illustrate convergence guarantees through numerical experiments. Thus, our mechanism, both theoretically and computationally, overcomes a fundamental limitation of classical Fisher markets, which consider only capacity and budget constraints.
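For context, a classical linear Fisher market (budgets and capacities only, without this paper's additional physical constraints or budget perturbations) can be solved by proportional response dynamics. The sketch below is that baseline under assumed budgets and valuations, not the paper's mechanism.

```python
import numpy as np

B = np.array([1.0, 2.0])            # assumed buyer budgets
U = np.array([[2.0, 1.0],           # U[i, j]: buyer i's value per unit of good j
              [1.0, 3.0]])

# Proportional response dynamics for a linear Fisher market with unit supplies:
# each buyer splits their budget across goods in proportion to the utility each
# good delivered in the previous round.
bids = B[:, None] * U / U.sum(axis=1, keepdims=True)   # initial spending split
for _ in range(500):
    prices = bids.sum(axis=0)                          # p_j = total spend on good j
    alloc = bids / prices                              # x_ij = b_ij / p_j
    earned = U * alloc                                 # utility buyer i gets from good j
    bids = B[:, None] * earned / earned.sum(axis=1, keepdims=True)

print("equilibrium prices:", np.round(prices, 3))
print("allocations:\n", np.round(alloc, 3))
```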
  5. This work presents a conceptual model of collective decision-making processes in engineering systems design to understand the tradeoffs, risks, and dynamics between autonomous but interacting design actors. The proposed approach combines value-driven design, game theory, and simulation experimentation to study how technical and social factors of a design decision-making process facilitate or inhibit collective action. The collective systems design model considers two levels of decision-making: 1) lower-level design value exploration; and 2) upper-level design strategy selection. At the first level, the actors concurrently explore two strategy-specific value spaces with coupled design decision variables. Each collective decision is mapped to an individual scalar measure of preference (design value) that each actor seeks to maximize. At the second level, each actor's design value from the two lower-level design exploration tasks is assigned to one diagonal entry of a normal-form game, with off-diagonal elements calculated as a function of the "sucker's" and "temptation-to-defect" payoffs in a classical strategy game scenario. The model helps generate synthetic design problems with specific strategy dynamics between autonomous actors. Results from a preliminary multi-agent simulation study assess the validity of the proposed design spaces and generate hypotheses for subsequent studies using human subjects.
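A minimal sketch of the second-level construction: the diagonal of a 2x2 normal-form game holds the design values from the two lower-level explorations, and the off-diagonal entries come from the temptation and sucker's payoffs. All numbers below are assumed for illustration.

```python
import numpy as np

# Assumed construction of the upper-level game: diagonal entries R (both
# cooperate) and P (both defect) come from the two lower-level design value
# explorations; T and S are the temptation-to-defect and sucker's payoffs.
R, P = 3.0, 1.0      # design values from the lower-level explorations (assumed)
T, S = 5.0, 0.0      # off-diagonal payoffs (assumed)

payoff_row = np.array([[R, S],       # row player's payoffs over (cooperate, defect)
                       [T, P]])

# With T > R > P > S the synthetic problem has prisoner's dilemma dynamics:
# defection dominates even though mutual cooperation is more efficient.
if T > R > P > S:
    print("prisoner's dilemma regime")
print(payoff_row)
```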