In earlier work, we introduced the framework of language-based decisions, the core idea of which is to modify Savage's classical decision-theoretic framework by taking actions to be descriptions in some language, rather than functions from states to outcomes, as they are defined classically. Actions had the form ``if $\psi$ then $\mathit{do}(\varphi)$'', where $\psi$ and $\varphi$ are formulas in some underlying language, specifying what effects would be brought about under what circumstances. The earlier work allowed only one-step actions, but in practice, plans are typically composed of a sequence of steps. Here, we extend the earlier framework to \emph{sequential} actions, making it much more broadly applicable. Our technical contribution is a representation theorem in the classical spirit: agents whose preferences over actions satisfy certain constraints can be modeled as if they are expected utility maximizers. As in the earlier work, because the actions are specified in a language, the representation theorem requires constructing not only the probability and utility functions representing the agent's beliefs and preferences, but also the state and outcome spaces over which these are defined, as well as a ``selection function'' that intuitively captures how agents disambiguate coarse descriptions. The (unbounded) depth of action sequencing adds substantial interest (and complexity!) to the proof.
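As a purely illustrative aid, the following Python sketch shows one way to read such sequential actions operationally: states are truth assignments over a toy two-atom language, a hypothetical Hamming-distance rule stands in for the selection function that the representation theorem constructs, and an action is a list of ``if $\psi$ then $\mathit{do}(\varphi)$'' steps applied in order and scored by expected utility. Everything here (the language, the closeness rule, the prior, the utility) is an assumption made for exposition, not the paper's construction.

```python
# Illustrative sketch of sequential language-based actions.
# All names and modeling choices are assumptions for exposition only.
from itertools import product

ATOMS = ["p", "q"]                    # a toy propositional language
STATES = [dict(zip(ATOMS, vals))
          for vals in product([False, True], repeat=len(ATOMS))]

def satisfies(state, formula):
    """Evaluate a formula given as a Python predicate over states."""
    return formula(state)

def select(state, formula):
    """Stand-in selection function: map `state` to the 'closest' formula-state,
    where 'closest' = fewest atoms flipped (assumes the formula is satisfiable)."""
    candidates = [s for s in STATES if satisfies(s, formula)]
    return min(candidates, key=lambda s: sum(state[a] != s[a] for a in ATOMS))

def run(state, action):
    """An action is a sequence of ('if psi', 'do(phi)') steps;
    each applicable step rewrites the state via the selection function."""
    for psi, phi in action:
        if satisfies(state, psi):
            state = select(state, phi)
    return state

def expected_utility(action, prior, utility):
    return sum(p * utility(run(s, action)) for p, s in zip(prior, STATES))

# Example: "if p then do(q); then do(not p)" under a uniform prior.
action = [(lambda s: s["p"], lambda s: s["q"]),        # if p then do(q)
          (lambda s: True,   lambda s: not s["p"])]    # then do(not p)
prior = [0.25] * len(STATES)
utility = lambda s: int(s["q"]) - int(s["p"])          # toy utility
print(expected_utility(action, prior, utility))
```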
Language-based decisions
In Savage's classic decision-theoretic framework, actions are formally defined as functions from states to outcomes. But where do the state space and outcome space come from? Expanding on recent work by Blume, Easley, and Halpern [2006], we consider a language-based framework in which actions are identified with (conditional) descriptions in a simple underlying language, while states and outcomes (along with probabilities and utilities) are constructed as part of a representation theorem. Our work expands the role of language beyond that of Blume, Easley, and Halpern by using it not only for the conditions that determine which actions are taken, but also for their effects. More precisely, we take the set of actions to be built from those of the form $\mathit{do}(\varphi)$, for formulas $\varphi$ in the underlying language. This presents a problem: how do we interpret the result of $\mathit{do}(\varphi)$ when $\varphi$ is underspecified (i.e., compatible with multiple states)? We answer this using tools familiar from the semantics of counterfactuals; roughly speaking, $\mathit{do}(\varphi)$ maps each state to the ``closest'' $\varphi$-state. This notion of ``closest'' is also something we construct as part of the representation theorem; in effect, then, we prove that (under appropriate assumptions) the agent acts as if each underspecified action is first made definite and then evaluated (i.e., by maximizing expected utility). Of course, actions in the real world are often not presented in a fully precise manner, yet agents reason about and form preferences among them all the same. Our work brings the abstract tools of decision theory into closer contact with such real-world scenarios.
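To fix intuition, here is a schematic rendering (in our own notation, which need not match the paper's) of the form such a representation takes: the theorem constructs a state space $S$, a probability $\Pr$, a utility $u$, and a selection function $\mathit{sel}$, and preferences among actions compare expected utilities of the selected results:
\[
a \succeq a' \iff \sum_{s\in S}\Pr(s)\,u\bigl(\mathit{sel}(s,\varphi_{a}(s))\bigr) \;\ge\; \sum_{s\in S}\Pr(s)\,u\bigl(\mathit{sel}(s,\varphi_{a'}(s))\bigr),
\]
where $\varphi_{a}(s)$ denotes the formula that action $a$ prescribes at state $s$ and $\mathit{sel}(s,\varphi)$ is the ``closest'' $\varphi$-state to $s$; for brevity this sketch conflates the selected state with the outcome that $u$ evaluates.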
- Award ID(s): 1703846
- PAR ID: 10238280
- Date Published:
- Journal Name: Proceedings of Eighteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK)
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- The possibility of achieving and controlling scalable classically entangled, i.e., inseparable, multipartite states would fundamentally challenge the advantages of quantum systems in harnessing the power of complexity in information science. Here, we investigate experimentally the extent of classical entanglement in a 16-acoustic-qubit-analogue platform. The acoustic qubit-analogue, a.k.a. logical phi-bit, results from the spectral partitioning of the nonlinear acoustic field of externally driven coupled waveguides. Each logical phi-bit is a two-level subsystem characterized by two independently measurable phases. The phi-bits are co-located within the same physical space, enabling distance-independent interactions. We chose a vector state representation of the 16-phi-bit system, which lies in a $$2^{16}$$-dimensional Hilbert space. The calculation of the entropy of entanglement demonstrates the possibility of achieving inseparability of the vector state and of navigating the corresponding Hilbert space. This work suggests a new direction in harnessing the complexity of classical inseparability in information science. (A brief numerical sketch of an entanglement-entropy computation appears after this list.)
- In multi-agent domains (MADs), an agent's action may not just change the world and the agent's knowledge and beliefs about the world, but may also change other agents' knowledge and beliefs about the world, as well as their knowledge and beliefs about other agents' knowledge and beliefs about the world. The goals of an agent in a multi-agent world may involve manipulating the knowledge and beliefs of other agents, and again, not just their knowledge/beliefs about the world, but also their knowledge about other agents' knowledge about the world. Our goal is to present an action language (mA+) that has the necessary features to address the above aspects of representing and reasoning about actions and change (RAC) in MADs. mA+ allows the representation of, and reasoning about, different types of actions that an agent can perform in a domain where many other agents might be present, such as world-altering actions, sensing actions, and announcement/communication actions. It also allows the specification of agents' dynamic awareness of action occurrences, which has implications for what agents know about the world and about other agents' knowledge of the world. mA+ considers three types of awareness of an action occurrence and its effects: full awareness, partial awareness, and complete oblivion. This keeps the language simple, yet powerful enough to address a large variety of knowledge-manipulation scenarios in MADs. The semantics of mA+ relies on the notion of a state, described by a pointed Kripke model, which encodes the agent's knowledge and the real state of the world; the semantics is defined by a transition function that maps pairs of actions and states into sets of states. We illustrate properties of the action theories, including properties that guarantee finiteness of the set of initial states and their practical implementability. Finally, we relate mA+ to other related formalisms that contribute to RAC in MADs. (A simplified Kripke-update sketch appears after this list.)
- The classic graphical Cheeger inequalities state that if $$M$$ is an $$n\times n$$ \emph{symmetric} doubly stochastic matrix, then \[ \frac{1-\lambda_{2}(M)}{2}\leq\phi(M)\leq\sqrt{2\cdot(1-\lambda_{2}(M))} \] where $$\phi(M)=\min_{S\subseteq[n],|S|\leq n/2}\left(\frac{1}{|S|}\sum_{i\in S,j\not\in S}M_{i,j}\right)$$ is the edge expansion of $$M$$, and $$\lambda_{2}(M)$$ is the second largest eigenvalue of $$M$$. We study the relationship between $$\phi(A)$$ and the spectral gap $$1-\re\lambda_{2}(A)$$ for \emph{any} doubly stochastic matrix $$A$$ (not necessarily symmetric), where $$\lambda_{2}(A)$$ is a nontrivial eigenvalue of $$A$$ with maximum real part. Fiedler showed that the upper bound on $$\phi(A)$$ is unaffected, i.e., $$\phi(A)\leq\sqrt{2\cdot(1-\re\lambda_{2}(A))}$$. With regards to the lower bound on $$\phi(A)$$, there are known constructions with \[ \phi(A)\in\Theta\left(\frac{1-\re\lambda_{2}(A)}{\log n}\right), \] indicating that at least a mild dependence on $$n$$ is necessary to lower bound $$\phi(A)$$. In our first result, we provide an \emph{exponentially} better construction of $$n\times n$$ doubly stochastic matrices $$A_{n}$$, for which \[ \phi(A_{n})\leq\frac{1-\re\lambda_{2}(A_{n})}{\sqrt{n}}. \] In fact, \emph{all} nontrivial eigenvalues of our matrices are $$0$$, even though the matrices are highly \emph{nonexpanding}. We further show that this bound is in the correct range (up to the exponent of $$n$$), by showing that for any doubly stochastic matrix $$A$$, \[ \phi(A)\geq\frac{1-\re\lambda_{2}(A)}{35\cdot n}. \] As a consequence, unlike the symmetric case, there is a (necessary) loss of a factor of $$n^{\alpha}$$ for $$\frac{1}{2}\leq\alpha\leq1$$ in lower bounding $$\phi$$ by the spectral gap in the nonsymmetric setting. Our second result extends these bounds to general matrices $$R$$ with nonnegative entries, to obtain a two-sided \emph{gapped} refinement of the Perron-Frobenius theorem. Recall from the Perron-Frobenius theorem that for such $$R$$, there is a nonnegative eigenvalue $$r$$ such that all eigenvalues of $$R$$ lie within the closed disk of radius $$r$$ about $$0$$. Further, if $$R$$ is irreducible, which means $$\phi(R)>0$$ (for suitably defined $$\phi$$), then $$r$$ is positive and all other eigenvalues lie within the \textit{open} disk, so (with eigenvalues sorted by real part), $$\re\lambda_{2}(R)<r$$. (A small numerical check of the symmetric Cheeger bounds appears after this list.)
- Learning node representations for networks has attracted much attention recently due to its effectiveness in a variety of applications. This paper focuses on learning node representations for heterogeneous star networks, which have a center node type linked with multiple attribute node types through different types of edges. In heterogeneous star networks, we observe that the training order of different types of edges affects the learning performance significantly. Therefore we study learning curricula for node representation learning in heterogeneous star networks, i.e., learning an optimal sequence of edges of different types for the node representation learning process. We formulate the problem as a Markov decision process, with the action as selecting a specific type of edges for learning or terminating the training process, and the state as the sequence of edge types selected so far. The reward is calculated as the performance on external tasks with node representations as features, and the goal is to take a series of actions to maximize the cumulative rewards. We propose an approach based on deep reinforcement learning for this problem. Our approach leverages LSTM models to encode states and further estimate the expected cumulative reward of each state-action pair, which essentially measures the long-term performance of different actions at each state. Experimental results on real-world heterogeneous star networks demonstrate the effectiveness and efficiency of our approach over competitive baseline approaches. (A simplified sketch of this curriculum MDP appears after this list.)
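For the first related record above (the acoustic phi-bit platform), the following is a minimal numpy sketch of how an entropy of entanglement can be computed for a bipartition of a normalized vector state; the demo uses 4 rather than 16 subsystems, and none of this is the authors' analysis code.

```python
# Illustrative sketch (not the paper's analysis pipeline): entanglement entropy
# of a bipartition of a normalized pure vector state.
import numpy as np

def entanglement_entropy(state, n_subsystems, cut):
    """Entropy of entanglement across the bipartition {0..cut-1} | {cut..n-1}
    for a normalized state vector of n two-level subsystems (phi-bits)."""
    psi = state.reshape(2**cut, 2**(n_subsystems - cut))
    schmidt = np.linalg.svd(psi, compute_uv=False)   # Schmidt coefficients
    p = schmidt**2
    p = p[p > 1e-12]                                 # drop numerical zeros
    return float(-np.sum(p * np.log2(p)))

# Demo with 4 phi-bits (the experiment uses 16, i.e. a 2**16-dimensional space).
n = 4
rng = np.random.default_rng(0)
state = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
state /= np.linalg.norm(state)
print(entanglement_entropy(state, n, cut=2))   # 0 for product states, >0 otherwise
```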
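For the mA+ record above, the following is a drastically simplified Python sketch of the kind of objects its semantics manipulates: a pointed Kripke structure and an announcement-style update distinguishing fully aware from oblivious agents. This is an assumption-laden toy, not mA+'s actual transition function (which also covers world-altering and sensing actions and partial awareness).

```python
# Drastically simplified sketch (not mA+'s actual semantics): a pointed Kripke
# structure and an announcement-style update with aware vs oblivious agents.
from dataclasses import dataclass

@dataclass
class PointedKripke:
    worlds: set   # world names
    val: dict     # world -> set of true atoms
    rel: dict     # agent -> set of (w, v) accessibility pairs
    real: str     # the designated (actual) world

def announce(model, atom, aware):
    """Agents in `aware` learn that `atom` holds; everyone else is oblivious.
    Aware agents keep only accessibility edges into atom-worlds."""
    new_rel = {}
    for agent, edges in model.rel.items():
        if agent in aware:
            new_rel[agent] = {(w, v) for (w, v) in edges if atom in model.val[v]}
        else:
            new_rel[agent] = set(edges)          # oblivious: beliefs unchanged
    return PointedKripke(model.worlds, model.val, new_rel, model.real)

def knows(model, agent, atom):
    """Agent knows atom iff it holds at every world accessible from the real world."""
    accessible = {v for (w, v) in model.rel[agent] if w == model.real}
    return all(atom in model.val[v] for v in accessible)

# Two worlds differing on "p"; both agents initially consider both possible.
m = PointedKripke(
    worlds={"w1", "w2"},
    val={"w1": {"p"}, "w2": set()},
    rel={"a": {("w1", "w1"), ("w1", "w2"), ("w2", "w1"), ("w2", "w2")},
         "b": {("w1", "w1"), ("w1", "w2"), ("w2", "w1"), ("w2", "w2")}},
    real="w1",
)
m2 = announce(m, "p", aware={"a"})
print(knows(m2, "a", "p"), knows(m2, "b", "p"))   # True False
```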
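For the edge-expansion record above, here is a small numerical illustration (ours, not the paper's) of $$\phi(A)$$ versus the spectral gap $$1-\re\lambda_{2}(A)$$, checking the symmetric Cheeger bounds on a lazy cycle walk by brute-force enumeration of cuts.

```python
# Numerical illustration of edge expansion phi(A) versus the spectral gap
# 1 - Re(lambda_2) for a small symmetric doubly stochastic matrix.
import itertools
import numpy as np

def edge_expansion(A):
    """phi(A) = min over S with |S| <= n/2 of (1/|S|) * sum_{i in S, j not in S} A[i, j]."""
    n = A.shape[0]
    best = np.inf
    for size in range(1, n // 2 + 1):
        for S in itertools.combinations(range(n), size):
            comp = [j for j in range(n) if j not in S]
            best = min(best, A[np.ix_(S, comp)].sum() / size)
    return best

def spectral_gap(A):
    """1 - Re(lambda_2), where lambda_2 has the largest real part among nontrivial eigenvalues."""
    eigs = sorted(np.linalg.eigvals(A), key=lambda z: z.real, reverse=True)
    return 1.0 - eigs[1].real            # eigs[0] is the trivial eigenvalue 1

# A symmetric doubly stochastic example: the lazy random walk on a 6-cycle.
n = 6
A = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1))
phi, gap = edge_expansion(A), spectral_gap(A)
print(phi, gap / 2, np.sqrt(2 * gap))    # Cheeger: gap/2 <= phi <= sqrt(2*gap)
```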
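For the last record above (curriculum learning on heterogeneous star networks), here is a simplified tabular sketch of the underlying MDP: states are the sequence of edge types chosen so far, actions choose the next edge type or stop, and the reward is downstream-task performance. The edge-type names and the stubbed random reward are placeholders, and a tabular Monte Carlo update with a terminal-only reward stands in for the paper's LSTM-based estimate of expected cumulative reward.

```python
# Simplified sketch (tabular, not the paper's LSTM-based estimator) of the
# curriculum MDP over edge-type sequences.
import random

EDGE_TYPES = ["author-paper", "paper-venue", "paper-keyword"]   # illustrative names
ACTIONS = EDGE_TYPES + ["STOP"]
MAX_STEPS = 4

def external_task_reward(curriculum):
    """Placeholder: the real system trains node embeddings with this edge-type
    order and evaluates them on a downstream task."""
    return random.random()

Q = {}                                            # (state tuple, action) -> value
def q(state, action): return Q.get((state, action), 0.0)

def run_episode(eps=0.2, alpha=0.5):
    state, trajectory = (), []
    while len(state) < MAX_STEPS:
        action = (random.choice(ACTIONS) if random.random() < eps
                  else max(ACTIONS, key=lambda a: q(state, a)))
        trajectory.append((state, action))
        if action == "STOP":
            break
        state = state + (action,)
    reward = external_task_reward(state)          # terminal reward only (simplification)
    for s, a in trajectory:                       # Monte Carlo backup of the return
        Q[(s, a)] = q(s, a) + alpha * (reward - q(s, a))
    return state, reward

for _ in range(200):
    run_episode()
best = max(Q, key=Q.get)
print("highest-value (state, action) pair:", best)
```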