

Title: Scalable Reinforcement Learning for Multiagent Networked Systems
We study reinforcement learning (RL) in a setting with a network of agents whose states and actions interact in a local manner, and where the objective is to find localized policies such that the (discounted) global reward is maximized. A fundamental challenge in this setting is that the state-action space size scales exponentially in the number of agents, rendering the problem intractable for large networks. In this paper, we propose a scalable actor critic (SAC) framework that exploits the network structure and finds a localized policy that is an $O(\rho^{\kappa+1})$-approximation of a stationary point of the objective for some $\rho \in (0,1)$, with complexity that scales with the local state-action space size of the largest $\kappa$-hop neighborhood of the network. We illustrate our model and approach using examples from wireless communication, epidemics, and traffic.
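To make the $\kappa$-hop idea concrete, here is a minimal sketch of a truncated actor-critic loop on a toy line-graph "epidemic" chain: each agent keeps a localized softmax policy over its own state and a tabular Q-estimate indexed only by its $\kappa$-hop neighborhood's states and actions, so per-agent storage grows with the neighborhood rather than with the full network. The environment, $\kappa = 1$, and all constants are illustrative assumptions, not the paper's algorithm or experiments.

```python
# Sketch of a truncated actor-critic on a line graph of N agents.
from collections import defaultdict
import numpy as np

rng = np.random.default_rng(0)
N, KAPPA, GAMMA, ALPHA, ETA = 6, 1, 0.9, 0.1, 0.05

def nbhd(i):
    # kappa-hop neighborhood of agent i on the line graph (including i)
    return list(range(max(0, i - KAPPA), min(N, i + KAPPA + 1)))

theta = np.zeros((N, 2, 2))                 # localized policy logits: [agent, s_i, a_i]
Q = [defaultdict(float) for _ in range(N)]  # truncated Q_i over neighborhood (s, a)

def policy(i, s_i):
    p = np.exp(theta[i, s_i])
    return p / p.sum()

def act(s):
    return np.array([rng.choice(2, p=policy(i, s[i])) for i in range(N)])

def step(s, a):
    # toy local transition: "infection pressure" from neighbors, mitigated by acting
    nxt = np.empty_like(s)
    for i in range(N):
        p_bad = 0.1 + 0.7 * s[nbhd(i)].mean() * (1 - a[i])
        nxt[i] = int(rng.random() < p_bad)
    return nxt, (1 - s) - 0.2 * a           # local reward: healthy minus action cost

def key(i, s, a):
    idx = nbhd(i)
    return (tuple(s[idx]), tuple(a[idx]))

s = rng.integers(0, 2, N)
a = act(s)
for t in range(20000):
    s2, r = step(s, a)
    a2 = act(s2)
    for i in range(N):
        # critic: on-policy TD update of the truncated Q-function
        k, k2 = key(i, s, a), key(i, s2, a2)
        Q[i][k] += ALPHA * (r[i] + GAMMA * Q[i][k2] - Q[i][k])
        # actor: grad log pi_i weighted by the sum of kappa-hop neighbors' Q-estimates
        g = sum(Q[j][key(j, s, a)] for j in nbhd(i))
        theta[i, s[i]] += ETA * (np.eye(2)[a[i]] - policy(i, s[i])) * g
    s, a = s2, a2

print("action prob when sick:", np.round([policy(i, 1)[1] for i in range(N)], 2))
```

With $\kappa = 1$ each agent's table has at most $2^3 \cdot 2^3 = 64$ entries regardless of $N$, which is where the scalability comes from.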
Award ID(s):
2105648 2146814 2136197 2106403 1637598 2038603
PAR ID:
10324690
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Operations Research
ISSN:
0030-364X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We study multi-agent reinforcement learning (MARL) in a stochastic network of agents. The objective is to find localized policies that maximize the (discounted) global reward. In general, scalability is a challenge in this setting because the size of the global state/action space can be exponential in the number of agents. Scalable algorithms are only known in cases where dependencies are static, fixed, and local, e.g., between neighbors in a fixed, time-invariant underlying graph. In this work, we propose a Scalable Actor Critic framework that applies in settings where the dependencies can be non-local and stochastic, and we provide a finite-time error bound that shows how the convergence rate depends on the speed of information spread in the network. Additionally, as a byproduct of our analysis, we obtain novel finite-time convergence results for a general stochastic approximation scheme and for temporal difference learning with state aggregation, which apply beyond the setting of MARL in networked systems (a sketch of TD learning with state aggregation appears after this list).
  2. We address the following natural extension problem for group actions: Given a group $G$, a subgroup $H \le G$, and an action of $H$ on a metric space, when is it possible to extend it to an action of the whole group $G$ on a (possibly different) metric space? When does such an extension preserve interesting properties of the original action of $H$? We begin by formalizing this problem and present a construction of an induced action which behaves well when $H$ is hyperbolically embedded in $G$. Moreover, we show that induced actions can be used to characterize hyperbolically embedded subgroups. We also obtain some results for elementary amenable groups.
  3. In this paper, we consider federated Q-learning, which aims to learn an optimal Q-function by periodically aggregating local Q-estimates trained on local data alone. Focusing on infinite-horizon tabular Markov decision processes, we provide sample complexity guarantees for both the synchronous and asynchronous variants of federated Q-learning. In both cases, our bounds exhibit a linear speedup with respect to the number of agents and sharper dependencies on other salient problem parameters. Moreover, existing approaches to federated Q-learning adopt an equally weighted average of local Q-estimates, which can be highly suboptimal in the asynchronous setting, since the local trajectories can be highly heterogeneous due to different local behavior policies. The existing sample complexity scales inversely with the minimum entry of the stationary state-action occupancy distributions over all agents, requiring that every agent cover the entire state-action space. Instead, we propose a novel importance-averaging algorithm that gives larger weights to more frequently visited state-action pairs (a sketch appears after this list). The improved sample complexity scales inversely with the minimum entry of the average stationary state-action occupancy distribution over all agents, thus requiring only that the agents collectively cover the entire state-action space, unveiling the blessing of heterogeneity.
  4. We introduce the notion of $(\Gamma, E)$-determinacy for $\Gamma$ a pointclass and $E$ an equivalence relation on a Polish space $X$. A case of particular interest is when $E = E_G$ is the (left) shift-action of $G$ on $S^G$, where $S = 2$ or $S = \omega$. We show that, for all shift actions by countable groups $G$ and any "reasonable" pointclass $\Gamma$, $(\Gamma, E_G)$-determinacy implies $\Gamma$-determinacy. We also prove a corresponding result when $E$ is a subshift of finite type of the shift map on $2^G$.
  5. We propose subsampling as a unified algorithmic technique for submodular maximization in centralized and online settings. The idea is simple: independently sample elements from the ground set and use simple combinatorial techniques (such as greedy or local search) on these sampled elements (a sketch appears after this list). We show that this approach leads to optimal or state-of-the-art results despite being much simpler than existing methods. In the usual offline setting, we present SampleGreedy, which obtains a $(p + 2 + o(1))$-approximation for maximizing a submodular function subject to a p-extendible system using $O(n + nk/p)$ evaluation and feasibility queries, where k is the size of the largest feasible set. The approximation ratio improves to p + 1 and p for monotone submodular and linear objectives, respectively. In the streaming setting, we present Sample-Streaming, which obtains a $(4p + 2 - o(1))$-approximation for maximizing a submodular function subject to a p-matchoid using O(k) memory and $O(km/p)$ evaluation and feasibility queries per element, where m is the number of matroids defining the p-matchoid. The approximation ratio improves to 4p for monotone submodular objectives. We empirically demonstrate the effectiveness of our algorithms on video summarization, location summarization, and movie recommendation tasks.
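For item 1, the byproduct result on temporal difference learning with state aggregation admits a compact illustration. The sketch below runs TD(0) on a toy random-walk chain whose states are mapped to a handful of clusters, keeping one value estimate per cluster; the chain, the aggregation map, and the step sizes are assumptions for illustration only, not the paper's construction.

```python
# Sketch of TD(0) with state aggregation on a toy random-walk chain.
import numpy as np

rng = np.random.default_rng(1)
S, K, GAMMA = 12, 4, 0.9                 # 12 states aggregated into 4 clusters
agg = np.repeat(np.arange(K), S // K)    # aggregation map phi: state -> cluster id
w = np.zeros(K)                          # one value estimate per cluster

def step(s):
    s2 = min(S - 1, max(0, s + rng.choice([-1, 1])))
    return s2, (1.0 if s2 == S - 1 else 0.0)   # reward at the right endpoint

s = 0
for t in range(1, 100001):
    s2, r = step(s)
    alpha = 1.0 / (1 + t / 1000)         # diminishing step size
    # TD(0) on aggregated features: V(s) is approximated by w[agg[s]]
    w[agg[s]] += alpha * (r + GAMMA * w[agg[s2]] - w[agg[s]])
    s = s2

print("cluster values:", np.round(w, 3))  # values increase toward the reward
```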
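For item 3, here is a minimal sketch of importance-weighted averaging in federated Q-learning: each agent runs local Q-learning under its own behavior policy, and the server weights each agent's (s, a) entry by that agent's share of visits to (s, a), so agents that rarely visit an entry contribute little to it. The toy MDP, the synchronization schedule, and all constants are assumptions, not the paper's algorithm.

```python
# Sketch of federated Q-learning with importance-weighted averaging.
import numpy as np

rng = np.random.default_rng(2)
S, A, M, GAMMA, ALPHA = 5, 2, 4, 0.9, 0.1    # states, actions, agents

P = rng.dirichlet(np.ones(S), size=(S, A))   # shared random MDP: P[s, a] over next states
R = rng.random((S, A))                       # shared reward table

Qs = np.zeros((M, S, A))                     # local Q-estimates
counts = np.zeros((M, S, A))                 # local visit counts per (s, a)
behavior = rng.dirichlet(np.ones(A), size=(M, S))  # heterogeneous behavior policies

for rnd in range(200):                       # communication rounds
    for m in range(M):                       # local Q-learning between syncs
        s = rng.integers(S)
        for t in range(50):
            a = rng.choice(A, p=behavior[m, s])
            s2 = rng.choice(S, p=P[s, a])
            target = R[s, a] + GAMMA * Qs[m, s2].max()
            Qs[m, s, a] += ALPHA * (target - Qs[m, s, a])
            counts[m, s, a] += 1
            s = s2
    # importance averaging: weight agent m's (s, a) entry by its visit share
    wts = counts / np.maximum(counts.sum(axis=0, keepdims=True), 1)
    Qbar = (wts * Qs).sum(axis=0)
    Qs[:] = Qbar                             # broadcast the aggregated estimate

print("aggregated Q:\n", np.round(Qbar, 2))
```

Replacing `wts` with the uniform weight 1/M recovers the equally weighted averaging the abstract argues against.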
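For item 5, the subsample-then-greedy idea fits in a few lines: keep each ground-set element independently with probability q, then run plain greedy on the sample. For brevity this sketch uses a cardinality constraint and a toy coverage objective, whereas the paper's guarantees cover general p-extendible systems and p-matchoids; the instance and the value of q are assumptions.

```python
# Sketch of SampleGreedy-style subsampling for submodular maximization.
import numpy as np

rng = np.random.default_rng(3)
n, k, q = 60, 5, 0.5
universe = [set(rng.choice(100, size=8, replace=False)) for _ in range(n)]

def f(S):
    # monotone submodular objective: coverage of the picked sets
    return len(set().union(*(universe[i] for i in S))) if S else 0

def sample_greedy():
    sampled = [i for i in range(n) if rng.random() < q]   # subsampling step
    S = []
    while len(S) < k:
        gains = [(f(S + [i]) - f(S), i) for i in sampled if i not in S]
        if not gains:
            break
        best_gain, best = max(gains)          # largest marginal gain on the sample
        if best_gain <= 0:
            break
        S.append(best)
    return S, f(S)

S, val = sample_greedy()
print("picked:", S, "| coverage:", val)
```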