
Regret minimization has proved to be a versatile tool for tree-form sequential decision making and extensive-form games. In large two-player zero-sum imperfect-information games, modern extensions of counterfactual regret minimization (CFR) are currently the practical state of the art for computing a Nash equilibrium. Most regret-minimization algorithms for tree-form sequential decision making, including CFR, require (i) an exact model of the player's decision nodes, observation nodes, and how they are linked, and (ii) full knowledge, at all times t, of the payoffs, even in parts of the decision space that are not encountered at time t. Recently, there has been growing interest in relaxing some of these restrictions and making regret minimization applicable to settings for which reinforcement learning methods have traditionally been used, for example those in which only black-box access to the environment is available. We give the first, to our knowledge, regret-minimization algorithm that guarantees sublinear regret with high probability even when requirement (i), and thus also (ii), is dropped. We formalize an online learning setting in which the strategy space is not known to the agent and is revealed incrementally whenever the agent encounters new decision points. We give an efficient algorithm that achieves O(T^{3/4}) regret with high probability in that setting, even when the agent faces an adversarial environment. Our experiments show that it significantly outperforms the prior algorithms for the problem, which do not have such guarantees. It can be used in any application for which regret minimization is useful: approximating Nash equilibrium or quantal response equilibrium, approximating coarse correlated equilibrium in multi-player games, learning a best response, learning safe opponent exploitation, and online play against an unknown opponent/environment.
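To make the abstract's notion of regret minimization concrete, the following is a minimal sketch of regret matching, the basic per-decision-point regret minimizer that CFR-style methods build on. It is an illustration of the general framework only, not the paper's algorithm: it assumes a single decision point with a fixed, fully known action set and observed payoffs for all actions, exactly the assumptions (i) and (ii) that the paper's setting drops. All function names here are hypothetical.

```python
def regret_matching(cum_regret):
    """Map cumulative regrets to a mixed strategy: play each action
    with probability proportional to its positive cumulative regret."""
    pos = [max(r, 0.0) for r in cum_regret]
    total = sum(pos)
    if total == 0.0:
        # No action has positive regret: fall back to uniform play.
        return [1.0 / len(pos)] * len(pos)
    return [p / total for p in pos]


def run(T, payoffs_fn, n_actions):
    """Play T rounds against an environment that reveals the payoff of
    every action each round (full-feedback assumption), accumulating
    per-action regret against the strategy actually played."""
    cum_regret = [0.0] * n_actions
    for t in range(T):
        strategy = regret_matching(cum_regret)
        payoffs = payoffs_fn(t)  # payoff of each action this round
        expected = sum(p * u for p, u in zip(strategy, payoffs))
        for a in range(n_actions):
            cum_regret[a] += payoffs[a] - expected
    return cum_regret
```

Against a fixed payoff vector, the rule quickly concentrates on the best action and the cumulative regret stays bounded; against an adversarial sequence, regret matching guarantees O(sqrt(T)) external regret. The paper's contribution is achieving a comparable high-probability guarantee, at rate O(T^{3/4}), without knowing the decision/observation structure or the unencountered payoffs in advance.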
