In a Stackelberg game, a leader commits to a randomized strategy and a follower chooses their best strategy in response. We consider an extension of a standard Stackelberg game, called a discrete-time dynamic Stackelberg game, that has an underlying state space that affects the leader’s rewards and available strategies and evolves in a Markovian manner depending on both the leader’s and follower’s selected strategies. Although standard Stackelberg games have been utilized to improve scheduling in security domains, their deployment is often limited by requiring complete information of the follower’s utility function. In contrast, we consider scenarios where the follower’s utility function is unknown to the leader; however, it can be linearly parameterized. Our objective is then to provide an algorithm that prescribes a randomized strategy to the leader at each step of the game based on observations of how the follower responded in previous steps. We design an online learning algorithm that, with high probability, is no-regret, i.e., achieves a regret bound (when compared to the best policy in hindsight) which is sublinear in the number of time steps; the degree of sublinearity depends on the number of features representing the follower’s utility function. The regret of the proposed learning algorithm is independent of the size of the state space and polynomial in the rest of the parameters of the game. We show that the proposed learning algorithm outperforms existing model-free reinforcement learning approaches.
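To illustrate the commitment structure the abstract describes, the sketch below computes a leader's optimal randomized commitment in a one-shot 2x2 Stackelberg game with a best-responding follower. The payoff matrices are hypothetical and the full-information setting is assumed (the paper's contribution is precisely the case where the follower's utility must be learned online); this is only a minimal baseline, not the paper's algorithm.

```python
# A minimal full-information Stackelberg sketch (hypothetical 2x2 payoffs,
# not from the paper): rows index the leader's actions, columns the follower's.
LEADER = [[2.0, 4.0],
          [1.0, 3.0]]
FOLLOWER = [[1.0, 0.0],
            [0.0, 2.0]]

def follower_best_response(p):
    """Follower's pure best response to the leader's mixed strategy p."""
    expected = [p[0] * FOLLOWER[0][j] + p[1] * FOLLOWER[1][j] for j in range(2)]
    return max(range(2), key=lambda j: expected[j])

def leader_commitment(grid=101):
    """Grid-search over the leader's mixed strategies, evaluating each one
    against the follower's best response (the Stackelberg commitment value).
    The paper's online algorithm instead learns the follower's (linearly
    parameterized) utility from observed responses; this baseline assumes
    the follower's payoffs are known."""
    best_p, best_val = None, float("-inf")
    for i in range(grid):
        p = [i / (grid - 1), 1.0 - i / (grid - 1)]
        j = follower_best_response(p)
        val = p[0] * LEADER[0][j] + p[1] * LEADER[1][j]
        if val > best_val:
            best_p, best_val = p, val
    return best_p, best_val
```

With these payoffs the leader randomizes just enough to keep the follower playing its second action, which yields the leader a higher payoff than any pure commitment.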
-
We consider a group of agents that estimate their locations in an environment through sensor measurements and aim to transmit a message signal to a client via collaborative beamforming. Assuming that the localization error of each agent follows a Gaussian distribution, we study the problem of forming a reliable communication link between the agents and the client that achieves a desired signal-to-noise ratio (SNR) at the client with minimum variability. In particular, we develop a greedy subset selection algorithm that chooses only a subset of the agents to transmit the signal so that the variance of the received SNR is minimized while the expected SNR exceeds a desired threshold. We show the optimality of the proposed algorithm when the agents’ localization errors satisfy certain sufficient conditions that are characterized in terms of the carrier frequency.
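The greedy idea can be sketched under a deliberately simplified model: assume each agent contributes an independent SNR term with a known mean and variance (the paper derives these from Gaussian localization errors and the carrier frequency; here they are just given numbers), so a subset's expected SNR and variance are sums over the selected agents. The selection rule below, adding the agent with the smallest variance per unit of expected SNR, is an illustrative heuristic in the spirit of the abstract, not the paper's exact algorithm or optimality conditions.

```python
# Illustrative greedy subset selection (simplified independent-contribution
# model, not the paper's exact setup): agent i adds mean mu[i] and variance
# var[i] to the received SNR; keep adding agents until the expected SNR
# clears the threshold, preferring low-variance contributions.
def greedy_subset(mu, var, snr_threshold):
    remaining = set(range(len(mu)))
    chosen, total_mu, total_var = [], 0.0, 0.0
    while remaining and total_mu < snr_threshold:
        # Pick the agent with the least variance per unit of expected SNR.
        i = min(remaining, key=lambda j: var[j] / mu[j])
        remaining.remove(i)
        chosen.append(i)
        total_mu += mu[i]
        total_var += var[i]
    if total_mu < snr_threshold:
        return None  # threshold unreachable even with all agents transmitting
    return chosen, total_mu, total_var
```

For example, with means [3, 2, 5], variances [1, 4, 2], and a threshold of 6, the heuristic selects agents 0 and 2 and skips the high-variance agent 1 entirely.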