%A Wei, Xiaohan
%A Yu, Hao
%A Neely, Michael
%D 2018
%I
%K
%M OSTI ID: 10113344
%P Medium: X
%T Online Learning in Weakly Coupled Markov Decision Processes: A Convergence Time Study
%X We consider multiple parallel Markov decision processes (MDPs) coupled by global constraints, where the time-varying objective and constraint functions can only be observed after the decision is made. Special attention is given to how well the decision maker can perform in T slots, starting from any state, compared to the best feasible randomized stationary policy in hindsight. We develop a new distributed online algorithm where each MDP makes its own decision each slot after observing a multiplier computed from past information. While the scenario is significantly more challenging than the classical online learning context, the algorithm is shown to achieve a tight O(√T) regret and constraint violations simultaneously. To obtain such a bound, we combine several new ingredients, including an ergodicity and mixing time bound for weakly coupled MDPs, a new regret analysis for online constrained optimization, a drift analysis for queue processes, and a perturbation analysis based on Farkas' Lemma.
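The abstract describes, at a high level, a distributed scheme in which each MDP acts on a shared multiplier built from past observations of the global constraint. The following is a minimal illustrative sketch of that general idea (not the paper's exact algorithm): a primal-dual loop where a virtual queue Q plays the role of the multiplier, costs are revealed only after decisions, and all numerical values (K, S, A, V, budget, the random cost model, and the running-average estimates) are assumptions made purely for illustration.

# Illustrative sketch, not the paper's algorithm: online primal-dual loop with a
# virtual queue as the shared multiplier coupling parallel MDPs via a global budget.
import numpy as np

rng = np.random.default_rng(0)

K, S, A = 3, 4, 2                  # number of MDPs, states per MDP, actions per MDP (assumed)
T, V, budget = 2000, 50.0, 1.0     # horizon, penalty weight, per-slot global budget (assumed)

# Fixed transition kernels P[k, s, a, :] (unknown to the learner, used only to simulate).
P = rng.dirichlet(np.ones(S), size=(K, S, A))

states = rng.integers(S, size=K)   # current state of each MDP
Q = 0.0                            # virtual queue = scaled multiplier
# Running-average estimates of observed objective/constraint costs per (MDP, state, action).
f_hat = np.zeros((K, S, A)); g_hat = np.zeros((K, S, A)); n_obs = np.ones((K, S, A))

total_cost, total_violation = 0.0, 0.0
for t in range(T):
    # Each MDP picks its own action from the shared multiplier Q and its local
    # estimates; the decision is made before the slot's costs are revealed.
    actions = np.empty(K, dtype=int)
    for k in range(K):
        s = states[k]
        score = V * f_hat[k, s] + Q * g_hat[k, s]   # drift-plus-penalty style score
        actions[k] = int(np.argmin(score))

    # Time-varying objective and constraint costs revealed after the decisions.
    f_t = rng.uniform(0.0, 1.0, size=(K, S, A))
    g_t = rng.uniform(0.0, 1.0, size=(K, S, A))

    slot_cost = sum(f_t[k, states[k], actions[k]] for k in range(K))
    slot_constraint = sum(g_t[k, states[k], actions[k]] for k in range(K))
    total_cost += slot_cost
    total_violation += max(slot_constraint - budget, 0.0)

    # Virtual-queue (multiplier) update driven by the observed global constraint.
    Q = max(Q + slot_constraint - budget, 0.0)

    # Update local estimates and advance each MDP's state.
    for k in range(K):
        s, a = states[k], actions[k]
        n_obs[k, s, a] += 1
        f_hat[k, s, a] += (f_t[k, s, a] - f_hat[k, s, a]) / n_obs[k, s, a]
        g_hat[k, s, a] += (g_t[k, s, a] - g_hat[k, s, a]) / n_obs[k, s, a]
        states[k] = rng.choice(S, p=P[k, s, a])

print(f"avg cost/slot: {total_cost / T:.3f}, avg violation/slot: {total_violation / T:.3f}")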