Title: Deep Deterministic Policy Gradients with Transfer Learning Framework in StarCraft Micromanagement
This paper proposes an intelligent multi-agent approach to a real-time strategy game, StarCraft, based on the deep deterministic policy gradient (DDPG) technique. An actor network and a critic network are established to estimate the optimal control actions and the corresponding value functions, respectively. A special reward function is designed based on each agent's own condition and the enemies' information to help agents act intelligently in the game. Furthermore, to accelerate the learning process, transfer learning techniques are integrated into training. Specifically, the agents are first trained on a simple task to learn basic combat concepts such as detouring, evading, and joint attacking. This experience is then transferred to the target task, a more complex and difficult scenario. The experiments show that the proposed algorithm with transfer learning achieves better performance.
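The abstract does not give the exact reward formula or network code, but the two core mechanics it names can be sketched briefly. Below, `micromanagement_reward` is a hypothetical shaped reward combining the agents' own condition with enemy information (the weights and signature are illustrative, not the paper's), and `soft_update` is the standard Polyak averaging DDPG uses to track its target actor and critic networks:

```python
def micromanagement_reward(own_hp_delta, enemy_hp_delta, own_alive, enemy_alive,
                           w_damage=0.5, w_health=0.3, w_survive=0.2):
    """Hypothetical shaped reward: reward damage dealt to enemies,
    penalize own health loss, and favor a unit-count advantage.
    Weights are illustrative assumptions, not values from the paper."""
    damage_term = -enemy_hp_delta           # enemy hp_delta < 0 means damage dealt
    health_term = own_hp_delta              # own hp_delta < 0 means damage taken
    survive_term = own_alive - enemy_alive  # difference in surviving units
    return w_damage * damage_term + w_health * health_term + w_survive * survive_term

def soft_update(target_params, online_params, tau=0.005):
    """Polyak averaging of target-network parameters, the standard
    DDPG mechanism for slowly tracking the online actor/critic."""
    return [(1 - tau) * t + tau * o for t, o in zip(target_params, online_params)]

# Dealing damage should be rewarded more than passively taking it:
r_dealt = micromanagement_reward(0.0, -10.0, 5, 5)   # we damaged the enemy
r_taken = micromanagement_reward(-10.0, 0.0, 5, 5)   # we took damage
```

This is a minimal sketch of the reward-shaping idea only; the paper's actual actor and critic are neural networks trained from replayed transitions.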
Award ID(s):
1947418 1850240
NSF-PAR ID:
10134286
Author(s) / Creator(s):
Date Published:
Journal Name:
2019 IEEE International Conference on Electro Information Technology (EIT)
Page Range / eLocation ID:
410 to 415
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. To help facilitate play and learning, game-based educational activities often feature a computational agent as a co-player. Personalizing this agent's behavior to the student player is an active area of research, and prior work has demonstrated the benefits of personalized educational interaction across a variety of domains. A critical research challenge for personalized educational agents is real-time student modeling. Most student models are designed for and trained on only a single task, which limits the variety, flexibility, and efficiency of student player model learning. In this paper we present a research project applying transfer learning methods to student player models over different educational tasks, studying the effects of an algorithmic "multi-task personalization" approach on the accuracy and data efficiency of student model learning. We describe a unified robotic game system for studying multi-task personalization over two different educational games, each emphasizing early language and literacy skills such as rhyming and spelling. We present a flexible Gaussian Process-based approach for rapidly learning student models from interactive play in each game, and a method for transferring each game's learned student model to the other via a novel instance-weighting protocol based on task similarity. We present results from a simulation-based investigation of the impact of multi-task personalization, establishing the core viability and benefits of transferrable student models and outlining new questions for future in-person research. 
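The instance-weighting protocol this abstract describes can be sketched with a simple weighted kernel predictor: target-task observations get full weight, while transferred source-task observations are down-weighted by a task-similarity factor. The Nadaraya-Watson form below is a simplified stand-in for the paper's Gaussian Process model, and the `task_similarity` value is a hypothetical placeholder:

```python
import numpy as np

def weighted_kernel_predict(x_query, X, y, weights, length_scale=1.0):
    """Weighted kernel regression (a simplified stand-in for the GP
    student model): per-instance weights let source-task data
    contribute in proportion to task similarity."""
    k = np.exp(-0.5 * (X - x_query) ** 2 / length_scale ** 2) * weights
    return float(k @ y / k.sum())

# First 3 points: target-task observations (weight 1.0).
# Last 2 points: transferred from the source task, down-weighted
# by a hypothetical task-similarity score.
task_similarity = 0.6
X = np.array([0.0, 1.0, 2.0, 0.5, 1.5])
y = np.array([0.0, 1.0, 2.0, 0.4, 1.6])
w = np.array([1.0, 1.0, 1.0, task_similarity, task_similarity])
pred = weighted_kernel_predict(1.0, X, y, w)
```

The design choice mirrored here is that transfer never overrides target-task evidence: as `task_similarity` goes to zero, the source instances drop out of the prediction entirely.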
  3. In computational approaches to bounded rationality, metareasoning enables intelligent agents to optimize their own decision-making process in order to produce effective action in a timely manner. While there have been substantial efforts to develop effective meta-level control for anytime algorithms, existing techniques rely on extensive offline work, imposing several critical assumptions that diminish their effectiveness and limit their practical utility in the real world. In order to eliminate these assumptions, adaptive metareasoning enables intelligent agents to adapt to each individual instance of the problem at hand without the need for significant offline preprocessing. Building on our recent work, we first introduce a model-free approach to meta-level control based on reinforcement learning. We then present a meta-level control technique that uses temporal difference learning. Finally, we show empirically that our approach is effective on a common benchmark in meta-level control. 
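The learning rule underlying the meta-level control technique this abstract names is temporal difference learning. A minimal TD(0) backup looks as follows (a sketch of the general rule, not the authors' controller; the state names and step sizes are illustrative):

```python
def td_update(V, s, r, s_next, alpha=0.1, gamma=1.0):
    """One TD(0) backup: move V(s) toward the bootstrapped target
    r + gamma * V(s_next). A meta-level controller can learn such
    values online, per problem instance, with no offline preprocessing."""
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V

# Hypothetical meta-level states: s0 = "anytime algorithm still running",
# s1 = "stopped and returned current solution".
V = {"s0": 0.0, "s1": 0.0}
td_update(V, "s0", 1.0, "s1")
```

In the metareasoning setting, the reward would encode the time-discounted quality of the anytime algorithm's current solution, so the learned values tell the agent when continued deliberation stops paying off.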
  4. As humans and robots start to collaborate in close proximity, robots are tasked to perceive, comprehend, and anticipate human partners' actions, which demands a predictive model of how humans collaborate with each other in joint actions. Previous studies either simplify the collaborative task as an optimal control problem between two agents or do not consider the learning process of humans during repeated interaction. Such idealized representations cannot capture human bounded rationality or the learning process. In this paper, a bounded-rational, game-theoretical model of human cooperation is developed to describe the cooperative behaviors of a human dyad. An experiment on a collaborative joint object-pushing task was conducted with 30 human subjects using haptic interfaces in a virtual environment. The proposed model uses inverse optimal control (IOC) to estimate the reward parameters of the collaborative task. The collected data verified that the human trajectories predicted by the bounded-rational model are more accurate than those predicted by a fully rational model. We further provide insight from the conducted experiments into the effects of leadership on the performance of human collaboration. 
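A standard way to formalize the bounded rationality this abstract contrasts with full rationality is a Boltzmann (softmax) choice model: actions are chosen with probability increasing in their reward, and a rationality parameter interpolates between random choice and the fully rational argmax. The sketch below shows that general model, with illustrative values rather than the paper's fitted parameters:

```python
import math

def boltzmann_policy(action_rewards, beta=2.0):
    """Boltzmann-rational choice: P(a) proportional to exp(beta * r_a).
    beta -> 0 gives uniform random choice; beta -> infinity recovers
    the fully rational argmax. beta here is an illustrative value."""
    exps = [math.exp(beta * r) for r in action_rewards]
    z = sum(exps)
    return [e / z for e in exps]

# Two hypothetical actions with rewards 1.0 and 0.0: the better action
# is chosen more often, but not deterministically.
p = boltzmann_policy([1.0, 0.0])
```

Under such a model, the reward parameters recovered by inverse optimal control explain noisy human trajectories that a deterministic optimal-control model would count as errors.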
  5. In this paper, a distributed swarm control problem is studied for large-scale multi-agent systems (LS-MASs). Different from classical multi-agent systems, an LS-MAS brings new challenges to control design due to its large number of agents, making it more difficult to develop appropriate control for complicated missions such as collective swarming. To address these challenges, a novel mixed game theory is developed with a hierarchical learning algorithm. In the mixed game, the LS-MAS is represented as a multi-group, large-scale leader-follower system. A cooperative game is then used to formulate the distributed swarm control for the multi-group leaders, and a Stackelberg game is utilized to couple the leaders and their large-scale followers effectively. Using the interaction between leaders and followers, a mean field game propagates the collective swarm behavior from leaders to followers smoothly without raising the computational complexity or communication traffic. Moreover, a hierarchical learning algorithm is designed to learn the intelligent optimal distributed swarm control for the multi-group leader-follower system. Specifically, a multi-agent actor-critic algorithm is first developed to obtain the distributed optimal swarm control for the multi-group leaders. Furthermore, an actor-critic-mass method is designed to find the decentralized swarm control for the large-scale followers. Eventually, a series of numerical simulations and a Lyapunov stability proof of the closed-loop system demonstrate the performance of the developed scheme. 
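The key scalability idea in this abstract, mean-field coupling, is that each follower reacts only to its leader and to one population-level statistic (the "mass") rather than to every other agent. The toy update below sketches that structure; the gains, time step, and dynamics are illustrative assumptions, not the paper's learned controller:

```python
import numpy as np

def follower_step(states, leader_pos, k_leader=0.5, k_mean=0.3, dt=0.1):
    """Mean-field-style decentralized update (sketch): each follower
    is attracted to its group leader and to the population mean, so
    per-agent cost stays constant as the number of followers grows."""
    mean_state = states.mean(axis=0)  # the "mass" every follower couples to
    vel = k_leader * (leader_pos - states) + k_mean * (mean_state - states)
    return states + dt * vel

# Two hypothetical followers moving toward a leader at (1, 1).
states = np.array([[0.0, 0.0], [2.0, 2.0]])
leader = np.array([1.0, 1.0])
new_states = follower_step(states, leader)
```

In the paper's actor-critic-mass scheme this fixed rule would instead be learned, but the communication pattern is the same: one broadcast mean-field quantity replaces all-to-all messaging, which is what keeps computational complexity and traffic from growing with the swarm.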