
Title: Agent-based model construction using inverse reinforcement learning
Agent-based modeling (ABM) assumes that the behavioral rules governing an agent's states and actions are known. In practice, however, discovering these rules is challenging and requires deep insight into an agent's behavior. Inverse reinforcement learning (IRL) can complement ABM by providing a systematic way to derive behavioral rules from data. IRL frames rule learning as the problem of recovering an agent's motivations from observed behavior and then generating rules consistent with those motivations. In this paper, we propose a method for constructing an agent-based model directly from data using IRL. We explain each step of the proposed method and describe challenges that may arise during implementation. Our experimental results show that the proposed method can extract rules and construct an agent-based model with rich but concise behavioral rules for agents while still maintaining aggregate-level properties.
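The core IRL idea in the abstract can be illustrated with a minimal sketch (this is not the paper's actual algorithm): recover a per-state reward from demonstrations by maximizing the likelihood of the expert's actions under a Boltzmann policy that prefers higher-reward next states. The 1-D world, the demonstrations, and the myopic-agent assumption are all illustrative simplifications.

```python
import numpy as np

N = 5                                  # 1-D world, goal at state 4

def step(s, a):                        # a = -1 (left) or +1 (right)
    return min(max(s + a, 0), N - 1)

# Expert demonstrations: the expert always moves toward the goal.
demos = [(s, +1) for s in range(N - 1)]

w = np.zeros(N)                        # unknown reward per state
for _ in range(2000):
    grad = np.zeros(N)
    for s, a in demos:
        nxt = [step(s, -1), step(s, +1)]
        scores = np.array([w[n] for n in nxt])
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        # Gradient of the action log-likelihood: the observed
        # next state minus the policy's expected next state.
        grad[step(s, a)] += 1.0
        for n, p in zip(nxt, probs):
            grad[n] -= p
    w += 0.05 * grad / len(demos)

print(np.round(w, 2))
```

After training, the recovered reward makes the demonstrated action the preferred one in every visited state, which is the "rules consistent with motivations" step in miniature.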
Authors:
Award ID(s):
1650512
Publication Date:
NSF-PAR ID:
10053915
Journal Name:
2017 Winter Simulation Conference (WSC)
Page Range or eLocation-ID:
1264-1275
Sponsoring Org:
National Science Foundation
More Like this
  1. The nexus of food, energy, and water systems (FEWS) has become a salient research topic, as well as a pressing societal and policy challenge. Computational modeling is a key tool in addressing these challenges, and FEWS modeling as a subfield is now established. However, social dimensions of FEWS nexus issues, such as individual or social learning, technology adoption decisions, and adaptive behaviors, remain relatively underdeveloped in FEWS modeling and research. Agent-based models (ABMs) have received increasing usage recently in efforts to better represent and integrate human behavior into FEWS research. A systematic review identified 29 articles in which at least two food, energy, or water sectors were explicitly considered with an ABM and/or ABM-coupled modeling approach. Agent decision-making and behavior ranged from reactive to active, motivated by primarily economic objectives to multi-criteria in nature, and implemented with individual-based to highly aggregated entities. However, a significant proportion of models did not contain agent interactions, or did not base agent decision-making on existing behavioral theories. Model design choices imposed by data limitations, structural requirements for coupling with other simulation models, or spatial and/or temporal scales of application resulted in agent representations lacking explicit decision-making processes or social interactions. In contrast, several methodological innovations were also noted, which were catalyzed by the challenges associated with developing multi-scale, cross-sector models. Several avenues for future research with ABMs in FEWS research are suggested based on these findings. The reviewed ABM applications represent progress, yet many opportunities for more behaviorally rich agent-based modeling in the FEWS context remain.
  2. To help facilitate play and learning, game-based educational activities often feature a computational agent as a co-player. Personalizing this agent's behavior to the student player is an active area of research, and prior work has demonstrated the benefits of personalized educational interaction across a variety of domains. A critical research challenge for personalized educational agents is real-time student modeling. Most student models are designed for and trained on only a single task, which limits the variety, flexibility, and efficiency of student player model learning. In this paper we present a research project applying transfer learning methods to student player models over different educational tasks, studying the effects of an algorithmic "multi-task personalization" approach on the accuracy and data efficiency of student model learning. We describe a unified robotic game system for studying multi-task personalization over two different educational games, each emphasizing early language and literacy skills such as rhyming and spelling. We present a flexible Gaussian Process-based approach for rapidly learning student models from interactive play in each game, and a method for transferring each game's learned student model to the other via a novel instance-weighting protocol based on task similarity. We present results from a simulation-based investigation of the impact of multi-task personalization, establishing the core viability and benefits of transferrable student models and outlining new questions for future in-person research.
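One way to realize the instance-weighted Gaussian Process transfer this abstract describes is to keep the source game's observations but trust them less, e.g. by inflating their noise variance in proportion to a task-similarity factor. The sketch below assumes this noise-inflation form of instance weighting and invented data; the paper's actual protocol may differ.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    # Squared-exponential kernel over 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_predict(x, y, noise_var, xq):
    # GP posterior mean with per-observation noise variances.
    K = rbf(x, x) + np.diag(noise_var)
    return rbf(xq, x) @ np.linalg.solve(K, y)

# Hypothetical student data: task difficulty -> success rate,
# from a source game and a few fresh target-game observations.
x_src = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y_src = np.array([0.9, 0.8, 0.6, 0.4, 0.2])
x_tgt = np.array([0.5, 1.5])
y_tgt = np.array([0.85, 0.45])

similarity = 0.7                      # task-similarity weight in (0, 1]
base_noise = 0.01
x = np.concatenate([x_src, x_tgt])
y = np.concatenate([y_src, y_tgt])
# Down-weighting source instances = inflating their noise.
noise = np.concatenate([np.full(len(x_src), base_noise / similarity),
                        np.full(len(x_tgt), base_noise)])

pred = gp_predict(x, y, noise, np.array([0.5, 1.0]))
print(np.round(pred, 3))
```

Where source and target disagree (e.g. at difficulty 0.5), the posterior mean leans toward the lower-noise target observations, which is the intended effect of the weighting.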
  4. This paper presents a deep reinforcement learning algorithm for online accompaniment generation, with potential for real-time interactive human-machine duet improvisation. Different from offline music generation and harmonization, online music accompaniment requires the algorithm to respond to human input and generate the machine counterpart in sequential order. We cast this as a reinforcement learning problem, where the generation agent learns a policy to generate a musical note (action) based on previously generated context (state). The key to this algorithm is a well-functioning reward model. Instead of defining it using music composition rules, we learn this model from monophonic and polyphonic training data. This model considers the compatibility of the machine-generated note with both the machine-generated context and the human-generated context. Experiments show that this algorithm is able to respond to the human part and generate a melodic, harmonic and diverse machine part. Subjective evaluations on preferences show that the proposed algorithm generates music pieces of higher quality than the baseline method.
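The sequential state-action structure this abstract describes can be sketched in a few lines. The paper learns its reward model from data; here a hand-coded stand-in scores (a) consonance with the current human note and (b) melodic smoothness with the previous machine note, purely to show the online loop in which each human note arrives and the policy samples the machine's response.

```python
import math
import random

NOTES = list(range(60, 73))           # one octave of MIDI pitches

def reward(action, human_note, prev_machine_note):
    # Stand-in for the learned reward model: compatibility with
    # the human context plus smoothness with the machine context.
    interval = abs(action - human_note) % 12
    consonance = 1.0 if interval in (0, 3, 4, 7, 8, 9) else -1.0
    smoothness = -0.2 * abs(action - prev_machine_note)
    return consonance + smoothness

def softmax_policy(human_note, prev_machine_note, temp=0.5):
    # Boltzmann sampling over candidate notes (the "action").
    scores = [reward(a, human_note, prev_machine_note) / temp for a in NOTES]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    return random.choices(NOTES, weights)[0]

random.seed(0)
human_part = [60, 62, 64, 65, 67]     # human notes arrive one at a time
machine_part = [60]
for h in human_part:
    machine_part.append(softmax_policy(h, machine_part[-1]))
print(machine_part[1:])
```

The temperature parameter trades off between always picking the highest-reward note and keeping the machine part diverse.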
  5. Conventional machine learning (ML) and deep learning (DL) methods use large amounts of data to construct prediction models for recognizing human activities in a central fusion center. However, such centralized model training incurs high communication costs and risks privacy infringement. To address these issues of communication overhead and privacy leakage, we employed Federated Learning (FL), a widely used distributed ML technique that builds a global model for predicting human activities by combining participating agents' local knowledge. State-of-the-art FL models fail to maintain acceptable accuracy when there are many unreliable agents who can infuse false models, or resource-constrained agents that fail to perform an assigned computational task within a given time window. We developed an FL model for predicting human activities that monitors each agent's contribution toward model convergence and excludes unreliable and resource-constrained agents from training. We assign a score to each client when it joins the network and update the score based on the agent's activities during training. We consider three mobile robots as FL clients that are heterogeneous in their resources, such as processing capability, memory, bandwidth, battery life, and data volume; this heterogeneity lets us study the effects of a real-world FL setting in the presence of resource-constrained agents. We consider an agent unreliable if it repeatedly responds slowly or infuses incorrect models during training. By disregarding unreliable and weak agents, we carry out local training of the FL process on the selected agents. If a weak agent is nonetheless selected and starts showing straggler issues, we leverage an asynchronous FL mechanism that aggregates local models whenever the server receives a model update from an agent. Asynchronous FL eliminates long waits for model updates from weak agents. To this end, we simulate how the behavior of agents can be tracked through a reward-punishment scheme and present the influence of unreliable and resource-constrained agents on the FL process. We found that FL performs slightly worse than centralized models when there are no unreliable or resource-constrained agents. However, as the number of malicious and straggler clients increases, our proposed model performs more effectively than state-of-the-art FL and ML approaches by identifying and avoiding those agents while recognizing human activities.
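The reward-punishment scheme sketched in this abstract can be simulated on a toy scale. Below, each client holds a noisy estimate of a shared scalar parameter (true value 1.0) standing in for a model; one unreliable client injects a wrong value. Scores start equal, are rewarded or punished each round based on agreement with the round's median update, and clients whose score falls below a threshold are excluded from aggregation. All constants, client names, and the median-agreement rule are illustrative assumptions, not the paper's design.

```python
import random
import statistics

random.seed(1)
TRUE = 1.0
clients = {
    "robot_a": {"score": 1.0, "malicious": False},
    "robot_b": {"score": 1.0, "malicious": False},
    "robot_c": {"score": 1.0, "malicious": True},   # infuses a false model
}

def local_update(c):
    return 5.0 if c["malicious"] else TRUE + random.gauss(0, 0.05)

global_model = 0.0
for rnd in range(20):
    # Only clients with an acceptable score participate this round.
    selected = [n for n, c in clients.items() if c["score"] > 0.5]
    updates = {n: local_update(clients[n]) for n in selected}
    med = statistics.median(updates.values())
    # Reward clients whose update agrees with the round median,
    # punish outliers (the reward-punishment scheme).
    for n, u in updates.items():
        if abs(u - med) < 0.5:
            clients[n]["score"] = min(1.0, clients[n]["score"] + 0.1)
        else:
            clients[n]["score"] = max(0.0, clients[n]["score"] - 0.3)
    # Score-weighted aggregation over this round's participants.
    total = sum(clients[n]["score"] for n in updates)
    global_model = sum(clients[n]["score"] * u for n, u in updates.items()) / total

print(round(global_model, 2),
      {n: round(c["score"], 2) for n, c in clients.items()})
```

Within a few rounds the malicious client's score drops below the selection threshold, and the aggregate converges near the true value despite the injected model.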