

Title: Multi-Virtual-Agent Reinforcement Learning for a Stochastic Predator-Prey Grid Environment
The generalization problem of reinforcement learning is crucial, especially for dynamic environments. Conventional reinforcement learning methods solve problems under idealized assumptions and are difficult to apply directly in dynamic environments. In this paper, we propose a new multi-virtual-agent reinforcement learning (MVARL) approach for a predator-prey grid game. The designed method can find the optimal solution even when the predator moves. Specifically, we design virtual agents to interact with simulated changing environments in parallel instead of using actual agents. Moreover, a global agent learns information from these virtual agents and interacts with the actual environment at the same time. This method can not only effectively improve the generalization performance of reinforcement learning in dynamic environments, but also reduce the overall computational cost. Two simulation studies are considered in this paper to validate the effectiveness of the designed method. We also compare the results with conventional reinforcement learning methods. The results indicate that our proposed method can improve the robustness of reinforcement learning and contribute to generalization to a certain extent.
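The paper itself is not reproduced on this page, so the following is only a minimal sketch of how the parallel virtual-agent idea could look in code: several tabular Q-learners train in simulated variants of a small grid game, and a global agent averages their value tables before acting in the real environment. The grid size, reward shaping, goal placement, and averaging rule are all assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

GRID, ACTIONS = 5, 4                       # 5x5 grid; actions: up, down, left, right
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def step(state, action, goal):
    """One grid move toward an assumed goal cell; small step cost, reward 1 at the goal."""
    r, c = divmod(state, GRID)
    dr, dc = MOVES[action]
    r = min(max(r + dr, 0), GRID - 1)
    c = min(max(c + dc, 0), GRID - 1)
    nxt = r * GRID + c
    return nxt, (1.0 if nxt == goal else -0.01), nxt == goal

def q_learn(goal, rng, episodes=300, alpha=0.1, gamma=0.95, eps=0.2, max_steps=100):
    """Tabular Q-learning for one virtual agent in its own simulated copy of the game."""
    Q = np.zeros((GRID * GRID, ACTIONS))
    for _ in range(episodes):
        s = 0
        for _ in range(max_steps):
            a = int(rng.integers(ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
            s2, r, done = step(s, a, goal)
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
            if done:
                break
    return Q

# Each virtual agent trains against a different simulated goal position (standing in for
# the environment change, e.g. a predator that has moved); the global agent's table is a
# simple average of the virtual agents' tables and would be used in the real environment.
rng = np.random.default_rng(0)
virtual_goals = rng.choice(np.arange(1, GRID * GRID), size=4, replace=False)
global_Q = np.mean([q_learn(int(g), rng) for g in virtual_goals], axis=0)
print("greedy action in the start cell:", int(global_Q[0].argmax()))
```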
Award ID(s):
2047064 2047010 1947419 1947418
NSF-PAR ID:
10391182
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
2022 International Joint Conference on Neural Networks (IJCNN)
Page Range / eLocation ID:
1 to 8
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Due to repetitive trial-and-error interactions between agents and a fixed traffic environment during policy learning, existing Reinforcement Learning (RL)-based Traffic Signal Control (TSC) methods suffer greatly from long RL training times and poor adaptability of RL agents to other complex traffic environments. To address these problems, we propose a novel Adversarial Inverse Reinforcement Learning (AIRL)-based pre-training method named InitLight, which enables effective initial model generation for TSC agents. Unlike traditional RL-based TSC approaches that train a large number of agents simultaneously for a specific multi-intersection environment, InitLight pre-trains only a single initial model based on multiple single-intersection environments together with their expert trajectories. Since the reward function learned by InitLight can recover ground-truth TSC rewards for different intersections at optimality, the pre-trained agent can be deployed at intersections of any traffic environment as an initial model to accelerate subsequent overall global RL training. Comprehensive experimental results show that the initial model generated by InitLight not only significantly accelerates convergence with far fewer episodes, but also exhibits superior generalization ability to accommodate various kinds of complex traffic environments.
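A hedged sketch of the pre-train-once, deploy-everywhere pattern this abstract describes. It substitutes plain behavior cloning on synthetic "expert" trajectories for the AIRL objective, and the dimensions (OBS_DIM, N_PHASES), data, and number of agents are invented, purely to show one pre-trained initial model being reused to initialize several intersection agents.

```python
import torch
import torch.nn as nn

OBS_DIM, N_PHASES = 8, 4                       # assumed observation / signal-phase sizes

policy = nn.Sequential(nn.Linear(OBS_DIM, 32), nn.ReLU(), nn.Linear(32, N_PHASES))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in expert trajectories from several single-intersection environments.
expert_obs = torch.randn(3, 256, OBS_DIM)      # 3 environments x 256 samples each
expert_act = torch.randint(0, N_PHASES, (3, 256))

for epoch in range(50):                        # pre-train one shared initial model
    for env_idx in range(expert_obs.shape[0]):
        logits = policy(expert_obs[env_idx])
        loss = loss_fn(logits, expert_act[env_idx])
        opt.zero_grad()
        loss.backward()
        opt.step()

# Deployment: every intersection agent in a new multi-intersection environment starts
# from a copy of the pre-trained weights and is then fine-tuned by regular RL training.
initial_state = policy.state_dict()
agents = [nn.Sequential(nn.Linear(OBS_DIM, 32), nn.ReLU(), nn.Linear(32, N_PHASES))
          for _ in range(4)]
for agent in agents:
    agent.load_state_dict(initial_state)
```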
  2. In the field of multi-agent autonomous transportation, such as automated payload delivery or highway on-ramp merging, agents routinely exchange knowledge to optimize their shared objective and adapt to environmental novelties through Cooperative Multi-Agent Reinforcement Learning (CMARL) algorithms. This knowledge exchange between agents allows these systems to operate efficiently and adapt to dynamic environments. However, this cooperative learning process is susceptible to adversarial poisoning attacks, as highlighted by contemporary research. In particular, poisoning attacks in which malicious agents inject deceptive information camouflaged within the differential noise, a pivotal element of differential privacy (DP)-based CMARL algorithms, pose formidable challenges to identify and overcome. The consequences of not addressing this issue are far-reaching, potentially jeopardizing safety-critical operations and the integrity of data privacy in these applications. Existing research has strived to develop anomaly detection-based defense models to counteract conventional poisoning methods. Nonetheless, the recurring necessity for model offloading and retraining with labeled anomalous data undermines their practicality, considering the inherently dynamic nature of safety-critical autonomous transportation applications. Further, it is imperative to maintain data privacy, ensure high performance, and adapt to environmental changes. Motivated by these challenges, this paper introduces a novel defense mechanism against stealthy adversarial poisoning attacks in the autonomous transportation domain, termed Reinforcing Autonomous Multi-agent Protection through Adversarial Resistance in Transportation (RAMPART). Leveraging a GAN model at each local node, RAMPART effectively filters out malicious advice in an unsupervised manner, while generating synthetic samples for each state-action pair to accommodate environmental uncertainties and eliminate the need for labeled training data. Our extensive experimental analysis, conducted in a Private Payload Delivery Network (PPDN), a common application in the autonomous multi-agent transportation domain, demonstrates that RAMPART successfully defends against a DP-exploited poisoning attack with a 30% attack ratio, achieving an F1 score of 0.852 and an accuracy of 96.3% in heavy-traffic environments.
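A rough sketch, under stated assumptions, of the local GAN-based filtering idea: a small GAN is fit (unsupervised) to a node's own experience, and the discriminator score is then used to gate incoming peer advice and to generate synthetic samples. The advice encoding, network sizes, training schedule, and threshold are illustrative stand-ins, not RAMPART's actual design.

```python
import torch
import torch.nn as nn

ADVICE_DIM, NOISE_DIM = 6, 4                   # assumed size of an encoded (state, action, value) advice vector

G = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, ADVICE_DIM))
D = nn.Sequential(nn.Linear(ADVICE_DIM, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

local_samples = torch.randn(512, ADVICE_DIM)   # stand-in for the node's own experience

for _ in range(200):
    real = local_samples[torch.randint(0, 512, (64,))]
    fake = G(torch.randn(64, NOISE_DIM))
    # Discriminator: distinguish the node's own samples from generated ones.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # Generator: fool the discriminator (its outputs double as synthetic samples).
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

def keep_advice(advice: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Accept only peer advice the discriminator scores as plausible local experience."""
    with torch.no_grad():
        return advice[D(advice).squeeze(1) > threshold]
```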

     
  3. Circuit linearity calibration can represent a set of high-dimensional search problems if observability is limited. For example, linearity calibration of digital-to-time converters (DTCs), an essential building block of modern digital phase-locked loops (DPLLs), is a high-dimensional search problem because the difficulty of measuring picosecond-scale delays hinders prior methods that calibrate stage by stage. Moreover, a calibrated DTC can become nonlinear again due to changes in temperature (T) and power supply voltage (V). Prior work reports a deep reinforcement learning framework capable of performing DTC linearity calibration with nonlinear calibration banks; however, that work does not address maintaining calibration in the face of temperature and supply voltage variations. In this paper, we present a meta-reinforcement learning (RL) method that enables the RL agent to quickly adapt to a new environment when the temperature and/or voltage change. Inspired by Style Generative Adversarial Networks (StyleGANs), we propose to treat temperature and voltage changes as the styles of the circuits. In contrast to traditional methods employing circuit sensors to detect changes in T and V, we utilize a machine learning (ML) sensor to implicitly infer a wide range of environmental changes. The style information from the ML sensor is subsequently injected into a small portion of the policy network, modulating its weights. As a proof of concept, we first designed a 5-bit DTC at the normal voltage (1 V) and normal temperature (27℃) corner (NVNT) as the environment. The RL agent begins its training in the NVNT environment. Following this initial phase, the agent is tasked with adapting to environments with different temperatures and supply voltages. Our results show that the proposed technique can reduce the Integral Non-Linearity (INL) to less than 0.5 LSB within 10,000 search steps in a changed environment. Compared to starting learning from a randomly initialized policy and from a trained policy, the proposed meta-RL approach takes 63% and 47% fewer steps, respectively, to complete the linearity calibration. Our method is also applicable to the calibration of many other kinds of analog and RF circuits.
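A speculative sketch of the style-modulation idea mentioned above: a small "ML sensor" network maps raw circuit measurements to a style vector that rescales and shifts the activations of one policy layer, StyleGAN-fashion. The shapes, inputs, action meaning, and the choice of modulation site are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class StyleModulatedPolicy(nn.Module):
    def __init__(self, obs_dim=16, sensor_dim=8, hidden=64, n_actions=32):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # "ML sensor": infers a per-unit (scale, shift) style from raw measurements that
        # implicitly reflect temperature / supply-voltage changes.
        self.sensor = nn.Sequential(nn.Linear(sensor_dim, 32), nn.ReLU(),
                                    nn.Linear(32, 2 * hidden))
        self.head = nn.Linear(hidden, n_actions)   # e.g. which calibration code to adjust

    def forward(self, obs, sensor_readings):
        h = self.backbone(obs)
        scale, shift = self.sensor(sensor_readings).chunk(2, dim=-1)
        h = h * (1 + scale) + shift                 # modulate a small portion of the policy
        return self.head(h)

policy = StyleModulatedPolicy()
logits = policy(torch.randn(1, 16), torch.randn(1, 8))  # one DTC observation + sensor read
print(logits.shape)                                      # torch.Size([1, 32])
```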
  4. Deep Reinforcement Learning (DRL) has shown impressive performance on domains with visual inputs, in particular various games. However, the agent is usually trained on a fixed environment, e.g., a fixed number of levels. A growing body of evidence suggests that these trained models fail to generalize to even slight variations of the environments they were trained on. This paper advances the hypothesis that the lack of generalization is partly due to the input representation, and explores how rotation, cropping, and translation could increase generality. We show that a cropped, translated, and rotated observation can achieve better generalization on unseen levels of two-dimensional arcade games from the GVGAI framework. The generality of the agents is evaluated on both human-designed and procedurally generated levels.
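A minimal sketch of the observation transforms this abstract discusses (rotation, cropping, translation) applied to a 2D game frame before it reaches the agent. The crop ratio, shift range, and frame size are arbitrary choices for illustration, not the values used in the paper.

```python
import numpy as np

def augment_observation(frame: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Rotate by a random multiple of 90 degrees, translate, then center-crop the frame."""
    # Rotation.
    frame = np.rot90(frame, k=int(rng.integers(4)))
    # Translation: shift a few pixels, wrapping around the edges (a simplification).
    dy, dx = rng.integers(-3, 4, size=2)
    frame = np.roll(frame, shift=(int(dy), int(dx)), axis=(0, 1))
    # Crop: keep a centered window so absolute screen position matters less.
    h, w = frame.shape[:2]
    ch, cw = int(h * 0.8), int(w * 0.8)
    top, left = (h - ch) // 2, (w - cw) // 2
    return frame[top:top + ch, left:left + cw]

rng = np.random.default_rng(0)
obs = rng.random((84, 84, 3))                 # stand-in for a GVGAI screen observation
print(augment_observation(obs, rng).shape)    # (67, 67, 3)
```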
  5. This work presents a deep reinforcement learning (DRL) approach for procedural content generation (PCG) to automatically generate three-dimensional (3D) virtual environments that users can interact with. The primary objective of PCG methods is to algorithmically generate new content in order to improve user experience. Researchers have started exploring the use of machine learning (ML) methods to generate content. However, these approaches frequently implement supervised ML algorithms that require initial datasets to train their generative models. In contrast, RL algorithms do not require training data to be collected a priori since they take advantage of simulation to train their models. Considering the advantages of RL algorithms, this work presents a method that generates new 3D virtual environments by training an RL agent using a 3D simulation platform. This work extends the authors' previous work and presents the results of a case study that supports the capability of the proposed method to generate new 3D virtual environments. The ability to automatically generate new content has the potential to maintain users' engagement in a wide variety of applications, such as virtual reality applications for education and training, and engineering conceptual design.