Pedestrian flow in densely populated or congested areas often exhibits irregular or turbulent motion due to the competitive behaviors of individual pedestrians, which reduces flow efficiency and raises the risk of crowd accidents. Effective pedestrian flow regulation strategies are therefore highly valuable for flow optimization. Existing studies seek optimal designs of indoor architectural features and spatial placements of pedestrian facilities for the purpose of flow optimization. However, once placed, these stationary facilities cannot adapt to real-time flow changes. In this paper, we investigate the problem of regulating two merging pedestrian flows in a bottleneck area using a mobile robot moving among the pedestrians. The flows are regulated through dynamic human-robot interaction (HRI) during their collective motion. We adopt an adaptive dynamic programming (ADP) method to learn the optimal motion parameters of the robot in real time, so that the outflow through the bottleneck is maximized while the crowd pressure is reduced to avoid potential crowd disasters. The proposed algorithm is a data-driven approach that uses only camera observations of the pedestrian flows, without explicit models of pedestrian dynamics or HRI. Extensive simulation studies are performed in both Matlab and a robotic simulator to verify the proposed approach and evaluate its performance.
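For readers who want the shape of such a data-driven ADP loop, the following is a minimal sketch: a critic with a simple quadratic-in-action basis is fit from observed rewards (outflow minus a crowd-pressure penalty), and the robot's motion parameter is updated along the critic's action gradient. All names, the reward form, and the toy flow model are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Robot motion parameter to be learned (e.g. an oscillation amplitude);
# the name and scale are assumed for illustration.
theta = 0.5
alpha_actor, alpha_critic = 0.02, 0.1

# Linear critic Q(s, a) = w . phi(s, a), quadratic in the action so the
# actor update has an interior maximum.
w = np.zeros(4)

def features(s, a):
    return np.array([1.0, s, a, a * a])

def simulate_step(a):
    """Stand-in for one regulation step observed by camera: returns a
    crowd-density reading and a reward (outflow minus a pressure
    penalty). Entirely a toy flow model, not a pedestrian simulator."""
    density = rng.uniform(0.4, 0.9)
    outflow = 1.0 - (a - 0.3 * density) ** 2
    pressure = density * abs(a)
    return density, outflow - 0.5 * pressure

for episode in range(2000):
    a = theta + 0.1 * rng.standard_normal()            # exploration noise
    s, r = simulate_step(a)
    phi = features(s, a)
    td_error = r - w @ phi                             # critic regression target
    w += alpha_critic * td_error * phi
    theta += alpha_actor * (w[2] + 2.0 * w[3] * theta) # ascend critic's dQ/da

print(f"learned motion parameter: {theta:.3f}")
```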
Robot-Assisted Pedestrian Regulation Based on Deep Reinforcement Learning
Pedestrian regulation can prevent crowd accidents and improve crowd safety in densely populated areas. Recent studies use mobile robots to regulate pedestrian flows for desired collective motion through the effect of passive human-robot interaction (HRI). This paper formulates a robot motion planning problem for the optimization of two merging pedestrian flows moving through a bottleneck exit. To address the challenge of feature representation of complex human motion dynamics under the effect of HRI, we propose using a deep neural network to model the mapping from the image input of pedestrian environments to the output of robot motion decisions. The robot motion planner is trained end-to-end using a deep reinforcement learning algorithm, which avoids hand-crafted feature detection and extraction and thus improves the learning capability for complex dynamic problems. Our proposed approach is validated in simulated experiments, and its performance is evaluated. The results demonstrate that the robot is able to find optimal motion decisions that maximize the pedestrian outflow under different flow conditions, and that the accumulated pedestrian outflow increases significantly compared with cases of no robot regulation and of random robot motion.
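The end-to-end planner described above can be sketched in a DQN style: a small convolutional network maps a pedestrian-environment image directly to Q-values over a discrete set of robot motions, trained from (state, action, reward, next state) transitions. The network sizes, the action set, and the reward signal below are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

N_ACTIONS = 5  # e.g. stay, move left/right, speed up/slow down (assumed)

class QNet(nn.Module):
    """CNN from an 84x84 crowd image to Q-values over robot motions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, 128), nn.ReLU(),
            nn.Linear(128, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

q, target = QNet(), QNet()
target.load_state_dict(q.state_dict())
opt = torch.optim.Adam(q.parameters(), lr=1e-4)
gamma = 0.99

# One gradient step on a dummy transition batch (s, a, r, s').
s  = torch.randn(32, 1, 84, 84)   # stand-in for crowd images
a  = torch.randint(0, N_ACTIONS, (32,))
r  = torch.randn(32)              # e.g. per-step bottleneck outflow (assumed)
s2 = torch.randn(32, 1, 84, 84)

with torch.no_grad():
    y = r + gamma * target(s2).max(1).values          # bootstrapped target
loss = nn.functional.mse_loss(q(s).gather(1, a[:, None]).squeeze(1), y)
opt.zero_grad(); loss.backward(); opt.step()
```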
- Publication Date:
- NSF-PAR ID: 10109298
- Journal Name: IEEE Transactions on Cybernetics
- Page Range or eLocation-ID: 1 to 14
- ISSN: 2168-2267
- Sponsoring Org: National Science Foundation
More Like this
- The objective of this work is to augment the basic abilities of a robot by learning to use sensorimotor primitives to solve complex long-horizon manipulation problems. This requires flexible generative planning that can combine primitive abilities in novel combinations and thus generalize across a wide variety of problems. In order to plan with primitive actions, we must have models of the actions: under what circumstances will executing this primitive successfully achieve some particular effect in the world? We use, and develop novel improvements to, state-of-the-art methods for active learning and sampling. We use Gaussian process methods to learn the constraints on skill effectiveness from small numbers of expensive-to-collect training examples. In addition, we develop efficient adaptive sampling methods for generating a comprehensive and diverse sequence of continuous candidate control parameter values (such as pouring waypoints for a cup) during planning. These values become end-effector goals for traditional motion planners, which then solve for a full robot motion that performs the skill. By using learning and planning methods in conjunction, we take advantage of the strengths of each and plan for a wide variety of complex dynamic manipulation tasks. We demonstrate our approach in an integrated system combining traditional robotics primitives with learned models.
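A hedged sketch of the two learning components named above, using scikit-learn's Gaussian process regressor: fit a skill-success model from a few expensive trials of a single control parameter (e.g. a pouring waypoint height), then adaptively propose a diverse, high-scoring sequence of candidate values. The data, kernel, and selection rule are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# A handful of expensive trials: parameter value -> success score in [0, 1].
X = np.array([[0.05], [0.10], [0.20], [0.30]])
y = np.array([0.1, 0.7, 0.9, 0.3])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.05), alpha=1e-3)
gp.fit(X, y)

# Score a dense grid of candidate parameters with a UCB-style rule, then
# greedily keep candidates that are far from those already chosen, so the
# proposed sequence stays diverse as well as promising.
grid = np.linspace(0.0, 0.4, 200)[:, None]
mu, sd = gp.predict(grid, return_std=True)
scores = mu + 0.5 * sd

chosen = []
for i in np.argsort(-scores):
    if all(abs(grid[i, 0] - c) > 0.03 for c in chosen):
        chosen.append(grid[i, 0])
    if len(chosen) == 5:
        break
print("candidate pouring parameters:", np.round(chosen, 3))
```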
- Existing methods for pedestrian motion trajectory prediction learn and predict trajectories in the 2D image space. In this work, we observe that it is much more efficient to learn and predict pedestrian trajectories in 3D, since human motion occurs in the 3D physical world and behavior patterns are better represented in 3D space. To this end, we use a stereo camera system to detect and track the human pose with deep neural networks. During pose estimation, these twin deep neural networks satisfy the stereo consistency constraint. We adapt the existing SocialGAN method to perform pedestrian motion trajectory prediction in the 3D space rather than the 2D image space. Our extensive experimental results demonstrate that the proposed method significantly improves pedestrian trajectory prediction performance, outperforming existing state-of-the-art methods.
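The step that lifts learning from the 2D image plane into 3D is stereo triangulation of matched keypoints; a minimal sketch for a rectified stereo pair follows. The camera intrinsics and baseline are made-up placeholders, not values from the paper.

```python
import numpy as np

def triangulate(u_left, u_right, v, fx, fy, cx, cy, baseline):
    """Rectified stereo: depth Z = fx * B / disparity, then back-project."""
    disparity = u_left - u_right
    Z = fx * baseline / disparity
    X = (u_left - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.array([X, Y, Z])

# Example: an ankle keypoint seen at u = 640 px (left) and 600 px (right).
p = triangulate(640.0, 600.0, 420.0, fx=1000.0, fy=1000.0,
                cx=640.0, cy=360.0, baseline=0.12)
print("3D position (m):", np.round(p, 3))  # [0.0, 0.18, 3.0]
```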
- Human-Robot Collaboration (HRC), which envisions a workspace in which humans and robots can dynamically collaborate, has been identified as a key element in smart manufacturing. Human action recognition plays a key role in realizing HRC, as it helps identify the current human action and provides the basis for future action prediction and robot planning. Despite recent developments in Deep Learning (DL) that have demonstrated great potential for advancing human action recognition, a key issue remains: how to effectively leverage the temporal information of human motion to improve recognition performance. Furthermore, a large volume of training data is often difficult to obtain due to manufacturing constraints, which poses a challenge for the optimization of DL models. This paper presents an integrated method based on optical flow and convolutional neural network (CNN)-based transfer learning to tackle these two issues. First, optical flow images, which encode the temporal information of human motion, are extracted and serve as the input to a two-stream CNN structure for simultaneous parsing of the spatial-temporal information of human motion. Then, transfer learning is investigated to transfer the feature extraction capability of a pretrained CNN to manufacturing scenarios. Evaluation on an engine block assembly task confirmed the effectiveness of the proposed method.
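The two ingredients, optical-flow input and CNN transfer learning, can be sketched with OpenCV and torchvision: dense Farneback flow between two frames becomes a 3-channel image, and only a new classification head on a frozen pretrained ResNet-18 is left trainable. The class count and layer choices are assumptions; real inputs would also be normalized to the pretrained network's statistics.

```python
import cv2
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

# (1) Temporal stream input: Farneback dense optical flow between frames.
prev = np.random.randint(0, 255, (224, 224), dtype=np.uint8)
curr = np.random.randint(0, 255, (224, 224), dtype=np.uint8)
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Stack (dx, dy, magnitude) into a 3-channel image for the CNN.
mag = np.linalg.norm(flow, axis=2, keepdims=True)
x = np.concatenate([flow, mag], axis=2).transpose(2, 0, 1)
x = torch.from_numpy(x).float().unsqueeze(0)

# (2) Transfer learning: freeze the pretrained backbone, replace the head.
N_ACTIONS = 7  # assumed number of assembly actions
net = models.resnet18(weights="IMAGENET1K_V1")
for p in net.parameters():
    p.requires_grad = False
net.fc = nn.Linear(net.fc.in_features, N_ACTIONS)  # only trainable layer

logits = net(x)
print(logits.shape)  # torch.Size([1, 7])
```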
- Effective robotic systems must be able to produce desired motion in a sufficiently broad variety of robot states and environmental contexts. Classic control and planning methods achieve such coverage through the synthesis of model-based components. New applications and platforms, such as soft robots, present novel challenges, ranging from richer dynamical behaviors to increasingly unstructured environments. In these setups, derived models frequently fail to express important real-world subtleties. An increasingly popular approach to this issue is end-to-end machine learning architectures, which adapt to such complexities through a data-driven process. Unfortunately, however, data are not always available for all regions of the operational space, which complicates the extensibility of these solutions. In light of these issues, this paper proposes a reconciliation of classic motion synthesis with modern data-driven tools towards the objective of "deep coverage". This notion utilizes the concept of composability, a feature of traditional control and planning methods, over data-derived "motion elements", towards generalizable and scalable solutions that adapt to real-world experience.
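As a toy illustration of composing data-derived motion elements, consider elements that are each valid in a region of the state space and a scheduler that chains whichever element applies; this is purely illustrative of the composability idea, not of any specific system.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MotionElement:
    name: str
    applicable: Callable[[float], bool]  # region of validity
    step: Callable[[float], float]       # local (possibly learned) controller

elements = [
    MotionElement("reach",  lambda s: s < 0.5,  lambda s: s + 0.1),
    MotionElement("insert", lambda s: s >= 0.5, lambda s: s + 0.05),
]

# Chain elements whose regions connect the start state to the goal.
s, goal, used = 0.0, 1.0, []
while s < goal:
    elem = next(e for e in elements if e.applicable(s))
    s = elem.step(s)
    if not used or used[-1] != elem.name:
        used.append(elem.name)
print("composed sequence:", used)  # ['reach', 'insert']
```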