Max-pressure (MP) signal timing is an actuated, decentralized signal control policy. Rigorous mathematical studies have proven the stability properties and established the benefits of different MP policies. However, these theoretical studies make assumptions about traffic properties that may not hold in reality, and the effects of those assumptions under realistic traffic conditions are rarely explored in the literature. This study examines how different variations of MP perform in realistic scenarios and identifies the most practical policy among them for implementation on real roads. Microsimulation models of seven intersections from two corridors, County Road (CR) 30 and CR 109 in Hennepin County, Minnesota, were created. Real-life demand and current signal timing data provided by Hennepin County were used to make the simulations as close to reality as possible. In this paper, we compare the performance of the current actuated-coordinated signal control with an acyclic MP policy and two variations of cyclic MP policies. We report the performance of the different control policies in terms of delay, throughput, worst-lane delay, and number of phase changes, and we show how different parameters affect the performance of the MP policies. We found that better performance can be achieved with a cyclic MP policy by allowing phase skipping when no vehicles are waiting. Our findings also suggest that most of the claimed performance benefits can still be achieved under real-life traffic conditions, even with the simplified assumptions made in the theoretical models. In most cases, the MP control policies outperformed the current signal control.
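The core idea of max-pressure control, and of the phase-skipping variant the abstract highlights, can be illustrated with a minimal sketch. This is not the authors' implementation; the phase/queue data structures and unit values are hypothetical, and real MP policies weight queues by saturation flow rates and enforce minimum green times.

```python
# Minimal sketch of MP phase selection, assuming hypothetical queue data.
# A "phase" is a tuple of movement IDs it serves.

def pressure(phase, queues, downstream):
    # Pressure of a phase = sum over its movements of
    # (upstream queue length - downstream queue length).
    return sum(queues[m] - downstream.get(m, 0) for m in phase)

def max_pressure_phase(phases, queues, downstream):
    # Acyclic MP: at each decision point, serve the phase with
    # the largest pressure, regardless of phase order.
    return max(phases, key=lambda p: pressure(p, queues, downstream))

def next_cyclic_phase(ring, current_idx, queues):
    # Cyclic MP with phase skipping: follow a fixed ring of phases,
    # but skip any phase whose movements have no waiting vehicles.
    n = len(ring)
    for step in range(1, n + 1):
        idx = (current_idx + step) % n
        if any(queues[m] > 0 for m in ring[idx]):
            return idx
    return current_idx  # no demand anywhere: hold the current phase
```

The acyclic variant maximizes pressure freely, while the cyclic variant preserves a driver-familiar phase order and only deviates by skipping empty phases, which is the behavior the study found most practical.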
- Award ID(s):
- 1749200
- NSF-PAR ID:
- 10403492
- Date Published:
- Journal Name:
- Transportation Research Record: Journal of the Transportation Research Board
- ISSN:
- 0361-1981
- Page Range / eLocation ID:
- 036119812211470
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Hybrid traffic, which involves both autonomous and human-driven vehicles, will be the norm in autonomous vehicle practice for some time. On the one hand, unlike autonomous vehicles, human-driven vehicles can exhibit sudden abnormal behaviors, such as unpredictably switching to dangerous driving modes that put neighboring vehicles at risk; such undesired mode switching can arise from a number of human driver factors, including fatigue, drunkenness, distraction, and aggressiveness. On the other hand, modern vehicle-to-vehicle (V2V) communication technologies enable autonomous vehicles to efficiently and reliably share scarce run-time information with each other [1]. In this paper, we propose, to the best of our knowledge, the first efficient algorithm that can (1) significantly improve trajectory prediction by effectively fusing the run-time information shared by surrounding autonomous vehicles, and (2) accurately and quickly detect abnormal human driving mode switches or abnormal driving behavior with formal assurance, without compromising human drivers' privacy.
To validate our proposed algorithm, we first evaluate our proposed trajectory predictor on the NGSIM and Argoverse datasets and show that it outperforms the baseline methods. Then, through extensive experiments on the SUMO simulator, we show that our proposed algorithm achieves strong detection performance in both highway and urban traffic. The best performance achieves a detection rate of \(97.3\%\), an average detection delay of 1.2 s, and zero false alarms.
-
Connected and automated vehicle (CAV) technology is providing urban transportation managers tremendous opportunities for better operation of urban mobility systems. However, there are significant challenges in real-time implementation, as the computational time of the corresponding operations optimization model increases exponentially with the number of vehicles. Following the companion paper (Chen et al. 2021), which proposes a novel automated traffic control scheme for isolated intersections, this study proposes a network-level, real-time traffic control framework for CAVs on grid networks. The proposed framework integrates a rhythmic control method with an online routing algorithm to realize collision-free control of all CAVs on a network and achieve superior performance in average vehicle delay, network traffic throughput, and computational scalability. Specifically, we construct a preset network rhythm that all CAVs can follow to move on the network and avoid collisions at all intersections. Based on the network rhythm, we then formulate online routing for the CAVs as a mixed integer linear program, which optimizes the entry times of CAVs at all entrances of the network and their time-space routings in real time. We provide a sufficient condition under which the linear programming relaxation of the online routing model yields an optimal integer solution. Extensive numerical tests are conducted to show the performance of the proposed operations management framework under various scenarios. The framework is capable of achieving negligible delays and increased network throughput, and the computational time results are also promising: the CPU time for solving a collision-free control optimization problem with 2,000 vehicles is only 0.3 seconds on an ordinary personal computer.
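The rhythmic-control idea described above can be illustrated with a much-simplified greedy sketch rather than the paper's mixed integer linear program: each intersection admits one vehicle per rhythm slot, and a vehicle's entry is delayed until every intersection slot along its route is free. All names, the unit travel time, and the one-vehicle-per-slot model are illustrative assumptions, not the paper's formulation.

```python
# Illustrative sketch of rhythm-based slot assignment on a grid.
# A slot is (intersection_id, rhythm_tick); a route occupies one
# slot per intersection, assuming unit travel time between them.

def route_slots(route, entry_slot, travel_time=1):
    # Slots a vehicle occupies along its route if it enters at entry_slot.
    return [(node, entry_slot + i * travel_time) for i, node in enumerate(route)]

def earliest_feasible_entry(route, occupied, max_slot=100):
    # Greedy analogue of the entry-time optimization: delay entry
    # until no intersection slot on the route is already taken.
    for t in range(max_slot):
        needed = route_slots(route, t)
        if all(s not in occupied for s in needed):
            occupied.update(needed)
            return t
    return None  # no feasible entry within the horizon
```

In the actual framework, entry times and time-space routings are jointly optimized by the MILP; this sketch only conveys why a shared rhythm makes collision avoidance reducible to slot bookkeeping.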
-
Abstract: Radio access network (RAN) in 5G is expected to satisfy the stringent delay requirements of a variety of applications. The packet scheduler plays an important role by allocating spectrum resources to user equipments (UEs) at each transmit time interval (TTI). In this paper, we show that optimal scheduling is a challenging combinatorial optimization problem, which is hard to solve within the channel coherence time with conventional optimization methods. Rule-based scheduling methods, on the other hand, are hard to adapt to the time-varying wireless channel conditions and various data request patterns of UEs. Recently, integrating artificial intelligence (AI) into wireless networks has drawn great interest from both academia and industry. In this paper, we incorporate deep reinforcement learning (DRL) into the design of cellular packet scheduling. A delay-aware cell traffic scheduling algorithm is developed to map the observed system state to a scheduling decision. Due to the huge state space, a recurrent neural network (RNN) is utilized to approximate the optimal action-policy function. Different from conventional rule-based scheduling methods, the proposed scheme can learn from interactions with the environment and adaptively choose the best scheduling decision at each TTI. Simulation results show that the DRL-based packet scheduling achieves the lowest average delay compared with several conventional approaches, while also significantly reducing the UEs' average queue lengths. The developed method exhibits great potential for real-time scheduling in delay-sensitive scenarios.
-
Abstract: Reinforcement learning-based traffic signal control systems (RLTSC) can enhance dynamic adaptability, save vehicle travel time, and increase intersection capacity. However, existing RLTSC methods do not consider the driver's response time requirement, so these systems often face efficiency limitations and implementation difficulties. We propose the advance decision-making reinforcement learning traffic signal control (AD-RLTSC) algorithm to improve traffic efficiency while ensuring safety in a mixed traffic environment. First, the relationship between the intersection perception range and the signal control period is established, and the trust region state (TRS) is proposed. Then, the scalable state matrix is dynamically adjusted to decide the future signal light status. The decision is displayed to human-driven vehicles (HDVs) through a bi-countdown timer mechanism and sent to nearby connected automated vehicles (CAVs) over the wireless network, rather than being executed immediately. HDVs and CAVs optimize their driving speed based on the remaining green (or red) time. In addition, the Double Dueling Deep Q-learning Network algorithm is used for reinforcement learning training; a standardized reward is proposed to enhance the performance of intersection control, and prioritized experience replay is adopted to improve sample utilization. Experimental results on vehicle micro-behavior and traffic macro-efficiency showed that the proposed AD-RLTSC algorithm can improve both traffic efficiency and traffic flow stability.
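The countdown-based speed optimization mentioned above can be sketched as a simple advisory rule: knowing the remaining green time in advance, a vehicle chooses a speed that lets it clear the stop line without stopping. The function name, speed limits, and the stop-on-infeasible rule are illustrative assumptions, not the paper's controller.

```python
# Hedged sketch of a countdown-driven speed advisory (illustrative only).
# distance_m: distance to the stop line; remaining_green_s: announced
# remaining green time; v_max/v_min: assumed speed bounds in m/s.

def advisory_speed(distance_m, remaining_green_s, v_max=15.0, v_min=3.0):
    if remaining_green_s <= 0:
        return 0.0  # light is (about to be) red: prepare to stop
    v = distance_m / remaining_green_s  # speed needed to arrive before green ends
    if v > v_max:
        return 0.0  # cannot make the green legally: prepare to stop
    return max(v, v_min)  # otherwise cruise, no slower than v_min
```

The point of announcing the decision in advance, as AD-RLTSC does, is precisely that both HDVs (via the countdown display) and CAVs (via the wireless message) can apply this kind of rule instead of reacting at the instant the phase changes.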