Trinh Tuan Hung, Nguyen Van Bac, Tran Quang Duy
Keywords
Reinforcement learning, Double Deep Q network, Heterogeneous traffic conditions, Simulation, Motorcycle
Abstract
The rapid pace of urbanization and increasing travel demand have placed significant strain on transportation systems, particularly in densely populated urban areas. A key challenge is congestion at intersections, where high vehicle volumes often lead to long queues, extended travel times, increased environmental pollution, and economic inefficiencies. Traditional fixed-time signal control (FTSC) is insufficient to address these dynamic traffic patterns. In response, Reinforcement Learning (RL) has emerged as a promising approach for adaptive traffic signal control. This paper presents a Double Deep Q-Network (DDQN)-based model for optimizing traffic light control at a single urban intersection. DDQN represents a significant advance in RL: it combines the function approximation capability of deep neural networks (DNNs) with the Double Q-learning framework, effectively reducing overestimation bias and improving the stability of value-based learning. This integration enables RL to scale to complex real-world tasks such as traffic light control, where the state space is too large to represent explicitly in a table. The proposed DDQN model enhances traffic performance by dynamically adjusting signal phases under heterogeneous traffic conditions. Experimental results demonstrate the potential of the DDQN approach to increase traffic speed and reduce congestion compared to conventional strategies.
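The core DDQN idea mentioned above (decoupling action selection from action evaluation to reduce overestimation bias) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `ddqn_target`, the 4-phase action space, and all numeric Q-values are hypothetical.

```python
import numpy as np

def ddqn_target(q_online_next, q_target_next, reward, gamma, done):
    """Double DQN bootstrap target.

    The online network selects the greedy next action, while the
    target network evaluates it. This decoupling is what reduces
    the overestimation bias of vanilla Q-learning / DQN.
    """
    best_action = np.argmax(q_online_next)        # selection: online net
    bootstrap = q_target_next[best_action]        # evaluation: target net
    return reward + gamma * bootstrap * (1.0 - done)

# Toy next-state Q-value estimates for a 4-phase signal controller
# (hypothetical numbers, for illustration only).
q_online_next = np.array([1.2, 0.8, 1.5, 0.3])   # online net Q(s', .)
q_target_next = np.array([1.0, 0.9, 1.1, 0.4])   # target net Q(s', .)

# Reward here could be, e.g., negative queue length at the intersection.
y = ddqn_target(q_online_next, q_target_next, reward=-2.0, gamma=0.95, done=0.0)
print(y)  # -2.0 + 0.95 * 1.1 = -0.955
```

Note that the online network picks phase 2 (its own argmax, 1.5), but the target network's more conservative estimate for that phase (1.1) is what enters the target, rather than the target network's own maximum.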