Connected automated vehicle cooperative control with a deep reinforcement learning approach in a mixed traffic environment

2021 
Abstract This paper proposes a cooperative longitudinal control strategy for connected and automated vehicles (CAVs) in a mixed connected and automated traffic environment based on a deep reinforcement learning (DRL) algorithm, which enhances the string stability, car-following efficiency, and energy efficiency of mixed traffic. Since the possible vehicle sequences in mixed traffic are combinatorial, we decompose the mixed traffic into multiple subsystems, each comprising human-driven vehicles (HDVs) followed by cooperative CAVs, to reduce the training dimension and alleviate the communication burden. On this basis, a cooperative CAV control strategy is developed with DRL, enabling the CAVs to learn the leading HDV's characteristics and make longitudinal control decisions cooperatively, improving the performance of each subsystem locally and consequently enhancing the performance of the whole mixed traffic flow. For training, distributed proximal policy optimization is applied to ensure the convergence of the proposed DRL. Simulated experiments verify the effectiveness of the proposed method, showing that the model generalizes well: it dampens oscillations and fulfills the car-following and energy-saving tasks efficiently under different penetration rates and various leading-HDV behaviors.
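To make the described approach concrete, the following is a minimal sketch of the kind of proximal-policy-optimization agent such a subsystem-level longitudinal controller could use. The observation layout (gap, ego speed, leader speed, leader acceleration), the acceleration action, the reward weights, and all network sizes are illustrative assumptions, not the paper's exact design; only the clipped-surrogate PPO objective itself follows the standard formulation referenced in the abstract.

```python
# Hypothetical sketch of a PPO-based CAV longitudinal controller.
# Assumed observation: [gap, ego speed, leader speed, leader accel].
# Assumed action: continuous longitudinal acceleration command.
import torch
import torch.nn as nn
from torch.distributions import Normal


class ActorCritic(nn.Module):
    """Shared-trunk actor-critic for one CAV in a subsystem."""

    def __init__(self, obs_dim=4, act_dim=1, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.mu = nn.Linear(hidden, act_dim)       # mean acceleration
        self.log_std = nn.Parameter(torch.zeros(act_dim))
        self.value = nn.Linear(hidden, 1)          # state-value head

    def forward(self, obs):
        h = self.trunk(obs)
        dist = Normal(self.mu(h), self.log_std.exp())
        return dist, self.value(h).squeeze(-1)


def subsystem_reward(gap, ego_v, lead_v, accel,
                     desired_headway=1.5, w_eff=1.0, w_stab=0.5, w_energy=0.1):
    """Illustrative reward trading off car-following efficiency,
    oscillation damping (a string-stability proxy), and energy use.
    Weights and the constant-time-headway spacing policy are assumptions."""
    gap_error = gap - desired_headway * ego_v      # spacing-policy error
    speed_error = lead_v - ego_v                   # oscillation proxy
    return -(w_eff * gap_error ** 2
             + w_stab * speed_error ** 2
             + w_energy * accel ** 2)


def ppo_clip_loss(model, obs, act, old_logp, adv, ret,
                  clip_eps=0.2, vf_coef=0.5, ent_coef=0.01):
    """Clipped-surrogate PPO objective that distributed workers would share."""
    dist, value = model(obs)
    logp = dist.log_prob(act).sum(-1)
    ratio = torch.exp(logp - old_logp)             # importance ratio
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
    policy_loss = -torch.min(ratio * adv, clipped * adv).mean()
    value_loss = (ret - value).pow(2).mean()
    entropy = dist.entropy().sum(-1).mean()
    return policy_loss + vf_coef * value_loss - ent_coef * entropy
```

In a distributed PPO setup, several such workers (one per simulated subsystem) would collect trajectories in parallel and apply this loss to a shared policy, which is one common way to obtain the training convergence the abstract mentions.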