A Deep Reinforcement Learning Framework for Eco-driving in Connected and Automated Hybrid Electric Vehicles.

2021 
Connected and Automated Vehicles (CAVs), in particular those with multiple power sources, have the potential to significantly reduce fuel consumption and travel time in real-world driving conditions. Specifically, the Eco-driving problem seeks to design optimal speed and power usage profiles, based on look-ahead information from connectivity and advanced mapping features, that minimize fuel consumption over a given itinerary. Due to the complexity of the problem and the limited on-board computational capability, the real-time implementation of many existing methods that rely on online trajectory optimization is infeasible. In this work, the Eco-driving problem is formulated as a Partially Observable Markov Decision Process (POMDP), which is then solved with a state-of-the-art Deep Reinforcement Learning (DRL) actor-critic algorithm, Proximal Policy Optimization (PPO). An Eco-driving simulation environment is developed for training and testing. To benchmark the performance of the DRL controller, a baseline controller representing the human driver and the wait-and-see deterministic optimal solution are presented. With minimal on-board computational requirements and comparable travel time, the DRL controller reduces fuel consumption by more than 17% relative to the baseline by modulating the vehicle velocity over the route and performing energy-efficient approaches and departures at signalized intersections.
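The paper's simulation environment and powertrain model are not reproduced in the abstract. As an illustration only, the sketch below shows how such a POMDP eco-driving environment might be structured: the agent observes velocity, distance to the next signalized intersection, and the current signal phase, while the signal's cycle offset is hidden state (hence partial observability). All constants, the fuel proxy, and the reward weights here are hypothetical placeholders, not the authors' model.

```python
import random


class EcoDrivingEnv:
    """Toy POMDP eco-driving environment (illustrative sketch; the paper's
    simulator models a hybrid powertrain and real signal timing data).

    Observation: (velocity [m/s], distance to next signal [m], green flag)
    Action: longitudinal acceleration command [m/s^2].
    """

    ROUTE_LENGTH = 1000.0  # hypothetical itinerary length, m
    SIGNAL_POS = 500.0     # one signalized intersection at mid-route, m
    DT = 1.0               # control step, s

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.pos = 0.0
        self.vel = 0.0
        self.t = 0.0
        # The signal cycle offset is hidden state -> partial observability.
        self.signal_offset = self.rng.uniform(0.0, 60.0)
        return self._observe()

    def _signal_green(self):
        # Toy 30 s green / 30 s red cycle with a hidden phase offset.
        return ((self.t + self.signal_offset) % 60.0) < 30.0

    def _observe(self):
        dist = max(self.SIGNAL_POS - self.pos, 0.0)
        return (self.vel, dist, 1.0 if self._signal_green() else 0.0)

    def step(self, accel):
        self.vel = max(self.vel + accel * self.DT, 0.0)
        self.pos += self.vel * self.DT
        self.t += self.DT
        # Toy fuel proxy: idle burn plus positive-power demand.
        fuel = 0.1 + 0.05 * max(accel, 0.0) * self.vel
        # Reward trades off fuel against travel time, mirroring the
        # fuel/time objective described in the abstract.
        reward = -fuel - 0.01 * self.DT
        done = self.pos >= self.ROUTE_LENGTH
        return self._observe(), reward, done
```

An actor-critic learner such as PPO would then be trained on rollouts of this interface; the velocity-modulation behavior reported in the paper corresponds to the agent learning to trade the fuel term against the time term in the reward.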