Optimization of the Model Predictive Control Update Interval Using Reinforcement Learning

2021 
Abstract
In control applications there is often a trade-off between the complexity and performance of the controller and the computational resources that are available. For instance, the typical hardware platform in embedded control applications is a microcontroller with limited memory and processing power, and for battery-powered applications the control system can account for a significant portion of the energy consumption. We propose a controller architecture in which the computational cost is explicitly optimized along with the control objective. This is achieved by a three-part architecture: a high-level, computationally expensive controller generates plans; a computationally simpler controller executes the plans by compensating for prediction errors; and a recomputation policy decides when the plan should be recomputed. In this paper, we employ model predictive control (MPC) as the high-level plan-generating controller, a linear state feedback controller as the simpler compensating controller, and reinforcement learning (RL) to learn the recomputation policy. Simulation results for the classic control task of balancing an inverted pendulum show that not only is the total processor time reduced by 60%, but the RL policy also uncovers a non-trivial synergistic relationship between the MPC and the state feedback controller, improving the control performance by 20% over MPC alone.
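To make the three-part architecture concrete, the following is a minimal, hypothetical Python sketch of the control loop described in the abstract. The pendulum dynamics, the feedback gain K, the stand-in mpc_plan routine, and the threshold-based recompute_policy are all illustrative assumptions, not the paper's implementation; in particular, the paper learns the recomputation decision with RL rather than hard-coding a threshold.

```python
import numpy as np

def mpc_plan(x0, horizon=20):
    """Stand-in for an expensive MPC solve (assumed): returns a nominal
    state trajectory and input sequence. A real implementation would
    solve a constrained optimal control problem over the horizon."""
    xs = np.array([x0 * 0.9 ** k for k in range(horizon + 1)])
    us = np.zeros((horizon, 1))
    return xs, us

def step(x, u, dt=0.02):
    """Discretized, linearized pendulum dynamics (assumed values);
    the upright equilibrium is open-loop unstable."""
    A = np.array([[1.0, dt], [0.5 * dt, 1.0]])
    B = np.array([[0.0], [dt]])
    return A @ x + B @ u

K = np.array([[12.0, 4.0]])  # stabilizing feedback gain (assumed, not from the paper)

def recompute_policy(x, x_ref):
    """Stand-in for the learned recomputation policy: replan when the
    deviation from the plan exceeds a threshold. The paper instead
    learns this decision with RL, trading replan cost against error."""
    return np.linalg.norm(x - x_ref) > 0.1

x = np.array([0.3, 0.0])      # initial angle offset, zero angular velocity
plan_x, plan_u = mpc_plan(x)
k = 0
mpc_calls = 0
for t in range(200):
    if k >= len(plan_u) or recompute_policy(x, plan_x[k]):
        plan_x, plan_u = mpc_plan(x)   # expensive replan, counted explicitly
        k = 0
        mpc_calls += 1
    # cheap compensator: nominal plan input plus feedback on the plan deviation
    u = plan_u[k] - (K @ (x - plan_x[k]))
    x = step(x, u)
    k += 1

print(f"MPC recomputations: {mpc_calls} / 200 steps; final state: {x}")
```

Running the sketch prints how many of the 200 control steps triggered a (notionally expensive) replan, which is the quantity the learned policy would minimize jointly with the control objective.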