Flexible control of Discrete Event Systems using environment simulation and Reinforcement Learning

2021 
Abstract Discrete Event Systems (DESs) are classically modeled as Finite State Machines (FSMs) and controlled in a maximally permissive, controllable, and nonblocking way using Supervisory Control Theory (SCT). While SCT is powerful for orchestrating the events of DESs, it fails to process events whose control depends on probabilistic assumptions. In this research, we show that some events can be handled as usual in SCT, while others can be processed using Artificial Intelligence. We present a tool that converts SCT controllers into Reinforcement Learning (RL) simulation environments, from which they become suitable for intelligent processing. We then propose an RL-based approach that recognizes the context under which a selected set of stochastic events occurs and treats them accordingly, aiming to find suitable decisions that complement the deterministic outcomes of the SCT. The result is an efficient combination of safe and flexible control, which tends to maximize performance for a class of DESs that evolve probabilistically. Two RL algorithms, State–Action–Reward–State–Action (SARSA) and n-step SARSA, are tested on a flexible automotive plant control. Results suggest a ninefold performance improvement when using the proposed combination in comparison with non-intelligent decisions.
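To make the learning component concrete, the on-policy SARSA update used by the approach can be sketched in tabular form. The toy chain environment, state/action sets, and hyperparameters below are hypothetical illustrations, not taken from the paper; only the update rule Q(s,a) ← Q(s,a) + α·(r + γ·Q(s′,a′) − Q(s,a)) is the standard SARSA algorithm named in the abstract.

```python
import random

random.seed(0)

# Hypothetical toy environment: a 5-state chain where action 1 moves right,
# action 0 moves left, and reward 1 is given on reaching the rightmost state.
N_STATES, ACTIONS = 5, [0, 1]
ALPHA, GAMMA, EPSILON = 0.5, 0.3, 0.3  # illustrative values only

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy dynamics: deterministic moves, terminal reward at the last state."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def policy(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for _ in range(300):            # episodes
    s, a = 0, policy(0)
    for _ in range(200):        # step cap keeps episodes bounded
        s2, r, done = step(s, a)
        a2 = policy(s2)
        # SARSA is on-policy: the bootstrap uses the action actually taken next.
        target = r if done else r + GAMMA * Q[(s2, a2)]
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s, a = s2, a2
        if done:
            break

# After training, the greedy policy should prefer moving toward the reward.
best_at_start = max(ACTIONS, key=lambda a: Q[(0, a)])
```

The n-step SARSA variant mentioned in the abstract differs only in that the target accumulates n discounted rewards before bootstrapping from Q.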