Modeling Production Scheduling Problems as Reinforcement Learning Environments based on Discrete-Event Simulation and OpenAI Gym

2021 
Abstract Reinforcement learning (RL) is an emerging research topic in production and logistics, as it offers the potential to solve complex planning and control problems in real time. In recent years, many researchers have investigated RL algorithms for solving production scheduling problems. However, most of the related articles reveal little information about the process of developing and implementing RL applications. Against this background, we present a method for modeling production scheduling problems as RL environments. More specifically, we propose the application of Discrete-Event Simulation for modeling production scheduling problems as interoperable environments and the Gym interface of the OpenAI foundation to allow a simple integration of pre-built RL algorithms from OpenAI Baselines and Stable Baselines. We support our explanations with a simple example of a job shop scheduling problem.
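
The following sketch illustrates the general idea described in the abstract, not the authors' actual implementation: a toy dispatching environment exposed through the OpenAI Gym interface so that pre-built algorithms such as those in Stable Baselines can interact with it. The class name, job data, reward shaping, and the simplified "processing" step are illustrative assumptions; in the paper's approach a Discrete-Event Simulation would advance the clock between scheduling decisions.

```python
import gym
import numpy as np
from gym import spaces


class ToySchedulingEnv(gym.Env):
    """Hypothetical example: the agent picks the next job to dispatch on one machine."""

    def __init__(self, processing_times=(3.0, 5.0, 2.0, 7.0)):
        super().__init__()
        self.processing_times = np.array(processing_times, dtype=np.float32)
        self.n_jobs = len(processing_times)
        # Action: index of the job to dispatch next.
        self.action_space = spaces.Discrete(self.n_jobs)
        # Observation: remaining processing time of each job (0 = finished).
        self.observation_space = spaces.Box(
            low=0.0, high=np.inf, shape=(self.n_jobs,), dtype=np.float32
        )
        self.reset()

    def reset(self):
        self.remaining = self.processing_times.copy()
        self.clock = 0.0
        return self.remaining.copy()

    def step(self, action):
        if self.remaining[action] > 0:
            # Stand-in for a DES run: process the selected job to completion.
            self.clock += float(self.remaining[action])
            self.remaining[action] = 0.0
            reward = -self.clock  # penalize total flow time (illustrative choice)
        else:
            reward = -10.0  # discourage selecting an already finished job
        done = bool(np.all(self.remaining == 0))
        return self.remaining.copy(), reward, done, {}


# Because the environment follows the Gym API, it can be handed directly to
# pre-built agents, e.g. (assuming Stable Baselines3 is installed):
#   from stable_baselines3 import PPO
#   model = PPO("MlpPolicy", ToySchedulingEnv(), verbose=1)
#   model.learn(total_timesteps=10_000)
```

This separation is the main design point of the Gym interface: the scheduling logic lives entirely inside the environment, so different RL algorithms can be swapped in without changing the simulation model.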