A Hybrid Multi-Task Learning Approach for Optimizing Deep Reinforcement Learning Agents

2021 
Driven by recent technological advancements in artificial intelligence (AI), deep learning (DL) has emerged as a promising representation learning technique across different machine learning (ML) classes, especially within the reinforcement learning (RL) arena. This direction has given rise to a new technological domain, deep reinforcement learning (DRL), which combines the high representational learning capability of DL with existing RL methods. Performance optimization by RL-based intelligent agents built on model-free approaches has largely been limited to systems whose RL algorithms focus on learning a single task. That approach proves quite data inefficient whenever DRL agents need to interact with more complex, data-rich environments, primarily because DRL algorithms generalize poorly across related tasks drawn from the same distribution. One possible way to mitigate this issue is to adopt multi-task learning. The objective of this research paper is to present a hybrid multi-task learning-oriented approach for optimizing DRL agents that operate in different but semantically similar environments with related tasks. The proposed framework is built from multiple individual actor-critic models, each acting within an independent environment, that transfer knowledge among themselves through a global network to optimize performance. The empirical results obtained by the hybrid multi-task learning model on OpenAI Gym based Atari 2600 video gaming environments demonstrate that the proposed model improves the performance of the DRL agent by a relative margin of roughly 15% to 20%.
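
To illustrate the kind of architecture the abstract describes (several actor-critic workers in independent environments exchanging knowledge through a global network), the sketch below shows a minimal A3C-style arrangement: a local actor-critic copy collects a rollout in its own environment and pushes its gradients into a shared global network. The class and function names (ActorCritic, worker_rollout), the CartPole-v1 environment, the Gym >= 0.26 step/reset API, and all hyperparameters are illustrative assumptions, not details taken from the paper.

    # Minimal sketch of an A3C-style worker/global-network setup.
    # All names, the environment, and hyperparameters here are assumptions
    # for illustration; they do not reproduce the paper's implementation.
    import gym
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ActorCritic(nn.Module):
        """Shared-trunk actor-critic: policy logits plus a state-value head."""
        def __init__(self, obs_dim, n_actions, hidden=128):
            super().__init__()
            self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
            self.policy = nn.Linear(hidden, n_actions)
            self.value = nn.Linear(hidden, 1)

        def forward(self, x):
            h = self.trunk(x)
            return self.policy(h), self.value(h)

    def worker_rollout(env, local_net, global_net, optimizer, gamma=0.99, steps=32):
        """One worker update: pull the latest global weights, collect a short
        rollout with the local copy, then apply the local gradients to the
        shared global network."""
        local_net.load_state_dict(global_net.state_dict())
        obs, _ = env.reset()  # Gym >= 0.26 API: reset() returns (obs, info)
        log_probs, values, rewards = [], [], []
        for _ in range(steps):
            x = torch.as_tensor(obs, dtype=torch.float32)
            logits, value = local_net(x)
            dist = torch.distributions.Categorical(logits=logits)
            action = dist.sample()
            obs, reward, terminated, truncated, _ = env.step(action.item())
            log_probs.append(dist.log_prob(action))
            values.append(value.squeeze())
            rewards.append(reward)
            if terminated or truncated:
                break

        # Discounted returns (bootstrapping from the last state omitted for brevity).
        returns, R = [], 0.0
        for r in reversed(rewards):
            R = r + gamma * R
            returns.insert(0, R)
        returns = torch.tensor(returns, dtype=torch.float32)
        values = torch.stack(values)
        log_probs = torch.stack(log_probs)

        advantage = returns - values
        policy_loss = -(log_probs * advantage.detach()).mean()
        value_loss = F.mse_loss(values, returns)
        loss = policy_loss + 0.5 * value_loss

        optimizer.zero_grad()
        local_net.zero_grad()
        loss.backward()
        # Copy the local gradients onto the shared global parameters, then step
        # the optimizer that owns the global network.
        for lp, gp in zip(local_net.parameters(), global_net.parameters()):
            gp.grad = lp.grad
        optimizer.step()

    if __name__ == "__main__":
        env = gym.make("CartPole-v1")
        obs_dim = env.observation_space.shape[0]
        n_actions = env.action_space.n
        global_net = ActorCritic(obs_dim, n_actions)
        local_net = ActorCritic(obs_dim, n_actions)
        optimizer = torch.optim.Adam(global_net.parameters(), lr=1e-3)
        for _ in range(10):
            worker_rollout(env, local_net, global_net, optimizer)

In a full multi-task setting, one would run several such workers in parallel, each attached to a different (but related) environment, so that the global network aggregates knowledge across tasks; the single-worker loop above is only the smallest runnable slice of that idea.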