Automatic Convergence Estimation by Utilizing Fractal Dimensional Analysis for Reinforcement Learning

Abstract

This paper presents an automatic convergence estimation method for the reinforcement learning (RL) of autonomous agents. Recently, multi-agent robot systems (MARS) that use RL algorithms have been studied in real-world situations. Such a system can obtain behaviors autonomously through multi-agent reinforcement learning (MARL). However, MARL takes a long time to obtain an optimal or near-optimal solution. Furthermore, if the agents continue learning after the solution has been obtained, overfitting may occur. It is difficult for the MARL operator to determine whether learning has converged, because the operator has to judge the convergence of all learning agents. The convergence of the learning curve indicates that knowledge has been obtained. However, judging the convergence of RL depends on human intuition and experience, and few studies have discussed convergence estimation methods. Therefore, in this paper, we address the development of an automatic convergence estimation method that does not require human judgement. In prior work, we proposed an automatic convergence estimation method using fractal dimensional analysis, which evaluates the learning curves of the learning agents, and we confirmed its effectiveness through a computer simulation. In this study, we propose a method based on fractal dimensional analysis that takes implementation on real robots into account, and we evaluate its effectiveness using an expanded experimental setup that includes both a computer simulation and an actual mobile robot environment.
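The abstract describes judging convergence by applying fractal dimensional analysis to an agent's learning curve. The following is a minimal illustrative sketch, assuming Higuchi's fractal dimension as the estimator and a simple sliding-window threshold test; the estimator choice and the names `higuchi_fd`, `has_converged`, `window`, and `fd_threshold` are assumptions for illustration, not the paper's actual procedure or parameters.

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Higuchi fractal dimension of a 1-D series (assumed estimator)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_inv_k, log_len = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # curve length of the subsampled series with Higuchi's normalisation
            lengths.append(np.sum(np.abs(np.diff(x[idx]))) * (n - 1)
                           / ((len(idx) - 1) * k * k))
        if lengths:
            log_inv_k.append(np.log(1.0 / k))
            log_len.append(np.log(np.mean(lengths)))
    # slope of log L(k) versus log(1/k) estimates the fractal dimension
    slope, _ = np.polyfit(log_inv_k, log_len, 1)
    return slope

def has_converged(learning_curve, window=200, fd_threshold=1.2):
    """Flag convergence when the recent learning curve looks smooth.

    `window` and `fd_threshold` are hypothetical tuning parameters.
    """
    if len(learning_curve) < window:
        return False
    return higuchi_fd(learning_curve[-window:]) < fd_threshold
```

The intuition behind such a test is that a noisy, still-improving learning curve behaves like a rough signal with a fractal dimension closer to 2, while a converged curve flattens toward a smooth line with a dimension closer to 1, so a drop in the estimated dimension can serve as an automatic convergence signal.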

Publication
Journal of Instrumentation, Automation and Systems, 3 (3)
Yusuke Tamura
Associate Professor, PhD