In Neural Networks: the official journal of the International Neural Network Society
Deep reinforcement learning (DRL) breaks through the bottlenecks of traditional reinforcement learning (RL) with the help of the perception capability of deep learning and has been widely applied to real-world problems. Model-free RL, as a class of efficient DRL methods, learns state representations simultaneously with the policy in an end-to-end manner when facing large-scale continuous state and action spaces. However, training such a large policy model requires many trajectory samples and much training time. Moreover, the learned policy often fails to generalize to large-scale action spaces, especially continuous ones. To address these issues, in this paper we propose an efficient policy learning method in latent state and action spaces. More specifically, we extend the idea of state representations to action representations for better policy generalization. Meanwhile, we divide the whole learning task into learning the large-scale representation models in an unsupervised manner and learning the small-scale policy model in the RL manner. The small policy model facilitates policy learning, while the large representation models preserve generalization and expressiveness. Finally, the effectiveness of the proposed method is demonstrated by experiments on MountainCar, CarRacing, and Cheetah.
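To make the division of labor concrete, below is a minimal sketch of the architecture the abstract describes: a large state encoder and a large action decoder trained without reward supervision, and a small policy trained by RL entirely in the latent spaces. All class names, network sizes, and dimensions here are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of policy learning in latent state and action spaces.
# Names and dimensions are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class StateEncoder(nn.Module):
    """Large representation model: raw state -> latent state (trained unsupervised)."""
    def __init__(self, state_dim, latent_state_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_state_dim),
        )
    def forward(self, s):
        return self.net(s)

class ActionDecoder(nn.Module):
    """Large representation model: latent action -> executable continuous action."""
    def __init__(self, latent_action_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_action_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),  # bounded continuous actions
        )
    def forward(self, z_a):
        return self.net(z_a)

class LatentPolicy(nn.Module):
    """Small policy model: latent state -> latent action; only this is trained by RL."""
    def __init__(self, latent_state_dim, latent_action_dim):
        super().__init__()
        self.net = nn.Linear(latent_state_dim, latent_action_dim)
    def forward(self, z_s):
        return self.net(z_s)

# Acting: encode the state, choose a latent action, decode it for the environment.
encoder = StateEncoder(state_dim=24, latent_state_dim=8)
policy = LatentPolicy(latent_state_dim=8, latent_action_dim=4)
decoder = ActionDecoder(latent_action_dim=4, action_dim=6)

state = torch.randn(1, 24)
with torch.no_grad():
    action = decoder(policy(encoder(state)))  # shape (1, 6)
```

The intended benefit is that RL gradients only update the small `LatentPolicy`, keeping sample complexity low, while the frozen (or separately trained) representation models carry the expressiveness needed for large continuous state and action spaces.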
Zhao Tingting, Wang Ying, Sun Wei, Chen Yarui, Niu Gang, Sugiyama Masashi
2022-Dec-16
Action representations, Continuous action spaces, Model-free reinforcement learning, Policy model, State representations