arXiv Preprint
Reinforcement learning (RL) is a powerful machine learning technique that
enables an intelligent agent to learn an optimal policy that maximizes the
cumulative rewards in sequential decision making. Most methods in the
existing literature are developed in \textit{online} settings where data
are easy to collect or simulate. Motivated by high-stakes domains such as mobile
health studies with limited and pre-collected data, in this paper, we study
\textit{offline} reinforcement learning methods. To make efficient use of such pre-collected
datasets for policy optimization, we propose a novel value enhancement method
to improve the performance of a given initial policy computed by existing
state-of-the-art RL algorithms. Specifically, when the initial policy is not
consistent, our method will output a policy whose value is no worse and often
better than that of the initial policy. When the initial policy is consistent,
under some mild conditions, our method will yield a policy whose value
converges to the optimal one at a faster rate than the initial policy,
achieving the desired ``value enhancement'' property. The proposed method is
generally applicable to any parametrized policy that belongs to a
pre-specified function class (e.g., deep neural networks). Extensive numerical
studies are conducted to demonstrate the superior performance of our method.
Chengchun Shi, Zhengling Qi, Jianing Wang, Fan Zhou
2023-01-05