Trust Region Policy Optimization


https://arxiv.org/abs/1502.05477

Trust Region Policy Optimization

John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
Comments: 16 pages, ICML 2015
Subjects: Learning (cs.LG)
Cite as: arXiv:1502.05477 [cs.LG] (or arXiv:1502.05477v5 [cs.LG] for this version)
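As a rough illustration of the kind of update the abstract describes, a surrogate-objective improvement constrained by a KL-divergence trust region and solved with a natural-gradient-style direction plus a line search, here is a minimal NumPy sketch. It is not the paper's reference implementation; the callables `surrogate`, `kl`, `surrogate_grad`, and `fvp` (Fisher-vector product) are assumed to be supplied by surrounding RL code, and the toy quadratic at the end exists only to exercise the functions.

```python
# Minimal sketch of a TRPO-style update step (illustrative only, not the
# paper's reference code). The caller supplies:
#   surrogate(theta)      -- local surrogate objective (expected advantage)
#   kl(theta)             -- mean KL divergence from the old policy
#   surrogate_grad(theta) -- gradient g of the surrogate
#   fvp(v)                -- Fisher-vector product F v (KL Hessian approx.)
import numpy as np


def conjugate_gradient(fvp, g, iters=10, tol=1e-10):
    """Approximately solve F x = g without forming F explicitly."""
    x = np.zeros_like(g)
    r = g.copy()
    p = g.copy()
    rdotr = r @ r
    for _ in range(iters):
        Fp = fvp(p)
        alpha = rdotr / (p @ Fp)
        x += alpha * p
        r -= alpha * Fp
        new_rdotr = r @ r
        if new_rdotr < tol:
            break
        p = r + (new_rdotr / rdotr) * p
        rdotr = new_rdotr
    return x


def trpo_step(theta, surrogate, kl, surrogate_grad, fvp,
              max_kl=1e-2, backtrack_ratio=0.5, max_backtracks=10):
    """One trust-region update: natural-gradient direction plus line search."""
    g = surrogate_grad(theta)
    step_dir = conjugate_gradient(fvp, g)                  # x ~ F^{-1} g
    # Scale the step so the quadratic KL model 0.5 x^T F x equals max_kl.
    step_size = np.sqrt(2.0 * max_kl / (step_dir @ fvp(step_dir) + 1e-8))
    full_step = step_size * step_dir
    old_gain = surrogate(theta)
    # Backtracking line search: accept the first candidate that improves the
    # surrogate while keeping the exact KL inside the trust region.
    for i in range(max_backtracks):
        theta_new = theta + (backtrack_ratio ** i) * full_step
        if surrogate(theta_new) > old_gain and kl(theta_new) <= max_kl:
            return theta_new
    return theta  # no acceptable step found; keep the old policy


# Toy quadratic problem, only to show the functions run end to end.
F = np.diag([2.0, 0.5])
g_vec = np.array([1.0, -1.0])
theta0 = np.zeros(2)
theta1 = trpo_step(
    theta0,
    surrogate=lambda th: g_vec @ th - 0.5 * th @ F @ th,
    kl=lambda th: 0.5 * (th - theta0) @ F @ (th - theta0),
    surrogate_grad=lambda th: g_vec - F @ th,
    fvp=lambda v: F @ v,
)
print("step:", theta1 - theta0)
```

In the full algorithm the gradient and Fisher-vector products come from automatic differentiation over sampled trajectories and the advantages are estimated from rollouts; the sketch above only isolates the trust-region step itself.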

Submission history

From: John Schulman
[v1] Thu, 19 Feb 2015 06:44:25 GMT (547kb,D)
[v2] Mon, 18 May 2015 14:56:50 GMT (540kb,D)
[v3] Mon, 8 Jun 2015 10:47:03 GMT (540kb,D)
[v4] Mon, 6 Jun 2016 01:00:57 GMT (541kb,D)
[v5] Thu, 20 Apr 2017 18:04:12 GMT (541kb,D)
