2012-04-01
Reinforcement learning in continuous state and action spaces
Publication
Many traditional reinforcement-learning algorithms have been designed for problems with small finite state and action spaces. Learning in such discrete problems can be difficult, due to noise and delayed reinforcements. However, many real-world problems have continuous state or action spaces, which can make learning a good decision policy even more involved. In this chapter we discuss how to automatically find good decision policies in continuous domains. Because analytically computing a good policy from a continuous model can be infeasible, we mainly focus on methods that explicitly update a representation of a value function, a policy, or both. We discuss considerations in choosing an appropriate representation for these functions and describe gradient-based and gradient-free ways to update their parameters. We show how to apply these methods to reinforcement-learning problems and discuss many specific algorithms, including gradient-based temporal-difference learning, evolutionary strategies, policy-gradient algorithms, and actor-critic methods. We discuss the advantages of the different approaches and empirically compare the performance of a state-of-the-art actor-critic method and a state-of-the-art evolutionary strategy.
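As a concrete illustration of the gradient-based temporal-difference methods the abstract mentions, the sketch below shows semi-gradient TD(0) policy evaluation with a linear value function over radial-basis-function features of a continuous scalar state. This is a minimal sketch under assumed names (`rbf_features`, `td0_linear`, the toy environment), not code from the chapter itself.

```python
# Minimal sketch (assumptions, not the chapter's code): semi-gradient TD(0)
# policy evaluation with a linear value function over RBF features of a
# continuous scalar state.
import numpy as np

def rbf_features(state, centers, width=0.1):
    """Radial-basis-function features for a continuous scalar state."""
    return np.exp(-((state - centers) ** 2) / (2.0 * width ** 2))

def td0_linear(env_step, start_state, centers, alpha=0.05, gamma=0.95, episodes=200):
    """Evaluate a fixed policy with semi-gradient TD(0).

    env_step(state) returns (reward, next_state, done); the policy being
    evaluated is assumed to be folded into env_step.
    """
    w = np.zeros(len(centers))                       # value-function parameters
    for _ in range(episodes):
        state, done = start_state, False
        while not done:
            reward, next_state, done = env_step(state)
            phi = rbf_features(state, centers)
            v = phi @ w                              # current estimate V(s)
            v_next = 0.0 if done else rbf_features(next_state, centers) @ w
            delta = reward + gamma * v_next - v      # TD error
            w += alpha * delta * phi                 # gradient of V(s) w.r.t. w is phi
            state = next_state
    return w

# Toy usage: a noisy walk on [0, 1] that ends with reward 1 once the state
# drifts past 1; the learned weights approximate the discounted return.
rng = np.random.default_rng(0)
def env_step(s):
    s_next = s + rng.normal(0.05, 0.02)
    return (1.0, s_next, True) if s_next >= 1.0 else (0.0, s_next, False)

centers = np.linspace(0.0, 1.0, 11)
weights = td0_linear(env_step, start_state=0.0, centers=centers)
```

The linear parameterisation keeps the TD update cheap and convergent under standard conditions; the same structure extends to richer feature constructions over higher-dimensional continuous states.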
Additional Metadata | |
---|---|
Publisher | Springer Berlin Heidelberg |
Editors | M.A. Wiering, M. van Otterlo |
Organisation | Intelligent and autonomous systems |
Citation | van Hasselt, H. (2012). Reinforcement learning in continuous state and action spaces. In M. A. Wiering & M. van Otterlo (Eds.), Reinforcement Learning: State of the Art (pp. 207–251). Springer Berlin Heidelberg. |