2010-12-01
Double Q-learning
Publication
Presented at the Annual Conference on Advances in Neural Information Processing Systems (December 2010), Vancouver, B.C., Canada
In some stochastic environments the well-known reinforcement learning algorithm Q-learning performs very poorly. This poor performance is caused by large overestimations of action values, which result from a positive bias that is introduced because Q-learning uses the maximum action value as an approximation for the maximum expected action value. We introduce an alternative way to approximate the maximum expected value for any set of random variables. The obtained double estimator method is shown to sometimes underestimate rather than overestimate the maximum expected value. We apply the double estimator to Q-learning to construct Double Q-learning, a new off-policy reinforcement learning algorithm. We show the new algorithm converges to the optimal policy and that it performs well in some settings in which Q-learning performs poorly due to its overestimation.
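Below is a minimal tabular sketch of the Double Q-learning update described in the abstract: two estimators are kept, and on each step one of them selects the greedy next action while the other evaluates it. The variable and function names (`Q_a`, `Q_b`, `double_q_update`, `alpha`, `gamma`) are illustrative assumptions, not notation taken from the paper.

```python
import numpy as np

def double_q_update(Q_a, Q_b, s, a, r, s_next, alpha, gamma, rng):
    """One Double Q-learning step on tabular estimates Q_a and Q_b.

    With probability 1/2, update Q_a using Q_b to evaluate the action
    that is greedy under Q_a at s_next; otherwise do the symmetric update.
    Decoupling selection from evaluation removes the positive bias of the
    single-estimator max used by standard Q-learning.
    """
    if rng.random() < 0.5:
        a_star = int(np.argmax(Q_a[s_next]))        # action selected by Q_a
        target = r + gamma * Q_b[s_next, a_star]    # value evaluated by Q_b
        Q_a[s, a] += alpha * (target - Q_a[s, a])
    else:
        b_star = int(np.argmax(Q_b[s_next]))        # action selected by Q_b
        target = r + gamma * Q_a[s_next, b_star]    # value evaluated by Q_a
        Q_b[s, a] += alpha * (target - Q_b[s, a])

# Hypothetical usage with 10 states and 2 actions:
# rng = np.random.default_rng(0)
# Q_a = np.zeros((10, 2)); Q_b = np.zeros((10, 2))
# double_q_update(Q_a, Q_b, s=0, a=1, r=1.0, s_next=3,
#                 alpha=0.1, gamma=0.99, rng=rng)
```

In practice the behaviour policy (e.g. epsilon-greedy) is typically derived from the sum or average of the two estimates, so both tables are used for acting while each is updated off-policy.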
Additional Metadata | |
---|---|
Publisher | The MIT Press |
Series | Advances in Neural Information Processing Systems |
Conference | Annual Conference on Advances in Neural Information Processing Systems |
Organisation | Intelligent and autonomous systems |
Citation | van Hasselt, H. (2010). Double Q-learning. In Advances in Neural Information Processing Systems. The MIT Press. |