Actor-critic algorithm

The actor-critic algorithm (AC) is a family of reinforcement learning (RL) algorithms that combine policy-based RL algorithms, such as policy gradient methods, with value-based RL algorithms, such as value iteration, Q-learning, SARSA, and TD learning.[1]

An AC algorithm consists of two main components: an "actor" that determines which actions to take according to a policy function, and a "critic" that evaluates those actions according to a value function.[2] Some AC algorithms are on-policy, while others are off-policy. Some are designed for discrete action spaces, some for continuous action spaces, and some handle both.

Overview

Actor-critic methods can be understood as an improvement over pure policy gradient methods such as REINFORCE: the critic provides a learned baseline that reduces the variance of the gradient estimate.

Actor

The actor uses a policy function $\pi(a \mid s)$, while the critic estimates either the value function $V(s)$, the action-value Q-function $Q(s, a)$, the advantage function $A(s, a)$, or any combination thereof.

The actor is a parameterized function $\pi_\theta$, where $\theta$ are the parameters of the actor. The actor takes as argument the state of the environment $s$ and produces a probability distribution $\pi_\theta(\cdot \mid s)$.

If the action space is discrete, then $\sum_{a} \pi_\theta(a \mid s) = 1$. If the action space is continuous, then $\int_{a} \pi_\theta(a \mid s)\, \mathrm{d}a = 1$.
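
The article does not tie the actor to any particular implementation; the following is a minimal sketch in Python with PyTorch (an assumed framework, not prescribed by the article) of how a parameterized actor $\pi_\theta$ can produce such a distribution for a discrete or a continuous action space:

    import torch
    import torch.nn as nn

    class DiscreteActor(nn.Module):
        """Actor for a discrete action space: pi_theta(. | s) is a categorical
        distribution, so the action probabilities sum to 1."""
        def __init__(self, state_dim, n_actions):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                                     nn.Linear(64, n_actions))

        def forward(self, state):
            return torch.distributions.Categorical(logits=self.net(state))

    class GaussianActor(nn.Module):
        """Actor for a continuous action space: pi_theta(. | s) is a Gaussian
        density, so it integrates to 1 over the action space."""
        def __init__(self, state_dim, action_dim):
            super().__init__()
            self.mean = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                                      nn.Linear(64, action_dim))
            self.log_std = nn.Parameter(torch.zeros(action_dim))

        def forward(self, state):
            return torch.distributions.Normal(self.mean(state), self.log_std.exp())

Either actor exposes sampling (dist.sample()) and log-probabilities (dist.log_prob(action)), which is all the policy gradient estimators below require of it.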

The goal of policy optimization is to improve the actor. That is, to find some $\theta$ that maximizes the expected episodic reward $J(\theta)$:

$J(\theta) = \mathbb{E}_{\pi_\theta}\!\left[\sum_{t=0}^{T} \gamma^t r_t\right],$

where $\gamma$ is the discount factor, $r_t$ is the reward at step $t$, and $T$ is the time horizon (which can be infinite).
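
As a small illustration (not from the article), the discounted sum inside the expectation can be computed for one sampled trajectory as follows; averaging it over many trajectories drawn from $\pi_\theta$ gives a Monte Carlo estimate of $J(\theta)$:

    def discounted_return(rewards, gamma=0.99):
        """Compute sum_{t=0}^{T} gamma^t * r_t for one episode's list of rewards."""
        return sum((gamma ** t) * r for t, r in enumerate(rewards))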

The goal of the policy gradient method is to optimize $J(\theta)$ by gradient ascent on the policy gradient $\nabla_\theta J(\theta)$.

As detailed on the policy gradient method page, there are many unbiased estimators of the policy gradient:

$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\!\left[\sum_{t=0}^{T} \nabla_\theta \ln \pi_\theta(a_t \mid s_t)\, \Psi_t\right],$

where $\Psi_t$ is a linear sum of the following:

  • $\sum_{0 \le j \le T} \gamma^j r_j$.
  • $\sum_{t \le j \le T} \gamma^j r_j$: the REINFORCE algorithm.
  • $\sum_{t \le j \le T} \gamma^j r_j - b(s_t)$: the REINFORCE with baseline algorithm. Here $b$ is an arbitrary function.
  • $\gamma^t \big(r_t + \gamma V^{\pi_\theta}(s_{t+1}) - V^{\pi_\theta}(s_t)\big)$: TD(1) learning.
  • $\gamma^t Q^{\pi_\theta}(s_t, a_t)$.
  • $\gamma^t A^{\pi_\theta}(s_t, a_t)$: Advantage Actor-Critic (A2C); an update of this form is sketched after the list.[3]
  • $\gamma^t \big(r_t + \gamma r_{t+1} + \gamma^2 V^{\pi_\theta}(s_{t+2}) - V^{\pi_\theta}(s_t)\big)$: TD(2) learning.
  • $\gamma^t \big(\sum_{k=0}^{n-1} \gamma^k r_{t+k} + \gamma^n V^{\pi_\theta}(s_{t+n}) - V^{\pi_\theta}(s_t)\big)$: TD(n) learning.
  • $\gamma^t \sum_{k=0}^{T-t-1} (\gamma\lambda)^k \delta_{t+k}$, where $\delta_t = r_t + \gamma V^{\pi_\theta}(s_{t+1}) - V^{\pi_\theta}(s_t)$: TD(λ) learning, also known as GAE (generalized advantage estimate).[4] This is obtained by an exponentially decaying sum of the TD(n) learning terms.
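
For concreteness, here is a minimal sketch (in Python with PyTorch, an implementation choice not made by the article) of one gradient-ascent step using the advantage-weighted estimator above; `actor` is assumed to return a torch.distributions object as in the earlier sketch, and `advantages` to hold estimates of $A^{\pi_\theta}(s_t, a_t)$ produced by the critic:

    import torch

    def actor_update(actor, optimizer, states, actions, advantages):
        """One step of gradient ascent on the sampled estimator
        sum_t grad log pi_theta(a_t | s_t) * Psi_t, with Psi_t an advantage estimate."""
        dist = actor(states)                                 # pi_theta(. | s_t)
        log_prob = dist.log_prob(actions)                    # log pi_theta(a_t | s_t)
        loss = -(log_prob * advantages.detach()).mean()      # negate: ascent via a minimizer
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

The advantages are detached because the estimator treats $\Psi_t$ as a fixed weight; only the log-probability term carries the gradient with respect to $\theta$.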

Critic

In the unbiased estimators given above, certain functions such as $V^{\pi_\theta}$, $Q^{\pi_\theta}$, and $A^{\pi_\theta}$ appear. These are approximated by the critic. Since these functions all depend on the actor, the critic must learn alongside the actor. The critic is learned by value-based RL algorithms.

For example, if the critic is estimating the state-value function $V^{\pi_\theta}(s)$, then it can be learned by any value function approximation method. Let the critic be a function approximator $V_\phi(s)$ with parameters $\phi$.

The simplest example is TD(1) learning, which trains the critic to minimize the TD(1) error

$\delta_t = r_t + \gamma V_\phi(s_{t+1}) - V_\phi(s_t).$

The critic parameters are updated by gradient descent on the squared TD error:

$\phi \leftarrow \phi + \alpha\, \delta_t\, \nabla_\phi V_\phi(s_t),$

where $\alpha$ is the learning rate. Note that the gradient is taken with respect to the $\phi$ in $V_\phi(s_t)$ only, since the $\phi$ in the target $r_t + \gamma V_\phi(s_{t+1})$ constitutes a moving target, and the gradient is not taken with respect to that. This is a common source of error in implementations that use automatic differentiation, and requires "stopping the gradient" at that point.
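
A minimal sketch of this critic update in Python with PyTorch (an assumed framework), where torch.no_grad() stops the gradient through the bootstrap target, as cautioned above:

    import torch

    def critic_update(critic, optimizer, s, r, s_next, gamma=0.99):
        """Semi-gradient TD update: minimize the squared TD error while holding
        the target r + gamma * V_phi(s') fixed."""
        with torch.no_grad():                     # "stop the gradient" at the target
            target = r + gamma * critic(s_next)
        td_error = target - critic(s)             # delta_t
        loss = td_error.pow(2).mean()             # gradient flows only through V_phi(s)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return td_error.detach()

Dropping the no_grad (or an equivalent detach) is the implementation error mentioned above: the optimizer would then also chase the moving target.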

Similarly, if the critic is estimating the action-value function $Q^{\pi_\theta}$, then it can be learned by Q-learning or SARSA. In SARSA, the critic maintains an estimate of the Q-function, parameterized by $\phi$, denoted as $Q_\phi$. The temporal difference error is then calculated as $\delta_t = r_t + \gamma Q_\phi(s_{t+1}, a_{t+1}) - Q_\phi(s_t, a_t)$, and the critic is updated by

$\phi \leftarrow \phi + \alpha\, \delta_t\, \nabla_\phi Q_\phi(s_t, a_t).$

The advantage critic can be trained by training both a Q-function and a state-value function, then letting $A(s, a) = Q(s, a) - V(s)$. However, it is more common to train just a state-value function $V_\phi$, then estimate the advantage by[3]

$A(s_t, a_t) \approx \sum_{k=0}^{n-1} \gamma^k r_{t+k} + \gamma^n V_\phi(s_{t+n}) - V_\phi(s_t).$

Here, $n$ is a positive integer. The higher $n$ is, the lower the bias in the advantage estimate, but at the price of higher variance.
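
A sketch of this n-step advantage estimate in plain Python; `rewards` is assumed to hold $r_t, r_{t+1}, \dots$ and `values` the critic's estimates $V_\phi(s_t), V_\phi(s_{t+1}), \dots$, including the entry at index $t+n$:

    def n_step_advantage(rewards, values, t, n, gamma=0.99):
        """A(s_t, a_t) ~ sum_{k=0}^{n-1} gamma^k r_{t+k} + gamma^n V(s_{t+n}) - V(s_t)."""
        n_step_return = sum((gamma ** k) * rewards[t + k] for k in range(n))
        return n_step_return + (gamma ** n) * values[t + n] - values[t]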

The Generalized Advantage Estimation (GAE) introduces a hyperparameter $\lambda \in [0, 1]$ that smoothly interpolates between Monte Carlo returns ($\lambda = 1$: high variance, no bias) and 1-step TD learning ($\lambda = 0$: low variance, high bias). This hyperparameter can be adjusted to pick the optimal bias-variance trade-off in advantage estimation. It uses an exponentially decaying average of n-step returns, with $\lambda$ being the decay strength.[4]
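
A sketch of the GAE computation in plain Python (the usual backward recursion; `values` is assumed to contain one extra bootstrap entry $V_\phi(s_{T+1})$ beyond the last reward):

    def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
        """Compute A_t = sum_k (gamma * lam)^k * delta_{t+k}, where
        delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)."""
        advantages = [0.0] * len(rewards)
        running = 0.0
        for t in reversed(range(len(rewards))):
            delta = rewards[t] + gamma * values[t + 1] - values[t]
            running = delta + gamma * lam * running
            advantages[t] = running
        return advantages

Setting lam=0 recovers the 1-step TD error, and lam=1 recovers the Monte Carlo advantage, matching the interpolation described above.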

Variants

  • Asynchronous Advantage Actor-Critic (A3C): Parallel and asynchronous version of A2C.[3]
  • Soft Actor-Critic (SAC): Incorporates entropy maximization for improved exploration.[5]
  • Deep Deterministic Policy Gradient (DDPG): Specialized for continuous action spaces.[6]


References

  1. Arulkumaran, Kai; Deisenroth, Marc Peter; Brundage, Miles; Bharath, Anil Anthony (November 2017). "Deep Reinforcement Learning: A Brief Survey". IEEE Signal Processing Magazine. 34 (6): 26–38. arXiv:1708.05866. Bibcode:2017ISPM...34...26A. doi:10.1109/MSP.2017.2743240. ISSN 1053-5888.
  2. Konda, Vijay; Tsitsiklis, John (1999). "Actor-Critic Algorithms". Advances in Neural Information Processing Systems. 12. MIT Press.
  3. Mnih, Volodymyr; Badia, Adrià Puigdomènech; Mirza, Mehdi; Graves, Alex; Lillicrap, Timothy P.; Harley, Tim; Silver, David; Kavukcuoglu, Koray (2016-06-16). Asynchronous Methods for Deep Reinforcement Learning. arXiv:1602.01783.
  4. Schulman, John; Moritz, Philipp; Levine, Sergey; Jordan, Michael; Abbeel, Pieter (2018-10-20). High-Dimensional Continuous Control Using Generalized Advantage Estimation. arXiv:1506.02438.
  5. Haarnoja, Tuomas; Zhou, Aurick; Hartikainen, Kristian; Tucker, George; Ha, Sehoon; Tan, Jie; Kumar, Vikash; Zhu, Henry; Gupta, Abhishek (2019-01-29). Soft Actor-Critic Algorithms and Applications. arXiv:1812.05905.
  6. Lillicrap, Timothy P.; Hunt, Jonathan J.; Pritzel, Alexander; Heess, Nicolas; Erez, Tom; Tassa, Yuval; Silver, David; Wierstra, Daan (2019-07-05). Continuous control with deep reinforcement learning. arXiv:1509.02971.