One-shot deviation principle

The one-shot deviation principle (also known as the single-deviation property[1]) is the principle of optimality of dynamic programming applied to game theory.[2] It states that a strategy profile of a finite multi-stage extensive-form game with observed actions is a subgame perfect equilibrium (SPE) if and only if no player has a profitable single deviation in any subgame.[1][3] In simpler terms, if no player can increase their expected payoff by deviating from their original strategy at a single decision point (in just one stage of the game), then the strategy profile is an SPE. In other words, no player can profit by deviating from the strategy in one period and then reverting to it.

Furthermore, the one-shot deviation principle is especially important for infinite horizon games, in which deviations over infinitely many periods cannot be checked directly and the principle does not hold in general.[4] In an infinite horizon game where the discount factor is less than 1, however, a strategy profile is a subgame perfect equilibrium if and only if it satisfies the one-shot deviation principle.[5]
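
For illustration (an example chosen here, not taken from the cited sources), consider grim-trigger strategies in an infinitely repeated prisoner's dilemma with discount factor δ and stage payoffs T > R > P > S, where R is the mutual-cooperation payoff, P the mutual-defection payoff, T the payoff from defecting against a cooperator, and S the payoff from cooperating against a defector. On the cooperative path, following the strategy yields R in every period, while defecting once and then reverting to the strategy (which now prescribes permanent defection) yields T once and P thereafter. The one-shot deviation check on that path is therefore

\frac{R}{1-\delta} \;\ge\; T + \frac{\delta P}{1-\delta} \qquad\Longleftrightarrow\qquad \delta \;\ge\; \frac{T-R}{T-P}.

At punishment-phase histories the prescribed action, defection, is a stage-game best response and continuation play does not depend on the current action, so no one-shot deviation is profitable there either. By the principle, grim trigger is then a subgame perfect equilibrium exactly when δ ≥ (T − R)/(T − P).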

Definitions

The following definition is paraphrased from Watson (2013).[1]

To check whether a strategy profile s is a subgame perfect Nash equilibrium, we have to ask, for every player i and every subgame, whether there is a strategy s' that yields a strictly higher payoff for player i than s does in that subgame. In a finite multi-stage game with observed actions, this analysis is equivalent to checking only single deviations from s, meaning strategies s' that differ from s at only one information set (in a single stage). Note that the choices prescribed by s and s' are the same at all nodes that are successors of nodes in the information set where s and s' prescribe different actions.
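
In symbols (a restatement of the condition above, with notation chosen here rather than taken from Watson), write u_i(· | h) for player i's expected continuation payoff in the subgame beginning at history h. The single-deviation condition then requires, for every player i, every such history h at which i moves, and every strategy s_i' that differs from s_i only in the action prescribed at h,

u_i\big(s_i, s_{-i} \mid h\big) \;\ge\; u_i\big(s_i', s_{-i} \mid h\big).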

Example

Consider a symmetric game with two players in which each player makes a binary choice, A or B, in each of three stages. In each stage, the players observe the choices made in the previous stages (if any). Each player therefore has 21 information sets: one in the first stage, four in the second stage (because the players observe which of the four possible action combinations occurred in the first stage), and sixteen in the third stage (one for each of the 4 × 4 histories of action combinations from the first two stages). The single-deviation condition requires checking each of these information sets, asking in each case whether the expected payoff of the player on the move would strictly increase by deviating at only this information set.
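
A small computational sketch of this check follows; it is not from the article, the stage-game payoffs are an assumption chosen to be prisoner's-dilemma-like, and the function names are hypothetical. The sketch enumerates the 21 information sets of each player in a three-stage game with observed actions and tests every single deviation, reverting to the original strategy at all other information sets.

from itertools import product

ACTIONS = ("A", "B")   # the binary choices from the example
STAGES = 3             # number of stages

# Hypothetical stage payoffs (an assumption, not from the article):
# STAGE_PAYOFF[(first player's action, second player's action)] = (payoff to 1, payoff to 2).
STAGE_PAYOFF = {
    ("A", "A"): (3, 3),
    ("A", "B"): (0, 4),
    ("B", "A"): (4, 0),
    ("B", "B"): (1, 1),
}

def histories(stage):
    """All observed histories (tuples of past action profiles) at the start of `stage`."""
    return product(product(ACTIONS, repeat=2), repeat=stage)

def continuation_payoffs(strategy, history):
    """Payoffs from `history` onward when both players follow `strategy`.

    `strategy[i]` maps a history to player i's action; earlier-stage payoffs are
    omitted because they are identical for a strategy and any deviation at `history`.
    """
    totals = [0, 0]
    h = history
    while len(h) < STAGES:
        profile = (strategy[0](h), strategy[1](h))
        totals[0] += STAGE_PAYOFF[profile][0]
        totals[1] += STAGE_PAYOFF[profile][1]
        h += (profile,)
    return totals

def passes_single_deviation_check(strategy):
    """True iff no player gains by deviating at exactly one information set."""
    for stage in range(STAGES):
        for h in histories(stage):
            baseline = continuation_payoffs(strategy, h)
            for i in (0, 1):
                for a in ACTIONS:
                    if a == strategy[i](h):
                        continue
                    # Deviate only at history h, then revert to the original strategy.
                    deviant = lambda hist, i=i, h=h, a=a: a if hist == h else strategy[i](hist)
                    profile = (deviant, strategy[1]) if i == 0 else (strategy[0], deviant)
                    if continuation_payoffs(profile, h)[i] > baseline[i]:
                        return False  # profitable one-shot deviation found
    return True

# Under the assumed payoffs, "always play B" survives every single-deviation check,
# while "always play A" does not (a single deviation to B raises the deviator's payoff).
always_B = (lambda h: "B", lambda h: "B")
always_A = (lambda h: "A", lambda h: "A")
print(passes_single_deviation_check(always_B))  # True
print(passes_single_deviation_check(always_A))  # False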

References

  1. Watson, Joel (2013). Strategy: An Introduction to Game Theory. New York: W. W. Norton & Company. p. 194. ISBN 978-0393123876.
  2. Blackwell, David (1965). "Discounted Dynamic Programming". Annals of Mathematical Statistics. 36 (1): 226–235. doi:10.1214/aoms/1177700285.
  3. Fudenberg, Drew; Tirole, Jean (1991). Game Theory. Cambridge, MA: MIT Press. ISBN 978-0-262-06141-4.
  4. Obara, I. (2012). Subgame Perfect Equilibrium [PDF document]. Slide 13. Retrieved from http://www.econ.ucla.edu/iobara/SPE201B.pdf
  5. Ozdaglar, A. (2010). Repeated Games [PDF document]. Slide 13. Retrieved from https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-254-game-theory-with-engineering-applications-spring-2010/lecture-notes/MIT6_254S10_lec15.pdf