Average optimal strategies in stochastic games with complete information

1993 
In this contribution, Markov games are considered in which the first player knows only the current state, whereas the second player knows both the current state and the current action of the first player. Such Markov games are called Markov games with complete information, or minimax decision models. By means of a Bellman equation, a sufficient condition for the average optimality of a stationary deterministic strategy is given. Furthermore, Howard's strategy improvement, known from Markov decision models, is generalized to Markov games.
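
The abstract leaves the Bellman equation implicit. The following is a minimal numerical sketch (in Python, with made-up toy data) of the minimax average-reward Bellman equation for a game with complete information, together with a Howard-style greedy step that reads off a stationary deterministic strategy for the first player. The function names, the toy reward and transition data, and the use of relative value iteration are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch: relative value iteration on the minimax average-reward
# Bellman operator
#
#   g + h(s) = max_a min_b [ r(s, a, b) + sum_{s'} p(s' | s, a, b) * h(s') ],
#
# where player 2 observes the state s AND player 1's action a before choosing b
# (the "complete information" assumption of the abstract). All data are toy
# inputs; convergence relies on the chain being unichain and aperiodic, which
# holds here because all transition probabilities are strictly positive.

import numpy as np

rng = np.random.default_rng(0)

n_states, n_a, n_b = 4, 3, 3            # |S|, player-1 actions, player-2 actions

# Toy reward r[s, a, b] and transition kernel p[s, a, b, s'] (rows sum to 1).
r = rng.uniform(0.0, 1.0, size=(n_states, n_a, n_b))
p = rng.uniform(0.1, 1.0, size=(n_states, n_a, n_b, n_states))
p /= p.sum(axis=-1, keepdims=True)


def minimax_bellman(h):
    """Apply the minimax Bellman operator to a bias vector h."""
    # q[s, a, b] = r(s, a, b) + E[h(s') | s, a, b]
    q = r + p @ h
    # Player 2 sees (s, a) and minimizes over b; player 1 maximizes over a.
    return q.min(axis=2).max(axis=1), q


def relative_value_iteration(tol=1e-10, max_iter=10_000):
    """Approximate the gain g and bias h of the average-reward game."""
    h = np.zeros(n_states)
    g = 0.0
    for _ in range(max_iter):
        th, _ = minimax_bellman(h)
        g = th[0]                        # normalize at a reference state
        h_new = th - g
        if np.max(np.abs(h_new - h)) < tol:
            return g, h_new
        h = h_new
    return g, h


g, h = relative_value_iteration()

# Howard-style improvement step: a stationary deterministic strategy f that
# attains the outer maximum in the Bellman equation satisfies the sufficient
# condition for average optimality quoted in the abstract.
_, q = minimax_bellman(h)
f = q.min(axis=2).argmax(axis=1)

print("estimated gain g:", g)
print("stationary deterministic strategy f:", f)
```

In a full Howard-style iteration one would alternate this improvement step with an exact policy evaluation (solving the second player's average-cost problem for the fixed strategy f); the sketch above replaces that evaluation by value iteration on the full minimax operator for brevity.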