Lumping in Markov Reward Processes

2021 
Explicit conditions on the transition probabilities for lumping in a discrete-time Markov chain (DTMC) are well known and were given by Kemeny and Snell in 1960. They distinguish between "strong" lumpability, for which the process is lumpable for any initial probability distribution on the states, and "weak" lumpability, for which the process is lumpable only for some initial probability distributions. This chapter obtains conditions for lumping in a continuous-time Markov reward process. It introduces the notion of "proportional dynamics" and gives necessary and sufficient conditions for it to hold. The chapter shows that proportional dynamics for a given measure is sufficient for weak lumpability for the same measure; it also implies unlumpability. It discusses measures such as the transient probabilities, the distribution of accumulated reward, the expected accumulated reward, and the instantaneous reward.
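The Kemeny–Snell condition for strong lumpability of a DTMC can be checked mechanically: for every block of the partition, the aggregated probability of transitioning into that block must be identical for all states within any other block. The sketch below is an illustrative implementation of that textbook condition, not code from the chapter; the function name, partition encoding, and tolerance are assumptions.

```python
import numpy as np

def is_strongly_lumpable(P, partition, tol=1e-12):
    """Check the Kemeny-Snell strong-lumpability condition for a DTMC
    transition matrix P with respect to a partition, given as a list of
    lists of state indices. P is strongly lumpable iff, for every block
    B_j, the aggregated probability of moving into B_j is constant
    across the states of each block B_i."""
    for block_j in partition:
        # Aggregated probability of jumping into block_j from each state.
        into_j = P[:, block_j].sum(axis=1)
        for block_i in partition:
            # The aggregated probability must not vary within block_i.
            if np.ptp(into_j[block_i]) > tol:
                return False
    return True

# A 3-state chain in which states 1 and 2 behave symmetrically.
P = np.array([[0.5, 0.25, 0.25],
              [0.3, 0.4,  0.3 ],
              [0.3, 0.3,  0.4 ]])
print(is_strongly_lumpable(P, [[0], [1, 2]]))  # True
print(is_strongly_lumpable(P, [[0, 1], [2]]))  # False
```

Weak lumpability, in contrast, cannot be decided from the transition matrix alone, since it depends on the initial distribution as well.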