    A note on circulant transition matrices in Markov chains
    10 Citations · 7 References · 10 Related Papers
    Keywords: Stochastic matrix, Matrix (chemical analysis), Additive Markov chain, Markov kernel, Hamiltonian (control theory), Intuition, Continuous-time Markov chain
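The note's subject, circulant transition matrices, can be illustrated with a small sketch. The 4-state chain below is an assumed example, not taken from the paper: each row of the matrix is a cyclic shift of the first, which makes the matrix doubly stochastic (uniform stationary distribution) and makes its eigenvalues the discrete Fourier transform of the first row.

```python
import numpy as np

# Illustrative 4-state chain with a circulant transition matrix: every row
# is a cyclic shift of the first row (a probability vector). The numbers
# are assumed, not taken from the paper.
row = np.array([0.5, 0.3, 0.1, 0.1])
P = np.array([np.roll(row, k) for k in range(4)])

# A circulant stochastic matrix is doubly stochastic (its columns also sum
# to 1), so the uniform distribution is stationary.
pi = np.full(4, 0.25)
assert np.allclose(pi @ P, pi)

# Its eigenvalue set is given by the discrete Fourier transform of the
# first row (here compared via magnitudes, since ordering conventions differ).
eigs = np.fft.fft(row)
assert np.allclose(sorted(np.abs(np.linalg.eigvals(P))), sorted(np.abs(eigs)))
```

This FFT connection is what makes spectral questions about circulant chains tractable in closed form.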
    Citations (9)
    We consider weak lumpability of finite homogeneous Markov chains, i.e., the situation in which the lumped chain induced by a partition of the initial state space is again a homogeneous Markov chain. We show that weak lumpability is equivalent to the existence of a direct sum of polyhedral cones that is positively invariant under the transition probability matrix of the original chain. This allows us, in a unified way, to derive new results on lumpability of reducible Markov chains and to obtain spectral properties associated with lumpability.
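A minimal sketch of the related (stronger) notion: *strong* lumpability, a sufficient condition under which the lumped process is Markov for every initial distribution. The chain and partition below are assumed for illustration; the abstract's weak lumpability is the more delicate, distribution-dependent property.

```python
import numpy as np

# Illustrative 3-state chain; states 0 and 1 are lumped into one block.
P = np.array([[0.20, 0.30, 0.50],
              [0.30, 0.20, 0.50],
              [0.25, 0.25, 0.50]])
partition = [[0, 1], [2]]

def is_strongly_lumpable(P, partition):
    # Strong lumpability: for each target block B, the probability of
    # jumping into B must be the same from every state within a source block.
    for block in partition:
        into_block = P[:, block].sum(axis=1)
        for src in partition:
            if not np.allclose(into_block[src], into_block[src][0]):
                return False
    return True

print(is_strongly_lumpable(P, partition))  # True for this example
```

Weak lumpability relaxes this: the lumped process need only be Markov for some initial distributions, which is why the paper characterises it via positively invariant cones rather than a rowwise condition.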
    Citations (13)
    Markov chains are widely used discrete-time, discrete-state-space stochastic processes. In this chapter we study in sufficient detail the classification of Markov chains, which is the first step in analyzing a Markov chain. The basic definitions related to Markov chains are given in the first section. The higher-step transition probabilities and related results are given in Sect. 2. Section 3 deals with the generation of realizations (sample paths) of specified length of a Markov chain; it also contains a brief discussion of the maximum likelihood estimation of the transition probability matrix. The classification of the states of a Markov chain, which is essential for studying the long-run behaviour of Markov chains, is discussed in Sects. 4 and 5. An extended discussion and some important results on first-passage distributions are given in Sect. 6; computation of probabilities of absorption into recurrent classes, when the set of transient states is finite, is also considered there. In Sect. 7 the concept of 'periodicity' of states is discussed in detail. Throughout, the concepts and results are illustrated with computations using the R codes given in Sect. 8. These R programs are useful (i) to obtain higher-step transition probabilities, (ii) to obtain a finite-dimensional distribution, (iii) to generate a realization of specified length of a Markov chain, (iv) to compute the maximum likelihood estimate of a transition probability matrix, (v) to classify the states as persistent or transient, (vi) to compute first-passage distributions, and (vii) to find the period of each state.
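The simulation and estimation steps the chapter describes (its items (iii) and (iv)) can be sketched as follows. The chapter's own programs are in R; this is an analogous Python sketch with an assumed 3-state chain:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed 3-state chain, for illustration only.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2],
              [0.5, 0.2, 0.3]])

def simulate(P, n, start=0, rng=rng):
    # Generate a realization (sample path) of specified length n.
    path = [start]
    for _ in range(n - 1):
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return np.array(path)

def mle(path, k):
    # Maximum likelihood estimate of the transition matrix:
    # observed transition counts, normalised row by row.
    counts = np.zeros((k, k))
    for a, b in zip(path[:-1], path[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

path = simulate(P, 20000)
P_hat = mle(path, 3)
print(np.round(P_hat, 2))  # close to P for a long realization
```

The count-and-normalise estimator is the standard MLE for a homogeneous chain observed along a single long path.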
    Abstract: In this paper we present an introduction to finite Markov chains. Transition probabilities are calculated, as well as a transition probability matrix. We present the necessary fundamentals of the probability theory of Markov chains for stochastic processes and stochastic modelling in inventory control.
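The transition-probability calculations this abstract refers to reduce to matrix powers: by the Chapman-Kolmogorov equations, the n-step transition probabilities are the entries of P^n. A sketch with an assumed two-state matrix:

```python
import numpy as np

# Assumed toy two-state chain (e.g. stock available / stock out in an
# inventory setting); the numbers are illustrative.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# 5-step transition probabilities via the matrix power P^5
# (Chapman-Kolmogorov equations).
P5 = np.linalg.matrix_power(P, 5)
assert np.allclose(P5.sum(axis=1), 1.0)  # each row is still a distribution
print(np.round(P5, 4))
```

For large n the rows of P^n converge to the stationary distribution when the chain is irreducible and aperiodic.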
    The spectral and higher-order statistical characteristics of irreducible Markov chains are studied. A Markov chain whose state space forms a single communicating class is completely determined by its transition matrix. The paper explains analytically how the type of probability density function (PDF) describing the Markov chain is determined by its transition matrix. Furthermore, it is shown that the correlation properties (power spectrum) and the higher-order spectra (HOS) of the Markov chain are also expressed by means of specific terms of the transition matrix.
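The link between correlation properties and the transition matrix can be seen in the simplest case. For a stationary two-state chain observing the state itself, the lag-k autocorrelation is exactly the k-th power of the subdominant eigenvalue 1 - p - q; the values of p and q below are assumed:

```python
import numpy as np

# Assumed two-state chain: P = [[1-p, p], [q, 1-q]].
p, q = 0.2, 0.4
P = np.array([[1 - p, p], [q, 1 - q]])
pi = np.array([q, p]) / (p + q)   # stationary distribution
r = 1 - p - q                     # subdominant eigenvalue of P

f = np.array([0.0, 1.0])          # observe the state indicator
mean = pi @ f
var = pi @ f**2 - mean**2

# Lag-k autocorrelation of f(X_t) in stationarity equals r**k.
for k in range(1, 5):
    Pk = np.linalg.matrix_power(P, k)
    cov = sum(pi[i] * f[i] * Pk[i, j] * f[j]
              for i in range(2) for j in range(2)) - mean**2
    assert np.isclose(cov / var, r**k)
```

Since the power spectrum is the Fourier transform of this geometrically decaying autocorrelation, it too is expressed through the eigenvalues of the transition matrix, which is the mechanism the abstract describes in general.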
    This paper deals with the computation of invariant measures and stationary expectations for discrete-time Markov chains governed by a block-structured one-step transition probability matrix. The method generalizes in some respects Neuts' matrix-geometric approach to vector-state Markov chains. It reveals a strong relationship between Markov chains and matrix continued fractions, which can provide valuable information for mastering the growing complexity of real-world applications such as large-scale grid systems and multidimensional level-dependent Markov models. The results obtained are extended to continuous-time Markov chains.
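For context, a hedged baseline the paper's method improves upon: on a small chain the invariant measure solves the linear system pi(P - I) = 0 with sum(pi) = 1, via a dense solve (the matrix-continued-fraction machinery targets large block-structured chains where this is infeasible). The 3-state chain is assumed:

```python
import numpy as np

# Assumed small birth-death-style chain.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# Balance equations (P - I)^T pi = 0; replace one redundant equation
# with the normalisation sum(pi) = 1.
A = (P - np.eye(3)).T
A[-1] = 1.0
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

assert np.allclose(pi @ P, pi)
print(pi)  # [0.25 0.5  0.25]
```

Replacing a balance equation with the normalisation is the standard trick, since the balance equations alone are rank-deficient by one for an irreducible chain.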
    Citations (2)
    Summary: Reversible Markov chains are the basis of many applications. However, computing transition probabilities by a finite sampling of a Markov chain can lead to truncation errors. Even if the original Markov chain is reversible, the approximated Markov chain might be non-reversible and will lose important properties, like the real-valued spectrum. In this paper, we show how to find the closest reversible Markov chain to a given transition matrix. It turns out that this matrix can be computed by solving a convex minimization problem.
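Reversibility here means detailed balance: pi_i P_ij = pi_j P_ji for the stationary distribution pi. A sketch of the check only (the paper's convex-optimisation projection onto the reversible set is not reproduced here); both example matrices are assumed:

```python
import numpy as np

def stationary(P):
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

def is_reversible(P, tol=1e-10):
    # Detailed balance: the flow matrix pi_i * P_ij must be symmetric.
    pi = stationary(P)
    F = pi[:, None] * P
    return bool(np.allclose(F, F.T, atol=tol))

birth_death = np.array([[0.50, 0.50, 0.00],
                        [0.25, 0.50, 0.25],
                        [0.00, 0.50, 0.50]])   # reversible
cycle = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 0.0, 0.0]])            # deterministic rotation

print(is_reversible(birth_death), is_reversible(cycle))  # True False
```

The cycle chain illustrates the spectral point in the abstract: its flow matrix is asymmetric and its transition matrix has complex eigenvalues, whereas a reversible chain always has a real-valued spectrum.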
    Citations (8)