
Continuous mapping theorem

In probability theory, the continuous mapping theorem states that continuous functions preserve limits even if their arguments are sequences of random variables. A continuous function, in Heine’s definition, is a function that maps convergent sequences into convergent sequences: if $x_n \to x$ then $g(x_n) \to g(x)$. The continuous mapping theorem states that this will also be true if we replace the deterministic sequence $\{x_n\}$ with a sequence of random variables $\{X_n\}$, and replace the standard notion of convergence of real numbers “→” with one of the types of convergence of random variables.

This theorem was first proved by Henry Mann and Abraham Wald in 1943, and it is therefore sometimes called the Mann–Wald theorem. Meanwhile, Denis Sargan refers to it as the general transformation theorem.

Statement. Let $\{X_n\}$, $X$ be random elements defined on a metric space $S$. Suppose a function $g\colon S \to S'$ (where $S'$ is another metric space) has the set of discontinuity points $D_g$ such that $\Pr[X \in D_g] = 0$. Then

$X_n \xrightarrow{d} X \quad\Longrightarrow\quad g(X_n) \xrightarrow{d} g(X);$
$X_n \xrightarrow{p} X \quad\Longrightarrow\quad g(X_n) \xrightarrow{p} g(X);$
$X_n \xrightarrow{\text{a.s.}} X \quad\Longrightarrow\quad g(X_n) \xrightarrow{\text{a.s.}} g(X);$

where the superscripts ‘d’, ‘p’, and ‘a.s.’ denote convergence in distribution, convergence in probability, and almost sure convergence respectively.

Proof. Spaces $S$ and $S'$ are equipped with certain metrics. For simplicity we will denote both of these metrics using the $|x - y|$ notation, even though the metrics may be arbitrary and not necessarily Euclidean.

Convergence in distribution. We will need a particular statement from the portmanteau theorem: convergence in distribution $X_n \xrightarrow{d} X$ is equivalent to

$\mathbb{E} f(X_n) \to \mathbb{E} f(X)$ for every bounded continuous functional $f$.

So it suffices to prove that $\mathbb{E} f(g(X_n)) \to \mathbb{E} f(g(X))$ for every bounded continuous functional $f$. Note that $F = f \circ g$ is itself a bounded continuous functional, and so the claim follows from the statement above.

Convergence in probability. Fix an arbitrary $\varepsilon > 0$. Then for any $\delta > 0$ consider the set $B_\delta$ defined as

$B_\delta = \{ x \in S \mid x \notin D_g,\ \exists y \in S\colon |x - y| < \delta,\ |g(x) - g(y)| > \varepsilon \}.$

This is the set of continuity points $x$ of $g$ for which it is possible to find, within the $\delta$-neighborhood of $x$, a point that maps outside the $\varepsilon$-neighborhood of $g(x)$; by the definition of continuity, this set shrinks as $\delta$ goes to zero, so that $\lim_{\delta \to 0} B_\delta = \varnothing$.
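The portmanteau step above lends itself to a quick numerical spot-check. The following is a minimal Monte Carlo sketch, not taken from the source: it assumes $X_n$ is the standardized sum of $n$ iid Uniform(0,1) draws (so $X_n \xrightarrow{d} N(0,1)$ by the central limit theorem), takes the continuous mapping $g(x) = x^2$ (so the theorem predicts $g(X_n) \xrightarrow{d} \chi^2_1$), and checks the portmanteau criterion $\mathbb{E} f(g(X_n)) \to \mathbb{E} f(g(X))$ for one bounded continuous functional, $f(t) = 1/(1+t^2)$; the choices of $g$, $f$, and the sampling scheme are illustrative assumptions.

    import numpy as np

    # Monte Carlo spot-check of the portmanteau criterion behind the
    # convergence-in-distribution case. Assumptions (not from the source):
    # X_n = standardized sum of n iid Uniform(0,1) draws, g(x) = x**2,
    # f(t) = 1/(1 + t**2) as one bounded continuous test functional.
    rng = np.random.default_rng(1)
    f = lambda t: 1.0 / (1.0 + t**2)   # bounded and continuous on all of R
    g = lambda x: x**2                 # continuous mapping under test

    n_trials = 50_000
    z = rng.standard_normal(n_trials)  # samples of the limit X ~ N(0, 1)
    target = f(g(z)).mean()            # Monte Carlo estimate of E f(g(X))

    for n in (2, 10, 100):
        u = rng.uniform(0.0, 1.0, size=(n_trials, n))
        # Uniform(0,1) has mean 1/2 and variance 1/12; standardize the sum.
        x_n = (u.sum(axis=1) - n / 2) / np.sqrt(n / 12)
        print(f"n = {n:>3}:  E f(g(X_n)) ~ {f(g(x_n)).mean():.4f}   "
              f"E f(g(X)) ~ {target:.4f}")

As $n$ grows, the two Monte Carlo averages should agree to within sampling error, which is exactly the behavior the proof sketch extracts from the portmanteau theorem.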
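The convergence-in-probability case can be illustrated the same way. This sketch likewise rests on assumptions not in the source: $X_n$ is the sample mean of $n$ iid Uniform(0,1) draws, so $X_n \xrightarrow{p} 1/2$ by the weak law of large numbers, and $g(x) = e^x$ is continuous everywhere, so the theorem predicts $\Pr(|g(X_n) - e^{1/2}| > \varepsilon) \to 0$.

    import numpy as np

    # Empirical check of the convergence-in-probability case. Assumptions
    # (not from the source): X_n = mean of n iid Uniform(0,1) draws, which
    # converges in probability to 1/2; g = exp is continuous everywhere.
    rng = np.random.default_rng(0)
    g = np.exp
    limit = g(0.5)        # g applied to the in-probability limit of X_n
    eps = 0.05
    n_trials = 10_000     # Monte Carlo replications per sample size

    for n in (10, 100, 1_000):
        x_n = rng.uniform(0.0, 1.0, size=(n_trials, n)).mean(axis=1)
        # Empirical Pr(|g(X_n) - g(1/2)| > eps); should shrink toward 0.
        p_exceed = np.mean(np.abs(g(x_n) - limit) > eps)
        print(f"n = {n:>5}:  Pr(|g(X_n) - g(1/2)| > {eps}) ~ {p_exceed:.4f}")

The printed exceedance probabilities decay toward zero as $n$ grows, matching the conclusion $g(X_n) \xrightarrow{p} g(X)$.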

[ "Sum of normally distributed random variables", "Random element", "Convergence of random variables", "Independent and identically distributed random variables", "Slutsky's theorem" ]