
Chebyshev's inequality

In probability theory, Chebyshev's inequality (also called the Bienaymé–Chebyshev inequality) guarantees that, for a wide class of probability distributions, no more than a certain fraction of values can be more than a certain distance from the mean. Specifically, no more than 1/k² of the distribution's values can be more than k standard deviations away from the mean (or, equivalently, at least 1 − 1/k² of the distribution's values are within k standard deviations of the mean). In statistics, the rule is often called Chebyshev's theorem, about the range of standard deviations around the mean. The inequality has great utility because it can be applied to any probability distribution in which the mean and variance are defined. For example, it can be used to prove the weak law of large numbers.

In practical usage, Chebyshev's inequality is weaker than the 68–95–99.7 rule, which applies only to normal distributions: it states that a minimum of just 75% of values must lie within two standard deviations of the mean (1 − 1/2² = 3/4) and about 89% within three standard deviations (1 − 1/3² = 8/9 ≈ 0.889).

The term Chebyshev's inequality may also refer to Markov's inequality, especially in the context of analysis. The two are closely related, and some authors refer to Markov's inequality as "Chebyshev's first inequality" and to the inequality described here as "Chebyshev's second inequality."

The theorem is named after the Russian mathematician Pafnuty Chebyshev, although it was first formulated by his friend and colleague Irénée-Jules Bienaymé. Bienaymé stated the theorem without proof in 1853, and Chebyshev proved it in 1867. Chebyshev's student Andrey Markov provided another proof in his 1884 Ph.D. thesis.

Chebyshev's inequality is usually stated for random variables, but it can be generalized to a statement about measure spaces. Let X be an integrable random variable with finite expected value μ and finite non-zero variance σ². Then for any real number k > 0,

Pr(|X − μ| ≥ kσ) ≤ 1/k².

Only the case k > 1 is useful: when k ≤ 1, the right-hand side satisfies 1/k² ≥ 1, and the inequality is trivial since all probabilities are at most 1. As an example, taking k = √2 shows that the probability that values lie outside the interval (μ − √2 σ, μ + √2 σ) does not exceed 1/2.
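As a quick illustration of the bound, here is a minimal Monte Carlo sketch in Python (using NumPy; the exponential distribution and the sample size are arbitrary choices for the demonstration, not part of the theorem). It estimates the fraction of samples lying at least k standard deviations from the mean and compares it against 1/k²:

```python
import numpy as np

# Monte Carlo check of Chebyshev's inequality on an exponential
# distribution (mean = std = 1), a deliberately non-normal example.
rng = np.random.default_rng(0)
samples = rng.exponential(scale=1.0, size=1_000_000)

mu = samples.mean()
sigma = samples.std()

for k in (np.sqrt(2), 2.0, 3.0):
    # Empirical fraction of samples at least k standard deviations
    # away from the mean.
    frac = np.mean(np.abs(samples - mu) >= k * sigma)
    bound = 1.0 / k**2
    print(f"k = {k:.3f}: observed {frac:.4f} <= bound {bound:.4f}")
```

For a skewed distribution like the exponential, the observed tail fractions come out well below 1/k², which reflects the general character of Chebyshev's inequality: it holds for any distribution with finite mean and variance, but is often loose for any particular one.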

[ "Hölder's inequality", "Rearrangement inequality", "Kantorovich inequality" ]
Parent Topic
Child Topic
    No Parent Topic