
Binomial distribution

In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments, each asking a yes–no question, and each with its own Boolean-valued outcome: success/yes/true/one (with probability p) or failure/no/false/zero (with probability q = 1 − p). A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment, and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., n = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance.

In general, if the random variable X follows the binomial distribution with parameters n ∈ ℕ and p ∈ [0, 1], we write X ~ B(n, p). The probability of getting exactly k successes in n trials is given by the probability mass function

Pr(X = k) = C(n, k) p^k (1 − p)^(n−k)   for k = 0, 1, ..., n,

where C(n, k) = n! / (k! (n − k)!) is the binomial coefficient. As an example, suppose a biased coin comes up heads with probability 0.3 when tossed. What is the probability of achieving 0, 1, ..., 6 heads after six tosses? Applying the formula with n = 6 and p = 0.3 gives Pr(X = k) = C(6, k) (0.3)^k (0.7)^(6−k) for each k.

If X ~ B(n, p), that is, X is a binomially distributed random variable, n being the total number of experiments and p the probability of each experiment yielding a successful result, then the expected value of X is

E[X] = np,

and the variance is

Var(X) = np(1 − p) = npq.

Usually the mode of a binomial B(n, p) distribution is equal to ⌊(n + 1)p⌋, where ⌊ ⋅ ⌋ is the floor function. However, when (n + 1)p is an integer and p is neither 0 nor 1, the distribution has two modes: (n + 1)p and (n + 1)p − 1. When p is equal to 0 or 1, the mode is 0 or n, respectively. These cases can be summarized as follows: the mode is ⌊(n + 1)p⌋ if (n + 1)p is 0 or a non-integer, both (n + 1)p and (n + 1)p − 1 if (n + 1)p is an integer in {1, ..., n}, and n if (n + 1)p = n + 1.

In general, there is no single formula to find the median of a binomial distribution, and it may even be non-unique.
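The probability mass function, mean, variance, and mode described above can be checked numerically; here is a minimal sketch in Python using only the standard library (the helper name binom_pmf is our own, not a library function), applied to the biased-coin example:

```python
from math import comb, floor

def binom_pmf(k, n, p):
    """Pr(X = k) for X ~ B(n, p): C(n, k) * p^k * (1 - p)^(n - k)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Biased coin from the example: heads with probability 0.3, six tosses.
n, p = 6, 0.3
probs = [binom_pmf(k, n, p) for k in range(n + 1)]  # Pr of 0, 1, ..., 6 heads

mean = n * p                # E[X] = np
variance = n * p * (1 - p)  # Var(X) = np(1 - p)
mode = floor((n + 1) * p)   # usual mode when (n + 1)p is not an integer
```

Since (n + 1)p = 2.1 here, the single mode is ⌊2.1⌋ = 2, and the probabilities over k = 0, ..., 6 sum to 1 as a pmf must.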
However, several special results have been established: for example, if np is an integer, then the mean, median, and mode coincide and equal np, and in any case every median m satisfies ⌊np⌋ ≤ m ≤ ⌈np⌉.

If two binomially distributed random variables X and Y are observed together, estimating their covariance can be useful. The covariance is

Cov(X, Y) = E[XY] − E[X] E[Y].

If X ~ B(n, p) and Y ~ B(m, p) are independent binomial variables with the same probability p, then X + Y is again a binomial variable; its distribution is Z = X + Y ~ B(n + m, p). Similarly, if X ~ B(n, p) and, conditionally on X, Y ~ B(X, q), then, by the law of total probability, Y ~ B(n, pq).

The rule np ± 3√(np(1 − p)) ∈ (0, n), used to decide when the normal approximation to the binomial is adequate, is equivalent to requiring that

np − 3√(np(1 − p)) > 0  and  np + 3√(np(1 − p)) < n.

Assume that both np and n(1 − p) are greater than 9. Since 0 < p < 1, we then have np > 9 > 9(1 − p) and n(1 − p) > 9 > 9p, and these two inequalities imply the two conditions above, so the rule is satisfied.

Even for quite large values of n, the actual distribution of the mean can be significantly non-normal. Because of this problem, several methods to estimate confidence intervals have been proposed.

Methods for random number generation where the marginal distribution is a binomial distribution are well established.

For k ≤ np, upper bounds for the lower tail of the distribution function can be derived. Recall that F(k; n, p) = Pr(X ≤ k), the probability that there are at most k successes.

This distribution was derived by James Bernoulli. He considered the case where p = r/(r + s), where p is the probability of success and r and s are positive integers. Blaise Pascal had earlier considered the case where p = 1/2.
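The additivity property, that independent X ~ B(n, p) and Y ~ B(m, p) with a common p sum to Z ~ B(n + m, p), can be verified by convolving the two probability mass functions. A small self-contained Python check (the parameter values n = 3, m = 5, p = 0.4 are chosen arbitrarily for illustration):

```python
from math import comb

def binom_pmf(k, n, p):
    """Pr(X = k) for X ~ B(n, p)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# X ~ B(3, 0.4) and Y ~ B(5, 0.4) independent; Z = X + Y should be B(8, 0.4).
n, m, p = 3, 5, 0.4

# Pr(Z = k) by convolution: sum over ways to split k successes between X and Y.
conv = [sum(binom_pmf(j, n, p) * binom_pmf(k - j, m, p)
            for j in range(max(0, k - m), min(n, k) + 1))
        for k in range(n + m + 1)]

# Pr(Z = k) computed directly from B(n + m, p).
direct = [binom_pmf(k, n + m, p) for k in range(n + m + 1)]
```

The two lists agree term by term (up to floating-point rounding), illustrating that the convolution of B(n, p) and B(m, p) is exactly B(n + m, p).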
