Negative binomial distribution

In probability theory and statistics, the negative binomial distribution is a discrete probability distribution of the number of successes in a sequence of independent and identically distributed Bernoulli trials before a specified (non-random) number of failures (denoted r) occurs. For example, if we define rolling a 1 as failure and any other face as success, and we throw a die repeatedly until a 1 appears for the third time (r = 3 failures), then the probability distribution of the number of non-1s rolled will be a negative binomial distribution. (An equivalent parameterization uses the total number of trials, n = k + r.) The Pascal distribution (after Blaise Pascal) and the Pólya distribution (for George Pólya) are special cases of the negative binomial distribution. A convention among engineers, climatologists, and others is to use 'negative binomial' or 'Pascal' for the case of an integer-valued stopping-time parameter r, and 'Pólya' for the real-valued case. For occurrences of 'contagious' discrete events, such as tornado outbreaks, Pólya distributions can give more accurate models than the Poisson distribution by allowing the mean and variance to differ, unlike the Poisson. 'Contagious' events have positively correlated occurrences, causing a larger variance than if the occurrences were independent, due to a positive covariance term.

Suppose there is a sequence of independent Bernoulli trials.
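The die-rolling example above can be sketched as a simple simulation: repeat trials with success probability p until r failures have occurred, and record the number of successes seen. This is an illustrative sketch (the function name and sample size are my own, not from the text); for the die, a 1 is a "failure" with probability 1/6, so p = 5/6.

```python
import random

def sample_negative_binomial(r, p, rng):
    """Count successes observed before the r-th failure.

    Each trial succeeds with probability p; sampling stops once
    r failures have occurred. Illustrative sketch only.
    """
    successes = 0
    failures = 0
    while failures < r:
        if rng.random() < p:
            successes += 1
        else:
            failures += 1
    return successes

# Die example: stop at the third 1 (r = 3); any non-1 face is a success.
rng = random.Random(0)
draws = [sample_negative_binomial(r=3, p=5/6, rng=rng) for _ in range(100_000)]
mean = sum(draws) / len(draws)
# The theoretical mean is p*r/(1 - p) = (5/6)*3/(1/6) = 15,
# so the sample mean should land close to 15.
print(round(mean, 1))
```

The sample mean converging to p·r/(1 − p) is one quick sanity check that the stopping rule was coded the right way around (stopping on failures, counting successes).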
Thus, each trial has two potential outcomes called 'success' and 'failure'. In each trial the probability of success is p and of failure is (1 − p). We observe this sequence until a predefined number r of failures have occurred. Then the random number of successes we have seen, X, will have the negative binomial (or Pascal) distribution.

When applied to real-world problems, outcomes of success and failure may or may not be outcomes we ordinarily view as good and bad, respectively. Suppose we used the negative binomial distribution to model the number of days a certain machine works before it breaks down. In this case 'success' would be the result on a day when the machine worked properly, whereas a breakdown would be a 'failure'. If we used the negative binomial distribution to model the number of goal attempts an athlete makes before scoring r goals, though, then each unsuccessful attempt would be a 'success', and scoring a goal would be a 'failure'. If we are tossing a coin, then the negative binomial distribution can give the number of tails ('success') we are likely to encounter before we encounter a certain number of heads ('failure').

In the probability mass function below, p is the probability of success, and (1 − p) is the probability of failure. The probability mass function of the negative binomial distribution is

f(k; r, p) = Pr(X = k) = C(k + r − 1, k) p^k (1 − p)^r,  for k = 0, 1, 2, …,

where k is the number of successes, r is the number of failures, and p is the probability of success. Here the quantity C(k + r − 1, k) is the binomial coefficient, and is equal to

C(k + r − 1, k) = (k + r − 1)! / (k! (r − 1)!).
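The probability mass function above can be evaluated directly with an exact binomial coefficient (the helper name `nb_pmf` is my own; this is a sketch, not a library implementation):

```python
from math import comb

def nb_pmf(k, r, p):
    """P(X = k): probability of exactly k successes before the r-th
    failure, with per-trial success probability p.

    f(k; r, p) = C(k + r - 1, k) * p**k * (1 - p)**r
    """
    return comb(k + r - 1, k) * p**k * (1 - p)**r

# Fair coin, stop at the first head (r = 1): P(0 tails first) = 1/2.
print(nb_pmf(0, r=1, p=0.5))  # → 0.5

# The probabilities sum to 1 over k = 0, 1, 2, ... (tail truncated here,
# but the remainder beyond k = 2000 is negligible for these parameters).
total = sum(nb_pmf(k, r=3, p=5/6) for k in range(2000))
print(round(total, 6))  # → 1.0
```

Using `math.comb` keeps the coefficient exact as an integer, so the only floating-point error comes from the powers of p and (1 − p).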
