
Generalized Dirichlet distribution

In statistics, the generalized Dirichlet distribution (GD) is a generalization of the Dirichlet distribution with a more general covariance structure and almost twice the number of parameters. Random variables with a GD distribution are not completely neutral.

The density function of $p_1, \ldots, p_{k-1}$ is

$$\left[\prod_{i=1}^{k-1} B(a_i, b_i)\right]^{-1} p_k^{b_{k-1}-1} \prod_{i=1}^{k-1}\left[p_i^{a_i-1}\left(\sum_{j=i}^{k} p_j\right)^{b_{i-1}-(a_i+b_i)}\right],$$

where we define $p_k = 1 - \sum_{i=1}^{k-1} p_i$. Here $B(x, y)$ denotes the Beta function. This reduces to the standard Dirichlet distribution if $b_{i-1} = a_i + b_i$ for $2 \leqslant i \leqslant k-1$ ($b_0$ is arbitrary).

For example, if $k = 4$, then the density function of $p_1, p_2, p_3$ is

$$\frac{p_1^{a_1-1}\, p_2^{a_2-1}\, p_3^{a_3-1}\, p_4^{b_3-1} \left(p_2+p_3+p_4\right)^{b_1-(a_2+b_2)} \left(p_3+p_4\right)^{b_2-(a_3+b_3)}}{B(a_1,b_1)\, B(a_2,b_2)\, B(a_3,b_3)},$$

where $p_1 + p_2 + p_3 < 1$ and $p_4 = 1 - p_1 - p_2 - p_3$.

Connor and Mosimann define the PDF as they did for the following reason. Define random variables $z_1, \ldots, z_{k-1}$ with $z_1 = p_1$, $z_2 = p_2/(1-p_1)$, $z_3 = p_3/(1-(p_1+p_2))$, $\ldots$, $z_i = p_i/(1-(p_1+\cdots+p_{i-1}))$. Then $p_1, \ldots, p_k$ have the generalized Dirichlet distribution as parametrized above if the $z_i$ are independent beta random variables with parameters $a_i, b_i$, $i = 1, \ldots, k-1$.

Wong gives the slightly more concise form for $x_1 + \cdots + x_k \leqslant 1$:

$$\prod_{j=1}^{k} \frac{x_j^{\alpha_j - 1}\left(1 - x_1 - \cdots - x_j\right)^{\gamma_j}}{B(\alpha_j, \beta_j)},$$

where $\gamma_j = \beta_j - \alpha_{j+1} - \beta_{j+1}$ for $1 \leqslant j \leqslant k-1$ and $\gamma_k = \beta_k - 1$. Note that Wong defines a distribution over a $k$-dimensional space (implicitly defining $x_{k+1} = 1 - \sum_{i=1}^{k} x_i$), while Connor and Mosimann use a $(k-1)$-dimensional space with $x_k = 1 - \sum_{i=1}^{k-1} x_i$.

If $X = \left(X_1, \ldots, X_k\right) \sim GD_k\left(\alpha_1, \ldots, \alpha_k; \beta_1, \ldots, \beta_k\right)$, then the general moment function is

$$E\left[X_1^{r_1} X_2^{r_2} \cdots X_k^{r_k}\right] = \prod_{j=1}^{k} \frac{B(\alpha_j + r_j, \beta_j + \delta_j)}{B(\alpha_j, \beta_j)},$$

where $\delta_j = r_{j+1} + r_{j+2} + \cdots + r_k$ (with $\delta_k = 0$) and the $r_j$ are non-negative integers.
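The stick-breaking relationship between the $z_i$ and the $p_i$ gives a direct way to sample from the distribution and to evaluate its density. The following Python sketch is not part of the original article: the function names sample_generalized_dirichlet and gd_logpdf are hypothetical, and it assumes NumPy and SciPy are available. It draws $(p_1, \ldots, p_k)$ from independent Beta$(a_i, b_i)$ variables and evaluates the Connor–Mosimann log-density given above.

```python
# A minimal sketch, assuming NumPy and SciPy; not taken from the article.
# Sampling uses the stick-breaking construction z_i ~ Beta(a_i, b_i),
# p_i = z_i * (1 - p_1 - ... - p_{i-1}); the log-density follows the
# Connor-Mosimann form given above.
import numpy as np
from scipy.special import betaln


def sample_generalized_dirichlet(a, b, size=1, seed=None):
    """Draw (p_1, ..., p_k) given Beta parameters a_i, b_i for i = 1, ..., k-1."""
    rng = np.random.default_rng(seed)
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    k = len(a) + 1
    p = np.zeros((size, k))
    remaining = np.ones(size)              # 1 - (p_1 + ... + p_{i-1})
    for i in range(k - 1):
        z = rng.beta(a[i], b[i], size=size)
        p[:, i] = z * remaining
        remaining *= 1.0 - z
    p[:, -1] = remaining                   # p_k = 1 - sum of the others
    return p


def gd_logpdf(p, a, b):
    """Log-density at the full vector p = (p_1, ..., p_k), p_k being the remainder."""
    p = np.asarray(p, dtype=float)
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    k = len(a) + 1
    tails = np.cumsum(p[::-1])[::-1]       # tails[i] = p_i + p_{i+1} + ... + p_k (0-based)
    logpdf = -np.sum(betaln(a, b)) + (b[-1] - 1.0) * np.log(p[-1])
    for i in range(k - 1):
        logpdf += (a[i] - 1.0) * np.log(p[i])
        if i > 0:                          # exponent b_{i-1} - (a_i + b_i); the i = 1 factor is 1
            logpdf += (b[i - 1] - (a[i] + b[i])) * np.log(tails[i])
    return logpdf


# With b_{i-1} = a_i + b_i the GD reduces to Dirichlet(a_1, a_2, a_3, b_3).
a, b = [2.0, 3.0, 1.5], [5.5, 2.5, 1.0]    # b_1 = a_2 + b_2 and b_2 = a_3 + b_3 hold here
draws = sample_generalized_dirichlet(a, b, size=100_000, seed=0)
print(draws.mean(axis=0))                  # ~ (2, 3, 1.5, 1) / 7.5
```

The final lines illustrate the reduction condition: with $b_1 = a_2 + b_2$ and $b_2 = a_3 + b_3$, the sampled means match those of a standard Dirichlet$(a_1, a_2, a_3, b_3)$ distribution.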
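Under Wong's parametrization, a draw of $(X_1, \ldots, X_k)$ can be obtained from $k$ independent Beta$(\alpha_j, \beta_j)$ variables by the same stick-breaking map. The short sketch below is an illustrative check, not from the article, with hypothetical parameter values: it compares a Monte Carlo estimate of a mixed moment against the closed-form moment function stated above.

```python
# An illustrative Monte Carlo check of the moment formula above, under
# Wong's k-dimensional parametrization with k = 3; parameter values are
# hypothetical and chosen only for demonstration.
import numpy as np
from scipy.special import betaln

rng = np.random.default_rng(1)
alpha = np.array([2.0, 1.5, 3.0])
beta = np.array([4.0, 2.0, 5.0])
r = np.array([2, 1, 1])                              # moment orders r_1, r_2, r_3

# Draw (X_1, ..., X_k) via the stick-breaking beta construction.
n = 500_000
z = rng.beta(alpha, beta, size=(n, 3))               # independent Beta(alpha_j, beta_j)
x = np.empty_like(z)
remaining = np.ones(n)
for j in range(3):
    x[:, j] = z[:, j] * remaining
    remaining *= 1.0 - z[:, j]

mc = np.mean(np.prod(x ** r, axis=1))                # Monte Carlo estimate of E[prod X_j^{r_j}]

delta = np.append(np.cumsum(r[::-1])[::-1][1:], 0)   # delta_j = r_{j+1} + ... + r_k
exact = np.exp(np.sum(betaln(alpha + r, beta + delta) - betaln(alpha, beta)))
print(mc, exact)                                     # the two values should agree closely
```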

[ "Dirichlet's energy", "Dirichlet's principle", "Dirichlet series" ]
Parent Topic
Child Topic
    No Parent Topic