
Joint entropy

In information theory, joint entropy is a measure of the uncertainty associated with a set of variables.

The joint Shannon entropy (in bits) of two discrete random variables X and Y with images \mathcal{X} and \mathcal{Y} is defined as[16]

    H(X, Y) = -\sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} P(x, y) \log_2 [P(x, y)]     (Eq.1)

where x and y are particular values of X and Y, respectively, P(x, y) is the joint probability of these values occurring together, and P(x, y) \log_2 [P(x, y)] is defined to be 0 if P(x, y) = 0.

For more than two random variables X_1, ..., X_n this expands to

    H(X_1, ..., X_n) = -\sum_{x_1 \in \mathcal{X}_1} \cdots \sum_{x_n \in \mathcal{X}_n} P(x_1, ..., x_n) \log_2 [P(x_1, ..., x_n)]     (Eq.2)

where x_1, ..., x_n are particular values of X_1, ..., X_n, respectively, P(x_1, ..., x_n) is the probability of these values occurring together, and P(x_1, ..., x_n) \log_2 [P(x_1, ..., x_n)] is defined to be 0 if P(x_1, ..., x_n) = 0.

For continuous random variables with joint density f, the joint differential entropy is defined analogously:

    h(X, Y) = -\int_{\mathcal{X}, \mathcal{Y}} f(x, y) \log f(x, y) \, dx \, dy     (Eq.3)

    h(X_1, \ldots, X_n) = -\int f(x_1, \ldots, x_n) \log f(x_1, \ldots, x_n) \, dx_1 \ldots dx_n     (Eq.4)
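As a concrete illustration of Eq.1 and Eq.2, the sketch below computes the joint Shannon entropy of a discrete joint probability table with NumPy. The joint_entropy helper and the example probability tables are illustrative choices, not taken from the article.

```python
import numpy as np

def joint_entropy(p_xy):
    """Joint Shannon entropy in bits (Eq.1) of a joint probability table p_xy,
    where p_xy[i, j] = P(X = x_i, Y = y_j)."""
    p = np.asarray(p_xy, dtype=float)
    nz = p[p > 0]                      # cells with P(x, y) = 0 contribute 0 by convention
    return float(-(nz * np.log2(nz)).sum())

# Hypothetical joint distribution over a 2x2 alphabet.
p_xy = np.array([[0.25, 0.25],
                 [0.40, 0.10]])
print(joint_entropy(p_xy))             # ~1.8610 bits

# The same function covers more than two variables (Eq.2): pass an
# n-dimensional array whose entries are the joint probabilities.
p_xyz = np.full((2, 2, 2), 1 / 8)      # three fair, independent bits
print(joint_entropy(p_xyz))            # 3.0 bits
```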

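For the continuous case (Eq.3 and Eq.4), the following is a minimal numerical check, assuming SciPy is available: it integrates -f log f for a bivariate Gaussian and compares the result with the known closed form h = 0.5 ln((2 pi e)^n det(cov)). The distribution and the integration box are illustrative assumptions, not part of the article.

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.stats import multivariate_normal

# Two independent standard normal variables; chosen only because their
# joint differential entropy has a closed form to check against.
cov = np.array([[1.0, 0.0], [0.0, 1.0]])
rv = multivariate_normal(mean=[0.0, 0.0], cov=cov)

def neg_f_log_f(y, x):
    # Integrand -f(x, y) ln f(x, y) from Eq.3 (natural log, so the result is in nats).
    f = rv.pdf([x, y])
    return 0.0 if f == 0.0 else -f * np.log(f)   # guard the 0 * log 0 = 0 convention

# Integrate over a box wide enough to hold essentially all probability mass.
h_numeric, _ = dblquad(neg_f_log_f, -8.0, 8.0, lambda x: -8.0, lambda x: 8.0)

# Closed form for an n-dimensional Gaussian: h = 0.5 * ln((2*pi*e)^n * det(cov)).
h_closed = 0.5 * np.log((2 * np.pi * np.e) ** 2 * np.linalg.det(cov))

print(f"numerical   h(X,Y) = {h_numeric:.4f} nats")   # ~2.8379
print(f"closed form h(X,Y) = {h_closed:.4f} nats")    # ~2.8379
```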
[ "Principle of maximum entropy", "Entropy (information theory)", "Differential entropy", "Uncertainty coefficient", "Information theory and measure theory", "Typical set" ]