
Information dimension

In information theory, information dimension is an information measure for random vectors in Euclidean space, based on the normalized entropy of finely quantized versions of the random vectors. This concept was first introduced by Alfréd Rényi in 1959. Simply speaking, it is a measure of the fractal dimension of a probability distribution. It characterizes the growth rate of the Shannon entropy given by successively finer discretizations of the space.

If $\langle X \rangle_m = \lfloor mX \rfloor / m$ denotes the quantization of a random variable $X$ to a grid of mesh $1/m$, the information dimension is

$$d(X) = \lim_{m \to \infty} \frac{\mathbb{H}(\langle X \rangle_m)}{\log_2 m},$$

provided the limit exists, and the $d(X)$-dimensional entropy is the corresponding offset term

$$\mathbb{H}_{d(X)}(X) = \lim_{m \to \infty} \bigl( \mathbb{H}(\langle X \rangle_m) - d(X) \log_2 m \bigr).$$

By the Lebesgue decomposition theorem, the distribution $v$ of a random variable can be written uniquely as the mixture

$$v = p P_{X_d} + q P_{X_c} + r P_{X_s},$$

where $P_{X_d}$ is a purely atomic (discrete) measure, $P_{X_c}$ is absolutely continuous, $P_{X_s}$ is singular continuous, and $p + q + r = 1$. When the singular part is absent, the mixture reduces to

$$v = (1 - \rho) P_{X_d} + \rho P_{X_c}, \qquad 0 \le \rho \le 1,$$

and the information dimension equals the weight of the continuous part:

$$d(X) = \rho.$$

In this case the $\rho$-dimensional entropy takes the closed form

$$\mathbb{H}_{\rho}(X) = (1 - \rho)\,\mathbb{H}_0(P_{X_d}) + \rho\, h(P_{X_c}) + \mathbb{H}_0(\rho),$$

where $\mathbb{H}_0(\cdot)$ is the discrete Shannon entropy, $h(\cdot)$ is the differential entropy, and $\mathbb{H}_0(\rho)$ is the binary entropy function

$$\mathbb{H}_0(\rho) = \rho \log_2 \frac{1}{\rho} + (1 - \rho) \log_2 \frac{1}{1 - \rho}.$$

As an example, let $Z$ be a zero-mean Gaussian signal with variance $\sigma^2$ passed through the half-wave rectifier

$$f(x) = \begin{cases} x, & \text{if } x \ge 0 \\ 0, & x < 0, \end{cases}$$

so that $X = f(Z)$. The distribution of $X$ then has an atom of mass $1/2$ at zero and a half-Gaussian continuous part, so $d(X) = \rho = 0.5$ and

$$\begin{aligned} \mathbb{H}_{0.5}(X) &= (1 - 0.5)(1 \log_2 1) + 0.5\, h(P_{X_c}) + \mathbb{H}_0(0.5) \\ &= 0 + \tfrac{1}{2}\bigl( \tfrac{1}{2} \log_2 (2 \pi e \sigma^2) - 1 \bigr) + 1 \\ &= \tfrac{1}{4} \log_2 (2 \pi e \sigma^2) + \tfrac{1}{2} \text{ bit(s)}. \end{aligned}$$
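The rectified-Gaussian example can be checked numerically. The Python sketch below (an illustration, not from the original article; the sample size, grid sizes, and seed are arbitrary assumptions) estimates $d(X)$ as the per-doubling growth of the quantized entropy $\mathbb{H}(\langle X \rangle_m)$, and compares the offset $\mathbb{H}(\langle X \rangle_m) - 0.5 \log_2 m$ with the closed-form value $\tfrac{1}{4} \log_2(2 \pi e \sigma^2) + \tfrac{1}{2}$.

# Numerical sketch (illustrative assumptions: sample size, grid sizes, seed).
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0
n = 1_000_000

# Half-wave-rectified Gaussian: X = f(Z) = max(Z, 0).
# Mass 1/2 sits in an atom at 0; the rest is a half-Gaussian density.
x = np.maximum(rng.normal(0.0, sigma, size=n), 0.0)

def quantized_entropy_bits(samples, m):
    """Empirical Shannon entropy (in bits) of <X>_m = floor(m*X)/m."""
    cells = np.floor(samples * m).astype(np.int64)  # non-negative since X >= 0
    counts = np.bincount(cells)
    p = counts[counts > 0] / len(samples)
    return -np.sum(p * np.log2(p))

ks = list(range(6, 13))                     # m = 2^6 ... 2^12
hs = {k: quantized_entropy_bits(x, 2**k) for k in ks}

for k in ks[1:]:
    # Doubling m adds roughly d(X) bits of entropy, so the increment
    # H(<X>_{2m}) - H(<X>_m) estimates d(X) = rho = 0.5, and the offset
    # H(<X>_m) - 0.5*log2(m) estimates the 0.5-dimensional entropy.
    print(f"m=2^{k:2d}: d(X) ~ {hs[k] - hs[k - 1]:.3f}, "
          f"H - 0.5*log2(m) = {hs[k] - 0.5 * k:.3f} bits")

# Closed-form 0.5-dimensional entropy from the text (sigma = 1):
print("closed form:", 0.25 * np.log2(2 * np.pi * np.e * sigma**2) + 0.5)

As $m$ grows, the entropy increment settles near $0.5$ and the offset approaches the closed-form value; the small residual gap is the finite-sample bias of the plug-in entropy estimate.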

[ "Fractal dimension" ]
Parent Topic
Child Topic
    No Parent Topic