
Channel capacity

Channel capacity, in electrical engineering, computer science, and information theory, is the tight upper bound on the rate at which information can be reliably transmitted over a communication channel. In the terms of the noisy-channel coding theorem, the channel capacity of a given channel is the highest information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability. Information theory, developed by Claude E. Shannon during World War II, defines the notion of channel capacity and provides a mathematical model by which it can be computed. The key result states that the capacity of the channel, as defined above, is given by the maximum of the mutual information between the input and output of the channel, where the maximization is taken over all input distributions. Channel capacity is also additive over independent channels: C(p1 × p2) = C(p1) + C(p2), so using two independent channels in combination gives the same theoretical capacity as using them separately.
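The maximization of mutual information over input distributions can be carried out numerically with the Blahut–Arimoto algorithm. The following is a minimal sketch (the function name `blahut_arimoto`, the fixed iteration count, and the transition-matrix layout `W[x][y] = P(y | x)` are illustrative choices, not a standard API); it is checked against the binary symmetric channel, whose capacity has the closed form C = 1 − H(p):

```python
import math

def blahut_arimoto(W, iters=200):
    """Approximate the capacity max_p I(X;Y), in bits, of a discrete
    memoryless channel with transition matrix W, where W[x][y] = P(y | x).

    A sketch of the Blahut-Arimoto fixed-point iteration, starting from a
    uniform input distribution p.
    """
    n, m = len(W), len(W[0])
    p = [1.0 / n] * n
    for _ in range(iters):
        # Output distribution induced by the current input distribution p.
        q = [sum(p[x] * W[x][y] for x in range(n)) for y in range(m)]
        # c[x] = exp(D(W[x] || q)), the exponentiated KL divergence between
        # the channel's output given x and the average output q.
        c = [math.exp(sum(W[x][y] * math.log(W[x][y] / q[y])
                          for y in range(m) if W[x][y] > 0))
             for x in range(n)]
        # Reweight p toward inputs whose outputs are most distinguishable.
        z = sum(p[x] * c[x] for x in range(n))
        p = [p[x] * c[x] / z for x in range(n)]
    # log2(z) converges to the capacity in bits.
    return math.log2(z)

# Binary symmetric channel with crossover probability 0.11; the closed
# form C = 1 - H(0.11) is roughly 0.5 bits per channel use.
eps = 0.11
W = [[1 - eps, eps], [eps, 1 - eps]]
closed_form = 1 - (-eps * math.log2(eps) - (1 - eps) * math.log2(1 - eps))
print(blahut_arimoto(W), closed_form)
```

For the symmetric channel above, the uniform input distribution is already optimal, so the iteration agrees with the closed form immediately; for asymmetric channels (e.g. the Z-channel) the optimal input distribution is non-uniform and the iteration is genuinely needed.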

Related topics: Communication channel, Z-channel, Binary symmetric channel, Lovász number, ergodic channel capacity, Blahut–Arimoto algorithm