Concatenated error correction code

In coding theory, concatenated codes form a class of error-correcting codes that are derived by combining an inner code and an outer code. They were conceived in 1966 by Dave Forney as a solution to the problem of finding a code that has both exponentially decreasing error probability with increasing block length and polynomial-time decoding complexity. Concatenated codes became widely used in space communications in the 1970s.

The field of channel coding is concerned with sending a stream of data at the highest possible rate over a given communications channel, and then decoding the original data reliably at the receiver, using encoding and decoding algorithms that are feasible to implement in a given technology. Shannon's channel coding theorem shows that over many common channels there exist channel coding schemes that are able to transmit data reliably at all rates R less than a certain threshold C, called the channel capacity of the given channel. In fact, the probability of decoding error can be made to decrease exponentially as the block length N of the coding scheme goes to infinity. However, the complexity of a naive optimum decoding scheme that simply computes the likelihood of every possible transmitted codeword increases exponentially with N, so such an optimum decoder rapidly becomes infeasible. In his doctoral thesis, Dave Forney showed that concatenated codes could be used to achieve exponentially decreasing error probabilities at all data rates less than capacity, with decoding complexity that increases only polynomially with the code block length.

Let Cin be an [n, k, d] code, that is, a block code of length n, dimension k, minimum Hamming distance d, and rate r = k/n, over an alphabet A:

Cin : A^k → A^n

Let Cout be an [N, K, D] code over an alphabet B with |B| = |A|^k symbols:

Cout : B^K → B^N

The inner code Cin takes one of |A|^k = |B| possible inputs, encodes it into an n-tuple over A, transmits it, and decodes it into one of |B| possible outputs. We regard this as a (super) channel which can transmit one symbol from the alphabet B. We use this channel N times to transmit each of the N symbols in a codeword of Cout. The concatenation of Cout (as outer code) with Cin (as inner code), denoted Cout∘Cin, is thus a code of length Nn over the alphabet A:

Cout∘Cin : A^(kK) → A^(nN)

It maps each input message m = (m1, m2, ..., mK) to the codeword (Cin(m'1), Cin(m'2), ..., Cin(m'N)), where (m'1, m'2, ..., m'N) = Cout(m1, m2, ..., mK).

The key insight in this approach is that if Cin is decoded using a maximum-likelihood approach (thus showing an exponentially decreasing error probability with increasing length), and Cout is a code with length N = 2^(nr) that can be decoded in polynomial time of N, then the concatenated code can be decoded in polynomial time of its combined length n⋅2^(nr) = O(N⋅log(N)) and shows an exponentially decreasing error probability, even if Cin has exponential decoding complexity. This is discussed in more detail in the section on decoding concatenated codes.
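To make the construction above concrete, here is a minimal, self-contained Python sketch of a concatenated code. It pairs an inner Hamming [7, 4, 3] binary code (A = {0, 1}, k = 4) with a deliberately simple outer single-parity-check [4, 3, 2] code over B = GF(16), so that |B| = |A|^k = 16. This toy pairing and all function names are illustrative choices for this sketch, not Forney's construction or anything specified in the text above.

```python
# A minimal sketch of code concatenation (illustrative, not Forney's
# original construction): outer single-parity-check code over B = GF(16),
# inner Hamming [7,4,3] code over A = {0,1}.

def hamming74_encode(nibble):
    """Encode 4 data bits (list of 0/1) into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(word):
    """Correct up to one bit error, then return the 4 data bits."""
    c = list(word)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3   # 1-based position of the flipped bit, 0 if none
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

def outer_encode(symbols):
    """Single-parity-check [K+1, K, 2] code over GF(16); addition is nibble XOR."""
    parity = 0
    for s in symbols:
        parity ^= s
    return symbols + [parity]

def concat_encode(message):
    """Outer-encode over B, then inner-encode each 4-bit symbol of the result."""
    outer_cw = outer_encode(message)
    bits = []
    for sym in outer_cw:
        nibble = [(sym >> i) & 1 for i in (3, 2, 1, 0)]
        bits.extend(hamming74_encode(nibble))
    return bits

def concat_decode(bits):
    """Inner-decode each 7-bit block, then strip the outer parity symbol."""
    symbols = []
    for i in range(0, len(bits), 7):
        nib = hamming74_decode(bits[i:i + 7])
        symbols.append((nib[0] << 3) | (nib[1] << 2) | (nib[2] << 1) | nib[3])
    return symbols[:-1]

message = [0x3, 0xA, 0x7]          # K = 3 symbols from B = GF(16)
codeword = concat_encode(message)   # length Nn = 4 * 7 = 28 bits
codeword[5] ^= 1                    # one bit error per inner block is correctable
codeword[9] ^= 1
assert concat_decode(codeword) == message
```

In this toy example the concatenated code has length Nn = 4⋅7 = 28 and minimum distance at least Dd = 2⋅3 = 6, and decoding is the two-stage procedure described above: each 7-bit block is decoded by the inner decoder, then the outer code processes the resulting symbols. Since the outer code here is only a parity check, concat_decode simply strips the parity symbol; a practical system would instead use an outer code such as a Reed-Solomon code that can correct symbol errors left behind by the inner decoder.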

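As a worked check of the complexity bookkeeping in the last paragraph (using assumed example parameters r = 1/2 and n = 20, which are not values from the source):

\[
N = 2^{nr} \;\Longrightarrow\; n = \frac{\log_2 N}{r},
\qquad
n \cdot 2^{nr} = \frac{1}{r}\, N \log_2 N = O(N \log N).
\]

For instance, r = 1/2 and n = 20 give N = 2^10 = 1024 outer symbols and a combined block length of n⋅N = 20480 = 2⋅N⋅log2(N) symbols, so an outer decoder polynomial in N remains polynomial in the overall block length.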
[ "Block code", "burst error correction", "Hexacode", "Multidimensional parity-check code", "Kibibyte", "Self-synchronizing code" ]
Parent Topic
Child Topic
    No Parent Topic