
Dither

Dither is an intentionally applied form of noise used to randomize quantization error, preventing large-scale patterns such as color banding in images. Dither is routinely used in the processing of both digital audio and video data, and is often one of the last stages of mastering audio to a CD. A common use of dither is converting a greyscale image to black and white, such that the density of black dots in the new image approximates the average grey level in the original (a sketch of this appears at the end of this section).

The term dither first appeared in books on analog computation and hydraulically controlled guns shortly after World War II. Though he did not use the term, Lawrence G. Roberts was the first to apply the concept of dithering to reduce quantization patterns, in his 1961 MIT master's thesis and a 1962 article. By 1964, dither was being used in the modern sense described in this article.

Dither is used in many fields that rely on digital processing and analysis, including digital audio, digital video, digital photography, seismology, radar, and weather forecasting systems.

Quantization yields error. If that error is correlated with the signal, the result is potentially cyclical or predictable, and in fields where the receptor is sensitive to such patterns, these cyclical errors produce undesirable artifacts. Introducing dither in these fields converts the error to random noise. Audio is a primary example: the human ear functions much like a Fourier transform, hearing individual frequencies. The ear is therefore very sensitive to distortion, that is, additional frequency content, but far less sensitive to random noise spread across all frequencies, such as that found in a dithered signal.

The final version of audio that goes onto a compact disc contains only 16 bits per sample, but throughout the production process a greater number of bits is typically used to represent each sample. In the end, the digital data must be reduced to 16 bits for pressing onto a CD and distribution. There are multiple ways to do this. One can, for example, simply discard the excess bits, called truncation, or round to the nearest representable value. Each of these methods, however, results in predictable and determinable errors. Using dither replaces these errors with a constant, fixed noise level. Take, for example, a waveform that consists of the following values:
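The source does not list the waveform's sample values, so the sketch below uses purely illustrative ones. It is a minimal Python illustration, not a definitive implementation, comparing the three approaches just described: truncation, rounding to the nearest value, and TPDF (triangular probability density function) dither added before rounding, which decorrelates the error from the signal:

```python
import random

# Hypothetical high-precision samples; the original article's values are not given.
samples = [0.12, 0.34, 0.34, 0.34, 0.56, 0.56, 0.78, 0.78]
step = 0.25  # quantization step (1 LSB) of the coarser output format

def truncate(x):
    # Discard the excess precision: snap down to the coarser grid.
    return step * (x // step)

def round_nearest(x):
    # Round to the nearest representable value.
    return step * round(x / step)

def dither_quantize(x):
    # Add TPDF dither of +/- 1 LSB (the difference of two uniform
    # random variables is triangularly distributed), then round as usual.
    noise = (random.random() - random.random()) * step
    return step * round((x + noise) / step)

for x in samples:
    print(f"{x:5.2f} -> trunc {truncate(x):5.2f}  "
          f"round {round_nearest(x):5.2f}  dither {dither_quantize(x):5.2f}")
```

With truncation and rounding, identical inputs always map to the same, possibly biased, output, so the error pattern follows the signal; with dither, repeated samples land on neighboring levels in proportions that preserve the average value, at the cost of a small constant noise floor.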
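Similarly, the greyscale-to-black-and-white conversion mentioned at the start of this section can be sketched with randomized thresholding. This is a minimal, assumed illustration (the pixel values and function name are hypothetical, and real converters typically use ordered dithering or error diffusion instead):

```python
import random

def to_black_and_white(grey_row, white=255):
    # Compare each pixel against a random threshold rather than a fixed one,
    # so that, on average, the density of black dots tracks the grey level:
    # a pixel of value g comes out black with probability (white - g) / white.
    return [0 if g < random.uniform(0, white) else white for g in grey_row]

row = [10, 60, 128, 128, 200, 245]  # illustrative grey values, 0 = black
print(to_black_and_white(row))
```

A fixed threshold would map every pixel of the same grey value to the same output, producing exactly the kind of large-scale pattern that dither is meant to break up.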

[ "Electronic engineering", "Computer vision", "Control theory", "Telecommunications", "Artificial intelligence", "dithered quantization" ]