
Multiple signal classification

MUSIC (MUltiple SIgnal Classification) is an algorithm used for frequency estimation and radio direction finding.

In many practical signal processing problems, the objective is to estimate from measurements a set of constant parameters upon which the received signals depend. There have been several approaches to such problems, including the so-called maximum likelihood (ML) method of Capon (1969) and Burg's maximum entropy (ME) method. Although often successful and widely used, these methods have certain fundamental limitations (especially bias and sensitivity in parameter estimates), largely because they use an incorrect model (e.g., AR rather than special ARMA) of the measurements. Pisarenko (1973) was one of the first to exploit the structure of the data model, doing so in the context of estimating the parameters of complex sinusoids in additive noise using a covariance approach. Schmidt (1977), while working at Northrop Grumman, and independently Bienvenu and Kopp (1979), were the first to correctly exploit the measurement model in the case of sensor arrays of arbitrary form. Schmidt, in particular, accomplished this by first deriving a complete geometric solution in the absence of noise, then cleverly extending the geometric concepts to obtain a reasonable approximate solution in the presence of noise. The resulting algorithm was called MUSIC (MUltiple SIgnal Classification) and has been widely studied. In a detailed evaluation based on thousands of simulations, the Massachusetts Institute of Technology's Lincoln Laboratory concluded that, among currently accepted high-resolution algorithms, MUSIC was the most promising and a leading candidate for further study and actual hardware implementation.
However, although the performance advantages of MUSIC are substantial, they are achieved at a cost in computation (searching over parameter space) and storage (of array calibration data).

MUSIC estimates the frequency content of a signal or autocorrelation matrix using an eigenspace method. The method assumes that a signal x(n) consists of p complex exponentials in the presence of Gaussian white noise. Given an M × M autocorrelation matrix R_x, if the eigenvalues are sorted in decreasing order, the eigenvectors corresponding to the p largest eigenvalues (i.e., the directions of largest variability) span the signal subspace. The remaining M − p eigenvectors span the orthogonal subspace, where there is only noise. Note that for M = p + 1, MUSIC is identical to Pisarenko harmonic decomposition. The general idea is to use averaging to improve the performance of the Pisarenko estimator.
