
K-SVD

In applied mathematics, K-SVD is a dictionary learning algorithm for creating a dictionary for sparse representations, via a singular value decomposition approach. K-SVD is a generalization of the k-means clustering method, and it works by iteratively alternating between sparse coding the input data based on the current dictionary and updating the atoms in the dictionary to better fit the data. K-SVD is widely used in applications such as image processing, audio processing, biology, and document analysis.

The goal of dictionary learning is to learn an overcomplete dictionary matrix D ∈ ℝ^(n×K) that contains K signal-atoms (in this notation, the columns of D). A signal vector y ∈ ℝ^n can be represented sparsely as a linear combination of these atoms; to represent y, the representation vector x should satisfy the exact condition y = Dx, or the approximate condition y ≈ Dx, made precise by requiring that ‖y − Dx‖_p ≤ ε for some small value ε and some Lp norm. The vector x ∈ ℝ^K contains the representation coefficients of the signal y. Typically, the norm p is selected as L1, L2, or L∞.
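The representation y = Dx above can be illustrated with a tiny numeric sketch. The dictionary and coefficients below are made up for illustration (in practice D would be learned by K-SVD); plain Python is used to keep the example self-contained:

```python
# Sketch: a signal y written as a sparse combination of dictionary atoms.
# D has n = 2 rows and K = 4 atom columns (overcomplete: n < K).
# The numbers are illustrative only.

def matvec(D, x):
    """Multiply an n x K matrix (list of rows) by a length-K vector."""
    return [sum(d_jk * x_k for d_jk, x_k in zip(row, x)) for row in D]

D = [[1.0, 0.0, 0.6, 0.8],   # each column of D is one atom
     [0.0, 1.0, 0.8, 0.6]]

# Sparse representation vector: only 1 of the K = 4 coefficients is nonzero.
x = [0.0, 0.0, 2.0, 0.0]

y = matvec(D, x)  # y = D x, i.e. 2 times the third atom
```

Because x has a single nonzero entry, y is just a scaled copy of one atom; with more nonzeros, y becomes a sparse linear combination of several atoms.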
If n < K and D is a full-rank matrix, an infinite number of solutions are available for the representation problem, so constraints must be imposed on the solution. To ensure sparsity, the solution with the fewest nonzero coefficients is preferred. Thus, the sparse representation is the solution of either

  min_x ‖x‖_0  subject to  y = Dx,

or

  min_x ‖x‖_0  subject to  ‖y − Dx‖_2² ≤ ε,

where ‖x‖_0 counts the number of nonzero entries of x.
