Subspace Support Vector Data Description
Citations: 4 · References: 9 · Related papers: 20
Abstract:
This paper proposes a novel method for solving one-class classification problems. The proposed approach, namely Subspace Support Vector Data Description, maps the data to a subspace that is optimized for one-class classification. In that feature space, the optimal hypersphere enclosing the target class is then determined. The method iteratively optimizes the data mapping along with the data description in order to define a compact class representation in a low-dimensional feature space. We provide both linear and non-linear mappings for the proposed method. Experiments on 14 publicly available datasets indicate that the proposed Subspace Support Vector Data Description provides better performance compared to baselines and other recently proposed one-class classification methods.

Keywords: Hypersphere, Feature vector, Feature, Representation
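To make the alternating optimization concrete, here is a minimal NumPy sketch of the linear variant. The mean-based center, the quantile radius, and the plain gradient step on the projection matrix Q are simplifications introduced here for illustration; the paper solves the SVDD dual problem and adds regularization terms, so read this as a sketch of the iterate-map-then-describe structure, not the authors' implementation.

```python
import numpy as np

def fit_ssvdd(X, k=2, iters=50, lr=1e-2, q=0.95, seed=0):
    """Sketch of linear Subspace SVDD on target-class data X (n x d).

    Alternates between (a) describing the projected data with a
    hypersphere and (b) a projected-gradient step on the mapping Q
    that compacts the class in the k-dimensional subspace.
    """
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((X.shape[1], k)))
    for _ in range(iters):
        Z = X @ Q                          # map data to the subspace
        c = Z.mean(axis=0)                 # hypersphere center (simplified)
        G = 2.0 * X.T @ (Z - c) / len(X)   # grad of mean squared distance
        Q, _ = np.linalg.qr(Q - lr * G)    # step, then re-orthonormalize
    Z = X @ Q
    c = Z.mean(axis=0)
    R = np.quantile(np.linalg.norm(Z - c, axis=1), q)  # enclosing radius
    return Q, c, R

def is_target(X, Q, c, R):
    """Accept samples whose subspace image falls inside the hypersphere."""
    return np.linalg.norm(X @ Q - c, axis=1) <= R
```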
Multi-view subspace clustering is an important and active topic in machine learning; it aims to improve clustering results using multi-view data collected from different domains or measurement modalities. In this paper, we propose a novel tensor-based intrinsic subspace representation learning method for multi-view clustering. Specifically, to investigate the underlying subspace representation, a rank-preserving decomposition, together with a low-rank tensor constraint based on the tensor singular value decomposition, is introduced and applied to the subspace representation matrices of the multiple views. The rank-preserving decomposition accounts for the specific information of the individual views, while the low-rank tensor constraint fully exploits the high-order correlations across views. Based on the learned subspace representation, clustering results are obtained with the standard spectral clustering algorithm. The objective function is efficiently optimized by an alternating direction minimization algorithm based on the augmented Lagrangian multiplier method. Experimental results on nine real-world datasets illustrate the superiority of our method compared to several state-of-the-art approaches.
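The tensor optimization itself is involved, but the final step the abstract describes, spectral clustering on an affinity built from the learned per-view subspace representations, can be sketched as follows. `Z_views` (one n-by-n representation matrix per view) is assumed to come from the paper's optimization, which is omitted here.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_from_views(Z_views, n_clusters):
    """Fuse per-view subspace representations into one symmetric,
    non-negative affinity and segment it with spectral clustering."""
    W = sum(np.abs(Z) + np.abs(Z.T) for Z in Z_views) / (2 * len(Z_views))
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)
```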
High-dimensional data often lie approximately in low-dimensional subspaces corresponding to multiple classes or categories. Segmenting high-dimensional data into their corresponding low-dimensional subspaces is referred to as subspace clustering. State-of-the-art methods solve this problem in two steps. First, an affinity matrix is built from the data based on a self-expressiveness model, in which each data point is expressed as a linear combination of the other data points. Second, the segmentation is obtained by spectral clustering. However, solving the two dependent steps separately is suboptimal. In this paper, we propose a joint affinity learning and spectral clustering approach for low-rank-representation-based subspace clustering, termed Low-Rank and Structured Sparse Subspace Clustering (LRS3C), in which a subspace-structured norm that depends on the subspace clustering result is introduced into the objective of the low-rank representation problem. We solve it efficiently via a combination of the Linearized Alternating Direction Method (LADM) with spectral clustering. Experiments on the Hopkins 155 motion segmentation database and the Extended Yale B dataset demonstrate the effectiveness of our method.
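As a concrete reference point for the low-rank representation step: in the noise-free case, the minimizer of min ||Z||_* s.t. X = XZ has the closed form Z = VVᵀ, the shape-interaction matrix built from the right singular vectors of the data (Liu et al.). The sketch below uses this closed form followed by spectral clustering, i.e., exactly the two-step pipeline the abstract improves upon; the paper's joint LADM formulation with the subspace-structured norm is not reproduced.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def lrr_affinity(X, tol=1e-10):
    """Closed-form noise-free LRR: columns of X are data points and the
    minimizer of ||Z||_* s.t. X = XZ is Z = V @ V.T, with V the right
    singular vectors of X (the shape-interaction matrix)."""
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[s > tol].T
    Z = V @ V.T
    return np.abs(Z) + np.abs(Z.T)     # symmetric, non-negative affinity

def segment(X, n_clusters):
    """Two-step pipeline: affinity from LRR, then spectral clustering."""
    W = lrr_affinity(X)
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)
```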
In this paper, we introduce a new domain adaptation (DA) algorithm in which the source and target domains are represented by subspaces spanned by eigenvectors. Our method seeks a domain-invariant feature space by learning a mapping function that aligns the source subspace with the target one. We show that the solution of the corresponding optimization problem can be obtained in a simple closed form, leading to an extremely fast algorithm. We present two approaches to determine the method's only hyper-parameter, the size of the subspaces. In the first approach, we tune the subspace size using a theoretical bound on the stability of the obtained result. In the second, we use maximum likelihood estimation to determine the subspace size, which is particularly useful for high-dimensional data. Apart from PCA, we propose a subspace creation method that outperforms partial least squares (PLS) and linear discriminant analysis (LDA) in domain adaptation. We test our method on various datasets and show that, despite its intrinsic simplicity, it outperforms state-of-the-art DA methods.
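The closed-form alignment the abstract refers to is short enough to sketch directly: learn PCA bases for the two domains and map the source data through the alignment matrix M = Psᵀ Pt. Function and variable names here are illustrative, and only the PCA-based variant is shown.

```python
import numpy as np
from sklearn.decomposition import PCA

def subspace_alignment(Xs, Xt, k):
    """Align the k-dimensional PCA subspace of the source domain with
    that of the target, then express both domains in target coordinates."""
    Ps = PCA(n_components=k).fit(Xs).components_.T   # d x k source basis
    Pt = PCA(n_components=k).fit(Xt).components_.T   # d x k target basis
    M = Ps.T @ Pt                                    # closed-form alignment
    return Xs @ Ps @ M, Xt @ Pt                      # aligned source, target
```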
The accuracy of classification and retrieval depends significantly on the metric used to compute the similarity between samples. To preserve geometric structure, the symmetric positive definite (SPD) manifold is introduced into the metric learning problem. However, the SPD constraint is too strict to describe real data distributions. In this paper, we extend the intrinsic metric learning problem to the semi-definite case, which better describes the data distribution for various classification tasks. First, we formulate metric learning as a minimization problem on the SPD manifold restricted to a subspace, which not only balances intra-class and inter-class information through an adaptive trade-off parameter but also improves robustness through a low-rank subspace representation. This makes it possible to design a structure-preserving algorithm on the subspace using the geodesic structure of the SPD subspace. To solve this model, we develop an iterative strategy that updates the intrinsic metric and the subspace structure in turn. Finally, we compare our proposed method with ten state-of-the-art methods on four datasets. The numerical results validate that our method significantly improves the description of the data distribution and, hence, the performance of the image classification task.
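For reference, the geodesic structure mentioned in the abstract is typically the affine-invariant metric on the SPD manifold, under which the distance between two SPD matrices A and B is d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F. A small sketch of this distance is given below; the paper's semi-definite extension and subspace updates are not reproduced.

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def spd_geodesic_distance(A, B):
    """Affine-invariant geodesic distance between SPD matrices:
    d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F."""
    A_inv_sqrt = np.linalg.inv(sqrtm(A))
    M = logm(A_inv_sqrt @ B @ A_inv_sqrt)
    return np.linalg.norm(M.real, "fro")   # .real drops numerical noise
```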
Multiple observations improve the performance of 3D object classification. However, since the distribution of feature vectors obtained from multiple viewpoints has a strongly nonlinear structure, kernel-based methods with nonlinear mappings are often introduced. By mapping feature vectors to a higher-dimensional space, kernel-based methods transform the distribution so as to weaken its nonlinearity. Although they have succeeded in many applications, their computational cost is large. We therefore aim to construct a method comparable to kernel-based methods without using a nonlinear mapping. First, we approximate the distribution of feature vectors with multiple local subspaces. Second, we combine local subspace approximation with an ensemble learning algorithm to form a new classifier. We demonstrate that our method achieves performance comparable to kernel methods in evaluation experiments using multi-view images of 3D objects from a public dataset.
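A hedged sketch of the first idea, approximating each class by several local linear subspaces and classifying by reconstruction error, is given below. The k-means partitioning, the per-partition PCA, and all names are stand-ins chosen here, and the ensemble learning stage the abstract combines this with is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

class LocalSubspaceClassifier:
    """Approximate each class with several local linear subspaces
    (k-means partitions + per-partition PCA) and classify a sample by
    the smallest reconstruction error over any local subspace."""

    def __init__(self, n_local=4, dim=8):
        self.n_local, self.dim = n_local, dim

    def fit(self, X, y):
        self.models = {}
        for c in np.unique(y):
            Xc = X[y == c]
            parts = KMeans(n_clusters=self.n_local, n_init=10).fit_predict(Xc)
            # each partition needs at least `dim` samples for the PCA fit
            self.models[c] = [PCA(n_components=self.dim).fit(Xc[parts == p])
                              for p in range(self.n_local)]
        return self

    def predict(self, X):
        classes = list(self.models)
        errs = []
        for c in classes:
            # smallest reconstruction error over the class's local subspaces
            errs.append(np.min(
                [((X - p.inverse_transform(p.transform(X))) ** 2).sum(axis=1)
                 for p in self.models[c]], axis=0))
        return np.array(classes)[np.stack(errs, axis=1).argmin(axis=1)]
```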