An Effective Support Vector Data Description with Relevant Metric Learning
Citations: 3
References: 13
Related Papers: 10
Keywords:
Hypersphere
Feature vector
Let Λ be any integral lattice in Euclidean space. It has been shown that for every integer n>0, there is a hypersphere that passes through exactly n points of Λ. Using this result, we introduce new lattice invariants and give some computational results related to two-dimensional Euclidean lattices of class number one.
Hypersphere
Lattice (group)
Citations (1)
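The counting claim above can be illustrated in dimension two with a brute-force check over the integer lattice Z². This is only a sketch of the origin-centered case (the paper's circles need not be centered at a lattice point); the function name is mine, not the paper's.

```python
# Count points of the integer lattice Z^2 lying on the circle of squared
# radius r2 centered at the origin. Brute force over a bounding box.
def lattice_points_on_circle(r2):
    bound = int(r2 ** 0.5) + 1
    return sum(1 for x in range(-bound, bound + 1)
                 for y in range(-bound, bound + 1)
                 if x * x + y * y == r2)
```

For example, the circle of radius 1 passes through 4 lattice points and the circle of squared radius 5 through 8, so the count already varies with the radius even in this restricted centered setting.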
Let M be an immersed orientable complete hypersurface in the Euclidean space $\mathbb{R}^{n+1}$ with nonzero constant Gauss–Kronecker curvature and finite sectional curvature; then M is a hypersphere. Let M be an immersed compact connected hypersurface in the Euclidean space $\mathbb{R}^{n+1}$ satisfying $H_r = a_1 H_{r-1} + a_2 H_{r-2} + \cdots + a_s H_{r-s}$, where $a_1, \dots, a_s$ are nonnegative constants; then M is a hypersphere.
Hypersphere
Hypersurface
Constant (mathematics)
Gaussian curvature
Citations (0)
Abstract Much natural language processing still depends on the Euclidean distance function between two feature vectors, but the Euclidean distance suffers from severe defects with respect to feature weightings and feature correlations. In this paper we propose an optimal metric distance function that can be used as an alternative to the Euclidean distance, accommodating both problems at the same time. This metric is optimal in the sense of global quadratic minimization, and can be obtained from the clusters in the training data in a supervised fashion. We have confirmed the effect of the proposed metric by sentence retrieval, document retrieval, and K-means clustering of general vectorial data. © 2006 Wiley Periodicals, Inc. Syst Comp Jpn, 37(9): 12–21, 2006; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.20533
Feature vector
Feature (linguistics)
Edit distance
Citations (10)
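The family of metrics the abstract describes, distances of the form $d(x,y) = \sqrt{(x-y)^\top M (x-y)}$ that account for feature weightings and correlations, can be sketched with the classical Mahalanobis choice $M = \Sigma^{-1}$. This is a minimal illustration of a learned quadratic metric, not the paper's quadratic-minimization procedure:

```python
import numpy as np

def quadratic_metric(x, y, cov):
    """Distance under the quadratic metric M = cov^{-1} (Mahalanobis form).

    Correlated or high-variance features are downweighted by M, which is
    exactly the defect of the plain Euclidean distance (M = identity).
    """
    d = np.asarray(x, float) - np.asarray(y, float)
    M = np.linalg.inv(cov)
    return float(np.sqrt(d @ M @ d))
```

With the identity covariance this reduces to the ordinary Euclidean distance; with a diagonal covariance it rescales each feature by its standard deviation.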
In this paper we describe iris recognition using a Modified Fuzzy Hypersphere Neural Network (MFHSNN) with its learning algorithm, which is an extension of the Fuzzy Hypersphere Neural Network (FHSNN) proposed by Kulkarni et al. We have evaluated the performance of the MFHSNN classifier using different distance measures. It is observed that the Bhattacharyya distance is superior in terms of training and recall time as compared to the Euclidean and Manhattan distance measures. The feasibility of the MFHSNN has been successfully appraised on the CASIA database with 756 images and found superior in terms of generalization and training time, with equivalent recall time.
Hypersphere
Bhattacharyya distance
Citations (10)
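For readers unfamiliar with the Bhattacharyya distance used above, one common discrete form compares two normalized histograms via the Bhattacharyya coefficient. This is a generic sketch, not the exact variant used in the MFHSNN paper:

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two discrete distributions.

    Normalizes the inputs, computes the Bhattacharyya coefficient
    BC = sum(sqrt(p_i * q_i)), and returns -ln(BC). Zero iff p == q.
    """
    p = np.asarray(p, float); q = np.asarray(q, float)
    p = p / p.sum()
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))  # overlap in [0, 1]
    return float(-np.log(bc))
```

Identical distributions give distance 0, and the distance grows as the overlap between the two histograms shrinks.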
This paper proposes a novel method for solving one-class classification problems. The proposed approach, namely Subspace Support Vector Data Description, maps the data to a subspace that is optimized for one-class classification. In that feature space, the optimal hypersphere enclosing the target class is then determined. The method iteratively optimizes the data mapping along with data description in order to define a compact class representation in a low-dimensional feature space. We provide both linear and non-linear mappings for the proposed method. Experiments on 14 publicly available datasets indicate that the proposed Subspace Support Vector Data Description provides better performance compared to baselines and other recently proposed one-class classification methods.
Hypersphere
Feature vector
Feature (linguistics)
Representation
Citations (4)
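The core object in Support Vector Data Description is a hypersphere enclosing the target class. The real method solves a quadratic program with slack variables and (in Subspace SVDD) jointly learns the subspace mapping; the following is only a naive enclosing-sphere baseline (mean center, maximum radius) to make the geometric idea concrete:

```python
import numpy as np

def naive_enclosing_sphere(X):
    """Naive stand-in for SVDD: center at the mean, radius = farthest point.

    Real SVDD minimizes the radius with slack penalties, so outliers in the
    training set would not blow up the radius as they do here.
    """
    center = X.mean(axis=0)
    radius = float(np.max(np.linalg.norm(X - center, axis=1)))
    return center, radius

def is_target(x, center, radius):
    """One-class decision: inside (or on) the sphere = target class."""
    return bool(np.linalg.norm(np.asarray(x, float) - center) <= radius)
```

A test point is accepted as belonging to the target class exactly when it falls inside the learned sphere; everything outside is flagged as an outlier.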
This paper discusses classification using support vector machines in a normalized feature space. We consider normalization both in input space and in feature space. Exploiting the fact that in this setting all points lie on the surface of a unit hypersphere, we replace the optimal separating hyperplane by one that is symmetric in its angles, leading to an improved estimator. These considerations are evaluated in numerical experiments on two real-world datasets. The stability of this offset correction to noise, as well as its optimality, is subsequently investigated.
Hypersphere
Hyperplane
Feature vector
Normalization
Feature (linguistics)
Citations (114)
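The precondition of the paper above, that every point lies on the surface of a unit hypersphere, is obtained by L2-normalizing each feature vector. A minimal sketch of that preprocessing step (the paper's angle-symmetric hyperplane itself is not reproduced here):

```python
import numpy as np

def l2_normalize_rows(X, eps=1e-12):
    """Project each row of X onto the unit hypersphere.

    After this step only the direction of each feature vector matters,
    which is the setting assumed by hypersphere-based SVM variants.
    """
    X = np.asarray(X, float)
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X / np.maximum(norms, eps)  # eps guards against zero rows
```

After normalization, the Euclidean inner product between two rows equals the cosine of the angle between the original vectors, which is why the separating hyperplane can be reasoned about purely in terms of angles.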
This paper presents a novel binary classifier based on two best-fitting hyperellipsoids in the feature space, called the twin-hyperellipsoidal support vector machine (TESVM). The idea of TESVM is inspired by the minimum volume covering ellipsoid together with the twin-hypersphere support vector machine (THSVM), which is a variant of the well-known support vector data description (SVDD). Following the concept of THSVM, TESVM constructs two hyperellipsoids, where each hyperellipsoid is closest to one class but also as far as possible from the other class, in order to form a decision boundary. The construction of hyperellipsoids in the feature space is also enabled through the use of empirical feature mapping. The experimental results on several artificial as well as standard real-world datasets are provided to demonstrate the performance of TESVM. In particular, TESVM outperforms its spherical counterpart in terms of classification accuracy.
Hypersphere
Decision boundary
Feature vector
Binary classification
Margin classifier
Hyperplane
Citations (0)
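The ellipsoid-per-class idea behind TESVM can be caricatured with a much simpler classifier: fit one ellipsoid to each class from its mean and covariance, then assign a test point to the class whose ellipsoidal (Mahalanobis) distance is smaller. TESVM itself solves coupled optimization problems that also push each ellipsoid away from the other class; none of that is modeled here.

```python
import numpy as np

def fit_ellipsoid(X):
    """Fit a per-class ellipsoid: mean and inverse regularized covariance."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    return mu, np.linalg.inv(cov)

def classify(x, ell_a, ell_b):
    """Assign x to whichever class's ellipsoid it sits deeper inside."""
    x = np.asarray(x, float)
    dist = lambda ell: float((x - ell[0]) @ ell[1] @ (x - ell[0]))
    return 0 if dist(ell_a) <= dist(ell_b) else 1
```

Unlike a hypersphere, each ellipsoid stretches along the directions in which its class actually varies, which is the advantage the paper reports over the spherical THSVM.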
We study hypersurfaces in the Euclidean space with the following property: the tangential part of the position vector has constant length. As a result, we prove that among the connected and complete hypersurfaces in the Euclidean space, only the hypersphere centered at the origin satisfies the property.
Hypersphere
Characterization
Position vector
Constant (mathematics)
Citations (1)
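The easy direction of the characterization above is quick to verify: on the hypersphere centered at the origin, the tangential part of the position vector has constant length because it vanishes identically. A sketch of that computation:

```latex
% On the hypersphere |x| = r centered at the origin, the outward unit
% normal at x is N = x / r, so the tangential part of the position vector is
\[
  x^{\top} \;=\; x - \langle x, N \rangle N
          \;=\; x - \frac{|x|^{2}}{r}\,\frac{x}{r}
          \;=\; x - x \;=\; 0,
\]
% which indeed has constant length (namely zero). The paper's content is
% the converse: among connected complete hypersurfaces, this property
% forces the hypersurface to be such a sphere.
```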