    Finding Significant Features for Few-Shot Learning Using Dimensionality Reduction
Keywords: Discriminative model, Similarity (geometry), Feature vector, Feature (linguistics)
Abstract: In this paper, additional features are constructed in order to increase accuracy or other precision metrics in the original classification task. This technique is used very often in machine learning tasks across many domains of knowledge. Usually the second degrees of the source features and their pairwise products are used, but the process can be continued to higher degrees. At the same time, this increases the dimensionality of the task dramatically. The present work discusses the balance between these dimensionality problems and the addition of new features. Principal component analysis is used to reduce the dimensionality. These sequential steps allow the construction of a new space containing features that depend non-linearly on the source parameters. The technique is illustrated on a heart disease dataset, and functional dependencies in this medical dataset are also observed.
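As a rough illustration of the approach this abstract describes, the sketch below expands the source features with degree-2 terms and then compresses the result with PCA. scikit-learn, the logistic-regression classifier, the synthetic stand-in for the heart disease data, and the choice of 20 retained components are all assumptions, not the paper's actual setup.

```python
# Sketch: expand features with degree-2 polynomial terms, then compress with PCA.
# Assumes scikit-learn; X, y are a placeholder for the heart-disease features/labels.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# Placeholder data standing in for the heart-disease dataset (13 source features).
X, y = make_classification(n_samples=300, n_features=13, random_state=0)

pipeline = make_pipeline(
    StandardScaler(),
    PolynomialFeatures(degree=2, include_bias=False),  # squares and pairwise products
    PCA(n_components=20),                              # bring the dimensionality back down
    LogisticRegression(max_iter=1000),
)
print(cross_val_score(pipeline, X, y, cv=5).mean())
```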
The “curse of dimensionality” in machine learning refers to the increasing amount of training data required as features are collected from higher-dimensional spaces. Researchers generally use one of several dimensionality reduction methods to visualize data and estimate data trends. Feature engineering and selection minimize dimensionality and optimize algorithms. Dimensionality must be matched to the data so that information is preserved. This paper compares dimensionality reduction methods in terms of final model evaluation: the data set is first encoded in a smaller dimension to avoid the curse of dimensionality, and the model is then trained with a manageable number of features.
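A minimal sketch of the encode-then-train step described here, assuming scikit-learn; TruncatedSVD and the random data matrix are placeholders for whichever encoder and data set are actually used:

```python
# Encode a wide data matrix into a smaller dimension before training a model on it.
import numpy as np
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 200))          # 200 raw features per sample
X_encoded = TruncatedSVD(n_components=20, random_state=0).fit_transform(X)
print(X.shape, "->", X_encoded.shape)    # (500, 200) -> (500, 20): train on the encoded features
```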
Abstract - Dimensionality reduction is the conversion of high-dimensional data into a meaningful representation of reduced dimensionality. Preferably, the reduced representation has a dimensionality that corresponds to the intrinsic dimensionality of the data, which is the minimum number of parameters needed to account for the observed properties of the data (4). Dimensionality reduction is important in many domains, since it facilitates classification, visualization, and compression of high-dimensional data by mitigating the curse of dimensionality and other undesired properties of high-dimensional spaces (5). Dimension reduction can be beneficial not only for computational efficiency but also because it can improve the accuracy of the analysis; in this research area, it also significantly reduces storage requirements.
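One simple, hedged way to approximate this notion of intrinsic dimensionality is to count the principal components needed to explain most of the variance; the 95% threshold and the synthetic 3-parameter data below are assumptions chosen for illustration, not a method taken from the cited works.

```python
# Estimate intrinsic dimensionality as the number of PCA components covering ~95% of variance.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 3))                              # 3 underlying parameters
X = latent @ rng.normal(size=(3, 30)) + 0.01 * rng.normal(size=(1000, 30))  # observed in 30-D

pca = PCA().fit(X)
cumulative = np.cumsum(pca.explained_variance_ratio_)
intrinsic_dim = int(np.searchsorted(cumulative, 0.95) + 1)
print("estimated intrinsic dimensionality:", intrinsic_dim)     # expect roughly 3
```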
In the procedure of hyperspectral data dimensionality reduction (DR), the intrinsic dimensionality (ID) of high-dimensional hyperspectral data is normally obtained through linear dimensionality analysis methods. This article applies an unsupervised learning approach, manifold learning, to the dimensionality analysis of hyperspectral data and gives a manifold-learning-based algorithm for this analysis. The experiments use the ISOMAP, LLE, LE and LTSA algorithms to estimate the intrinsic dimensionality of simulated and real hyperspectral data, obtain two-dimensional manifold figures of the high-dimensional data, and discuss the advantages and disadvantages of these algorithms for hyperspectral dimensionality analysis.
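The sketch below mirrors this experimental setup at a small scale using scikit-learn's implementations of the four manifold learners; the synthetic "spectra" (a low-dimensional manifold lifted into 50 bands) and the neighborhood size are assumptions, since the paper's hyperspectral data are not reproduced here.

```python
# Two-dimensional manifold embeddings of high-dimensional data with ISOMAP, LLE, LE and LTSA.
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap, LocallyLinearEmbedding, SpectralEmbedding

rng = np.random.default_rng(0)
X3, _ = make_swiss_roll(n_samples=800, random_state=0)
X = X3 @ rng.normal(size=(3, 50))        # lift the 3-D manifold into 50 "spectral bands"

embeddings = {
    "ISOMAP": Isomap(n_neighbors=12, n_components=2),
    "LLE": LocallyLinearEmbedding(n_neighbors=12, n_components=2),
    "LE": SpectralEmbedding(n_neighbors=12, n_components=2),
    "LTSA": LocallyLinearEmbedding(n_neighbors=12, n_components=2, method="ltsa"),
}
for name, model in embeddings.items():
    Y = model.fit_transform(X)
    print(name, Y.shape)                 # each row of Y is a 2-D coordinate on the learned manifold
```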
Dimensionality reduction methods compute a mapping from a high-dimensional space to a space with fewer dimensions while preserving important information. The idea of hybridizing dimensionality reduction with evolution strategies is that searching in a space with a larger dimensionality than the original solution space may be easier. We propose a dimensionality reduction evolution strategy (DRES) based on a self-adaptive (μ, λ)-ES that generates points in a space with a dimensionality higher than the original solution space. After the population has been generated, it is mapped to the solution space with dimensionality reduction (DR) methods, the solutions are evaluated, and the best with respect to the fitness in the original space are inherited by the next generation. We employ principal component analysis (PCA) as the DR method and show a performance improvement on a small set of benchmark problems.
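A compact sketch of the DRES loop as described: offspring are sampled in an expanded search space by a self-adaptive (μ, λ)-ES, mapped back to the solution space with PCA, and selected by fitness in the original space. The sphere objective, population sizes, and step-size constants are assumptions chosen only to make the example runnable, not the paper's settings.

```python
# Dimensionality reduction evolution strategy (DRES) sketch with PCA as the DR method.
import numpy as np
from sklearn.decomposition import PCA

def sphere(x):                          # benchmark objective, evaluated in the solution space
    return np.sum(x ** 2, axis=-1)

d_solution, d_search = 5, 20            # original vs. expanded dimensionality
mu, lam = 10, 40                        # (mu, lambda) selection
rng = np.random.default_rng(0)

parents = rng.normal(size=(mu, d_search))
sigmas = np.full(mu, 1.0)               # self-adaptive step sizes, one per parent
tau = 1.0 / np.sqrt(2 * d_search)

for generation in range(50):
    # Offspring: each child first mutates its parent's step size, then its coordinates.
    idx = rng.integers(mu, size=lam)
    child_sigmas = sigmas[idx] * np.exp(tau * rng.normal(size=lam))
    children = parents[idx] + child_sigmas[:, None] * rng.normal(size=(lam, d_search))

    # DR step: project the offspring into the lower-dimensional solution space.
    solutions = PCA(n_components=d_solution).fit_transform(children)
    fitness = sphere(solutions)

    # (mu, lambda) selection on fitness measured in the original solution space.
    best = np.argsort(fitness)[:mu]
    parents, sigmas = children[best], child_sigmas[best]

print("best fitness:", sphere(PCA(n_components=d_solution).fit_transform(parents)).min())
```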
In order to mitigate the dependence on virtual dimensionality (VD), this chapter develops a new dimensionality reduction by transform (DRT), called progressive spectral dimensionality process (PSDP), which introduces a new concept of dimensionality prioritization (DP) that revolutionizes how the commonly used dimensionality reduction (DR) is implemented. The motivation for DP arises from the need to process vast amounts of hyperspectral data more effectively in many applications. The DP developed in this chapter attempts to resolve three issues. The first and foremost is to develop a credible DR transform that can compress the original data into a spectral-transformed data space in some sense of optimality. A second issue is to represent the original data in a lower, spectrally dimensionality-reduced data space via a DRT. Finally, a third issue is to prioritize each spectral dimension in the new reduced spectral data space.
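The chapter's actual PSDP algorithm is not reproduced in this abstract, so the following is only a loose, hedged illustration of the prioritization idea: transform the data, rank the new spectral dimensions by a priority score (variance here), and retain them progressively while tracking reconstruction error. Every choice below is an assumption.

```python
# Rank transformed spectral dimensions by variance and retain them progressively.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 60))                          # placeholder pixels-by-bands matrix

pca = PCA().fit(X)
priority = np.argsort(pca.explained_variance_)[::-1]    # prioritize transformed dimensions
scores = pca.transform(X)

for k in (1, 5, 10, 20):                                # progressively retain k dimensions
    keep = priority[:k]
    X_hat = scores[:, keep] @ pca.components_[keep] + pca.mean_
    err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
    print(f"{k:2d} prioritized dimensions, relative reconstruction error {err:.3f}")
```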
We present a new neural model that extends classical competitive learning by performing a principal component analysis (PCA) at each neuron. This model represents an improvement over known local PCA methods, because the entire data set does not need to be presented to the network at each computing step. This allows fast execution while retaining the dimensionality-reduction properties of PCA. Furthermore, every neuron is able to modify its behavior to adapt to the local dimensionality of the input distribution; hence, the model has a dimensionality estimation capability. The experimental results we present show the dimensionality-reduction capabilities of the model on multisensor images.
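A hedged sketch of the general idea, combining an online competitive-learning update with a per-neuron running covariance from which local principal components (and a local dimensionality estimate) are read off; the learning rate, neuron count, variance threshold, and synthetic data are assumptions rather than the paper's model.

```python
# Online competitive learning with a local PCA maintained at each neuron.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))            # placeholder multisensor feature vectors
n_neurons, d, lr = 4, X.shape[1], 0.05

means = X[rng.choice(len(X), n_neurons, replace=False)].copy()
covs = np.stack([np.eye(d) for _ in range(n_neurons)])

for x in X:                               # one sample at a time (no full-batch pass needed)
    w = np.argmin(np.linalg.norm(x - means, axis=1))            # competitive step: pick the winner
    diff = x - means[w]
    means[w] += lr * diff                                       # move the winner toward the sample
    covs[w] = (1 - lr) * covs[w] + lr * np.outer(diff, diff)    # update its local covariance

for k in range(n_neurons):
    eigvals, _ = np.linalg.eigh(covs[k])
    ratios = eigvals[::-1] / eigvals.sum()                      # local principal components, largest first
    local_dim = int(np.searchsorted(np.cumsum(ratios), 0.95) + 1)
    print(f"neuron {k}: local dimensionality estimate {local_dim}")
```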
A neural network tree (NNTree) is one of the efficient models for pattern recognition. One drawback of using an NNTree is that the system may become very complicated if the dimensionality of the feature space is high. To avoid this problem, we propose in this paper to reduce the dimensionality first using linear discriminant analysis (LDA) and then induce the NNTree. After dimensionality reduction, the NNTree can become much simpler. The question is: can we still get good NNTrees in the lower-dimensional feature space? To answer this question, we conducted experiments on several public databases. Results show that the NNTree obtained after dimensionality reduction usually has fewer nodes, and its performance is comparable to the one obtained without dimensionality reduction.
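To illustrate the reduce-then-induce pipeline, the sketch below applies LDA before growing a tree; a standard scikit-learn decision tree stands in for the NNTree, and the digits data set is an assumed substitute for the public databases used in the paper.

```python
# Compare a tree grown on the raw feature space with one grown after LDA reduction.
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)       # 64-dimensional feature space, 10 classes

full = DecisionTreeClassifier(random_state=0)
reduced = make_pipeline(
    LinearDiscriminantAnalysis(n_components=9),   # LDA yields at most n_classes - 1 components
    DecisionTreeClassifier(random_state=0),
)
print("without DR:", cross_val_score(full, X, y, cv=5).mean())
print("with LDA  :", cross_val_score(reduced, X, y, cv=5).mean())
```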