Information Fusion via Deep Cross-Modal Factor Analysis

2019 
In this paper, we introduce Deep Cross-Modal Factor Analysis (DCFA) to identify complex nonlinear transformations of two sets of variables for information fusion. DCFA represents the coupled patterns between the two sets of variables by minimizing the Frobenius norm distance between their representations in the transformed domain. Unlike previous kernel methods, the feature mappings in DCFA are realized with deep networks (DN) rather than fixed kernels, so its representational power is not limited by the choice of kernel. DCFA can be viewed as a nonlinear extension of linear Cross-Modal Factor Analysis (CFA), and as an alternative to the nonparametric Kernel Cross-Modal Factor Analysis (KCFA) and the recently proposed Deep Canonical Correlation Analysis (Deep CCA). The performance of DCFA is evaluated on the MNIST handwritten digit dataset and two audio emotion datasets. Experimental results show that the proposed method outperforms KCCA, KCFA, Deep CCA, and the deep-learning-based AlexNet in terms of accuracy.
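The sketch below illustrates the core idea stated in the abstract: each modality is passed through its own deep network, and the two networks are trained jointly to minimize the Frobenius norm distance between the transformed representations. This is a minimal illustration, not the authors' code; the network sizes, optimizer settings, and input dimensions are assumptions, and the published method may include additional constraints not shown here.

```python
# Minimal DCFA-style sketch (assumed architecture, not the authors' implementation):
# two deep networks map paired modalities into a shared space, trained to
# minimize the Frobenius norm distance between the transformed batches.
import torch
import torch.nn as nn

def make_mapping(in_dim, hidden_dim=512, out_dim=64):
    """A simple fully connected mapping network, replacing CFA's linear map
    or KCFA's fixed kernel mapping."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden_dim), nn.ReLU(),
        nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        nn.Linear(hidden_dim, out_dim),
    )

# Hypothetical modality dimensions (e.g., image features and audio features).
f_x = make_mapping(in_dim=784)   # network for modality X
f_y = make_mapping(in_dim=128)   # network for modality Y
opt = torch.optim.Adam(list(f_x.parameters()) + list(f_y.parameters()), lr=1e-3)

def dcfa_loss(x_batch, y_batch):
    """Frobenius norm distance between the two transformed batches."""
    zx = f_x(x_batch)            # shape: (batch, out_dim)
    zy = f_y(y_batch)            # shape: (batch, out_dim)
    return torch.norm(zx - zy, p='fro')

# One training step on random stand-in data (replace with paired modality batches).
x = torch.randn(32, 784)
y = torch.randn(32, 128)
opt.zero_grad()
loss = dcfa_loss(x, y)
loss.backward()
opt.step()
```

In this formulation the coupled patterns between the two modalities are captured in the shared output space, which can then be used as the fused representation for downstream classification.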