An efficient and effective deep convolutional kernel pseudoinverse learner with multi-filter
2021
Abstract The convolutional neural network (CNN) is the most widely used deep neural network, but it still has several drawbacks. First, CNNs are usually trained with back-propagation, which suffers from inherent defects such as the vanishing gradient and exploding gradient problems. Moreover, training a CNN often requires substantial computational resources and time. To address these problems, kernel pseudoinverse learning (KPIL) is proposed based on pseudoinverse learning (PIL), and it shows improved performance. The number of hidden-layer neurons in KPIL equals the number of input samples, so it does not need to be set manually. KPIL uses the kernel method to compute the hidden-layer output, avoiding the uncertainty introduced by random input weights. Building on KPIL, a deep convolutional kernel pseudoinverse learner with a multi-filter design, named the kernel pseudoinverse learning convolutional neural network (KPIL-CNN), is proposed. It uses multiple fixed convolutional kernels: kernel pseudoinverse learning (KPIL) filters, Gabor filters, and random filters. The features extracted by these filters are combined into feature maps with learned weights. Experimental results show that KPIL-CNN outperforms other network models of the same scale on natural image classification, and its training is efficient and effective. Furthermore, KPIL-CNN outperforms existing methods on the classification of the ISIC 2017 skin lesion dataset.
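For intuition, a minimal sketch of the kernel pseudoinverse learning idea described in the abstract is shown below: the hidden-layer output is the kernel matrix over the training data (one hidden unit per training sample), and the output weights are solved in closed form with the Moore-Penrose pseudoinverse rather than back-propagation. The RBF kernel choice, the class name, and the function names are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=0.1):
    """RBF kernel matrix between rows of X and Z (kernel choice is an assumption)."""
    sq_dists = (
        np.sum(X ** 2, axis=1)[:, None]
        + np.sum(Z ** 2, axis=1)[None, :]
        - 2.0 * X @ Z.T
    )
    return np.exp(-gamma * sq_dists)

class KernelPseudoinverseLearner:
    """Sketch of a kernel pseudoinverse learner: the hidden-layer output is the
    N x N kernel matrix of the training data, so the number of hidden units
    equals the number of training samples and needs no manual setting."""

    def __init__(self, gamma=0.1):
        self.gamma = gamma
        self.X_train = None
        self.W_out = None

    def fit(self, X, Y):
        self.X_train = X
        H = rbf_kernel(X, X, self.gamma)      # hidden-layer output via the kernel method
        self.W_out = np.linalg.pinv(H) @ Y    # closed-form output weights, no gradient descent
        return self

    def predict(self, X):
        H = rbf_kernel(X, self.X_train, self.gamma)
        return H @ self.W_out

# Usage: Y as one-hot targets, e.g. Y = np.eye(num_classes)[labels]
```

A pseudoinverse solve of this kind avoids iterative gradient-based training, which is what the abstract credits for the efficiency of KPIL and KPIL-CNN.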
Keywords: