A Joint Optimization Framework of the Embedding Model and Classifier for Meta-Learning

2021 
Meta-learning aims to train a machine to learn quickly and accurately from few samples, and improving the performance of meta-learning models is important both for solving small-sample problems and as a step toward general artificial intelligence. A previously proposed meta-learning method based on feature embedding performs well on few-shot problems: a pretrained deep convolutional neural network serves as the embedding model, and the output of a single layer is used as the feature representation of each sample. The main limitation of that method is that it neither fuses the low-level texture features and high-level semantic features of the embedding model nor jointly optimizes the embedding model and the classifier. The current study therefore proposes a multilayer adaptive joint training and optimization method for the embedding model. Its main characteristics are the use of a multilayer adaptive hierarchical loss to train the embedding model and the use of a quantum genetic algorithm to jointly optimize the embedding model and the classifier. Validation on multiple public datasets for meta-learning shows that the proposed method achieves higher accuracy than multiple baseline methods.
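The abstract does not specify how the multilayer adaptive hierarchical loss is formed, so the following is only a minimal sketch of one plausible reading: features are pooled from several depths of the embedding network, each depth gets its own classification head, and the per-layer losses are combined with learnable (adaptive) weights. The network architecture, the softmax weighting, and all names here are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: the layer choices, per-layer heads, and the
# softmax-weighted loss combination are assumptions, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLayerEmbedding(nn.Module):
    """Small CNN embedding model that exposes features from several depths."""
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32),
                                    nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64),
                                    nn.ReLU(), nn.MaxPool2d(2))
        self.block3 = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128),
                                    nn.ReLU(), nn.MaxPool2d(2))
        # One linear head per depth so low- and high-level features both contribute.
        self.heads = nn.ModuleList([nn.Linear(c, n_classes) for c in (32, 64, 128)])
        # Learnable logits that adaptively weight the per-layer losses.
        self.layer_logits = nn.Parameter(torch.zeros(3))

    def forward(self, x):
        feats = []
        for block in (self.block1, self.block2, self.block3):
            x = block(x)
            feats.append(F.adaptive_avg_pool2d(x, 1).flatten(1))
        return feats

    def hierarchical_loss(self, x, y):
        feats = self.forward(x)
        losses = torch.stack([F.cross_entropy(head(f), y)
                              for head, f in zip(self.heads, feats)])
        weights = torch.softmax(self.layer_logits, dim=0)  # adapts during training
        return (weights * losses).sum()

# Minimal usage on random tensors standing in for a few-shot training episode.
model = MultiLayerEmbedding(n_classes=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images, labels = torch.randn(20, 3, 84, 84), torch.randint(0, 5, (20,))
loss = model.hierarchical_loss(images, labels)
opt.zero_grad()
loss.backward()
opt.step()
```

The quantum genetic algorithm the paper uses for jointly optimizing the embedding model and classifier is not sketched here, since the abstract gives no detail about its encoding or fitness function.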