Gain parameter and dropout-based fine tuning of deep networks

2018 
Training of deep neural networks can involve two phases: unsupervised pre-training and supervised fine tuning. Unsupervised pre-training learns the initial parameter values of a deep network, while supervised fine tuning improves upon what has been learned in the pre-training stage. The backpropagation algorithm can be used for supervised fine tuning of deep neural networks. In this paper we evaluate the backpropagation-with-gain-parameter algorithm for fine tuning of deep networks. We further propose a modification in which the backpropagation-with-gain-parameter algorithm is integrated with the dropout technique, and we evaluate its performance in fine tuning deep networks. The effectiveness of fine tuning with the proposed technique is also compared against other variants of the backpropagation algorithm on benchmark datasets. The experimental results show that fine tuning deep networks with the proposed technique yields the most promising results among the studied methods on the tested datasets.
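For concreteness, below is a minimal sketch, not the authors' implementation, of one fine-tuning step that combines the two ingredients named in the abstract: a sigmoid activation with a gain parameter, f(net) = 1/(1 + e^{-c·net}), whose gain c also scales the derivative used in backpropagation, and inverted dropout on the hidden layer. The single-hidden-layer shape, the gain value, the drop probability, and the learning rate are all illustrative assumptions.

```python
# Sketch only (assumptions labeled): one backpropagation step with a
# gain-parameter sigmoid and inverted dropout on the hidden layer.
import numpy as np

rng = np.random.default_rng(0)

GAIN = 1.5    # assumed gain c in sigmoid(c * net); c = 1 is standard backprop
DROP_P = 0.5  # assumed dropout (drop) probability for hidden units
LR = 0.1      # assumed learning rate

def sigmoid(net, gain):
    # Gain-parameter sigmoid: gain > 1 steepens the slope around net = 0.
    return 1.0 / (1.0 + np.exp(-gain * net))

# Toy batch and weights standing in for the output of unsupervised pre-training.
X = rng.standard_normal((8, 4))                 # batch of 8 examples, 4 features
y = rng.integers(0, 2, size=(8, 1)).astype(float)
W1 = rng.standard_normal((4, 6)) * 0.1          # "pre-trained" hidden weights
W2 = rng.standard_normal((6, 1)) * 0.1          # output weights

# --- forward pass with inverted dropout on the hidden activations ---
h = sigmoid(X @ W1, GAIN)
mask = (rng.random(h.shape) > DROP_P) / (1.0 - DROP_P)
h_drop = h * mask
out = sigmoid(h_drop @ W2, GAIN)

# --- backward pass; the gain multiplies the sigmoid derivative c*s*(1-s) ---
err = out - y                                   # error signal for squared loss
d_out = err * GAIN * out * (1.0 - out)
d_h = (d_out @ W2.T) * mask * GAIN * h * (1.0 - h)  # same mask as forward

W2 -= LR * (h_drop.T @ d_out)
W1 -= LR * (X.T @ d_h)
```

With GAIN set to 1.0 and DROP_P set to 0.0 this reduces to a plain backpropagation step, which makes the two modifications easy to ablate against the baseline.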