Control of Overfitting in Hierarchical Neural Networks with DFP Formula

2008 
This paper discusses a method for avoiding the overfitting that occurs during the training of hierarchical neural networks with the DFP (Davidon-Fletcher-Powell) updating formula. Compared with training by the back-propagation algorithm, training with the DFP formula finishes very quickly, but the frequency of overfitting increases. To decrease this frequency, we have improved the algorithm so that the approximate inverse Hessian matrix is reinitialized every hundred learning steps. The improved algorithm is applied to the approximation of a one-variable sinc function as well as the design space of a two-variable optimization problem. The interval of the training procedure at which the approximate inverse Hessian matrix should be reinitialized is investigated. The interval exerts little influence on the probability of obtaining good training results, whereas the probability of obtaining good testing results depends on it. As a result, overfitting can be avoided successfully if an appropriate interval is maintained.
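The following is a minimal sketch of the mechanism the abstract describes: quasi-Newton training with the DFP update, where the approximate inverse Hessian is reset at a fixed interval. The names (`dfp_train`, `loss_grad`, `reset_interval`) are illustrative assumptions, a fixed learning rate stands in for whatever line search the paper uses, and resetting to the identity is an assumption about how the reinitialization is performed.

```python
import numpy as np

def dfp_train(loss_grad, w, n_steps=1000, reset_interval=100, lr=1.0):
    """Quasi-Newton training with the DFP update and periodic reset of the
    approximate inverse Hessian (a sketch; not the paper's exact procedure)."""
    n = w.size
    H = np.eye(n)                      # approximate inverse Hessian
    g = loss_grad(w)
    for k in range(n_steps):
        # Reinitialize H at fixed intervals, the abstract's proposed
        # remedy against overfitting (identity reset assumed here).
        if k > 0 and k % reset_interval == 0:
            H = np.eye(n)
        d = -H @ g                     # quasi-Newton search direction
        s = lr * d                     # step (fixed rate in place of a line search)
        w_new = w + s
        g_new = loss_grad(w_new)
        y = g_new - g                  # gradient difference
        sy = s @ y
        Hy = H @ y
        if sy > 1e-12:                 # skip update if curvature condition fails
            # DFP formula: H += s s^T / (s^T y) - H y y^T H / (y^T H y)
            H = H + np.outer(s, s) / sy - np.outer(Hy, Hy) / (y @ Hy)
        w, g = w_new, g_new
    return w
```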