Improving DNN Solution using Repeated Training

2019 
We present an approach for improving DNN solutions by running multiple instances of the same training procedure, identical in every respect except for the random seed used for weight initialization. We show that significant improvements in accuracy can be achieved with the proposed approach. Additionally, we tested two simple stopping criteria that aim to identify the best-performing networks at an early stage of training. This allows us to save the majority of computational resources, as we fully train only one network while terminating the other training instances in an early phase. We tested a combination of 20 repetitions of repeated training with global and gradual stopping rules. Repeated training with global stopping at approximately 1% of the average training time can beat an average-performing network, and stopping at approximately 10% of the average training time can significantly outperform the average network. Furthermore, this approach involves no additional manual work and requires only a small amount of additional computation.
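The abstract describes the procedure only at a high level. A minimal Python sketch of the global-stopping variant might look as follows; the helpers build_model, train_one_epoch, and validation_accuracy are hypothetical placeholders (not from the paper), and the early/total epoch counts merely stand in for the ~1-10% early-phase budget mentioned above.

```python
def repeated_training(build_model, train_one_epoch, validation_accuracy,
                      data, n_repeats=20, early_epochs=1, total_epochs=100):
    """Repeated training with a global stopping rule: train n_repeats
    seeds for a short early phase, keep only the best-performing
    instance, and train that one network to completion."""
    candidates = []
    for seed in range(n_repeats):
        model = build_model(seed)       # same architecture, different init seed
        for _ in range(early_epochs):   # early phase (~1-10% of the budget)
            train_one_epoch(model, data)
        candidates.append((validation_accuracy(model, data), seed, model))

    # Global stopping rule: discard all but the single best early performer.
    _, best_seed, best_model = max(candidates, key=lambda c: c[0])

    # Spend the remaining budget training only the selected instance.
    for _ in range(total_epochs - early_epochs):
        train_one_epoch(best_model, data)
    return best_model, best_seed
```

Under this reading, the total cost is roughly one full training run plus n_repeats short early phases, which matches the paper's claim of only a small amount of additional computation.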