Weight regularisation in particle swarm optimisation neural network training

2014 
Applying weight regularisation to gradient-descent-based neural network training methods such as backpropagation has been shown to improve the generalisation performance of a neural network. However, applications of weight regularisation to particle swarm optimisation remain very limited, despite promising results. This paper proposes adding a regularisation penalty term to the objective function of the particle swarm. The impact of different penalty terms on the performance of neural networks trained by both backpropagation and particle swarm optimisation is analysed. Swarm behaviour under weight regularisation is studied, showing that weight regularisation results in smaller neural network architectures and more convergent swarms.
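To illustrate the core idea, the sketch below (not the authors' code) shows how a weight-decay penalty can be folded into the fitness function that a particle swarm minimises when each particle encodes the weight vector of a neural network. The network layout, the penalty coefficient `lam`, and the data are illustrative assumptions only.

```python
import numpy as np

def nn_forward(weights, X, n_hidden=5):
    """Single-hidden-layer network; `weights` is one particle's position vector."""
    n_in = X.shape[1]
    w1 = weights[: n_in * n_hidden].reshape(n_in, n_hidden)
    b1 = weights[n_in * n_hidden : n_in * n_hidden + n_hidden]
    w2 = weights[n_in * n_hidden + n_hidden : -1]
    b2 = weights[-1]
    h = np.tanh(X @ w1 + b1)          # hidden-layer activations
    return h @ w2 + b2                # linear output unit

def regularised_fitness(weights, X, y, lam=0.01):
    """Training error plus an L2 (weight-decay) penalty term."""
    mse = np.mean((nn_forward(weights, X) - y) ** 2)
    penalty = lam * np.sum(weights ** 2)   # pushes redundant weights towards zero
    return mse + penalty

# Example: evaluate one randomly initialised particle on toy data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(20, 3)), rng.normal(size=20)
dim = 3 * 5 + 5 + 5 + 1                    # number of weights encoded per particle
print(regularised_fitness(rng.normal(size=dim), X, y))
```

In this formulation the swarm minimises the penalised fitness directly, so the same penalty terms studied for backpropagation (e.g. L2 weight decay) can be compared under particle swarm optimisation without changing the update rules of the swarm itself.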