Some Experiments on Training Radial Basis Functions by Gradient Descent

2004 
In this paper we present experiments comparing different training algorithms for Radial Basis Function (RBF) neural networks. In particular, we compare the classical training scheme, which consists of an unsupervised training of the centers followed by a supervised training of the output weights, with the fully supervised training by gradient descent that has recently been proposed in several papers. We conclude that fully supervised training generally performs better. We also compare batch training with online training in the fully supervised case and conclude that online training reduces the number of iterations and therefore increases the speed of convergence.
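To make the comparison concrete: in the fully supervised scheme, the centers c_j, widths σ_j, and output weights w_j of a Gaussian RBF network f(x) = Σ_j w_j exp(−||x − c_j||² / (2σ_j²)) are all adjusted by gradient descent on the squared error, whereas the classical scheme fixes the centers with an unsupervised method (e.g. clustering) and fits only the weights. The sketch below illustrates one online (per-pattern) gradient update; it is a minimal illustration under assumed choices (Gaussian basis, squared-error loss, learning rate, random initialization), not the paper's implementation. A batch variant would accumulate these gradients over all patterns and apply a single update per epoch.

```python
import numpy as np

def rbf_forward(X, C, sigma, w):
    """Output of a Gaussian RBF network for inputs X of shape (N, D)."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)  # (N, J) squared distances
    phi = np.exp(-d2 / (2.0 * sigma ** 2))                    # (N, J) basis activations
    return phi @ w, phi, d2

def online_step(x, y, C, sigma, w, lr=0.05):
    """One stochastic-gradient update of centers, widths, and weights on one pattern."""
    yhat, phi, d2 = rbf_forward(x[None, :], C, sigma, w)
    e = float(yhat[0] - y)         # error on this pattern; loss is 0.5 * e**2
    g = e * w * phi[0]             # shared factor e * w_j * phi_j, shape (J,)
    # Gradients of the Gaussian basis: dphi/dc_j = phi_j (x - c_j)/sigma_j^2,
    # dphi/dsigma_j = phi_j ||x - c_j||^2 / sigma_j^3
    w -= lr * e * phi[0]                                               # output weights
    C -= lr * g[:, None] * (x[None, :] - C) / (sigma[:, None] ** 2)    # centers
    sigma -= lr * g * d2[0] / sigma ** 3                               # widths
    return 0.5 * e ** 2

# Tiny usage example: fit y = sin(x) with 5 basis functions (all values illustrative).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
Y = np.sin(X[:, 0])
C = rng.uniform(-3, 3, size=(5, 1))   # centers initialized at random here,
sigma = np.ones(5)                    # not by clustering as in the classical scheme
w = np.zeros(5)
for epoch in range(200):
    for n in rng.permutation(len(X)):  # online: update after every pattern
        online_step(X[n], Y[n], C, sigma, w)
```

Because the online variant updates the parameters after every pattern rather than once per pass through the data, it typically needs fewer epochs to reach a given error, which matches the paper's conclusion on convergence speed.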