Convergence of a Neural Network Classifier
1990
In this paper, we prove that the vectors in the LVQ learning algorithm converge. We do this by showing that the learning algorithm performs stochastic approximation. Convergence is then obtained by identifying the appropriate conditions on the learning rate and on the underlying statistics of the classification problem. We also present a modification to the learning algorithm which we argue results in convergence of the LVQ error to the Bayesian optimal error as the appropriate parameters become large.
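The paper's setting can be illustrated with a standard LVQ1 update loop. The sketch below is not the authors' exact algorithm, only a minimal illustration of the structure the abstract describes: a winner-take-all prototype update driven by a decreasing learning rate of the Robbins-Monro type (sum of rates diverges, sum of squared rates is finite), which is the kind of condition under which stochastic-approximation convergence arguments apply. The schedule `a / (b + t)` and the function name are illustrative assumptions.

```python
import numpy as np

def lvq1(X, y, prototypes, proto_labels, n_epochs=50, a=1.0, b=10.0):
    """Illustrative LVQ1 sketch (not the paper's exact algorithm).

    The learning rate alpha_t = a / (b + t) satisfies the standard
    stochastic-approximation conditions: sum(alpha_t) diverges while
    sum(alpha_t**2) is finite.
    """
    W = prototypes.astype(float).copy()
    t = 0
    for _ in range(n_epochs):
        for xi, yi in zip(X, y):
            t += 1
            alpha = a / (b + t)
            # Winner-take-all: find the nearest prototype.
            k = np.argmin(np.linalg.norm(W - xi, axis=1))
            if proto_labels[k] == yi:
                W[k] += alpha * (xi - W[k])  # same class: attract
            else:
                W[k] -= alpha * (xi - W[k])  # different class: repel
    return W
```

With well-separated classes, the winning prototypes drift toward their class means as the step sizes decay, which is the convergent behavior the paper establishes formally.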
Keywords:
- Population-based incremental learning
- Machine learning
- Convergence (mathematics)
- Artificial intelligence
- Mathematical optimization
- Learning classifier system
- Artificial neural network
- Learning vector quantization
- Wake-sleep algorithm
- Stability (learning theory)
- Mathematics
- Pattern recognition
- Stochastic approximation
- Computer science
- Bayesian probability