Computational learning theory applied to discrete-time cellular neural networks
1994
The theory of probably approximately correct (PAC) learning is applied to discrete-time cellular neural networks (DTCNNs). The Vapnik-Chervonenkis (VC) dimension of the DTCNN is determined. Considering two different operation modes of the network, an upper bound on the sample size required for reliable generalization of the DTCNN architecture is given.
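The abstract refers to a sample-size bound derived from the VC dimension. The paper's exact bound is not reproduced here, but a minimal sketch of the standard VC-dimension-based PAC sample-complexity bound (Blumer et al., 1989) illustrates the kind of estimate involved; the VC dimension value used below is a hypothetical placeholder, not the one derived for DTCNNs in the paper.

```python
import math

def pac_sample_bound(vc_dim, epsilon, delta):
    """Sufficient sample size for PAC learning a hypothesis class of
    VC dimension vc_dim to accuracy epsilon with confidence 1 - delta,
    using the classical bound of Blumer et al. (1989):
        m >= (4 * log2(2/delta) + 8 * vc_dim * log2(13/epsilon)) / epsilon
    """
    return math.ceil((4 * math.log2(2 / delta)
                      + 8 * vc_dim * math.log2(13 / epsilon)) / epsilon)

# Hypothetical example: a network class with VC dimension 100,
# learned to 90% accuracy with 95% confidence.
m = pac_sample_bound(vc_dim=100, epsilon=0.1, delta=0.05)
print(m)
```

The bound grows linearly in the VC dimension and roughly as (1/ε)·log(1/ε) in the accuracy parameter, which is why determining the VC dimension of an architecture directly yields a generalization guarantee.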