APPROXIMATION CAPABILITIES OF NEURAL NETWORKS USING SAMPLING FUNCTIONS
1991
Learning an input-output relationship from example data can be regarded as approximating a mapping function from input to output. From this point of view, we propose a neural network architecture that uses sampling functions as hidden units to reconstruct a multi-dimensional mapping function. Because this architecture is grounded in sampling theory, the number of hidden units required is determined by the highest frequency present in the training data, and faster learning can be obtained.
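As an illustration of this principle, the following is a minimal one-dimensional sketch (not the paper's exact formulation): the hidden units are sinc functions placed on a regular grid, and by the Shannon sampling theorem the grid spacing, and hence the number of hidden units over a given interval, is fixed by the highest frequency f_max of the data. The interval and the value of f_max are assumptions chosen for the example.

    import numpy as np

    def sinc_network(x, centers, weights, T):
        """One hidden layer of sampling (sinc) functions on a regular
        grid of spacing T, followed by a linear output layer.
        np.sinc(u) = sin(pi*u) / (pi*u)."""
        h = np.sinc((x[:, None] - centers[None, :]) / T)  # hidden activations
        return h @ weights                                # linear output

    # Assumed example: data band-limited to f_max = 2 Hz on [0, 1].
    f_max = 2.0

    # The sampling theorem requires grid spacing T <= 1 / (2 * f_max),
    # so the hidden-unit count on the interval follows from f_max alone.
    T = 1.0 / (2.0 * f_max)
    centers = np.arange(0.0, 1.0 + T, T)   # one hidden unit per grid point
    print(f"hidden units required: {len(centers)}")  # 5 for this example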
3 Summary
We have proposed a neural network architecture that uses sampling functions in the hidden layer, together with its learning algorithm. Based on the idea that learning the input-output relationship of training data can be regarded as synthesizing an approximation of a mapping function, we transformed the mapping function into a representation composed of sampling functions acting as basis functions, and then devised a learning algorithm for irregularly sampled training data. For regularly sampled training data, no learning algorithm is needed: every weight in the network can be determined directly. Simulations have shown that our architecture makes dramatically fast learning possible.
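The self-contained sketch below illustrates both cases in one dimension. For regularly sampled data the weights are read off directly from the sample values, as in the Shannon interpolation formula; for irregularly sampled data the weights must be learned, and a linear least-squares fit is used here as a stand-in for the paper's algorithm, which is not reproduced. The target function, grid spacing, and sample counts are assumptions for illustration.

    import numpy as np

    # Hidden layer of sinc (sampling) units on a regular grid of spacing T.
    def sinc_layer(x, centers, T):
        return np.sinc((x[:, None] - centers[None, :]) / T)

    target = lambda x: np.sin(2 * np.pi * 1.5 * x)  # assumed band-limited target
    T = 0.2                                         # spacing below Nyquist (assumed)
    centers = np.arange(0.0, 1.0 + T, T)
    x_test = np.linspace(0.0, 1.0, 200)

    # Regularly sampled training data: each weight is simply the target
    # value at its unit's center, so no iterative learning is run.
    w_regular = target(centers)

    # Irregularly sampled training data: the weights are no longer the
    # sample values, so they are fitted; least squares stands in here
    # for the paper's learning algorithm (an assumption, not their method).
    rng = np.random.default_rng(0)
    x_irr = np.sort(rng.uniform(0.0, 1.0, size=40))
    w_irr, *_ = np.linalg.lstsq(sinc_layer(x_irr, centers, T),
                                target(x_irr), rcond=None)

    for name, w in [("regular", w_regular), ("irregular", w_irr)]:
        err = np.max(np.abs(sinc_layer(x_test, centers, T) @ w - target(x_test)))
        print(f"max reconstruction error ({name} samples): {err:.3e}")

Reading the weights straight from the samples in the regular case removes iterative training entirely, which is consistent with the fast learning the summary reports.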