Implementation of a neural network with approximation functions

2003 
The purpose of this work is to simulate a neural network with non-linear activation functions. The non-linear functions are first simulated in Microsoft Visual Studio C++ 6.0 to evaluate their precision and are then implemented on programmable logic devices. The network is designed to accept very small input values. The multiplication between input values and weight values is performed with add-logarithm and exponential functions, and all non-linear functions are approximated by piecewise-linear functions built from shift-add blocks.
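As a rough illustration of the two techniques mentioned in the abstract, the following sketch (written in standard C++ rather than the original Visual C++ 6.0 project) shows a log-domain multiply built from add, logarithm, and exponential operations, and a piecewise-linear sigmoid whose slopes are all powers of two, so a fixed-point version would need only shift-add blocks. The segment breakpoints follow the common PLAN approximation and are an assumption; the paper's own coefficients may differ.

```cpp
// Hypothetical sketch of the two ideas described in the abstract:
//  (1) multiplying an input by a weight in the log domain (add logs, then exp),
//  (2) a piecewise-linear sigmoid with power-of-two slopes (shift-add friendly).
// Breakpoints below are the common PLAN scheme, assumed here for illustration.
#include <cmath>
#include <cstdio>

// (1) Log-domain multiply: x * w = exp(ln|x| + ln|w|), sign handled separately.
//     In hardware the log and exp would themselves be shift-add approximations.
double log_domain_multiply(double x, double w) {
    if (x == 0.0 || w == 0.0) return 0.0;
    double sign = ((x < 0.0) != (w < 0.0)) ? -1.0 : 1.0;
    return sign * std::exp(std::log(std::fabs(x)) + std::log(std::fabs(w)));
}

// (2) Piecewise-linear sigmoid: every slope is a power of two (1/4, 1/8, 1/32),
//     so a fixed-point implementation needs only shifts and adds.
double pwl_sigmoid(double x) {
    double ax = std::fabs(x);
    double y;
    if      (ax >= 5.0)   y = 1.0;
    else if (ax >= 2.375) y = 0.03125 * ax + 0.84375;  // slope 1/32
    else if (ax >= 1.0)   y = 0.125   * ax + 0.625;    // slope 1/8
    else                  y = 0.25    * ax + 0.5;      // slope 1/4
    return (x < 0.0) ? 1.0 - y : y;
}

int main() {
    // Compare the approximations against exact values for a very small input,
    // matching the paper's emphasis on small input magnitudes.
    double x = 0.001, w = 0.75;
    std::printf("log-domain product: %.9f (exact %.9f)\n",
                log_domain_multiply(x, w), x * w);
    std::printf("pwl sigmoid(0.3):   %.6f (exact %.6f)\n",
                pwl_sigmoid(0.3), 1.0 / (1.0 + std::exp(-0.3)));
    return 0;
}
```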