A custom associative chip used as a building block for a software reconfigurable multi-network simulator

1995 
Artificial neural networks open wide areas of investigation leading to efficient information processing systems, which are expected to behave in a way somewhat similar to their biological counterparts. Nevertheless, most efforts have been devoted to finding paradigms, such as multi-layer or dynamic networks, and to improving their internal algorithms and structures. These efforts have produced very interesting results, but a fundamental property of neural networks remains: unavoidable residual errors. These errors, which result from several constraints, both theoretical and practical, make the use of a single neural network for decision processes or pattern recognition safe only in the long run, that is, once many experiments have been performed on real, previously unseen data sets and the results have been shown to conform statistically to the training. Similarly, the complexity of what any single network can perform is limited: making the learning rules and internal structure more complicated to cope with more difficult tasks leads to networks that are not only impossible to implement, but even prohibitively difficult to simulate.
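The argument that any single network carries an irreducible residual error is what motivates combining several networks in one simulator. The following sketch is illustrative only and is not taken from the paper: the toy classifiers, the weight noise, and all names are hypothetical. It shows the general idea that a majority vote over several imperfect networks can yield a lower residual error rate than any one network alone.

```python
# Illustrative sketch (not the paper's method): majority voting across
# several imperfect networks versus the residual error of a single one.
import numpy as np

rng = np.random.default_rng(0)

def single_network(x, weights):
    """A hypothetical one-layer binary classifier standing in for one trained network."""
    return int(x @ weights > 0)

def truth(x):
    """Ground-truth label for the toy task: sign of the first feature."""
    return int(x[0] > 0)

# Three imperfect "networks": each weight vector is a noisy copy of the ideal one.
ideal = np.array([1.0, 0.0, 0.0])
networks = [ideal + rng.normal(scale=0.6, size=3) for _ in range(3)]

samples = rng.normal(size=(10_000, 3))
single_errors = 0
ensemble_errors = 0
for x in samples:
    votes = [single_network(x, w) for w in networks]
    majority = int(sum(votes) >= 2)            # simple majority vote across networks
    single_errors += votes[0] != truth(x)      # residual error of one network alone
    ensemble_errors += majority != truth(x)    # residual error of the combination

print(f"single-network error rate : {single_errors / len(samples):.3f}")
print(f"majority-vote error rate  : {ensemble_errors / len(samples):.3f}")
```

Under these toy assumptions the combined decision typically shows a lower error rate than the first network alone, which is the kind of behaviour a multi-network simulator is intended to exploit.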