Bounds on number of hidden neurons of multilayer perceptrons in classification and recognition

1990 
The use of multilayer perceptrons (MLPs) in the realization of arbitrary functions mapping a finite subset of E^n into E^m is investigated. A least upper bound on the number of hidden neurons needed to solve this problem is derived. It is shown that as long as the number of hidden neurons exceeds this bound, an MLP can realize arbitrary switching functions without requiring learning algorithms. For classification problems, an upper bound is derived that is tighter than those obtained under the common assumption that the input set is in general position. In addition, a lower bound is derived for recognition problems.
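The claim that a sufficiently wide hidden layer can realize an arbitrary mapping on a finite input set without any learning can be illustrated by a classical direct construction (a hedged sketch, not necessarily the paper's own proof): project the N input points onto a line, place N-1 threshold neurons between consecutive projections, and read off the output weights as the jumps between consecutive target values. All names below are illustrative.

```python
import numpy as np

# Sketch: N points in R^n realized exactly by one hidden layer of
# N-1 threshold neurons, constructed directly (no training).
rng = np.random.default_rng(0)
N, n = 5, 3
X = rng.normal(size=(N, n))   # N distinct input points in R^n
y = rng.normal(size=N)        # arbitrary real targets

# Project onto a random direction; with probability 1 the projected
# values are distinct, so the points can be ordered along a line.
w = rng.normal(size=n)
p = X @ w
order = np.argsort(p)
p_sorted, y_sorted = p[order], y[order]

# Hidden neuron j fires iff w.x exceeds a threshold b_j placed
# midway between consecutive projected points: step(w.x - b_j).
b = (p_sorted[:-1] + p_sorted[1:]) / 2        # N-1 thresholds
H = (p[:, None] > b[None, :]).astype(float)   # N x (N-1) hidden outputs

# The point of sorted rank k activates exactly its first k hidden
# neurons, so output weights are the jumps between consecutive targets.
v0 = y_sorted[0]              # output bias
v = np.diff(y_sorted)         # N-1 output weights
y_hat = v0 + H @ v

assert np.allclose(y_hat, y)  # exact realization, no learning used
```

The same construction realizes arbitrary switching functions by taking the targets in {0, 1}, which matches the abstract's observation that no learning algorithm is required once the hidden layer is wide enough.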