A convergence analysis for neural networks with constant learning rates and non-stationary inputs

1995 
A novel deterministic approach to the convergence analysis of (stochastic) temporal neural networks is presented. The link between the two is a new concept of time-average invariance (TAI), a property of deterministic signals that also applies to stochastic signals. With this concept, the conventional ODE method can be extended to the case of a constant learning rate. Under weaker conditions, not requiring mutual independence, it is shown that a temporal neural network is ε-convergent to x^0 if its associated (autonomous) equations are asymptotically stable at x^0. This result is then extended to the case of perturbed TAI signals. A temporal neural network for blind signal separation is used as an example.
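The flavor of the result can be illustrated with a minimal sketch (not the paper's network): a scalar LMS-style update with a constant learning rate, driven by a deterministic bounded input whose time averages exist, standing in for a TAI signal. The symbols `a_true`, `mu`, and the quasi-periodic input below are illustrative assumptions, not taken from the paper. Because the associated averaged ODE dw/dt = avg(x^2)(a_true - w) is asymptotically stable at w = a_true, the iterate stays in a small neighbourhood of that point, in the spirit of ε-convergence:

```python
import math

a_true = 2.0   # target parameter of the averaged ODE's stable point (illustrative)
mu = 0.05      # constant learning rate, kept fixed rather than decaying
w = 0.0        # initial weight

for n in range(2000):
    # Deterministic, bounded input with well-defined time averages
    # (a simple stand-in for a time-average-invariant signal).
    x = math.sin(0.3 * n) + 0.5
    d = a_true * x                 # desired output
    # Constant-step stochastic-approximation update (LMS form):
    # the error e = w - a_true contracts by (1 - mu * x^2) each step.
    w += mu * (d - w * x) * x

print(abs(w - a_true) < 0.1)
```

With mu * x^2 well below 1 throughout, the error factor (1 - mu * x^2) stays in (0, 1) on average, so w settles near a_true; a decaying learning rate is not needed, which is the regime the paper's analysis addresses.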