A Least-Squares Derivation of Output Error Feedback for Direct State Estimate Correction in Recurrent Neural Networks

2021 
Iterative least-mean-square learning algorithms for recurrent neural networks, such as dynamic backpropagation, have been derived to adapt network weights, but they do not directly correct the state estimate: the network's estimate of the unknown system's state is propagated without direct compensation. This paper proposes a more general least-mean-square problem whose solution yields both dynamic backpropagation weight adjustment and linear output error feedback correction of the state estimate. The resulting topology is that of an extended Kalman filter with a feedforward network generating the state predictions. The output error feedback eliminates the need to regulate the state estimate indirectly through weight adjustments.
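The topology the abstract describes can be sketched as a predictor-corrector loop: a feedforward network produces the one-step state prediction, and a linear output-error feedback gain corrects that prediction directly, as in an extended Kalman filter or Luenberger observer. The following is a minimal illustrative sketch, not the paper's implementation; the network `f_net`, the output map `C`, and the fixed gain `L` are all assumed placeholders (the paper derives the gain from a least-squares problem, which is not reproduced here).

```python
import numpy as np

# Illustrative sketch only: fixed stand-in values for the network weights W,
# output map C, and output-error feedback gain L. None of these come from
# the paper; in the paper the correction gain arises from a least-squares
# derivation.
W = np.array([[0.30, -0.10],
              [0.20,  0.25]])      # stand-in "network" weights
C = np.array([[1.0, 0.0]])         # output map: y = C x
L = np.array([[0.5], [0.1]])       # linear output-error feedback gain

def f_net(x):
    """Feedforward network generating the one-step state prediction."""
    return np.tanh(W @ x)

def step(x_hat, y_meas):
    """Predict with the network, then correct the state estimate directly
    with linear output error feedback (predictor-corrector form)."""
    x_pred = f_net(x_hat)                   # network state prediction
    y_pred = C @ x_pred                     # predicted output
    return x_pred + L @ (y_meas - y_pred)   # direct state-estimate correction

# Run the corrected estimator against a simple "true" system that the
# network happens to model exactly, starting from a wrong initial estimate.
x_true = np.array([0.8, -0.4])
x_hat = np.zeros(2)
for _ in range(50):
    x_true = np.tanh(W @ x_true)
    y = C @ x_true
    x_hat = step(x_hat, y)

err = float(np.linalg.norm(x_true - x_hat))
```

Note the contrast with weight-only adaptation: here the estimate is pulled toward the measurements at every step through `L`, rather than only indirectly through gradient updates to `W`.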