Partially affine invariant back propagation

2016 
A novel one-stage batch training algorithm is proposed in which Newton's method is used to find gains on inputs and hidden unit activations. The method has far less computational complexity than Levenberg-Marquardt (LM) because its Hessian is much smaller than that of LM. Numerical results show that the method converges faster and is more stable than conjugate gradient and BFGS.
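The core idea of optimizing a small set of gains with Newton's method can be illustrated with a toy sketch. The code below is an assumption-laden illustration, not the paper's algorithm: it fixes the weights of a one-hidden-layer network with a linear output, attaches one gain per hidden unit, and takes a single Newton step over those gains. Because the error is quadratic in the gains under these assumptions, the Hessian is only 6x6 here, far smaller than a full LM Hessian over all weights; all variable names are illustrative.

```python
import numpy as np

# Hypothetical sketch: one gain per hidden unit of a fixed network,
# optimized with a single Newton step (not the paper's exact method).
rng = np.random.default_rng(0)

X = rng.normal(size=(50, 4))      # 50 samples, 4 inputs (toy data)
t = rng.normal(size=50)           # scalar targets
W1 = rng.normal(size=(4, 6))      # input -> hidden weights (held fixed)
w2 = rng.normal(size=6)           # hidden -> output weights (held fixed)

H = np.tanh(X @ W1)               # hidden activations, shape (50, 6)
g = np.ones(6)                    # initial gains, one per hidden unit

# With a linear output y = (H * g) @ w2 = A @ g, the squared error
# E(g) = ||A g - t||^2 is quadratic in g, so the gradient and Hessian
# over the gains are exact and tiny (6-vector and 6x6 matrix):
A = H * w2                        # broadcast w2 across samples
grad = 2.0 * A.T @ (A @ g - t)
Hess = 2.0 * A.T @ A

g_new = g - np.linalg.solve(Hess, grad)   # one Newton step

err_before = np.sum((A @ g - t) ** 2)
err_after = np.sum((A @ g_new - t) ** 2)
```

Under these assumptions the single Newton step lands on the optimal gains for the fixed weights, which is why `err_after` cannot exceed `err_before`; the Hessian solved here has one row per gain rather than one per network weight, which is the source of the complexity advantage over LM that the abstract claims.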