Deterministic Convergence of Wirtinger-Gradient Methods for Complex-Valued Neural Networks

2017 
In this paper, we establish deterministic convergence of Wirtinger-gradient methods for a class of complex-valued neural networks under the assumption of a limited number of training samples. This differs from probabilistic convergence results, which assume that a large number of training samples are available. Weak and strong convergence results for Wirtinger-gradient methods are proved, showing that the Wirtinger gradient of the error function tends to zero and the weight sequence converges to a fixed value. An upper bound on the learning rate is also provided to guarantee the deterministic convergence of Wirtinger-gradient methods. Simulations are provided to support the theoretical findings.
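To make the idea concrete, the following minimal sketch (not the paper's exact setup) performs Wirtinger-gradient descent on a single complex weight w minimizing the real-valued error E(w) = |wx − y|², whose conjugate Wirtinger derivative is ∂E/∂w̄ = (wx − y)·x̄; the sample values x, y, the learning rate, and the step count are illustrative assumptions.

```python
def wirtinger_step(w, x, y, lr):
    """One steepest-descent step w <- w - lr * dE/d(conj(w)).

    For the real-valued error E = |w*x - y|^2, the conjugate
    Wirtinger derivative is dE/d(conj(w)) = (w*x - y) * conj(x).
    """
    e = w * x - y
    return w - lr * e * x.conjugate()

def train(x, y, w0=0j, lr=0.1, steps=200):
    """Iterate the Wirtinger-gradient update from w0 (illustrative)."""
    w = w0
    for _ in range(steps):
        w = wirtinger_step(w, x, y, lr)
    return w

# With x = 1+1j and y = 2-1j, the minimizer is w = y/x = 0.5 - 1.5j.
w = train(1 + 1j, 2 - 1j)
print(w)
```

In this toy problem the error contracts by the factor |1 − lr·|x|²| per step, so convergence requires lr < 2/|x|², echoing the paper's theme that an upper bound on the learning rate is needed for deterministic convergence.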