Improvement of the convergence speed of a discrete-time recurrent neural network for quadratic optimization with general linear constraints

2014 
In this work, a specific preconditioning technique is developed to improve the convergence speed of a discrete-time recurrent neural network for quadratic optimization with general linear constraints. The discrete-time network is a recently published model with the broadest range of applicability to various optimization problems and constraints. The proposed preconditioning technique is shown to improve the convergence speed of the model significantly, and thus contributes to enhancing the applicability of the model to these problems. In addition to the theoretical analysis, extensive experimental results are presented to illustrate the technique and to demonstrate the significant improvement attained.
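To give a concrete sense of why preconditioning can speed up such discrete-time iterations, the sketch below applies a generic Jacobi (diagonal) preconditioner to a box-constrained quadratic program solved by a simple projection-type recurrent iteration. This is a minimal illustration under assumed ingredients, not the specific network model or the preconditioning technique developed in the paper: the matrix Q, vector c, box bounds, the iteration x_{k+1} = P(x_k - alpha*(Q x_k + c)), and the diagonal scaling are all illustrative choices.

```python
# Hedged sketch: Jacobi (diagonal) preconditioning of a box-constrained QP
# solved by a generic discrete-time projection iteration. Illustrative only;
# not the paper's recurrent network model or its preconditioning technique.
import numpy as np

def project_box(x, lo, hi):
    """Project x componentwise onto the box [lo, hi]."""
    return np.clip(x, lo, hi)

def recurrent_qp(Q, c, lo, hi, alpha, tol=1e-6, max_iter=200_000):
    """Iterate x_{k+1} = P_box(x_k - alpha*(Q x_k + c)) until the update is small."""
    x = np.zeros_like(c)
    for k in range(max_iter):
        x_new = project_box(x - alpha * (Q @ x + c), lo, hi)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Ill-conditioned quadratic objective 0.5*x'Qx + c'x with box constraints.
rng = np.random.default_rng(0)
n = 30
scale = 10.0 ** rng.uniform(-1.0, 1.0, n)              # widely spread scales
M = rng.standard_normal((n, n))
Q = np.diag(scale) @ (M @ M.T / n + np.eye(n)) @ np.diag(scale)  # SPD, poorly conditioned
c = rng.standard_normal(n)
lo, hi = -np.ones(n), np.ones(n)

# Unpreconditioned run: the step size is limited by the largest eigenvalue of Q.
alpha = 1.0 / np.linalg.eigvalsh(Q)[-1]
x_plain, iters_plain = recurrent_qp(Q, c, lo, hi, alpha)

# Jacobi preconditioning via the change of variables y = D^{1/2} x with D = diag(Q),
# which keeps the feasible set a (rescaled) box and clusters the spectrum of the
# transformed Hessian D^{-1/2} Q D^{-1/2}.
d = np.sqrt(np.diag(Q))
Qp = Q / np.outer(d, d)
cp = c / d
alpha_p = 1.0 / np.linalg.eigvalsh(Qp)[-1]
y, iters_pre = recurrent_qp(Qp, cp, lo * d, hi * d, alpha_p)
x_pre = y / d                                          # map back to original variables

print(f"iterations without preconditioning: {iters_plain}")
print(f"iterations with Jacobi preconditioning: {iters_pre}")
print("difference between the two solutions:", np.linalg.norm(x_plain - x_pre))
```

Because the change of variables is a bijection that maps the box onto another box, both runs converge to the same minimizer of the original problem; the preconditioned iteration simply operates on a Hessian with a much smaller condition number, which is what permits the larger effective step and the reduced iteration count.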