Convex quadratic optimization on artificial neural networks

1994 
We present continuous-valued Hopfield recurrent neural networks onto which convex quadratic optimization problems are mapped. We consider two different convex quadratic programs, each of which is mapped to a different neural network. Activation functions are shown to play a key role in the mapping under each model. The class of activation functions that can be used in this mapping is characterized in terms of the required properties, and it is shown that the first derivatives of both penalty and barrier functions belong to this class. The trajectories of the dynamics under the first model are shown to be closely related to the affine-scaling trajectories of interior-point methods, while the trajectories under the second model correspond to projected steepest-descent pathways.
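To make the penalty-function variant of such a mapping concrete, the sketch below is a minimal illustration under stated assumptions, not the paper's exact network: the function name hopfield_qp_flow, the quadratic penalty, and all parameter values are ours. It Euler-integrates a Hopfield-style gradient flow for a nonnegatively constrained convex quadratic program, with the penalty derivative mu*min(x, 0) playing the role of the activation-function nonlinearity.

```python
import numpy as np

def hopfield_qp_flow(Q, c, mu=50.0, dt=1e-3, steps=20000):
    """Illustrative sketch (not the paper's model): Euler-integrate a
    penalty-based gradient flow for
        min 0.5 x'Qx + c'x   s.t.   x >= 0,
    with Q assumed positive semidefinite."""
    x = np.ones(len(c))  # strictly positive starting point
    for _ in range(steps):
        # Gradient of the penalized objective; the term mu*min(x, 0) is
        # the first derivative of the quadratic penalty (mu/2)*min(x, 0)^2
        # and acts as the activation nonlinearity of the network.
        grad = Q @ x + c + mu * np.minimum(x, 0.0)
        x = x - dt * grad  # forward-Euler step of dx/dt = -grad
    return x

# Example: min 0.5*(x1^2 + x2^2) - x1 + 2*x2  s.t.  x >= 0
Q = np.eye(2)
c = np.array([-1.0, 2.0])
print(hopfield_qp_flow(Q, c))  # approx [1., -0.039]; tends to [1, 0] as mu grows
```

As the penalty weight mu increases, the equilibrium of the flow approaches the constrained minimizer; a barrier-based variant would instead keep the trajectory strictly interior to the feasible region, mirroring the connection to affine-scaling interior-point trajectories noted in the abstract.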