α²-dependent reciprocally convex inequality for stability and dissipativity analysis of neural networks with time-varying delay

2021 
Abstract This paper studies the stability and dissipativity problems for neural networks (NNs) with time-varying delay. Firstly, an α²-dependent reciprocally convex inequality is presented, which unifies the traditional reciprocally convex inequality and the improved reciprocally convex inequality as its special cases. Secondly, by using the α²-dependent reciprocally convex inequality, a tight upper bound on the time-derivative of the Lyapunov-Krasovskii functional (LKF) is obtained; the analysis and calculation are simplified since no additional nonlinear terms are introduced. As a result, sufficient conditions are derived that ensure the stability and dissipativity of delayed NNs. Finally, simulations are provided to verify the superiority of the presented method.
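For reference, the traditional reciprocally convex inequality mentioned above (the reciprocally convex combination lemma), which the proposed α²-dependent inequality recovers as a special case, can be stated as the following bound for a matrix R ≻ 0 and a scalar α ∈ (0, 1); the α²-dependent form itself is the paper's contribution and is not reproduced here.

% Traditional reciprocally convex combination lemma (the special case the paper unifies);
% holds for any matrix S making the block matrix positive semidefinite.
\[
  \frac{1}{\alpha}\, x_1^{\top} R\, x_1 \;+\; \frac{1}{1-\alpha}\, x_2^{\top} R\, x_2
  \;\ge\;
  \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}^{\top}
  \begin{bmatrix} R & S \\ S^{\top} & R \end{bmatrix}
  \begin{bmatrix} x_1 \\ x_2 \end{bmatrix},
  \qquad
  \begin{bmatrix} R & S \\ S^{\top} & R \end{bmatrix} \succeq 0 .
\]

Bounds of this type are used to estimate the integral terms arising in the time-derivative of the LKF, which is how the resulting stability and dissipativity conditions become linear matrix inequalities.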