An attempt at applying the Lagrange-type 1-step-ahead numerical differentiation method to optimize the SGD algorithm in deep learning
2021
The update rule of the original stochastic gradient descent (SGD) algorithm matches the definition of the forward Euler method, which has some inherent shortcomings. To improve on the original SGD algorithm, a parameter-update algorithm based on the Lagrange-type 1-step-ahead numerical differentiation method is therefore presented and validated. Intuitively, the new algorithm should fix some of the inherent flaws of SGD. However, a series of experimental results shows that the Lagrange-type 1-step-ahead numerical differentiation method cannot be applied to reduce the computational error of SGD; moreover, the method causes the model to fail to converge. Finally, on the basis of comparative experiments, this divergence phenomenon is analyzed and explained.
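The abstract frames the plain SGD update as one forward-Euler step of the gradient-flow ODE dθ/dt = -∇L(θ). Below is a minimal sketch of that correspondence on a simple quadratic loss, together with a hypothetical two-step variant standing in for a multi-step (Lagrange-type) scheme; the coefficients `a` and `b` are illustrative placeholders, not the ones derived in the paper.

```python
import numpy as np

def grad(theta):
    # Gradient of the toy loss f(theta) = 0.5 * ||theta||^2.
    return theta

def sgd_forward_euler(theta0, eta=0.1, steps=100):
    # Plain SGD: theta_{k+1} = theta_k - eta * grad(theta_k),
    # i.e. one forward-Euler step of d(theta)/dt = -grad f(theta).
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        theta = theta - eta * grad(theta)
    return theta

def sgd_two_step(theta0, eta=0.1, steps=100, a=1.5, b=-0.5):
    # Hypothetical multi-step update in the spirit of a Lagrange-type
    # scheme: the next iterate combines the current and previous iterates
    # with the current gradient. Coefficients satisfy a + b = 1 for
    # consistency, but are NOT the paper's; other choices can diverge,
    # which is the failure mode the abstract reports.
    prev = np.asarray(theta0, dtype=float)
    curr = prev - eta * grad(prev)  # bootstrap with one Euler step
    for _ in range(steps - 1):
        curr, prev = a * curr + b * prev - eta * grad(curr), curr
    return curr
```

On this quadratic both updates contract toward the minimizer at 0; the point of the sketch is only the structural difference between a one-step (Euler) and a two-step update rule.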