A novel neural network for realizing associative memory is proposed in this paper. Its main advantage is that a pattern is a prototype if and only if it is stored as an asymptotically stable equilibrium point. Furthermore, the basin of attraction of each desired memory pattern is reasonably distributed (in the Hamming-distance sense), and an equilibrium point that is not asymptotically stable corresponds precisely to a state that cannot be recognized. The proposed network also has a high storage capacity as well as the capability of learning and forgetting, and all of its components can be implemented in hardware. The network considered is a very simple linear system with a projection onto a closed convex set spanned by the prototype patterns. The performance of the proposed network is demonstrated through simulation of a numerical example.
Almost all continuous neural networks currently available for associative memory are based on optimizing a quadratic function, with each pattern to be recognized used as an initial point of the network. The disadvantage is that their structure is complicated and their circuit implementations are difficult to reconcile with the theoretical analysis. In this paper, all the patterns considered lie on the surface of a common ball, so the distance-minimization problem becomes equivalent to an inner-product maximization. A continuous neural network based on the optimization of a linear function is thus presented for associative memory, with the pattern to be recognized regarded as a parameter of the network. It is in fact a network for solving a special optimization problem with hybrid constraints. It is proved that the set of prototype patterns coincides with the set of asymptotically stable equilibrium points. The basin of attraction of each desired memory pattern is reasonably distributed (in the Hamming-distance sense), and an equilibrium point that is not asymptotically stable is precisely a state that cannot be recognized. The theoretical analysis demonstrates not only that the proposed network is an ideal model for associative memory, but also that every rejected pattern can be explained clearly and that the recognition result can be predicted from the motion of the network. The circuit implementation of the proposed network closely resembles that of optimization networks and can easily be made consistent with the theoretical analysis. From the viewpoint of hardware implementation, there is no difference between the pattern to be recognized and the initial point of the network: both can be regarded as external inputs. Two numerical simulations show that exact results can be obtained even when a larger step size and a shorter simulation time are used. The network in this paper can therefore reduce the precision requirements on the hardware.
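The recall mechanism described above, patterns on a common sphere and recognition as maximization of a linear (inner-product) objective over the convex set spanned by the prototypes, can be sketched in discrete time as projected gradient ascent over the simplex of convex-combination coefficients. The Euler discretization, step size, and simplex parameterization below are illustrative assumptions, not the paper's circuit model:

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto the probability simplex (sort-and-threshold).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)

def recall(prototypes, probe, eta=0.5, steps=200):
    # prototypes: (n, m) matrix whose m columns are stored patterns on a sphere.
    # Projected gradient ascent on the linear objective <probe, P c> over the
    # simplex of convex coefficients c; the maximizer is a vertex, i.e. the
    # prototype with the largest inner product with the probe.
    P = prototypes
    g = P.T @ probe                       # constant gradient of the linear objective
    c = np.full(P.shape[1], 1.0 / P.shape[1])
    for _ in range(steps):
        c = project_simplex(c + eta * g)
    return P @ c

# three bipolar prototypes, normalized onto the unit sphere
pats = np.array([[1, 1, -1, -1],
                 [1, -1, 1, -1],
                 [-1, 1, 1, -1]], dtype=float).T
pats /= np.linalg.norm(pats, axis=0)

noisy = pats[:, 0] + 0.2 * np.array([0.1, -0.3, 0.2, 0.1])
out = recall(pats, noisy)
print(np.argmax(pats.T @ out))  # -> 0 (the corrupted first prototype is recalled)
```

Because the objective is linear and the feasible set is a polytope, the iterate snaps onto the winning vertex after finitely many steps, which mirrors the abstract's claim that recognizable probes converge to asymptotically stable prototype equilibria.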
Momentum techniques have recently emerged as an effective strategy for accelerating the convergence of gradient descent (GD) methods and exhibit improved performance in deep learning as well as regularized learning. Typical examples include Nesterov's accelerated gradient (NAG) and the heavy-ball (HB) method. However, so far, almost all acceleration analyses have been limited to NAG, and few investigations of the acceleration of HB have been reported. In this article, we address the convergence of the last iterate of HB in constrained nonsmooth optimization, which we name individual convergence. This question is significant in machine learning, where constraints are imposed on the learning structure and an individual output is needed to effectively guarantee this structure while keeping an optimal rate of convergence. Specifically, we prove that HB achieves an individual convergence rate of O(1/√t), where t is the number of iterations. This indicates that both momentum methods can accelerate the individual convergence of basic GD to optimality. Even for the convergence of averaged iterates, our result avoids the drawbacks of previous work, which restricted the optimization problem to be unconstrained and required the number of iterations to be fixed in advance. The novel convergence analysis presented in this article provides a clear understanding of how HB momentum accelerates individual convergence and reveals more insights into the similarities and differences between the averaging and individual convergence rates. The derived optimal individual convergence is extended to regularized and stochastic settings, in which an individual solution can be produced by a projection-based operation. In contrast to the averaged output, the individual output preserves sparsity remarkably well without sacrificing the theoretically optimal rates. Several experiments on real data demonstrate the performance of the HB momentum strategy.
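As a rough illustration of the scheme analyzed here, the following sketch runs heavy-ball with a projection step on a toy constrained nonsmooth problem and returns the last iterate rather than an average. The specific step-size and momentum schedules are assumptions chosen for illustration, in the spirit of the O(1/√t) analysis, not the article's exact parameters:

```python
import numpy as np

def projected_heavy_ball(subgrad, project, x0, steps=3000, c=0.5):
    # Sketch of HB with projection for constrained nonsmooth problems:
    #   x_{k+1} = P_C( x_k - a_k * g_k + b_k * (x_k - x_{k-1}) )
    # The schedules a_k = c/sqrt(k) and b_k = k/(k+2) are illustrative
    # assumptions, not the article's exact parameters.
    x_prev = x = float(x0)
    for k in range(1, steps + 1):
        a_k = c / np.sqrt(k)
        b_k = k / (k + 2.0)
        x_next = project(x - a_k * subgrad(x) + b_k * (x - x_prev))
        x_prev, x = x, x_next
    return x  # the individual (last-iterate) output, not an average

# toy constrained nonsmooth problem: minimize |x - 2| over the box [-1, 1];
# the constrained minimizer is x* = 1
subgrad = lambda x: np.sign(x - 2.0)       # a subgradient of |x - 2|
project = lambda x: np.clip(x, -1.0, 1.0)  # projection onto [-1, 1]
x_last = projected_heavy_ball(subgrad, project, x0=-0.5)
print(round(x_last, 3))  # -> 1.0
```

Returning the last iterate matters here exactly as the abstract argues: the projection keeps every iterate feasible, so the individual output satisfies the constraint, whereas an average of iterates from an unconstrained method need not.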
Through a study of Japan's teacher rotation system, we found that a teacher rotation institution does not operate in isolation within the social and educational systems. First, implementing a teacher rotation system requires urban-rural integration as an external safeguard; second, it requires the equalization of education as internal support; finally, its ultimate goal is to achieve balance in educational provision. Judging from China's current national conditions and educational situation, under the dual pattern of imbalanced urban-rural development and education, we do not yet have the mechanisms, conditions, and infrastructure necessary to implement teacher rotation.
In this paper, one-class and outlier problems are investigated using ideas from Support Vector Machines. By regarding a one-class problem as a function-estimation problem, the generalization error for the one-class problem is defined for the first time. Linear separability, the margin, and the optimal linear classifier are then defined, and the regular SVM is reformulated into a framework for one-class problems. Each of the linear algorithms is theoretically motivated and can be formulated as a linear programming problem. The proposed algorithms can be implemented using techniques from boosting. Experiments on synthetic and real data illustrate that the algorithms in this paper are practical and effective.
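To make the linear-programming flavor concrete, here is a minimal one-class sketch solved with `scipy.optimize.linprog`. The particular program, maximizing a margin ρ with slack variables over nonnegative unit-sum weights in the style of LPBoost, is an assumed stand-in, not necessarily the formulation in the paper:

```python
import numpy as np
from scipy.optimize import linprog

def one_class_lp(X, C=10.0):
    # Assumed one-class LP (LPBoost-style, not the paper's exact program):
    #   max  rho - C * sum(xi)
    #   s.t. w . x_i >= rho - xi_i,  sum(w) = 1,  w >= 0,  xi >= 0
    # A point x is accepted when w . x >= rho, otherwise flagged as an outlier.
    n, d = X.shape
    # variables z = [w (d), rho (1), xi (n)]; linprog minimizes c . z
    c = np.concatenate([np.zeros(d), [-1.0], C * np.ones(n)])
    A_ub = np.hstack([-X, np.ones((n, 1)), -np.eye(n)])  # -w.x_i + rho - xi_i <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(d), [0.0], np.zeros(n)])[None, :]
    bounds = [(0, None)] * d + [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=bounds, method="highs")
    w, rho = res.x[:d], res.x[d]
    return w, rho

# tiny cluster of "normal" points; a far-away probe should be rejected
X = np.array([[2.0, 1.0], [2.2, 0.9], [1.9, 1.1], [2.1, 1.0], [1.8, 0.95]])
w, rho = one_class_lp(X)
print(np.all(X @ w >= rho - 1e-6), np.array([0.1, 0.1]) @ w < rho)  # -> True True
```

Because the program is a plain LP, the weight vector can equally be built incrementally, one coordinate (base classifier) at a time, which is what connects this formulation to the boosting-style implementation mentioned in the abstract.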