Thanks to its supervised parameter generation strategy and non-iterative training mechanism, the deep stochastic configuration network (DSCN) achieves highly efficient modelling in scenarios of relatively low problem complexity. However, the growing number of hidden layers and the increasing amount of training data pose a challenge to the implementation of DSCN. To address this problem, we propose a Dense DSCN with a Hybrid Training mechanism (HT-DDSCN), which extends the network structure of DSCN to a dense connection type and combines three typical optimisation techniques with one universal control strategy to optimise the calculation of the output weights. Extensive experiments on four benchmark regression problems show that HT-DDSCN significantly improves the generalisation ability and the stability of DSCN.
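The supervised parameter generation and non-iterative training mentioned above can be illustrated with a minimal stochastic-configuration sketch: candidate random hidden nodes are scored against the current residual, the best one is kept, and the output weights are re-solved in closed form by least squares. This is a simplified illustration only (it keeps the best-scoring candidate rather than enforcing the full supervisory inequality of SCN, and shows a single hidden layer, not the deep or dense variants); all names and constants below are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def scn_fit(X, y, max_nodes=25, candidates=50):
    """Simplified stochastic configuration: grow hidden nodes one at a
    time, keeping the random candidate most correlated with the residual,
    then re-solve the output weights non-iteratively by least squares."""
    n = X.shape[0]
    H = np.empty((n, 0))            # hidden-layer output matrix
    e = y.astype(float).copy()      # current residual
    beta = np.zeros(0)
    for _ in range(max_nodes):
        best_h, best_score = None, -np.inf
        for _ in range(candidates):
            w = rng.uniform(-1, 1, X.shape[1])
            b = rng.uniform(-1, 1)
            h = np.tanh(X @ w + b)
            score = (h @ e) ** 2 / (h @ h)   # residual-correlation score
            if score > best_score:
                best_score, best_h = score, h
        H = np.column_stack([H, best_h])
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
        e = y - H @ beta
    return H, beta

X = rng.uniform(-1, 1, (200, 2))
y = np.sin(np.pi * X[:, 0]) * X[:, 1]
H, beta = scn_fit(X, y)
```

Because the output weights are obtained by a single least-squares solve at each step, no gradient-based iteration over the whole network is ever needed, which is the source of the efficiency the abstract refers to.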
The conventional training mechanism for deep learning, based on gradient descent, suffers from notorious issues such as a low convergence rate, over-fitting, and long training times. To alleviate these problems, a novel deep learning algorithm with a different learning mechanism, named the Broad Learning System (BLS), was proposed by Prof. C. L. Philip Chen in 2017. BLS randomly selects the parameters of its feature nodes and enhancement nodes during training and uses ridge regression theory to solve for its output weights. Owing to its high efficiency, BLS has been widely used in many fields. However, a fundamental problem remains unsolved: an appropriate value of the parameter λ for the ridge regression operation of BLS is difficult to set, which often leads to over-fitting and seriously limits the development of BLS. To solve this problem, we propose a novel Dense BLS based on Conjugate Gradient (CG-DBLS) in this paper, in which each feature node is connected to the other feature nodes and each enhancement node is connected to the other enhancement nodes in a feed-forward fashion. The recursive least squares method and the conjugate gradient method are used to calculate the output weights of the feature nodes and the enhancement nodes, respectively. Experimental studies on four benchmark regression problems from the UCI repository show that CG-DBLS achieves much lower error and much higher stability than BLS and its variants.
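The two output-weight solvers contrasted above can be sketched side by side: the closed-form ridge solution, which requires choosing λ, and a conjugate gradient solve of the normal equations, where the iteration count (early stopping) acts as the regulariser instead. This is a generic sketch with a random feature layer standing in for the BLS feature and enhancement nodes; it is not the CG-DBLS architecture itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random tanh feature layer standing in for BLS feature/enhancement nodes.
X = rng.normal(size=(300, 5))
y = np.sin(X).sum(axis=1)
A = np.tanh(X @ rng.normal(size=(5, 80)) + rng.normal(size=80))

def ridge(A, y, lam):
    """Closed-form ridge solution W = (A^T A + lam*I)^{-1} A^T y;
    its quality hinges on choosing lam well."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

def cg_solve(A, y, iters=50):
    """Conjugate gradient on the normal equations (A^T A) w = A^T y;
    no lam is needed, since stopping early regularises implicitly."""
    G, b = A.T @ A, A.T @ y
    w = np.zeros(A.shape[1])
    r = b - G @ w
    p = r.copy()
    for _ in range(iters):
        Gp = G @ p
        alpha = (r @ r) / (p @ Gp)
        w += alpha * p
        r_new = r - alpha * Gp
        if np.linalg.norm(r_new) < 1e-10:   # converged
            break
        p = r_new + (r_new @ r_new) / (r @ r) * p
        r = r_new
    return w

w_ridge = ridge(A, y, lam=1e-3)
w_cg = cg_solve(A, y)
```

The sketch makes the abstract's point concrete: `ridge` exposes a free parameter `lam` whose value must be tuned per problem, whereas `cg_solve` replaces that choice with an iteration budget.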
Clustering-based association rule mining algorithms usually handle data sets by clustering numerical transactions into Boolean ones and then applying Boolean methods. However, numerical data often vary slightly, owing to errors in the data acquisition process or disturbances in the environment, and as a result the obtained association rules can change greatly. The uncertainty in association rule mining should therefore be taken into account. In this paper, an improved fuzzy clustering based robust association rule mining algorithm (RFARM) is proposed, in which a regularization term that makes each point consider its own k-nearest neighbors is added to the objective function to offset small disturbances; we also derive the necessary conditions for convergence to a local minimum. Meanwhile, fuzzy clustering methods with constraints produce ripple parts in the membership functions, which cannot be interpreted in association rule mining. To solve this, we design a variant algorithm (RFARM*) that performs better, and is easier to interpret, than frequently used methods. Experimental results show that the proposed methods are superior in the accuracy of the mined association rules and in anti-noise capability.
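The regularized objective described above is not given explicitly in the abstract; one plausible form, shown purely for illustration, augments the standard fuzzy c-means objective with a membership-smoothing term over each point's k nearest neighbors:

```latex
J = \sum_{i=1}^{n}\sum_{c=1}^{C} u_{ic}^{m}\,\lVert x_i - v_c \rVert^2
  \;+\; \frac{\lambda}{2}\sum_{i=1}^{n}\sum_{j \in \mathcal{N}_k(i)}\sum_{c=1}^{C}
  \left(u_{ic} - u_{jc}\right)^2
```

Here $u_{ic}$ are fuzzy memberships, $v_c$ cluster centers, $m$ the fuzzifier, $\lambda$ the regularization weight, and $\mathcal{N}_k(i)$ the set of k nearest neighbors of $x_i$. All of this notation is our own illustrative choice, not taken from the paper; the second term simply penalizes memberships that differ sharply between neighboring points, which is how a small perturbation of the data can be prevented from flipping the resulting Boolean transactions and hence the mined rules.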
Fault diagnosis is important in industrial processes. This paper proposes an orthogonal incremental extreme learning machine based on driving amount (DAOI-ELM) for recognizing faults of the Tennessee Eastman process (TEP). The basic idea of DAOI-ELM is to incorporate the Gram-Schmidt orthogonalization method and a driving amount into the incremental extreme learning machine (I-ELM). Case studies on a 2-D nonlinear function and on regression problems from the UCI repository show that DAOI-ELM obtains better generalization ability and a more compact network structure than I-ELM, convex I-ELM (CI-ELM), orthogonal I-ELM (OI-ELM), and bidirectional ELM. The experimental training and testing data are derived from simulations of the TEP. The performance of DAOI-ELM is evaluated and compared with that of the back-propagation neural network, support vector machine, I-ELM, CI-ELM, and OI-ELM. The simulation results show that DAOI-ELM diagnoses TEP faults better than the other methods.
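The Gram-Schmidt orthogonalization step inside an incremental ELM can be sketched as follows: each new random hidden node's output vector is orthogonalized against those already in the network, after which its optimal output weight on the current residual has a simple closed form. This is a generic orthogonal I-ELM sketch; the "driving amount" of DAOI-ELM is an additional mechanism not shown here, and mapping the weights back onto the original (non-orthogonalized) nodes for prediction on new data is also omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def oi_elm_fit(X, y, max_nodes=30):
    """Incremental ELM with Gram-Schmidt orthogonalisation: orthogonalise
    each new node's output vector v against the stored ones, then set its
    weight to beta = <e, v> / <v, v>, which maximally shrinks residual e."""
    V, betas = [], []
    e = y.astype(float).copy()
    for _ in range(max_nodes):
        w = rng.uniform(-1, 1, X.shape[1])
        b = rng.uniform(-1, 1)
        h = np.tanh(X @ w + b)
        v = h - sum((h @ u) / (u @ u) * u for u in V)  # Gram-Schmidt step
        if v @ v < 1e-12:
            continue                                    # degenerate node, skip
        beta = (e @ v) / (v @ v)                        # closed-form weight
        e = e - beta * v
        V.append(v)
        betas.append(beta)
    return V, betas, e

X = rng.uniform(-1, 1, (150, 2))
y = X[:, 0] ** 2 + np.sin(X[:, 1])
V, betas, residual = oi_elm_fit(X, y)
```

Because each new direction `v` is orthogonal to all previous ones, adding a node never disturbs the weights already fitted, and the residual norm decreases monotonically; this is what yields the more compact networks the abstract reports relative to plain I-ELM.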