    Optimal Training of Feedforward Neural Networks Using Teaching-Learning-Based Optimization with Modified Learning Phases
Citations: 14 · References: 43 · Related Papers: 10
The following sections are included:
Introduction
Background on Neural Networks
Structure of a Feedforward Neural Network
Example 1: Simple Example of Fitting a Nonlinear Function to Claim Severity
Severity Trend Models
A One Node Neural Network
Fitting the Curve
Fitting the Neural Network
The Fitted Curve
The Logistic Function Revisited
Example 2: Using Neural Networks to Fit a Complex Nonlinear Function
The Chain Ladder Method
Modeling Loss Development Using a Two-Variable Neural Network
Interactions
Correlated Variables and Dimension Reduction
Factor Analysis and Principal Components Analysis
Factor Analysis
Principal Components Analysis
Example 3: Dimension Reduction
Conclusion
Acknowledgments
References
    Feedforward neural network
    Citations (0)
Unsupervised learning is widely recognized as one of the most important challenges facing machine learning nowadays. However, in spite of hundreds of papers on the topic being published every year, our current theoretical understanding and practical implementations of such tasks, in particular of clustering, are very rudimentary. This note focuses on clustering. I claim that the most significant challenge for clustering is model selection. In contrast with other common computational tasks, for clustering, different algorithms often yield drastically different outcomes. Therefore, the choice of a clustering algorithm and its parameters (like the number of clusters) may play a crucial role in the usefulness of an output clustering solution. However, there currently exists no methodical guidance for clustering tool selection for a given clustering task. Practitioners pick the algorithms they use without awareness of the implications of their choices, and the vast majority of theory-of-clustering papers focus on reducing the resources needed to solve the optimization problems that arise from picking some concrete clustering objective; such savings pale in comparison to the costs of a mismatch between those objectives and the intended use of the clustering results. I argue the severity of this problem and describe some recent proposals aiming to address this crucial lacuna.
    Conceptual clustering
    Implementation
    Consensus clustering
    Constrained clustering
    Citations (1)
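The central claim of the abstract above, that different clustering algorithms can return drastically different partitions of the same data, is easy to demonstrate. The following is a minimal sketch on made-up data (the dataset, algorithms, and parameters are my choices, not the paper's): k-means favors compact, variance-balanced splits, while single-linkage clustering follows connected shapes, so on two elongated point clouds they disagree almost completely.

```python
# A minimal sketch (hypothetical data) of the claim that different clustering
# algorithms, run on the same data, can return drastically different
# partitions, so algorithm/parameter selection dominates the outcome.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
# Two elongated, parallel point clouds: ambiguous for centroid-based methods.
x = rng.uniform(0, 10, size=400)
upper = np.column_stack([x[:200], 0.3 * rng.standard_normal(200) + 2.0])
lower = np.column_stack([x[200:], 0.3 * rng.standard_normal(200) - 2.0])
X = np.vstack([upper, lower])

# k-means splits left/right (lower within-cluster variance); single linkage
# follows the dense stripes and splits top/bottom.
kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
single_labels = AgglomerativeClustering(n_clusters=2, linkage="single").fit_predict(X)

# Near-zero agreement: the "right" clustering depends on the chosen objective.
print("agreement (ARI):", adjusted_rand_score(kmeans_labels, single_labels))
```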
To find the optimal neural network structure, the structure of multi-layer feedforward neural network models was studied using research methods from complex network theory, and a new model, the NW multi-layer feedforward small-world artificial neural network, was proposed, whose layer structure lies between the regular model and the stochastic model. First, the neurons of a regular multi-layer feedforward neural network were given randomized cross-layer links to later layers with a probability p, constructing the new network model. Second, the cross-layer small-world artificial neural networks were used for function approximation under different rewiring probabilities, and the number of iterations needed to converge was compared at the same precision. Simulation shows that the small-world neural network converges faster than both the regular network and the random network near p = 0.08, and the NW multi-layer feedforward small-world artificial neural network performs best as the probability increases beyond this point.
    Feedforward neural network
    Feed forward
    Small-world network
    Physical neural network
    Activation function
    Citations (1)
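Under my reading of the abstract above, the NW-style construction starts from a regular layered network and adds cross-layer shortcut links with probability p. The sketch below builds only the connectivity masks for such a topology; the layer sizes and the value of p are illustrative, not taken from the paper.

```python
# A minimal sketch of constructing an NW-style small-world feedforward
# topology: keep the regular adjacent-layer links, then add each possible
# cross-layer shortcut with probability p.
import numpy as np

def nw_small_world_masks(layer_sizes, p, rng):
    """Return one boolean connectivity mask per ordered layer pair (i -> j).

    masks[(i, j)] has shape (layer_sizes[i], layer_sizes[j]). Adjacent
    layers (j == i + 1) stay fully connected, as in the regular network;
    each cross-layer link (j > i + 1) is added with probability p,
    giving the small-world shortcuts.
    """
    masks = {}
    n = len(layer_sizes)
    for i in range(n - 1):
        for j in range(i + 1, n):
            shape = (layer_sizes[i], layer_sizes[j])
            if j == i + 1:
                masks[(i, j)] = np.ones(shape, dtype=bool)   # regular links
            else:
                masks[(i, j)] = rng.random(shape) < p        # shortcuts
    return masks

rng = np.random.default_rng(0)
masks = nw_small_world_masks([1, 20, 20, 20, 1], p=0.08, rng=rng)
shortcuts = sum(m.sum() for (i, j), m in masks.items() if j > i + 1)
print("cross-layer shortcut links:", int(shortcuts))
```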
Artificial neural networks (ANN), especially with error back-propagation (BP) training algorithms, have been widely investigated and applied in various science and engineering fields. However, BP algorithms are essentially gradient-based iterative methods, which adjust the neural-network weights to bring the network input/output behavior into a desired mapping by taking a gradient-based descent direction. This kind of iterative neural-network (NN) method has shown some inherent weaknesses, such as 1) the possibility of being trapped in local minima, 2) the difficulty of choosing appropriate learning rates, and 3) the inability to design the optimal or smallest NN structure. To resolve these weaknesses of BP neural networks, we have asked ourselves a special question: could neural-network weights be determined directly, without iterative BP training? The answer appears to be YES, which is demonstrated in this chapter with three positive but different examples. In other words, a new type of artificial neural network with linearly independent or orthogonal activation functions is presented, analyzed, simulated, and verified by us, for which the neural-network weights and structure can be decided directly and more deterministically (in comparison with conventional BP neural networks).
    Maxima and minima
    Feedforward neural network
    Backpropagation
    Feed forward
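The key idea in the abstract above is that with linearly independent (e.g., orthogonal) basis activations, the hidden-to-output weights solve a linear least-squares problem, so no iterative BP training is needed. A minimal sketch follows; the Chebyshev basis, target function, and sizes are my illustrative choices, not necessarily the chapter's.

```python
# A minimal sketch of direct weight determination: with orthogonal basis
# activations (Chebyshev polynomials here), the output weights come from
# a single least-squares solve rather than gradient descent.
import numpy as np

def chebyshev_features(x, degree):
    """Hidden-layer activations: Chebyshev polynomials T_0..T_degree of x."""
    return np.polynomial.chebyshev.chebvander(x, degree)

x = np.linspace(-1.0, 1.0, 200)
y = np.exp(x) * np.sin(3.0 * x)            # function to approximate

H = chebyshev_features(x, degree=8)        # (200, 9) activation matrix
# Weights determined directly via least squares, not iterative BP training.
w, *_ = np.linalg.lstsq(H, y, rcond=None)

y_hat = H @ w
print("max abs error:", np.max(np.abs(y_hat - y)))
```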
A three-layer feedforward BP neural network is investigated to solve the key problem of determining factor weights when evaluating the cleaning performance of a sugarcane harvester. The training samples of the BP neural network are made up of orthogonal experimental data to improve training speed and precision. The connection weights of the trained BP neural network are then used to compute the weights of the target factors on the evaluation indexes. The results show that the weights determined by the BP neural network truthfully reflect the importance of the target factors on the evaluation indexes.
    Feedforward neural network
    Feed forward
    Backpropagation
    Citations (0)
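The abstract above derives factor importance from the connection weights of a trained network but does not give the formula. A common choice for this is Garson's algorithm, used in the sketch below as a stand-in; the formula attribution and the weight values are assumptions for illustration only.

```python
# A minimal sketch of deriving factor importance from the connection weights
# of a trained single-hidden-layer network, in the spirit of the abstract.
# Garson's algorithm is my stand-in for the paper's unstated formula.
import numpy as np

def garson_importance(W_in, W_out):
    """Relative importance of each input factor (single output assumed).

    W_in:  (n_inputs, n_hidden) input-to-hidden weights
    W_out: (n_hidden,)          hidden-to-output weights
    """
    # Share of each input in each hidden node, weighted by that node's
    # absolute contribution to the output.
    contrib = np.abs(W_in) / np.abs(W_in).sum(axis=0, keepdims=True)
    contrib = contrib * np.abs(W_out)
    importance = contrib.sum(axis=1)
    return importance / importance.sum()

rng = np.random.default_rng(0)
W_in = rng.standard_normal((4, 6))   # e.g. 4 target factors, 6 hidden nodes
W_out = rng.standard_normal(6)
print("factor weights:", np.round(garson_importance(W_in, W_out), 3))
```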
In this paper, a teacher evaluation model based on a combination of neural networks is presented. This model makes up for the inadequacies of past models based on a single neural network: it gives not only the teacher's overall score or category but also the teacher's score in every aspect. Several feedforward neural networks, identical in structure and each trained with the BP learning algorithm, are assembled into an ensemble neural network. The combined neural network has been compared with a conventional BP neural network on performance. The experimental results indicate that the relative error is smaller, so the model is validated.
    Feedforward neural network
    Feed forward
    Citations (0)
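A minimal sketch of the combination idea in the abstract above: several feedforward networks of identical structure are trained separately and their outputs averaged, then compared with a single network on relative error. The synthetic "per-aspect score" data, network sizes, and ensemble size are made up for illustration.

```python
# A minimal sketch of an ensemble of identically structured feedforward
# networks whose predictions are averaged, versus a single network.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((300, 5))                      # 5 per-aspect ratings in [0, 1]
y = X @ np.array([0.3, 0.25, 0.2, 0.15, 0.1]) + 0.02 * rng.standard_normal(300)
X_train, y_train, X_test, y_test = X[:200], y[:200], X[200:], y[200:]

# Same structure, different random initializations => diverse ensemble members.
nets = [MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=s)
        .fit(X_train, y_train) for s in range(5)]

single = nets[0].predict(X_test)
combined = np.mean([net.predict(X_test) for net in nets], axis=0)

rel_err = lambda pred: np.mean(np.abs(pred - y_test) / np.abs(y_test))
print("single net relative error:  ", rel_err(single))
print("combined net relative error:", rel_err(combined))
```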
The BP neural network is the core part of feedforward networks and embodies the core and essence of artificial neural networks. The good nonlinear mapping ability of BP neural networks finds good application in fault diagnosis. But the traditional BP network tends to forget old samples while learning new ones during training, and suffers from low training accuracy. To solve this problem, this paper designs a neural network algorithm that adds state feedback to the output layer. The improved BP algorithm is applied to the fault diagnosis of automotive engines: the indexes of the automobile exhaust are used as the inputs of the neural network, and the outputs correspond to the different misfire conditions. The simulation results show that the proposed algorithm can effectively improve BP neural network training accuracy and achieve more accurate misfire diagnosis.
    Feedforward neural network
    Automotive engine
    Backpropagation
    Citations (2)
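The abstract above does not specify the feedback scheme. One plausible reading of "state feedback in the output layer" is a Jordan-style loop in which the previous output is fed back as an extra input, so old responses keep influencing new ones. The sketch below shows only that forward pass; the feedback scheme itself, all sizes, and the data are assumptions for illustration.

```python
# A minimal sketch of a Jordan-style reading of output-layer state feedback:
# the previous network output is appended to the current input vector.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 8                       # e.g. 4 exhaust-gas indexes
W1 = rng.standard_normal((n_in + 1, n_hidden)) * 0.5   # +1 for fed-back output
W2 = rng.standard_normal(n_hidden) * 0.5

def forward(x, prev_y):
    """One step: current inputs plus the previous output as state feedback."""
    z = np.concatenate([x, [prev_y]])
    h = np.tanh(z @ W1)
    return h @ W2

X = rng.random((10, n_in))                  # a short sequence of samples
y_prev = 0.0
for x in X:
    y_prev = forward(x, y_prev)             # output state carried forward
print("final output with state feedback:", y_prev)
```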
This paper presents a solar power modelling method using artificial neural networks (ANNs). Two neural network structures, namely the general regression neural network (GRNN) and feedforward back propagation (FFBP), have been used to model a photovoltaic panel's output power and approximate the generated power. Both neural networks have four inputs and one output. The inputs are maximum temperature, minimum temperature, mean temperature, and irradiance; the output is the power. The data used in this paper run from January 1, 2006, until December 31, 2010. The five years of data were split into two parts, 2006–2008 and 2009–2010; the first part was used for training and the second for testing the neural networks. A mathematical equation is used to estimate the generated power. In the end, both networks showed good modelling performance; however, FFBP performed better than GRNN.
    Feedforward neural network
    Feed forward
    Backpropagation
    Maximum power principle
    Citations (121)
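A minimal sketch of the two model types from the abstract above, on synthetic stand-in data (the real 2006–2010 measurements are not available here): a GRNN, i.e., a Gaussian-kernel-weighted average over the training targets, and an FFBP network via scikit-learn's MLPRegressor. The inputs mirror the paper (max/min/mean temperature and irradiance, output power); the toy power model, sigma, and network sizes are illustrative assumptions.

```python
# A minimal sketch comparing a GRNN (kernel regression over the training set)
# with an FFBP network, both with four inputs and one power output.
import numpy as np
from sklearn.neural_network import MLPRegressor

def grnn_predict(X_train, y_train, X_new, sigma=0.5):
    """GRNN output: Gaussian-kernel-weighted average of training targets."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return (K @ y_train) / K.sum(axis=1)

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(25, 40, n),     # max temperature (C)
    rng.uniform(10, 25, n),     # min temperature (C)
    rng.uniform(18, 32, n),     # mean temperature (C)
    rng.uniform(0, 1000, n),    # irradiance (W/m^2)
])
# Toy power model: roughly linear in irradiance, mildly temperature-dependent.
y = 0.15 * X[:, 3] * (1 - 0.004 * (X[:, 2] - 25)) + rng.standard_normal(n)

# Chronological-style split, echoing the paper's train/test years.
X_train, y_train, X_test, y_test = X[:300], y[:300], X[300:], y[300:]
mu, sd = X_train.mean(axis=0), X_train.std(axis=0)

grnn_pred = grnn_predict((X_train - mu) / sd, y_train, (X_test - mu) / sd)
ffbp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
ffbp_pred = ffbp.fit((X_train - mu) / sd, y_train).predict((X_test - mu) / sd)

rmse = lambda p: np.sqrt(np.mean((p - y_test) ** 2))
print("GRNN RMSE:", rmse(grnn_pred), " FFBP RMSE:", rmse(ffbp_pred))
```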