Neural network training for complex industrial applications

2001 
The paper presents two methods of training multilayer perceptrons (MLPs) that use both function values and co-located derivative values during training. The first method extends the standard backpropagation training algorithm for MLPs, whereas the second employs genetic algorithms (GAs) to find optimal network weights from both the function values and the co-located derivative values. The GAs used to optimize the weights of a feedforward artificial neural network apply a special reordering of the genotype before recombination. The ultimate goal of this research effort is to train and design artificial neural networks (ANNs) more effectively, i.e., to obtain networks that generalize better, learn faster, and require fewer training data points. As the illustrative examples show, the initial results indicate that the methods provide good generalization while requiring only a relatively sparse sampling of the function and its derivative values during the training phase.
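The abstract does not give the exact formulation, but the core idea of the first method, fitting a network to function values and co-located derivative values simultaneously, can be illustrated with a minimal modern sketch. The snippet below is not the authors' 2001 algorithm: it assumes a simple sum-of-squared-errors loss with a derivative-matching term weighted by a hypothetical coefficient lam, a small tanh MLP, and plain gradient descent, and it omits the paper's GA variant and genotype reordering entirely.

```python
# Hypothetical sketch (not the paper's code): train a small MLP on sparse
# samples of a function f AND its derivative f' at the same points, by
# adding a derivative-matching term to the squared-error loss.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    # sizes, e.g. [1, 16, 16, 1]; returns a list of (W, b) layer pairs
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        W = jax.random.normal(sub, (n_in, n_out)) / jnp.sqrt(n_in)
        params.append((W, jnp.zeros(n_out)))
    return params

def mlp(params, x):
    # Scalar input -> scalar output; tanh hidden layers, linear output.
    h = jnp.atleast_1d(x)
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]

def loss(params, xs, ys, dys, lam=1.0):
    # Squared error on function values plus a weighted squared error on
    # the network's input derivative at the same (co-located) points.
    preds = jax.vmap(lambda x: mlp(params, x))(xs)
    dpreds = jax.vmap(jax.grad(lambda x: mlp(params, x)))(xs)
    return jnp.mean((preds - ys) ** 2) + lam * jnp.mean((dpreds - dys) ** 2)

# Illustrative data: a sparse sampling of sin(x) and its derivative cos(x).
key = jax.random.PRNGKey(0)
params = init_mlp(key, [1, 16, 16, 1])
xs = jnp.linspace(-3.0, 3.0, 8)           # only 8 training points
ys, dys = jnp.sin(xs), jnp.cos(xs)

lr, grad_fn = 0.05, jax.jit(jax.grad(loss))
for _ in range(2000):                      # plain gradient descent
    g = grad_fn(params, xs, ys, dys)
    params = jax.tree_util.tree_map(lambda p, gp: p - lr * gp, params, g)
```

The derivative term constrains the network's slope between the sparse sample points, which is the mechanism behind the abstract's claim of good generalization from relatively few training points; the GA-based second method would instead treat a comparable combined error as a fitness function over the weight genotype.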