Artificial Neural Networks

2018 
Artificial Adaptive Systems include Artificial Neural Networks (ANNs, or simply neural networks, as they are commonly known). The philosophy of neural networks is to extract from data the underlying model that relates that data as input/output (domain/range) pairs. This is quite different from the way most mathematical modeling processes operate: they normally impose a model on the given data, and the input-to-output relationship is obtained from that model. A linear model that is a "best fit" in some sense, relating input to output, is one example. What an artificial neural network imposes on the data is an a priori architecture rather than an a priori model; the model is then extracted from the architecture. Clearly, any process that seeks to relate input to output (domain to range) requires some representation of the relationships among the data. The advantage of imposing an architecture rather than a data model is that it allows the model to adapt. Fundamentally, a neural network is represented by its architecture. We therefore look at the architecture first, followed by a brief introduction to the two approaches for implementing it: supervised and unsupervised neural networks. Recall that Auto-CM, which we discuss in Chap. 3, is an unsupervised ANN, while K-CM, discussed in Chap. 6, is a supervised version of Auto-CM. In this chapter, however, we show that supervised and unsupervised neural networks can in fact be viewed within one framework in the case of the linear perceptron. The chapter ends with a brief look at some theoretical considerations.
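
To make the idea of "imposing an architecture and extracting the model from the data" concrete, here is a minimal sketch of a supervised linear perceptron in Python. The function name, learning rate, epoch count, and toy data set are illustrative assumptions rather than material from the chapter; the fixed weight vector and bias are the a priori architecture, and the learned values of those parameters are the model extracted from the data.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Classic perceptron rule (sketch). X: (n_samples, n_features); y: labels in {-1, +1}."""
    w = np.zeros(X.shape[1])   # architecture fixed in advance: one weight per input
    b = 0.0                    # plus a bias term
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # example is misclassified
                w += lr * yi * xi        # nudge the separating boundary toward it
                b += lr * yi
    return w, b

# Hypothetical linearly separable data, for illustration only
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])

w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))   # reproduces the training labels
```

Nothing about the data-generating process is assumed beyond the architecture itself; the weights adapt as more examples are seen, which is the sense in which the extracted model can adapt.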