Approximate logic neuron model trained by states of matter search algorithm
2019
Abstract The approximate logic neuron model (ALNM) is a single-neuron model with a dynamic dendritic structure. During training, the model can prune useless synapses and unnecessary dendritic branches through a neural pruning function, yielding a simplified dendritic morphology for each particular problem. The simplified ALNM can then be replaced by an equivalent logic circuit, which is easy to implement in hardware. However, the computational capacity of this model has been greatly restricted by its learning algorithm, the back-propagation (BP) algorithm, which is sensitive to initial values and easily trapped in local minima. To address this critical issue, we investigate heuristic optimization methods, which are recognized as global search algorithms. Through comparative experiments, the states of matter search (SMS) algorithm is verified to be the most suitable training method for the ALNM. To evaluate the performance of SMS, six benchmark datasets are used in the experiments, and the results are compared with those of the BP algorithm, other optimization methods, and several widely used classifiers. In addition, the classification performance of logic circuits trained by SMS is also presented in this study.
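To make the architecture in the abstract concrete, the following is a minimal sketch of a dendritic single-neuron forward pass of the kind ALNM belongs to (synaptic, dendritic, membrane, and soma layers, with sigmoidal synapses and multiplicative branches). The function name, parameter names, constants, and exact sigmoid form are illustrative assumptions, not the authors' published formulation; training with SMS would treat the flattened parameter arrays as the search-space position of each candidate solution and minimize classification error.

```python
import numpy as np

def alnm_forward(x, w, theta, k=5.0, k_soma=5.0, theta_soma=0.5):
    """Hypothetical ALNM-style forward pass.

    x: (n_features,) input vector.
    w, theta: (n_dendrites, n_features) synaptic weights and thresholds.
    """
    # Synaptic layer: sigmoid connection of every input to every dendrite.
    y = 1.0 / (1.0 + np.exp(-k * (w * x - theta)))
    # Dendritic layer: multiplicative (soft-AND) interaction along each branch.
    z = np.prod(y, axis=1)
    # Membrane layer: sum the outputs of all dendritic branches.
    v = np.sum(z)
    # Soma: a final sigmoid maps the membrane potential to an output in (0, 1).
    return 1.0 / (1.0 + np.exp(-k_soma * (v - theta_soma)))

# Toy usage: 3 dendrites, 4 inputs, random parameters (illustrative only).
rng = np.random.default_rng(0)
x = rng.random(4)
w = rng.normal(size=(3, 4))
theta = rng.normal(size=(3, 4))
print(alnm_forward(x, w, theta))
```

Under this kind of formulation, pruning has a natural reading: a synapse whose sigmoid saturates to a constant 1 has no effect on its branch's product and can be removed, while a branch driven to a constant 0 contributes nothing to the membrane sum and can be cut, which is roughly what makes the trained model reducible to a logic circuit.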