From synapses to rules
2002
We consider an integrated subsymbolic-symbolic procedure for extracting symbolically explained classification rules from data. A multilayer perceptron maps features into propositional variables, and a set of subsequent layers operated by a PAC-like algorithm learns boolean expressions over these variables. The peculiarities of the whole procedure are: (i) we do not know a priori the class of formulas these expressions belong to; rather, from time to time we obtain some information about the class and reduce the uncertainty about the current hypothesis; (ii) the mapping from features to variables also varies over time to improve the suitability of the desired classification rules; and (iii) the final shape of the learnt expressions is determined by the learner, who can express his preferences both through an error function backpropagated along all layers of the proposed architecture and through the choice of a set of free parameters. We review the bases of the first point and then analyze the others in depth. The theoretical tools supporting the analysis are: (1) a new statistical framework that we call algorithmic inference; (2) a special functionality of the sampled points with respect to the formulas, denoted sentineling; and (3) entropy measures and fuzzy set methods governing the whole learning process. Preliminary numerical results highlight the value of the procedure.
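To make the subsymbolic-symbolic pipeline concrete, the following is a minimal sketch, not the authors' code: a small multilayer perceptron with thresholded sigmoid outputs stands in for the feature-to-proposition mapping, and a classical PAC-style elimination step learns a monotone conjunction over the resulting boolean variables. All names, layer sizes, weights, and the synthetic data are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's implementation): an MLP maps
# real-valued features to boolean propositional variables, then a PAC-style
# elimination algorithm learns a monotone conjunction over those variables.
import numpy as np

rng = np.random.default_rng(0)

def mlp_to_propositions(X, W1, b1, W2, b2, threshold=0.5):
    """Forward pass of a one-hidden-layer perceptron; sigmoid outputs are
    thresholded to obtain boolean propositional variables (one per column)."""
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return p > threshold

def learn_conjunction(V, y):
    """Classical elimination algorithm for monotone conjunctions: start from
    the conjunction of all variables and drop every literal that is falsified
    by some positive example."""
    keep = np.ones(V.shape[1], dtype=bool)
    for v, label in zip(V, y):
        if label:            # positive example
            keep &= v        # variables false here cannot appear in the rule
    return keep

# Toy setup (assumption): 4 features, 3 propositional variables, random weights.
X = rng.normal(size=(200, 4))
W1, b1 = rng.normal(size=(4, 6)), np.zeros(6)
W2, b2 = rng.normal(size=(6, 3)), np.zeros(3)

V = mlp_to_propositions(X, W1, b1, W2, b2)
y = V[:, 0] & V[:, 2]        # synthetic target rule: v0 AND v2

rule = learn_conjunction(V, y)
print("variables kept in the learnt conjunction:", np.flatnonzero(rule))
```

In the procedure described in the abstract, the error signal of the symbolic layer would also be backpropagated into the perceptron so that the feature-to-variable mapping adapts over time; the sketch above keeps the weights fixed purely for brevity.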