Interpretability in neural networks towards universal consistency

2021 
Abstract In the challenge Artificial Intelligence faces in processing semantically evaluable information, the application of deep learning techniques depends not only on the algorithms but also on the principles that explain how they work. A machine learning (ML) system can malfunction because of a lack of knowledge of the algorithm's intended behavior. The difficulty of debugging ML can be overcome with strategies based on the universal structure of language, which overlaps with the cognitive architecture of both biological and artificial intelligent systems. The appropriate choice of an algorithm inspired by the functioning of human language offers the computational scientist methodological strategies to clarify its performance analysis, to optimize interpretive activity through proper instrumentation of the system, and to reach the performance level of an application considered safe. Three elements provide the groundwork for improving intelligent systems so that they attain universal consistency and lessen the effects of the 'curse of dimensionality' or of bias in the system's interpretation: neurolinguistic principles that link interpretation to language and cognition; the semantic dimension that arises not only from the linguistic system but also from the context in which the information is produced; and the theoretical basis for understanding language as a 'form' (a process) rather than a substance (a set of signs). Semantics and statistics are considered together to understand universal consistency, as opposed to ideal consistency, when evaluating a data set, since training alone is not sufficient to avoid data manipulation. We conclude that the 'key' for a good information classifier to achieve acceptable neural-network performance lies in the dynamic aspect of language (language as form/process), which guides the apprehension of how neural networks access their weights (values); replicates this in intelligent systems, making them invariant to many input transformations; and guarantees an infinite amount of information from finite samples, avoiding semantic distortion.
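
The notion of universal consistency invoked above has a precise statistical meaning: a classification rule is universally consistent when its error converges to the Bayes-optimal error for every underlying data distribution, not just an ideal one. The k-nearest-neighbour rule with k → ∞ and k/n → 0 is a classical universally consistent rule (Stone's theorem). The following is a minimal sketch of that behavior, not the paper's method; scikit-learn, the synthetic dataset, and the k = √n growth schedule are illustrative assumptions.

```python
# Minimal sketch: a universally consistent rule (k-NN with k ~ sqrt(n),
# so k -> infinity and k/n -> 0) improves as the sample size n grows.
# Assumption: the synthetic dataset stands in for *any* distribution,
# since universal consistency must hold for all of them.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

for n in (200, 2000, 20000):
    X, y = make_classification(n_samples=n, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=0)
    k = max(1, int(np.sqrt(len(X_tr))))  # k grows with n while k/n -> 0
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    print(f"n={n:6d}  k={k:3d}  test error={1 - clf.score(X_te, y_te):.3f}")
```

As n grows, the test error of such a rule approaches the best achievable error regardless of the distribution, which is precisely the contrast the abstract draws with consistency under a single ideal distribution.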
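
The claim that such systems become 'invariant to many input transformations' can likewise be probed empirically by comparing predictions on original and transformed inputs. The sketch below assumes nothing from the paper: the toy linear predictor, the jitter transform, and the random data are hypothetical stand-ins, and any model and transformation can be substituted.

```python
# Minimal sketch of an empirical invariance check: the fraction of
# inputs whose predicted label survives a transformation of the input.
import numpy as np

def invariance_rate(predict, X, transform, seed=0):
    """Return the fraction of rows of X whose prediction is unchanged
    after applying `transform` (a map (X, rng) -> X')."""
    rng = np.random.default_rng(seed)
    return float(np.mean(predict(X) == predict(transform(X, rng))))

# Demo with a toy linear rule and small additive noise as the transform;
# both are illustrative placeholders, not the paper's model.
predict = lambda X: (X @ np.ones(X.shape[1]) > 0).astype(int)
jitter = lambda X, rng: X + 0.01 * rng.standard_normal(X.shape)
X = np.random.default_rng(1).standard_normal((500, 10))
print(f"invariance under jitter: {invariance_rate(predict, X, jitter):.3f}")
```

A rate of 1.0 means the classifier is empirically invariant to the chosen transformation on this sample; values below 1.0 quantify how far it falls short.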