From Arithmetic to Logic-Based AI: A Comparative Analysis of Neural Networks and the Tsetlin Machine

2020 
Neural networks constitute a well-established design method for current and future generations of artificial intelligence. They depend on regressed arithmetic between perceptrons organized in multiple layers to derive a set of weights that can be used for classification or prediction. Over the past few decades, significant progress has been made in low-complexity designs enabled by powerful hardware/software ecosystems. Built on the foundations of finite-state automata and game theory, the Tsetlin Machine is gaining momentum as an emerging artificial intelligence design method. It is fundamentally based on a propositional-logic formulation over Booleanized input features. Recently developed Tsetlin Machine hardware architectures have demonstrated competitive performance and accuracy, as well as opportunities for by-design energy efficiency and explainability. In this paper, we investigate these two architectures closely and perform a comprehensive comparative analysis, considering their architectural subtleties as implemented in low-level C and ignoring any specialized implementations. We study the impact of hyperparameters on both the arithmetic and the logic basis of learning in terms of performance, accuracy, and energy efficiency. We show that the Tsetlin Machine consistently outperforms the artificial neural network in learning convergence and energy efficiency by up to 15×, at the cost of higher energy consumption per epoch.