The Computational Explanatory Gap

2014 
Efforts to study consciousness using computational models over the last two decades have received a decidedly mixed reception. Investigators in mainstream AI have largely ignored this work, and some members of the philosophy community have argued that the whole endeavor is futile. Here we suggest that very substantial progress has been made, to the point where the use of computational simulations has become an increasingly accepted approach to the scientific study of consciousness. However, efforts to create a phenomenally conscious machine have been much less successful. We believe that this is due to a computational explanatory gap: our inability to understand or explain the implementation of high-level cognitive algorithms in terms of neurocomputational algorithms. Contrary to prevailing views, we suggest that bridging this gap is not only critical to further progress in the area of machine consciousness, but is also a fundamental step towards solving the hard problem. We briefly describe some small steps we have taken recently to make progress in this area.