Feed-forward versus recurrent architecture and local versus cellular automata distributed representation in reservoir computing for sequence memory learning

2020 
Reservoir computing based on cellular automata (ReCA) builds a novel bridge between automata computation theory and recurrent neural networks. ReCA has been trained to solve the 5-bit memory task. Several methods are proposed to implement the reservoir; among them, the distributed representation of cellular automata (CA) in a recurrent architecture solves the 5-bit task with minimal complexity and a minimal number of training examples. The CA distributed representation in a recurrent architecture outperforms the local representation in a recurrent architecture (stack reservoir), followed by echo state networks and by feed-forward architectures using either local or distributed representations. Features extracted from the reservoir through the natural diffusion of CA states offer state-of-the-art results in terms of feature vector length and the number of training examples required. A further extension combines the reservoir CA states with an XOR, Binary, or Gray operator to produce a single feature vector and thereby reduce the feature space. This method gives promising results; however, the natural diffusion of CA states still performs better. ReCA can be considered to operate around the lower bound of complexity, since it uses elementary CA in the reservoir.
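The recurrent CA reservoir with the natural-diffusion readout can be illustrated with a minimal sketch. The rule number (90), the reservoir width, the iteration count, and the XOR injection of input bits into fixed random cell positions are all illustrative assumptions; the abstract does not fix these choices.

```python
import numpy as np

def eca_step(state, rule=90):
    """One synchronous update of an elementary CA with periodic boundaries.
    `rule` is the Wolfram rule number (90 is an assumption)."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    neighborhood = (left << 2) | (state << 1) | right   # 3-bit pattern, 0..7
    rule_table = (rule >> np.arange(8)) & 1             # output bit per pattern
    return rule_table[neighborhood]

def run_reservoir(sequence, width=40, iterations=4, rule=90, seed=0):
    """Drive the CA reservoir with one input vector per time step and
    concatenate the intermediate states into that step's feature vector
    (the "natural diffusion" readout described in the abstract)."""
    rng = np.random.default_rng(seed)
    positions = rng.choice(width, size=sequence.shape[1], replace=False)
    state = np.zeros(width, dtype=np.int64)
    features = []
    for x in sequence:                 # x: the input bits of one time step
        state[positions] ^= x          # inject input (XOR injection assumed)
        step_states = []
        for _ in range(iterations):
            state = eca_step(state, rule)
            step_states.append(state.copy())
        features.append(np.concatenate(step_states))
    return np.array(features)          # shape (T, width * iterations)
```

A linear readout (for example, ridge regression on the collected feature rows) completes the model; with these defaults the feature vector is width × iterations = 160 bits per time step.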
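The XOR, Binary, and Gray combinations can likewise be sketched. The abstract only states the goal (merging the intermediate CA states into one shorter feature vector), so the exact packing below is an assumption for illustration.

```python
import numpy as np

def combine_xor(step_states):
    """Elementwise XOR of the intermediate states of one time step,
    shrinking the features from width * iterations down to width."""
    out = step_states[0].copy()
    for s in step_states[1:]:
        out ^= s
    return out

def combine_binary(step_states):
    """Read the iteration bits at each cell as one binary number per cell."""
    bits = np.stack(step_states)                    # (iterations, width)
    weights = 1 << np.arange(bits.shape[0])[::-1]   # MSB = first iteration
    return (weights[:, None] * bits).sum(axis=0)

def to_gray(values):
    """Gray-code the packed binary values: g = b ^ (b >> 1)."""
    return values ^ (values >> 1)
```

For example, `to_gray(combine_binary(step_states))` yields one integer per cell, so the feature vector shrinks from width × iterations bits to width values.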