Design environment for hardware generation of SLFF neural network topologies with ELM training capability

2015 
Extreme Learning Machine (ELM) is a non-iterative training method suited to Single Layer Feed-Forward Neural Networks (SLFF-NN). Typically, a hardware neural network is trained before implementation to avoid additional on-chip area, delay, and performance degradation. ELM, however, provides fixed-time learning and simplifies re-training a network once it has been implemented in hardware. This capability matters in applications where input data change continuously and new training runs must be launched frequently, enabling self-adaptation. This work describes a general SLFF-NN design environment that assists in defining the parameters of a neural network hardware implementation, including real-time ELM training. The software design environment takes initial user-provided data describing the problem (a sample dataset with validated results, the input fields, and the required accuracy) and, together with simulation tools, recommends the optimum configuration for the neural topology and automatically generates synthesizable code for the hardware implementation tool. This is made possible by parameter-dependent synthesis code and an optimized hardware architecture for both the neural network and ELM training. Results cover every step of the design flow, from the software tool to the final running device, and, as an application example, an FPGA implementation for real-time detection of the brain area during electrode positioning in Deep Brain Stimulation (DBS) surgery is presented.
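For context, the standard ELM algorithm is non-iterative because the hidden-layer weights are drawn at random and only the output weights are computed, in a single linear solve via the Moore-Penrose pseudoinverse of the hidden-layer output matrix. The sketch below is a minimal software illustration of that idea in plain NumPy; it is not the paper's hardware architecture, and the function names and the tanh activation are illustrative assumptions.

```python
import numpy as np

def elm_train(X, T, n_hidden, rng=np.random.default_rng(0)):
    """Non-iterative ELM training for a single-hidden-layer network.

    X: (n_samples, n_inputs) input matrix
    T: (n_samples, n_outputs) target matrix
    """
    n_inputs = X.shape[1]
    # Hidden-layer input weights and biases are random and never updated.
    W = rng.standard_normal((n_inputs, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)            # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T      # output weights via pseudoinverse (single solve)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass of the trained SLFF network."""
    return np.tanh(X @ W + b) @ beta
```

Because training reduces to one matrix product and one pseudoinverse of fixed size, its latency is bounded and predictable, which is what makes on-chip re-training practical compared with iterative gradient-based methods.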