Evaluation of PNN pattern-layer activation function approximations in different training setups

2019 
The processing of inputs in the first two layers of the probabilistic neural network (PNN) is highly parallel, which makes it well suited to FPGA-based hardware implementations. One of the main obstacles, however, remains the implementation of the nonlinear activation function of the pattern-layer neurons. In the present study, we investigate the applicability of three approximations of the exponential activation function based on look-up tables of different precision, and the effect this has on the training process and the classification accuracy. Furthermore, seeking a highly parallel, hardware-friendly algorithm for the automated adjustment of the spread factor \(\sigma_i\), we evaluate the performance of fifteen PNN training setups based on the differential evolution (DE) and unified particle swarm optimization (UPSO) methods. The experimental evaluation follows a common protocol that makes use of the Parkinson Speech Dataset, as this research aims to support the development of portable medical devices capable of detecting episodes of exacerbation in patients with Parkinson's disease. The performance of the most successful setups is discussed in terms of error rates and from the perspective of the resources required for an FPGA-based implementation.
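To illustrate the idea of replacing the exponential activation of the pattern-layer neurons with a look-up table, the sketch below shows a minimal software model of such a neuron. The table size (256 entries), the clipping range of the exponent argument, and the toy data are illustrative assumptions, not the configuration examined in the paper, and the code is only a reference model of the behavior an FPGA implementation would approximate.

```python
import numpy as np

# Assumed LUT precision and input range for the exponent argument (illustrative).
LUT_BITS = 8
ARG_MAX = 16.0
LUT = np.exp(-np.linspace(0.0, ARG_MAX, 2 ** LUT_BITS))

def exp_lut(arg):
    """Approximate exp(-arg), arg >= 0, by nearest-entry table look-up."""
    idx = np.round(np.clip(arg, 0.0, ARG_MAX) / ARG_MAX * (2 ** LUT_BITS - 1))
    return LUT[idx.astype(int)]

def pattern_layer(x, training_patterns, sigma):
    """Pattern-layer activations for input vector x.

    Each neuron stores one training pattern w and fires
    exp(-||x - w||^2 / (2*sigma^2)); here the exponential is read
    from the look-up table instead of being computed exactly.
    """
    d2 = np.sum((training_patterns - x) ** 2, axis=1)
    return exp_lut(d2 / (2.0 * sigma ** 2))

# Toy usage: two classes, summation layer averages per class, arg-max decides.
rng = np.random.default_rng(0)
patterns = rng.normal(size=(10, 4))
labels = np.array([0] * 5 + [1] * 5)
x = rng.normal(size=4)
act = pattern_layer(x, patterns, sigma=0.8)
scores = [act[labels == c].mean() for c in (0, 1)]
print("predicted class:", int(np.argmax(scores)))
```

In such a model, the spread factor sigma passed to `pattern_layer` is the quantity the paper's DE- and UPSO-based training setups adjust automatically, while the LUT precision (here `LUT_BITS`) is the knob that trades classification accuracy against FPGA resource usage.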