Reduced Overhead Error Compensation for Energy Efficient Machine Learning Kernels

2015 
Low-overhead error-resiliency techniques such as RAZOR [1] and algorithmic noise tolerance (ANT) [2] have proven effective in reducing energy consumption, and ANT has been shown to be particularly effective for signal processing and machine learning kernels. In ANT, an explicit estimator block compensates for large-magnitude errors in a main block. The estimator represents the overhead of ANT and can be as large as 30%. This paper presents a low-overhead ANT technique referred to as ALG-ANT, in which the estimator is embedded inside the main block via algorithmic reformulation, thereby completely eliminating the estimator overhead of conventional ANT. ALG-ANT is, however, algorithm-specific. This paper demonstrates the ALG-ANT concept in the context of a finite impulse response (FIR) filter kernel and a dot-product kernel, both of which are commonly employed in signal processing and machine learning applications. The proposed ALG-ANT FIR filter and dot-product kernels are applied to the feature extractor (FE) and the SVM classification engine (CE) of an EEG seizure classification system. Simulation results in a commercial 45 nm CMOS process show that ALG-ANT can compensate for error rates of up to 0.41 (errors in FE only) and up to 0.19 (errors in both FE and CE) while maintaining a true positive rate p_tp > 0.9 and a false positive rate p_fp ≤ 0.01. This represents an improvement of more than three orders of magnitude in error tolerance over the conventional architecture. This error tolerance is exploited to reduce energy via voltage overscaling (VOS): ALG-ANT achieves 44.3% energy savings when errors are in FE only, and up to 37.1% savings when errors are in both FE and CE.
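As context for the abstract, the following is a minimal Python sketch of the conventional ANT scheme of [2] that ALG-ANT improves upon: an error-prone main FIR filter is paired with a reduced-precision estimator, and each output sample falls back to the estimator whenever the two disagree by more than a threshold. All parameter values (filter taps, error rate, error magnitude, threshold tau) are illustrative assumptions, and the error-injection model is a crude stand-in for voltage-overscaling timing errors; this sketch is not the paper's ALG-ANT reformulation, which embeds the estimator inside the main block.

```python
import numpy as np

rng = np.random.default_rng(0)

def fir(x, h):
    # Direct-form FIR filter: y[n] = sum_k h[k] * x[n-k]
    return np.convolve(x, h)[: len(x)]

def inject_errors(y, p_err, magnitude):
    # Model VOS timing errors as sparse, large-magnitude additive errors
    # (a common abstraction; an assumption of this sketch, not the paper's model).
    mask = rng.random(len(y)) < p_err
    signs = rng.choice([-1.0, 1.0], size=len(y))
    return y + mask * magnitude * signs

def ant_correct(y_main, y_est, tau):
    # ANT decision rule: keep the main-block output unless it deviates from
    # the estimator by more than tau, in which case use the estimator output.
    return np.where(np.abs(y_main - y_est) < tau, y_main, y_est)

# Toy experiment (all values are illustrative assumptions).
h = np.array([0.1, 0.25, 0.3, 0.25, 0.1])  # low-pass FIR taps
x = rng.standard_normal(1024)

y_exact = fir(x, h)
y_main = inject_errors(y_exact, p_err=0.2, magnitude=8.0)  # error-prone main block
y_est = fir(np.round(x * 4) / 4, np.round(h * 4) / 4)      # reduced-precision estimator

y_hat = ant_correct(y_main, y_est, tau=1.0)
print("MSE without ANT:", np.mean((y_main - y_exact) ** 2))
print("MSE with ANT:   ", np.mean((y_hat - y_exact) ** 2))
```

In this baseline scheme the estimator is a separate, cheaper replica of the main block, which is precisely the hardware overhead that ALG-ANT eliminates by reformulating the kernel so that an estimate is available from within the main computation itself.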