Flexpoint: Predictive Numerics for Deep Learning

2018 
Deep learning has been undergoing rapid growth in recent years thanks to its state-of-the-art performance across a wide range of real-world applications. Traditionally, neural networks were trained in the IEEE-754 binary64 or binary32 format, a common practice in general scientific computing. However, the unique computational requirements of deep neural network training workloads allow for much more efficient and inexpensive alternatives, unleashing a new wave of numerical innovations powering specialized computing hardware. We previously presented Flexpoint, a blocked fixed-point data type combined with a novel predictive exponent management algorithm designed to support training of deep networks without modifications, aiming at a seamless replacement of the binary32 format widely used in practice today. We showed that Flexpoint with a 16-bit mantissa and a 5-bit shared exponent (flex16+5) achieved numerical parity with binary32 in training a number of convolutional neural networks. In the current paper we review the continuing trend of predictive numerics enhancing deep neural network training in specialized computing devices such as the Intel® Nervana™ Neural Network Processor.
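To make the blocked fixed-point idea concrete, the following is a minimal sketch of quantizing a tensor to 16-bit mantissas with a single shared exponent, in the spirit of flex16+5. The function names (quantize_flex, dequantize_flex) are illustrative, and the exponent here is chosen directly from the current maximum magnitude rather than by the paper's predictive exponent management algorithm, which forecasts the exponent from the history of observed maxima.

```python
import numpy as np

def quantize_flex(tensor, mantissa_bits=16):
    """Quantize a tensor to blocked fixed point with one shared exponent.

    Illustrative sketch only: the shared exponent is picked so the largest
    magnitude fits in the signed mantissa range; Flexpoint instead predicts
    the exponent ahead of time from statistics of previous iterations.
    """
    max_mantissa = 2 ** (mantissa_bits - 1) - 1          # 32767 for 16 bits
    max_abs = np.max(np.abs(tensor))
    if max_abs == 0.0:
        return np.zeros(tensor.shape, dtype=np.int16), 0
    # Smallest power-of-two scale that keeps every value within the mantissa range.
    exponent = int(np.ceil(np.log2(max_abs / max_mantissa)))
    scale = 2.0 ** exponent
    mantissas = np.clip(np.round(tensor / scale), -max_mantissa, max_mantissa)
    return mantissas.astype(np.int16), exponent

def dequantize_flex(mantissas, exponent):
    """Reconstruct approximate binary32 values from mantissas and the shared exponent."""
    return mantissas.astype(np.float32) * np.float32(2.0 ** exponent)

# Usage: quantize a small weight tensor and inspect the reconstruction error.
weights = np.random.randn(4, 4).astype(np.float32)
m, e = quantize_flex(weights)
approx = dequantize_flex(m, e)
print("shared exponent:", e, "max abs error:", np.max(np.abs(weights - approx)))
```

Because all values in a tensor share one exponent, arithmetic on the device reduces to integer operations on the mantissas, which is what makes the format attractive for specialized training hardware.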