Leveraging Posit Arithmetic in Deep Neural Networks

2021 
The IEEE 754 Standard for Floating-Point Arithmetic has been implemented for decades in the vast majority of modern computer systems to manipulate and compute real numbers. Recently, John L. Gustafson introduced a new data type called posit™ to represent real numbers on computers. This emerging format was designed with the aim of replacing IEEE 754 floating-point numbers by providing certain advantages over them, such as a larger dynamic range, higher accuracy, bitwise identical results across systems, and simpler hardware, among others. The interesting properties of the posit format appear to be particularly useful in the context of deep neural networks. In this Master's thesis, the properties of posit arithmetic are studied with the aim of leveraging them for the training and inference of deep neural networks. For this purpose, a framework for neural networks based on the posit format is developed. The results show that posits can achieve accuracy similar to that of floating-point numbers with half the bit width, without modifications to the training and inference flows of deep neural networks. The hardware cost of the posit arithmetic units needed for operating with neural networks (that is, adders and multipliers) is also studied in this work, obtaining significant area and power savings with respect to state-of-the-art implementations.
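As background for the abstract's claims about dynamic range and accuracy, a posit<n, es> value is encoded as a sign bit, a variable-length regime run, up to es exponent bits, and a fraction with a hidden one. The following minimal Python sketch decodes such a bit pattern into a float; the function name, parameter defaults, and the zero-padding of truncated exponent fields are illustrative assumptions and are not taken from the thesis framework.

```python
def decode_posit(bits: int, n: int = 8, es: int = 1) -> float:
    """Minimal sketch: decode an n-bit posit with es exponent bits into a float."""
    mask = (1 << n) - 1
    bits &= mask

    # Special encodings: all zeros is 0, "1 followed by zeros" is NaR (not a real).
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")

    # Negative posits are decoded from the two's complement of the whole pattern.
    sign = bits >> (n - 1)
    if sign:
        bits = (-bits) & mask

    # Regime: run length of identical bits right after the sign bit.
    body = bits & ((1 << (n - 1)) - 1)          # the n-1 bits after the sign
    regime_bit = (body >> (n - 2)) & 1
    run = 0
    while run < n - 1 and ((body >> (n - 2 - run)) & 1) == regime_bit:
        run += 1
    regime = run - 1 if regime_bit else -run

    # Exponent: up to `es` bits after the regime terminator (assumed zero-padded
    # on the right if the field is cut off by the end of the word).
    remaining = max((n - 1) - run - 1, 0)        # bits left for exponent + fraction
    exp_bits = min(es, remaining)
    exponent = (body >> (remaining - exp_bits)) & ((1 << exp_bits) - 1) if exp_bits else 0
    exponent <<= es - exp_bits

    # Fraction: remaining bits, with an implicit leading 1.
    frac_bits = remaining - exp_bits
    fraction = body & ((1 << frac_bits) - 1) if frac_bits else 0
    mantissa = 1.0 + (fraction / (1 << frac_bits) if frac_bits else 0.0)

    useed = 2 ** (2 ** es)
    value = (useed ** regime) * (2 ** exponent) * mantissa
    return -value if sign else value


# Example: for posit<8,1>, 0b01000000 decodes to 1.0 and 0b01111111 to 4096.0
# (useed**(n-2)), which hints at the wide dynamic range the abstract refers to.
print(decode_posit(0b01000000), decode_posit(0b01111111))
```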