Assessing Robustness of Hyperdimensional Computing Against Errors in Associative Memory (Invited Paper)
2021
Brain-inspired hyperdimensional computing (HDC) is an emerging computational paradigm that has achieved success in various domains. HDC mimics brain cognition and leverages hyperdimensional vectors with fully distributed holographic representation and (pseudo)randomness. Compared to traditional machine learning methods, HDC offers several critical advantages, including smaller model size, lower computation cost, and one-shot learning capability, making it a promising candidate for low-power platforms. Despite the growing popularity of HDC, the robustness of HDC models has not been systematically explored. This paper presents a study on the robustness of HDC to errors in associative memory, the key component storing the class representations in HDC. We perform extensive error injection experiments on the associative memory of a number of HDC models (and datasets), sweeping the error rates and varying HDC configurations (i.e., dimension and data width). Empirically, we observe that HDC is considerably robust to errors in the associative memory, opening up opportunities for further optimizations. Further, results show that HDC robustness varies significantly across HDC configurations such as data width. Moreover, we explore a low-cost error masking mechanism in the associative memory to enhance its robustness.
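A minimal sketch of the kind of error injection experiment described above, assuming binary hypervectors and a Hamming-distance associative memory; the dimension, class count, error rates, and helper names here are illustrative assumptions, not the paper's exact setup.

```python
# Hedged sketch: bit-flip error injection into an HDC associative memory.
# D, NUM_CLASSES, the error rates, and the Hamming-distance classifier are
# illustrative assumptions, not the paper's exact implementation.
import numpy as np

rng = np.random.default_rng(0)

D = 10_000          # hypervector dimension (assumed)
NUM_CLASSES = 10    # number of stored class hypervectors (assumed)

# Associative memory: one binary class hypervector per class.
assoc_mem = rng.integers(0, 2, size=(NUM_CLASSES, D), dtype=np.uint8)

def inject_bit_errors(mem: np.ndarray, error_rate: float) -> np.ndarray:
    """Flip each stored bit independently with probability `error_rate`."""
    flips = rng.random(mem.shape) < error_rate
    return mem ^ flips.astype(np.uint8)

def classify(query: np.ndarray, mem: np.ndarray) -> int:
    """Return the class whose hypervector is nearest in Hamming distance."""
    distances = np.count_nonzero(mem != query, axis=1)
    return int(np.argmin(distances))

# Sweep error rates: a query matching a stored class vector should still map
# back to its class as long as the injected error rate stays modest.
query_class = 3
query = assoc_mem[query_class].copy()
for error_rate in (0.0, 0.05, 0.10, 0.20):
    noisy_mem = inject_bit_errors(assoc_mem, error_rate)
    pred = classify(query, noisy_mem)
    print(f"error_rate={error_rate:.2f} -> predicted class {pred}")
```

In a full experiment, the same sweep would be repeated over trained HDC models, datasets, dimensions, and data widths, with classification accuracy recorded at each error rate.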