Polynomial Data Compression for Large-Scale Physics Experiments

2018 
The next generation of research experiments will add a massive data surge to the continuously increasing data production of current experiments. This surge necessitates efficient compression techniques that guarantee an optimal trade-off between compression ratio and compression/decompression speed, without affecting data integrity. This work presents a lossless compression algorithm for physics data generated by astronomy, astrophysics, and particle physics experiments. The developed algorithms have been tuned and tested on a real use case: the Cherenkov Telescope Array, the next-generation, ground-based, high-energy gamma-ray observatory, which requires high compression performance. As a stand-alone method, the proposed compression is very fast and reasonably efficient. Alternatively, applied as a pre-compression stage, it can accelerate common methods such as the Lempel–Ziv–Markov chain algorithm (LZMA) while achieving comparable compression ratios.
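The abstract does not spell out the packing scheme, but the idea of polynomial compression for bounded integer samples, optionally followed by a general-purpose compressor such as LZMA, can be sketched as follows. This is a minimal illustration under the assumption that each sample lies in a known range [0, base); the function names `poly_pack` and `poly_unpack` are hypothetical, not from the paper.

```python
import lzma

def poly_pack(values, base):
    """Pack small non-negative integers (each < base) into one big
    integer by evaluating a polynomial in `base` (Horner's scheme)."""
    acc = 0
    for v in values:
        if not 0 <= v < base:
            raise ValueError("value out of range for chosen base")
        acc = acc * base + v
    return acc

def poly_unpack(packed, base, count):
    """Invert poly_pack: recover `count` values from the packed integer."""
    out = []
    for _ in range(count):
        packed, v = divmod(packed, base)
        out.append(v)
    return list(reversed(out))

# Lossless round trip on a toy waveform (base chosen to cover the range)
samples = [123, 599, 0, 42, 317]
packed = poly_pack(samples, base=600)
assert poly_unpack(packed, 600, len(samples)) == samples

# Optional second stage: hand the densely packed bytes to LZMA
raw = packed.to_bytes((packed.bit_length() + 7) // 8, "big")
compressed = lzma.compress(raw)
assert int.from_bytes(lzma.decompress(compressed), "big") == packed
```

Because the polynomial stage already removes the slack bits of each sample, the downstream LZMA pass operates on a denser byte stream, which is one plausible reading of the claimed speed-up when the method is used as a pre-compression step.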