Characterization of data compression across CPU platforms and accelerators
Main authors:
Language: English
Published: 2021
Subjects:
Online access: https://dx.doi.org/10.1002/cpe.6465 http://cds.cern.ch/record/2809706
Summary: The ever-increasing amount of generated data makes it more and more beneficial to utilize compression to trade computations for data movement and reduced storage requirements. Lately, dedicated accelerators have been introduced to offload compression tasks from the main processor. However, research is lacking when it comes to the system costs of incorporating compression. This is especially true for the influence of the CPU platform and accelerators on the compression. This work will show that for general-purpose lossless compression algorithms the following can be recommended: (1) snappy for high throughput, but low compression ratio; (2) zstandard level 2 for moderate throughput and compression ratio; (3) xz level 5 for low throughput, but high compression ratio. And it will show that the selected platforms (ARM, IBM or Intel) have no influence on the algorithms' performance. Furthermore, it will show that the accelerator's zlib implementation achieves a compression ratio comparable to zlib level 2 on a CPU, while having up to 17× the throughput and utilizing over 80% less CPU resources. This suggests that the overhead of offloading compression is limited but present. Overall, this work will allow system designers to identify deployment opportunities for compression while considering integration constraints.
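The trade-off described in the summary (throughput versus compression ratio per algorithm and level) can be illustrated with a minimal measurement sketch. The snippet below is not from the paper or the record; it uses only the Python standard library to report compression ratio and throughput for two of the recommended settings (zlib level 2 and xz level 5). The synthetic payload and the `benchmark` helper are assumptions for the example; snappy and zstandard would require third-party packages (e.g. python-snappy, zstandard) and are omitted here, and real measurements should use representative data sets.

```python
# Illustrative sketch only: compare compression ratio and throughput for
# two codec settings mentioned in the abstract, using the standard library.
import lzma
import time
import zlib


def benchmark(name, compress, data):
    """Report compression ratio and input throughput for one codec setting."""
    start = time.perf_counter()
    compressed = compress(data)
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(compressed)
    throughput_mb_s = len(data) / elapsed / 1e6  # MB of input consumed per second
    print(f"{name}: ratio {ratio:.2f}, throughput {throughput_mb_s:.1f} MB/s")


if __name__ == "__main__":
    # Synthetic, mildly repetitive payload (an assumption for this sketch).
    data = b"sensor-reading:12345;" * 50_000

    benchmark("zlib level 2", lambda d: zlib.compress(d, 2), data)
    benchmark("xz level 5", lambda d: lzma.compress(d, preset=5), data)
```

As the summary suggests, the faster setting typically yields the lower compression ratio, and the absolute numbers depend on the input data and the CPU platform or accelerator used.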