Data compression aims to satisfy the modern world's need for speed and efficiency by reducing the cost of storing and transmitting information. Over the past few years, there have been several attempts to improve the performance and reduce the execution times of older compression algorithms by adapting them to parallel programming architectures and hardware. The prevalent trend, however, is to focus on compression; the reverse process of decompression is largely ignored. In this report, we develop and compare two versions of the LZ77 decompression algorithm: one that executes sequentially on a Central Processing Unit (CPU) and one that runs in parallel on a Graphics Processing Unit (GPU). Our aim is to investigate the running times of the two algorithms, as well as the memory transfer and thread synchronisation costs of the GPU implementation, in order to determine whether offloading this operation to the GPU yields any significant performance gains. We show experimentally that the serial implementation consistently outperforms the parallel one, owing to the significant memory access and thread waiting times incurred when resolving the data dependencies present in the LZ77-encoded input. Even though memory transfer times remain relatively constant as the input size grows, we conclude that the LZ77 decompression algorithm cannot be parallelised efficiently with an approach that relies on in-memory synchronisation.

