Abstract

The idea of transparently compressing and decompressing the contents of main memory to virtually enlarge its capacity has been
previously proposed and studied in the literature. The rationale behind this idea lies in the nature of applications whose performance is memory- or disk-bound. For this kind of application it is acceptable to spend CPU cycles compressing and decompressing data on the fly, thus increasing the available memory. This additional memory capacity can allow the execution of larger applications without swapping, or can significantly reduce the number of disk
accesses for applications whose working set largely exceeds the main memory.
Previous studies of this idea can be classified as either software based or hardware based. The software approach is
usually implemented at the operating system level and runs on top of commodity hardware. The hardware approach relies on
modified or specialized hardware not present in current systems.
The main advantage of the software approach is that it can run on unmodified commodity systems, while the hardware approach needs ad hoc hardware but usually provides better performance for a larger set of applications. Although both approaches have
been proven effective for some workloads, neither has been widely used in production systems.
In the current scenario of many-core systems and heterogeneous processors, the flexibility of a software approach and the
performance of a hardware approach can be combined to boost the real applicability of main memory compression. In this paper we
propose and implement a software memory compression system for the Linux kernel that offloads the CPU-intensive compression
task to the specialized processing units present in the Cell/B.E. We have evaluated our hybrid proposal with the IOzone benchmark, obtaining a 5x speedup with 80% of the system memory used as a compressed cache.