Areas of Interest

Most of these areas have wider-reaching implications, but are comparatively simple to apply in the embedded case, largely because there is no need to contend with swap and similar complications. This, combined with vendors who are not afraid to deviate from mainline for product programs, makes embedded systems an excellent playground for experimenting with new ideas in the memory management and virtual memory space.

Huge/large/superpages

This applies both to transparent large page usage and to the more static usage models, primarily relating to work outside of the hugetlb interface/libhugetlbfs.

Embedded systems generally suffer from very small TLBs that use PAGE_SIZE'd pages (4 kB) for coverage. In most cases this places the system under very heavy pressure for any kind of userspace work and visibly degrades performance, with most applications spending anywhere from 5-40% of their CPU time servicing page faults.

Preliminary discussion of this subject, along with links to additional information, is taking place on the wiki here: Huge Pages

Page cache compression

This relates to using various compression algorithms to perform run-time compression and decompression of page cache pages, aimed both at reducing memory pressure and at improving performance for certain workloads.