Smarter Caching for Better Performance

A fast processor is useless if it cannot quickly access data, which is why CPUs have local caches built in so that each core has the data it needs close at hand. Unfortunately, while processor speeds have increased, the speed at which cores communicate with their caches has not kept pace. Researchers at MIT, however, have developed two ways to use caches more intelligently and thus improve performance.

Multi-core chips have caches at multiple levels: one level sits alongside each core, while another holds data for the entire processor to access. The protocol for storing data follows the principle of spatiotemporal locality, which assumes that if a core accesses a piece of data once, it will likely request it again, and will likely request nearby data in main memory as well. This works well much of the time, but it fails in some cases, such as when the data a core is working on exceeds its local cache. One of the MIT proposals would split such data between the local cache and the last-level cache (LLC), which serves the entire chip, so that it is not unnecessarily swapped back and forth. And if multiple cores need access to the same data, that data would be stored in the LLC instead of in the local caches, whose separate copies would otherwise require constant updating.
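The placement logic described above can be sketched in a few lines. The following is an illustrative toy model only, not MIT's actual protocol: the class, its capacity threshold, and the return labels are all invented for the example. Private data stays in a core's local cache, spills to the LLC when the working set exceeds local capacity, and shared data is kept as a single LLC copy rather than duplicated across local caches.

```python
# Toy sketch of a placement policy in the spirit of the scheme above.
# All names and thresholds here are illustrative assumptions.

class ToyCacheHierarchy:
    def __init__(self, local_capacity=4):
        self.local_capacity = local_capacity  # blocks per core's local cache
        self.local = {}                       # core_id -> set of block addresses
        self.llc = set()                      # blocks held in the shared LLC
        self.sharers = {}                     # block -> cores that have touched it

    def access(self, core_id, block):
        cores = self.sharers.setdefault(block, set())
        cores.add(core_id)
        if len(cores) > 1:
            # Shared data: keep one copy in the LLC rather than duplicating
            # it in several local caches that would need constant updating.
            for local in self.local.values():
                local.discard(block)
            self.llc.add(block)
            return "llc"
        local = self.local.setdefault(core_id, set())
        if len(local) < self.local_capacity:
            local.add(block)  # private data stays close to the core
            return "local"
        # Working set exceeds the local cache: split it with the LLC
        # instead of endlessly evicting and refetching locally.
        self.llc.add(block)
        return "llc-spill"
```

For example, a block touched only by core 0 lands in core 0's local cache; once core 1 touches the same block, the policy moves it to the LLC so there is a single copy to keep up to date.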

The other proposed caching method would change how the LLC is treated. Instead of using it as a single memory bank, in which each piece of data is stored only once and spread across the chip, data would be copied to the banks nearest the cores that need it. This should improve performance in those situations where multiple cores infrequently need access to the same data.
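The benefit of replication can be shown with a crude cost model. This is an illustrative sketch only, with invented functions and numbers: bank positions stand in for locations on the chip, and the distance to the nearest bank holding a copy stands in for on-chip communication cost.

```python
# Toy model contrasting a single shared LLC copy with copies replicated
# into the bank nearest each core. Hop counts are a crude stand-in for
# on-chip communication cost; all names here are assumptions.

def access_cost(core_bank, data_banks):
    # Cost of one read: distance to the nearest bank holding a copy.
    return min(abs(core_bank - b) for b in data_banks)

def total_cost(reads, data_banks):
    # reads: bank positions of the cores issuing each read
    return sum(access_cost(c, data_banks) for c in reads)

reads = [0, 1, 2, 3] * 10                      # four cores reading one block
single_copy = total_cost(reads, {0})           # one copy, in bank 0 only
replicated = total_cost(reads, {0, 1, 2, 3})   # a copy near every core
```

With a single copy in bank 0, cores in banks 1 through 3 pay a distance penalty on every read; with a copy in each core's nearby bank, every read is satisfied locally.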

Though these are separate methods, the researchers are working to integrate both into the same chip. Because both require actively monitoring the processor's operation, additional circuitry would have to be added, amounting to about five percent of the size of the LLC. But as transistors keep shrinking and on-chip communication becomes more important, chip area is becoming a less crucial concern.