Tag: Pantheon

Musings on the WordPress REST API. I did a webinar (gag) with Pantheon covering real-world examples, writing custom endpoints, and including test coverage. Turned out surprisingly well, if I do say so myself.

Happy to announce the latest and greatest of WP LCache, [v0.5.0](https://github.com/lcache/wp-lcache/releases/tag/v0.5.0).

This release focuses predominantly on splitting WordPress’ alloptions cache into separate cache keys to mitigate cache pollution caused by race conditions. [Read #31245](https://core.trac.wordpress.org/ticket/31245) for all of the gory details. Thanks to [Ryan](https://twitter.com/rmccue) and [Joe](https://twitter.com/joe_hoyle) for paving the way.

If you haven’t thought about WP LCache since the [v0.1.0 announcement](https://handbuilt.co/2016/09/08/introducing-wp-lcache-v0-1-0/), you’re missing out. WP LCache is faster than other object cache implementations because:

* By using APCu, which is in-memory, WP LCache uses the fastest possible persistent object cache backend and avoids costly network connections on every request. With a Memcached- or Redis-based persistent object cache running on a different machine, the millisecond cost of each cache hit can add up to seconds of network transactions on every request.
* By incorporating a common L2 cache, WP LCache synchronizes cache data between multiple web nodes. Cache updates or deletes on one node are then applied to all other nodes. Without this synchronization behavior, APCu can’t be used in server configurations with multiple web nodes because the cache pool is local to the machine.
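To make the two bullets above concrete, here's an illustrative sketch — not LCache's actual API — where a local L1 array stands in for APCu and a shared L2 array stands in for the central store. Reads check L1 first and fall back to L2; writes go through to L2 so other "nodes" sharing it see them.

```php
<?php
// Illustrative only: TieredCache, the arrays, and the node names are all
// hypothetical stand-ins for the real LCache tiers.
class TieredCache {
    private $l1 = [];   // per-node cache (APCu's role in WP LCache)
    private $l2;        // shared cache (the database's role in WP LCache)

    public function __construct( array &$l2 ) {
        $this->l2 = &$l2; // all nodes reference the same shared store
    }

    public function get( $key ) {
        if ( array_key_exists( $key, $this->l1 ) ) {
            return $this->l1[ $key ];                     // L1 hit: no network trip
        }
        if ( array_key_exists( $key, $this->l2 ) ) {
            return $this->l1[ $key ] = $this->l2[ $key ]; // promote L2 hit into L1
        }
        return false;                                     // miss in both tiers
    }

    public function set( $key, $value ) {
        $this->l1[ $key ] = $value;
        $this->l2[ $key ] = $value; // write-through, so other nodes can see it
    }

    public function delete( $key ) {
        // Drop from both tiers; in real LCache, the L2 change log is what
        // lets the other nodes clear their stale L1 copies too.
        unset( $this->l1[ $key ], $this->l2[ $key ] );
    }
}

// Two "web nodes" sharing one L2: a write on node A is visible on node B.
$shared = [];
$node_a = new TieredCache( $shared );
$node_b = new TieredCache( $shared );
$node_a->set( 'greeting', 'hello' );
var_dump( $node_b->get( 'greeting' ) ); // string(5) "hello"
```

Without the shared L2 tier, node B's lookup would simply miss — which is exactly why plain APCu can't be used across multiple web nodes.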

Still not convinced? WP LCache has a variety of features no other object cache implementation offers, including native handling of cache groups, meaning you can delete an entire group of keys with `wp_cache_delete_group( $group );`.
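Here's a self-contained sketch of what group deletion buys you. The stub functions below stand in for the WordPress object cache API so the example runs outside WordPress; with WP LCache active, `wp_cache_set()`, `wp_cache_get()`, and `wp_cache_delete_group()` are provided for you, and the group name is just a hypothetical example.

```php
<?php
// Minimal stand-ins for the WordPress object cache API (illustration only).
$GLOBALS['wp_object_cache'] = [];

function wp_cache_set( $key, $value, $group = 'default' ) {
    $GLOBALS['wp_object_cache'][ $group ][ $key ] = $value;
}

function wp_cache_get( $key, $group = 'default' ) {
    return $GLOBALS['wp_object_cache'][ $group ][ $key ] ?? false;
}

function wp_cache_delete_group( $group ) {
    unset( $GLOBALS['wp_object_cache'][ $group ] );
}

// Cache several related fragments under one group...
wp_cache_set( 'post-5', '<p>rendered</p>', 'my-fragments' );
wp_cache_set( 'post-9', '<p>rendered</p>', 'my-fragments' );

// ...then invalidate them all in one call instead of deleting key by key.
wp_cache_delete_group( 'my-fragments' );
var_dump( wp_cache_get( 'post-5', 'my-fragments' ) ); // bool(false)
```

Without group support, you'd have to track every key you set and delete them one at a time.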

Props again to [David Strauss](https://twitter.com/davidstrauss) for his ongoing work on the LCache library, and to [Pantheon](https://pantheon.io/) for sponsoring open source infrastructure.

Under the hood, WP LCache uses [LCache](https://github.com/lcache/lcache), a library that applies the tiered caching model of multi-core processors (with local L1 and central L2 caches) to web applications. In this particular configuration, APCu is the L1 cache and the database is the L2 cache.

APCu is the fastest persistent cache backend you can use because it exists in PHP memory. However, APCu traditionally can't be used across multiple web nodes because each node represents a different cache pool. Because WP LCache has a database-backed L2 cache, a cache update or delete on one node is automatically synchronized to all other nodes.

Props to [David Strauss](https://twitter.com/davidstrauss) for his hard work on the LCache library. Thanks to [Steve Persch](https://twitter.com/stevector) and [Josh Koenig](https://twitter.com/outlandishjosh) for their help with the WordPress implementation.

A quote I put together for an upcoming Pantheon whitepaper on scaling WordPress.

Used appropriately, a persistent object cache like Redis or Memcached can be an incredibly helpful tool for scaling your WordPress site.

Say, for instance, you have an unavoidable query that takes an entire second to run. Or you need to make an API request to a service that's notoriously slow. You can mitigate the performance impact in both cases by storing the result of the operation in the persistent object cache, and then using the stored value when rendering your response.

Like a fine wine with steak, persistent object caches like Redis or Memcached are best paired with complex data generation. They provide a useful place to store computed data that was expensive to create, and, like that fine wine, they don't make sense to waste on a cheap meal.