11 Near Cache

A Near Cache combines the best of both worlds: the extreme performance of the Replicated Cache Service and the extreme scalability of the Partitioned Cache Service, by providing fast read access to Most Recently Used (MRU) and Most Frequently Used (MFU) data. The Near Cache wraps two caches: a "front cache" and a "back cache" that automatically and transparently communicate with each other by using a read-through/write-through approach.

The "front cache" provides local cache access. It is assumed to be inexpensive, in that it is fast, and is limited in terms of size. The "back cache" can be a centralized or multitiered cache that can load-on-demand in case of local cache misses. The "back cache" is assumed to be complete and correct in that it has much higher capacity, but more expensive in terms of access speed. The use of a Near Cache is not confined to Coherence*Extend; it also works with TCMP.
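The read-through/write-through interaction between the two tiers can be sketched in plain Java. This is an illustrative model only, not the Coherence API: the class name and maps here are stand-ins for the front and back tiers described above.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the read-through/write-through pattern a Near Cache
// uses: a small, fast "front" map backed by a larger, authoritative
// "back" map. All names are illustrative, not the Coherence API.
public class NearCacheSketch<K, V> {
    private final Map<K, V> front = new HashMap<>(); // fast local tier
    private final Map<K, V> back;                    // complete, slower tier

    public NearCacheSketch(Map<K, V> back) {
        this.back = back;
    }

    // Write-through: update both tiers so the back cache stays complete
    // and correct.
    public void put(K key, V value) {
        back.put(key, value);
        front.put(key, value);
    }

    // Read-through: serve from the front tier; on a miss, load from the
    // back tier and populate the front tier for subsequent reads.
    public V get(K key) {
        V value = front.get(key);
        if (value == null) {
            value = back.get(key);
            if (value != null) {
                front.put(key, value);
            }
        }
        return value;
    }
}
```

A real Near Cache adds size limits, expiry, and the invalidation strategies described later in this chapter; this sketch shows only the two-tier data flow.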

This design allows Near Caches to configure levels of cache coherency, from the most basic expiry-based caches and invalidation-based caches, up to advanced data-version caches that can provide guaranteed coherency. The result is a tunable balance between the preservation of local memory resources and the performance benefits of truly local caches.

The typical deployment uses a Local Cache for the "front cache". A Local Cache is a reasonable choice because it is thread safe, highly concurrent, size-limited, and auto-expiring and stores the data in object form. For the "back cache", a remote, partitioned cache is used.

The following figure illustrates the data flow in a Near Cache. If the client writes an object D into the grid, the object is placed in the local cache inside the local JVM and in the partitioned cache which is backing it (including a backup copy). If the client requests the object, it can be obtained from the local, or "front cache", in object form with no latency.

Figure 11-1 Put Operations in a Near Cache Environment

If the client requests an object that has been expired or invalidated from the "front cache", then Coherence automatically retrieves the object from the partitioned cache. The "front cache" is updated with the object and then the object is delivered to the client.

Figure 11-2 Get Operations in a Near Cache Environment

11.1 Near Cache Invalidation Strategies

An invalidation strategy keeps the "front cache" of the Near Cache synchronized with the "back cache." The Near Cache can be configured to listen to certain events in the back cache and automatically update or invalidate entries in the front cache. Depending on the interface that the back cache implements, the Near Cache provides four different strategies for invalidating front cache entries that have been changed by other processes in the back cache.

None

This strategy instructs the cache not to listen for invalidation events at all. This is the best choice for raw performance and scalability when business requirements permit the use of data which might not be absolutely current. Freshness of data can be guaranteed by using a sufficiently brief eviction policy for the front cache.

Present

This strategy instructs the Near Cache to listen to the back cache events related only to the items currently present in the front cache. This strategy works best when each instance of a front cache contains a distinct subset of data relative to the other front cache instances (for example, sticky data access patterns).

All

This strategy instructs the Near Cache to listen to all back cache events. This strategy is optimal for read-heavy tiered access patterns where there is significant overlap between the different instances of front caches.

Auto

This strategy instructs the Near Cache to switch automatically between Present and All strategies based on the cache statistics.
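The strategy is selected with the `<invalidation-strategy>` sub-element of `<near-scheme>`. The fragment below is illustrative; the scheme name and size limit are placeholder values.

```xml
<near-scheme>
  <scheme-name>example-near</scheme-name>
  <front-scheme>
    <local-scheme>
      <high-units>1000</high-units>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
    </distributed-scheme>
  </back-scheme>
  <!-- one of: none, present, all, auto -->
  <invalidation-strategy>present</invalidation-strategy>
</near-scheme>
```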

11.2 Configuring the Near Cache

A Near Cache is configured by using the <near-scheme> element in the coherence-cache-config file. This element has two required sub-elements: <front-scheme> for configuring a local (front-tier) cache and <back-scheme> for defining a remote (back-tier) cache. While a local cache (<local-scheme>) is a typical choice for the front tier, you can also use schemes based on Java objects (<class-scheme>) and, other than for .NET and C++ clients, non-JVM heap-based caches (<external-scheme> or <paged-external-scheme>).

The remote or back-tier cache is described by the <back-scheme> element. A back-tier cache can be either a distributed cache (<distributed-scheme>) or a remote cache (<remote-cache-scheme>). The <remote-cache-scheme> element enables you to use a clustered cache from outside the current cluster.

Optional sub-elements of <near-scheme> include <invalidation-strategy> for specifying how the front-tier and back-tier objects are kept synchronized and <listener> for specifying a listener that is notified of events occurring on the cache.

11.4 Cleaning Up Resources Associated with a Near Cache

Instances of all NamedCache implementations, including NearCache, should be explicitly released by calling the NamedCache.release() method when they are no longer needed. This frees any resources they might hold.

If a particular NamedCache is used for the duration of the application, then its resources are cleaned up when the application is shut down or otherwise stops. However, if the cache is used for only part of the application's lifetime, the application should call its release() method when it is finished using the cache.
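The acquire-use-release pattern can be sketched as follows. To keep the example self-contained, the NamedCacheLike interface below is a hypothetical stand-in defined in the sketch itself; in Coherence you would call NamedCache.release() on the cache reference you obtained.

```java
// Illustrative acquire-use-release pattern for a short-lived cache
// reference. NamedCacheLike is a stand-in interface for this sketch,
// not the Coherence NamedCache API.
public class ReleaseSketch {
    interface NamedCacheLike {
        Object get(Object key);
        void release(); // frees local resources held by this reference
    }

    // Use the cache briefly, then release it even if get(...) throws.
    static Object readOnce(NamedCacheLike cache, Object key) {
        try {
            return cache.get(key);
        } finally {
            cache.release();
        }
    }
}
```

The try/finally block ensures release() runs on every code path, which is the intent behind releasing a NamedCache that is used for only part of the application's lifetime.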

11.5 Sample Near Cache Configuration

The following sample code illustrates the configuration of a Near Cache. Sub-elements of <near-scheme> define the Near Cache. Note the use of the <front-scheme> element for configuring a local (front) cache and a <back-scheme> element for defining a remote (back) cache. See the <near-scheme> topic for a description of the Near Cache elements.
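A representative configuration might look like the following sketch. The cache name, scheme names, and size limits are illustrative placeholders, not required values.

```xml
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>example-near-cache</cache-name>
      <scheme-name>example-near</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <near-scheme>
      <scheme-name>example-near</scheme-name>
      <front-scheme>
        <local-scheme>
          <eviction-policy>HYBRID</eviction-policy>
          <high-units>10000</high-units>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <distributed-scheme>
          <scheme-name>example-distributed</scheme-name>
          <backing-map-scheme>
            <local-scheme/>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
      </back-scheme>
      <invalidation-strategy>auto</invalidation-strategy>
    </near-scheme>
  </caching-schemes>
</cache-config>
```

Here the front tier is a size-limited local cache, the back tier is a partitioned (distributed) cache, and the invalidation strategy is left to switch automatically between Present and All.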