This documentation is about Terracotta DSO, an advanced distributed-computing technology aimed at meeting special clustering requirements.

Standard Terracotta products meet the needs of almost all use cases and clustering requirements without the overhead and complexity of DSO. To learn how to migrate from Terracotta DSO to standard Terracotta products, see Migrating From Terracotta DSO. To find documentation on non-DSO (standard) Terracotta products, see Terracotta Documentation. Terracotta release information, such as release notes and platform compatibility, is found in Product Information.

Terracotta Distributed Cache

Introduction

The Terracotta Distributed Cache is an interface providing a simple distributed eviction solution for map elements. The Distributed Cache, implemented with the Terracotta Integration Module tim-distributed-cache, provides a number of advantages over more complex solutions:

Simple – API is easy to understand and code against.

Distributed – Eviction is distributed along with data to maintain coherence.

Standard – Data eviction is based on standard expiration metrics.

Lightweight – Implementation consumes minimal resources.

Efficient – Optimized for a clustered environment to minimize faulting due to low locality of reference.

Fail-Safe – Data can be evicted even if written by a failed node or after all nodes have been restarted.

How to Implement and Configure

Under the appropriate conditions, the Terracotta Distributed Cache can be used in any Terracotta cluster. If your application can use the Distributed Cache's built-in Map implementation for a cache, you can avoid having to customize your own data structure. See A Simple Distributed Cache for instructions on using the Distributed Cache with the provided Map implementation.

Characteristics and Requirements

The Terracotta Distributed Cache has the following eviction parameters:

Time to Live – The Time to Live (TTL) value determines the maximum amount of time an object can remain in the cache before becoming eligible for eviction, regardless of other conditions such as use.

Time to Idle – The Time to Idle (TTI) value determines the maximum amount of time an object can remain idle in the cache before becoming eligible for eviction. TTI is reset each time the object is used.

Target Max In-Memory Count – The maximum number of elements allowed in a region in any one client (any one application server). If this target is exceeded, eviction occurs to bring the count within the allowed target. Using a target as opposed to a hard limit improves concurrency. A value of 0 means no eviction takes place (infinite size is allowed).

Target Max Total Count – The maximum total number of elements allowed for a region across all clients (all application servers). If this target is exceeded, eviction occurs to bring the count within the allowed target. Using a target as opposed to a hard limit improves concurrency. A value of 0 means no eviction takes place (infinite size is allowed).
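Taken together, the TTL and TTI values amount to a simple eligibility test. The following minimal sketch illustrates the semantics described above; it assumes millisecond timestamps and that a value of 0 disables a check (mirroring the count targets), and is not the tim-distributed-cache implementation:

    // Sketch of the eviction-eligibility test implied by TTL and TTI.
    // Assumes 0 disables a check, as with the count targets above.
    public class EvictionCheck {
        static boolean eligible(long now, long createdAt, long lastUsedAt,
                                long ttlMillis, long ttiMillis) {
            boolean ttlExpired = ttlMillis > 0 && now - createdAt >= ttlMillis;
            boolean ttiExpired = ttiMillis > 0 && now - lastUsedAt >= ttiMillis;
            return ttlExpired || ttiExpired;
        }
    }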

To learn how to configure these eviction parameters, see Usage Pattern.

The Terracotta Distributed Cache requires JDK 1.5 or greater.

Using Your Own Map Implementation

If you choose not to use the provided Map implementation, you must provide your own data structure and take the following steps:

Use a partial-loading data structure for the evictor to target (see [Clustered Data Structures Guide]).

Write start/stop thread-management code to run the evictor (a minimal sketch follows this list).

Include the code from ConcurrentDistributedMap that performs local eviction (see the tim-distributed-cache library).

Implement the Evictor interface from tim-distributed-cache.
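The thread-management code in the second step might look like the following minimal sketch. The Evictor interface comes from tim-distributed-cache, but its package and the evict() entry point shown here are assumptions for illustration:

    // Minimal start/stop thread management for an evictor.
    // Import the Evictor interface from the tim-distributed-cache JAR;
    // the evict() call below is an assumed entry point.
    public class EvictorRunner implements Runnable {
        private final Evictor evictor;
        private volatile boolean running = true;

        public EvictorRunner(Evictor evictor) {
            this.evictor = evictor;
        }

        public void run() {
            while (running) {
                evictor.evict();             // assumed eviction entry point
                try {
                    Thread.sleep(60 * 1000); // sweep roughly once a minute
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    running = false;
                }
            }
        }

        public void stop() {
            running = false;
        }
    }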

See the following sections for an example of how the Terracotta Distributed Cache is intended to function with its built-in Map implementation.

Installing the TIM

To use the Terracotta Distributed Cache, you must both install tim-distributed-cache and include the evictor JAR file in your classpath.

To install the TIM, run the following command from ${TERRACOTTA_HOME}:
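    bin/tim-get.sh install tim-distributed-cache

On Microsoft Windows, use tim-get.bat in place of tim-get.sh. The tim-get script is the TIM installation tool included in the Terracotta kit; depending on your kit version, you may also need to declare the module in your Terracotta configuration file.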

For more information on the Terracotta Developer Console, see the console guide.

A Simple Distributed Cache

Clustered applications with a system of record (SOR) on the backend can benefit from a distributed cache that manages certain data in memory while reducing costly application-SOR interactions. However, using a cache can introduce increased complexity to software development, integration, operation, and maintenance.

The Terracotta Distributed Cache includes a distributed Map that can be used as a simple distributed cache, incorporating all of the benefits listed above. It also takes both established and innovative approaches to the caching model, solving performance and complexity issues by:

obviating SOR commits for data with a limited lifetime;

making cached application data available in-memory across a cluster of application servers;

offering standard methods for working with cache elements and performing cache-wide operations;

incorporating concurrency for readers and writers;

utilizing a flexible map implementation to adapt to more applications;

minimizing inter-node faulting to speed data operations.

Structure and Characteristics

The Terracotta Distributed Cache is an interface incorporating a distributed map that supports the standard map operations:
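The following runnable example illustrates those operations. A local ConcurrentHashMap stands in for the clustered implementation so the snippet works without a Terracotta cluster; with tim-distributed-cache, the same calls operate on the distributed map:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Standard map operations as exposed by the cache. A local map
    // stands in here for the clustered implementation.
    public class MapOperationsExample {
        public static void main(String[] args) {
            Map<String, String> cache = new ConcurrentHashMap<String, String>();
            cache.put("user-42", "pending");              // add an element
            String state = cache.get("user-42");          // read an element
            System.out.println("state = " + state);
            cache.remove("user-42");                      // remove an element explicitly
            System.out.println("size = " + cache.size()); // cache-wide operation
        }
    }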

Terracotta Distributed Cache in a Reference Application

The [Examinator reference application] uses the Terracotta Distributed Cache to handle pending user registrations. This type of data has a "medium-term" lifetime: it must persist long enough to give prospective registrants a chance to verify their registrations. If a registration isn't verified by the time its TTL elapses, it can be evicted from the cache. Only if the registration is verified is it written to the database.
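A hedged sketch of that pattern follows; the class and method names are illustrative, not Examinator's actual code, and a local map stands in for the distributed cache:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Pending-registration pattern: unverified registrations live only in
    // the cache and expire at TTL; only verified ones reach the database.
    public class PendingRegistrationSketch {
        private final Map<String, String> pendingCache = new ConcurrentHashMap<String, String>();

        public void register(String token, String userData) {
            pendingCache.put(token, userData);  // evicted automatically once TTL passes
        }

        public boolean verify(String token) {
            String userData = pendingCache.remove(token);
            if (userData == null) {
                return false;                   // expired and evicted, or unknown token
            }
            saveToDatabase(userData);           // persist only verified registrations
            return true;
        }

        private void saveToDatabase(String userData) {
            System.out.println("Persisting verified registration: " + userData);
        }
    }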

The combination of Terracotta and the Terracotta Distributed Cache gives Examinator the following advantages:

The Terracotta Distributed Cache's simple API makes it easy to integrate with Examinator, and easy to maintain and troubleshoot.

Medium-term data is not written to the database unnecessarily, improving application performance.

Terracotta persists the pending registrations so they can survive node failure.

Terracotta clusters (shares) the pending registration data so that any node can handle validation.