Class: ActiveSupport::Cache::Store

Overview

An abstract cache store class. There are multiple cache store
implementations, each having its own additional features. See the classes
under the ActiveSupport::Cache module, e.g.
ActiveSupport::Cache::MemCacheStore. MemCacheStore is currently the most
popular cache store for large production websites.

Some implementations may not support all methods beyond the basic cache
methods of fetch, write, read, exist?,
and delete.

Keys are always translated into Strings and are case sensitive. When an
object is specified as a key, its cache_key method will be called
if it is defined. Otherwise, the to_param method will be called.
Hashes and Arrays can be used as keys. The elements will be delimited by
slashes, and Hash elements will be sorted by key so they are consistent.

cache.read("city") == cache.read(:city) # => true
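The key-translation rules above can be sketched as follows. This is a hypothetical helper, not the real ActiveSupport internals; in plain Ruby neither cache_key nor to_param is defined on core objects, which is why the sketch guards with respond_to? before falling back to to_s:

```ruby
# Simplified sketch of how cache keys might be normalized into Strings.
# (Hypothetical helper; the real logic lives inside ActiveSupport::Cache.)
def expand_cache_key(key)
  case key
  when Array
    # Array elements are expanded recursively and delimited by slashes
    key.map { |element| expand_cache_key(element) }.join("/")
  when Hash
    # Hash elements are sorted by key so the resulting string is consistent
    key.sort_by { |k, _| k.to_s }
       .map { |k, v| "#{k}=#{expand_cache_key(v)}" }
       .join("/")
  else
    # Prefer cache_key, then to_param (if defined), then to_s
    if key.respond_to?(:cache_key)
      key.cache_key
    elsif key.respond_to?(:to_param)
      key.to_param
    else
      key.to_s
    end
  end
end

expand_cache_key(:city)        # => "city"
expand_cache_key([:a, :b])     # => "a/b"
expand_cache_key(b: 2, a: 1)   # => "a=1/b=2"
```

Because both "city" and :city normalize to the same String, the read calls in the example above hit the same entry.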

Nil values can be cached.

If your cache is on a shared infrastructure, you can define a namespace for
your cache entries. If a namespace is defined, it will be prefixed on to
every key. The namespace can be either a static value or a Proc. If it is a
Proc, it will be invoked when each key is evaluated so that you can use
application logic to invalidate keys.

cache.namespace = lambda { @last_mod_time } # Set the namespace to a variable
@last_mod_time = Time.now # Invalidate the entire cache by changing namespace
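The invalidation trick above can be illustrated with a toy store. This is a hypothetical stand-in class, not the real ActiveSupport implementation; it only shows how a Proc namespace, evaluated per key, makes old entries unreachable when its return value changes:

```ruby
# Minimal sketch of namespace handling (hypothetical toy store).
class TinyNamespacedCache
  attr_accessor :namespace

  def initialize
    @data = {}
  end

  def write(key, value)
    @data[namespaced_key(key)] = value
  end

  def read(key)
    @data[namespaced_key(key)]
  end

  private

  # A Proc namespace is invoked for every key, so changing what it
  # returns effectively invalidates every previously written entry.
  def namespaced_key(key)
    ns = namespace.respond_to?(:call) ? namespace.call : namespace
    ns ? "#{ns}:#{key}" : key.to_s
  end
end

cache = TinyNamespacedCache.new
version = 1
cache.namespace = lambda { version }
cache.write("city", "Chicago")
cache.read("city")  # => "Chicago"
version = 2         # "changing the namespace" => old entry is unreachable
cache.read("city")  # => nil
```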

Caches can also store values in a compressed format to save space and
reduce time spent sending data. Since there is some overhead, values must
be large enough to warrant compression. To turn on compression, pass
:compress => true either in the initializer or to fetch or
write. To specify the threshold at which to compress values, set
:compress_threshold. The default threshold is 32K.
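A rough sketch of that compress-only-above-threshold decision, using Ruby's standard Zlib library. The helper name and Marshal-based serialization are assumptions for illustration; only the 32K default comes from the text above:

```ruby
require "zlib"

# Hypothetical helper sketching the compress-on-write decision.
DEFAULT_COMPRESS_THRESHOLD = 32 * 1024 # the 32K default described above

def maybe_compress(value, compress: false,
                   compress_threshold: DEFAULT_COMPRESS_THRESHOLD)
  serialized = Marshal.dump(value)
  if compress && serialized.bytesize >= compress_threshold
    [Zlib::Deflate.deflate(serialized), true]   # worth the overhead
  else
    [serialized, false]                         # too small to bother
  end
end

small, small_compressed = maybe_compress("hi", compress: true)
big,   big_compressed   = maybe_compress("x" * 64 * 1024, compress: true)
# small stays uncompressed (under the threshold); big gets deflated
```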

Fetches data from the cache, using the given key. If there is data in the
cache with the given key, then that data is returned.

If there is no such data in the cache (a cache miss occurred), then nil
will be returned. However, if a block has been passed, then that block will
be run in the event of a cache miss. The return value of the block will be
written to the cache under the given cache key, and that return value will
be returned.
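That read-or-compute-and-write behavior can be sketched against a plain Hash (a simplified stand-in for the real store):

```ruby
# Sketch of fetch semantics (hypothetical; not the real implementation).
STORE = {}

def fetch(key)
  if STORE.key?(key)   # cache hit: return the stored data
    STORE[key]
  elsif block_given?   # cache miss with a block: run it, write, and return
    STORE[key] = yield
  end                  # cache miss without a block: nil
end

fetch("city")                  # => nil (miss, no block)
fetch("city") { "Chicago" }    # => "Chicago" (miss, block runs and writes)
fetch("city") { "won't run" }  # => "Chicago" (hit, block is skipped)
```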

Setting :compress will store a large cache entry set by the call
in a compressed format.

Setting :expires_in will set an expiration time on the cache. All
caches support auto-expiring content after a specified number of seconds.
This value can be specified as an option to the constructor, in which case
all entries will be affected, or it can be supplied to the fetch
or write method for just one entry.
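Per-entry expiration can be sketched like this. The entry struct and helper names are hypothetical, chosen only to show how an expiry timestamp turns a stored value into a miss:

```ruby
# Sketch of :expires_in handling (hypothetical helpers, not the real API).
CacheEntry = Struct.new(:value, :expires_at) do
  def expired?
    !expires_at.nil? && Time.now >= expires_at
  end
end

def cache_write(store, key, value, expires_in: nil)
  expires_at = expires_in && Time.now + expires_in
  store[key] = CacheEntry.new(value, expires_at)
  value
end

def cache_read(store, key)
  entry = store[key]
  return nil if entry.nil? || entry.expired?  # expired entries read as misses
  entry.value
end

store = {}
cache_write(store, "city", "Chicago", expires_in: 60)  # fresh for 60 seconds
cache_read(store, "city")   # => "Chicago"
cache_write(store, "tmp", 1, expires_in: -1)           # already expired
cache_read(store, "tmp")    # => nil
```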

Setting :race_condition_ttl is very useful in situations where a
cache entry is used very frequently and is under heavy load. If a cache
entry expires under heavy load, several processes will try to regenerate
the data at the same time and then all try to write it to the cache. To
avoid that, the first process to find an expired cache entry bumps the
cache expiration time by the value set in :race_condition_ttl. Yes,
this process is extending the life of a stale value by another few seconds.
Because of the extended life of the previous cache entry, other processes
will continue to use the slightly stale data for just a bit longer. In the
meantime, the first process will go ahead and write the new value into the
cache. After that, all processes will start getting the new value. The key
is to keep :race_condition_ttl small.

If the process regenerating the entry errors out, the entry will be
regenerated after the specified number of seconds. Also note that the life
of the stale entry is extended only if it expired recently. Otherwise a new
value is generated and :race_condition_ttl does not play any role.
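The bump-then-regenerate dance above can be sketched in a single-process form. This is a hypothetical illustration, not Rails' actual code; in the real store other processes would hit the "fresh" branch during the bumped window and keep serving the stale value:

```ruby
# Sketch of the :race_condition_ttl decision (hypothetical, single-process).
RCEntry = Struct.new(:value, :expires_at)

def fetch_racy(store, key, expires_in:, race_condition_ttl:)
  entry = store[key]
  now = Time.now

  # Fresh entry: plain cache hit.
  return entry.value if entry && now < entry.expires_at

  if entry && now - entry.expires_at < race_condition_ttl
    # Expired only recently: bump the expiry so concurrent callers keep
    # reading the stale value while this caller regenerates it.
    entry.expires_at = now + race_condition_ttl
  end
  # (If the entry expired long ago, no bump: just regenerate directly.)

  # Regenerate and write the fresh value with the normal expiry.
  value = yield
  store[key] = RCEntry.new(value, now + expires_in)
  value
end

store = {}
store["k"] = RCEntry.new(:stale, Time.now - 1)  # expired 1 second ago
fetch_racy(store, "k", expires_in: 60, race_condition_ttl: 10) { :fresh }
store["k"].value  # => :fresh
```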

Other options will be handled by the specific cache store implementation.
Internally, #fetch calls #read_entry, and calls #write_entry on a cache
miss. options will be passed to the #read and #write calls.

For example, MemCacheStore's #write method supports the :raw
option, which tells the memcached server to store all values as strings. We
can use this option with #fetch too: