(v2.0) Cachet\Backend\APC and Cachet\Counter\APC are deprecated due to
all PHP versions that don't support opcache being EOL. Use
Cachet\Backend\APCU and Cachet\Counter\APCU instead. The old classes
will not be removed for the time being.

(v2.0.1) Cachet\Backend\Memcache used to have some possible, untested,
undocumented support for the abandoned memcache extension as well as the
actively supported memcached extension. memcache support has been
removed, memcached support remains.

Many "falsey" values, such as null and false, are valid cache values. To
find out whether a value was actually found:

<?php
$cache->set('hmm', false);

if (!$cache->get('hmm')) {
    // this will also execute if the 'false' value was actually
    // retrieved from the cache
}

$value = $cache->get('hmm', $found);
if (!$found) {
    // this will only execute if no value was found in the cache.
    // it will not execute if values which evaluate to false were
    // retrieved from the cache.
}

Cachet provides a convenient way to wrap getting and setting using strategies
with optional locking:

<?php
$value = $cache->wrap('foo', function() use ($service, $param) {
    return $service->doSlowStuff($param);
});

$dataRetriever = function() use ($db) {
    return $db->query("SELECT * FROM table")->fetchAll();
};

// With a TTL
$value = $cache->wrap('foo', 300, $dataRetriever);

// With a Dependency
$value = $cache->wrap('foo', new Cachet\Dependency\Permanent(), $dataRetriever);

// Set up a rotating pool of 4 file locks (using flock)
$hasher = function($cache, $key) {
    return $cache->id."/".(abs(crc32($key)) % 4);
};
$cache->locker = new Cachet\Locker\File('/path/to/locks', $hasher);

// Stampede protection - the cache will stop and wait if another concurrent process
// is running the dataRetriever. This means that the cache ``set`` will only happen once:
$value = $cache->blocking('foo', $dataRetriever);

Iteration - this is tricky and loaded with caveats. See the iteration section
below that describes them in detail:

<?php
$cache = new Cachet\Cache($id, new Cachet\Backend\Memory());
$cache->set('foo', 'bar');

// this dependency is just for demonstration/testing purposes.
// iteration will not return this value as the dependency is invalid
$cache->set('baz', 'qux', new Cachet\Dependency\Dummy(false));

foreach ($cache->values() as $key => $value) {
    echo "$key: $value\n";
}
// outputs "foo: bar" only.

Caches can be iterated, but support is patchy. If the underlying backend
supports listing keys, iteration is usually efficient. The Cachet APCU
backend makes use of the APCIterator class and is very efficient. XCache
tries to send an HTTP authentication dialog when you try to list keys (even
when you use it via the CLI!), and Memcached provides no means to iterate
over keys at all.

If a backend supports iteration, it will implement Cachet\Backend\Iterator.
Implementing this interface is not required, but all backends provided with
Cachet do. If the underlying backend doesn't support iteration (Memcache,
for example), Cachet provides optional support for using a secondary backend
which does support iteration for the keys. This slows down insertion, deletion
and flushing, but has no impact on retrieval.

The different types of iteration support provided by the backends are:

iterator

Iteration is implemented efficiently using an \Iterator class. Keys/items
are only retrieved and yielded as necessary. There should be no memory issues
with this type of iteration.

key array + fetcher

All keys are retrieved in one hit. Items are retrieved one at a time directly
from the backend. Millions of keys may cause memory issues.

all data

Everything is returned in one hit. This is only applied to the in-memory cache
or session cache, where no other option is possible. Thousands of keys may
cause memory issues.

optional key backend

Keys are stored in a secondary iterable backend. Setting, deleting and
flushing will be slower as these operations need to be performed on both the
backend and the key backend. Memory issues are inherited from the key backend,
so you should try to use an Iterator based key backend wherever possible.

Key backend iteration is optional. If no key backend is supplied, iteration
will fail.
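As a sketch of how a key backend might be attached (the keyBackend property
name is an assumption here; check Cachet\Backend\IterationAdapter for the
actual attachment mechanism):

```php
<?php
// Memcached can't iterate its own keys, so store the keys in an
// iterable secondary backend as well.
$backend = new Cachet\Backend\Memcached(array(array('127.0.0.1', 11211)));

// hypothetical attachment - the real API may differ
$backend->keyBackend = new Cachet\Backend\APCU();

$cache = new Cachet\Cache('iterable', $backend);
$cache->set('foo', 'bar');   // written to Memcached and to the key backend

foreach ($cache->values() as $key => $value) {
    echo "$key: $value\n";   // now possible despite Memcached's limitation
}
```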

Cache backends must implement Cachet\Backend, though some backends have to
work a bit harder to satisfy the interface than others.

Different backends have varying degrees of support for the following features:

Automatic Expirations

Some backends support automatic expiration for certain dependency types.
When a backend supports this functionality it will have a
useBackendExpirations property, which defaults to true.

For example, the APCU backend will detect when a Cachet\Dependency\TTL
is passed and automatically use it for the third parameter to
apcu_store, which accepts a TTL in seconds. Other backends support
different methods of unrolling dependency types. This will be documented
below.

Setting useBackendExpirations to false does not guarantee the backend
will not expire cache values under other circumstances.
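For example, toggling the property on the APCU backend (the behaviour
described is taken from the text above; APCu may still evict values of its
own accord):

```php
<?php
$backend = new Cachet\Backend\APCU();

// default: a Cachet\Dependency\TTL is unrolled into the TTL argument of
// apcu_store() and APCu expires the value itself
$backend->useBackendExpirations = true;

// Cachet alone decides validity on retrieval; APCu may still evict
// values under memory pressure or via the apc.ttl ini setting
$backend->useBackendExpirations = false;
```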

Iteration

Backends should, but do not necessarily, implement
Cachet\Backend\Iterator. Backends that do not implement it cannot be
iterated. This will be specified against each backend's documentation.
Backends like APCU or Redis can rely on native methods for iterating over
the keys, but the memcache daemon itself provides no such facility, and
XCache hides it behind some silly HTTP Basic authentication.

Backends that suffer from these limitations can extend from
Cachet\Backend\IterationAdapter, which allows a second backend to be
used for storing keys. This slows down setting, deleting and flushing, but
doesn't slow down getting items from the backend at all so it's not a bad
tradeoff if iteration is required and you're doing many more reads than
writes.

There are some potential pitfalls with this approach:

If an item disappears from the key backend, it may still exist in the
backend itself. There is no way to detect these values if the backend is not
iterable. Make sure the type of backend you select for the key backend
doesn't auto-expire values under any circumstances, and if your backend
supports useBackendExpirations, set it to false.

The type of backend you can use for the key backend is quite limited - it
must itself be iterable, and it can't be a
Cachet\Backend\IterationAdapter.

Filesystem-backed cache. This has only been tested on OS X and Linux but may
work on Windows (and probably should - please file a bug report if it doesn't).

The cache is not particularly fast. Flushing and iteration can be very, very
slow indeed, but should not suffer from memory issues.

If you use this cache, please do some performance crunching to see if it's
actually any faster than no cache at all.

Iteration support

iterator

Backend expirations

none

<?php
// Inherit permissions, user and group from the environment
$backend = new Cachet\Backend\File('/path/to/cache');

// Passing options
$backend = new Cachet\Backend\File('/path/to/cache', array(
    'user' => 'foo',
    'group' => 'foo',
    'filePerms' => 0666,  // Important: must be octal
    'dirPerms' => 0777,   // Important: must be octal
));

<?php
// Connect on demand. Constructor accepts the same argument as Memcached->addServers()
$backend = new Cachet\Backend\Memcached(array(array('127.0.0.1', 11211)));

// Use an existing Memcached instance:
$memcached = new Memcached();
$memcached->addServer('127.0.0.1', 11211);
$backend = new Cachet\Backend\Memcached($memcached);
$backend->useBackendExpirations = true;

Flushing is not supported by default, but works properly when a key backend is
provided. If you don't wish to use a key backend, you can activate unsafe flush
mode, which will simply flush your entire memcache instance regardless of which
cache it was called against.
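A sketch of enabling it; the unsafeFlush property name is an assumption, so
check the Memcached backend class for the real flag:

```php
<?php
$backend = new Cachet\Backend\Memcached(array(array('127.0.0.1', 11211)));

// hypothetical property name - WARNING: flush() will now wipe the
// entire memcached instance, not just this cache's keys
$backend->unsafeFlush = true;
```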

If you are writing a web application, this should not be done on every
request; it should be done as part of your deployment or setup process.

The PDO backend uses a key array + fetcher for iteration by default, which is
not immune to memory exhaustion problems. The mysqlUnbufferedIteration
option gets rid of any memory issues and makes the PDO backend a first-class
iteration citizen. The catch is that an extra connection is made to the database
each time the cache is iterated. This connection remains open as long as the
iterator object returned by $backend->keys() or $backend->items() is in
scope.

<?php
// Use an unbuffered query for the key iteration (MySQL only):
$backend->mysqlUnbufferedIteration = true;

This option is disabled by default and is ignored if the underlying connector's
engine is not MySQL.

Uses PHP's $_SESSION as the cache. Care should be taken to avoid unchecked
growth. session_start() will be called automatically if it hasn't yet been
called, so if you would like to customise the session startup, call
session_start() yourself beforehand.
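A minimal sketch of using the backend (the no-argument constructor is an
assumption; check the Session backend class for its actual parameters):

```php
<?php
session_start();  // optional - Cachet will call it for you otherwise

$backend = new Cachet\Backend\Session();
$cache = new Cachet\Cache('scratch', $backend);
$cache->set('foo', 'bar');  // stored inside $_SESSION
```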

Allows multiple backends to be traversed in priority order. If a value is found
in a lower priority backend, it is inserted into every backend above it in the
list.

This works best when the fastest backend has the highest priority (earlier in
the list).

Values are set in all caches in reverse priority order.

Iteration support

Whatever is supported by the lowest priority cache

Backend expiration

N/A

<?php
$memory = new Cachet\Backend\Memory();
$apcu = new Cachet\Backend\APCU();
$pdo = new Cachet\Backend\PDO(array('dsn' => 'sqlite:/path/to/db.sqlite'));

$backend = new Cachet\Backend\Cascading(array($memory, $apcu, $pdo));
$cache = new Cachet\Cache('pants', $backend);

// Value is cached into Memory, APCU and PDO
$cache->set('foo', 'bar');

// Prepare a little demonstration
$memory->flush();
$apcu->flush();

// Memory is queried and misses
// APCU is queried and misses
// PDO is queried and hits
// Item is inserted into APCU
// Item is inserted into Memory
$cache->get('foo');

The simplest caching strategy provided by Cachet is the wrap strategy.
It doesn't do anything to prevent stampedes, but it does not require a locker,
and it can make your code much more concise by reducing boilerplate.
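As a sketch of the boilerplate wrap eliminates ($service->doSlowStuff() is a
hypothetical stand-in for your expensive operation):

```php
<?php
// Without wrap:
$value = $cache->get('foo', $found);
if (!$found) {
    $value = $service->doSlowStuff();
    $cache->set('foo', $value);
}

// With wrap:
$value = $cache->wrap('foo', function() use ($service) {
    return $service->doSlowStuff();
});
```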

This requires a locker. In the event of a cache miss, a request will try to
acquire the lock before calling the data retrieval function. The lock will be
released after the data is retrieved. Any concurrent request which causes a
cache miss will block until the request which has acquired the lock releases it.

This strategy shouldn't be adversely affected when useBackendExpirations is
set to true if the backend supports it, though if your cache items
frequently expire after only a couple of seconds you'll probably have a bad
time.

This requires a locker. If the cache misses, the first request will acquire the
lock and run the data retriever function. Subsequent requests will return a
stale value if one is available; otherwise they will block until the first
request finishes, thus guaranteeing a value is always returned.

This strategy will fail if the backend has the useBackendExpirations
property and it is set to true.

This requires a locker. If the cache misses, the first request will acquire the
lock and run the data retriever function. Subsequent requests will return a
stale value if one is available, otherwise they will return nothing immediately.

The API for this strategy is slightly different from the others: as it does
not guarantee a value will be returned, it provides an optional output
parameter $found to signal that the method has returned without retrieving
or setting a value.
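A sketch of that API; the nonBlocking method name is an assumption patterned
on the blocking strategy above, so check Cachet\Cache for the actual one:

```php
<?php
// hypothetical method name and argument order
$value = $cache->nonBlocking('foo', $dataRetriever, $found);
if (!$found) {
    // no fresh or stale value was available and another request holds
    // the lock; decide here how to proceed without a value
}
```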

This strategy will fail if the backend has the useBackendExpirations
property and it is set to true.

Lockers handle managing synchronisation between requests in the various caching
strategies. They must be able to support blocking on acquire, and should be
able to support a non-blocking acquire.

Lockers are passed the cache and the key when acquired by a strategy. This can
be used raw if you want one lock for every cache key, but if you want to keep
the number of locks down, you can pass a callable as the $keyHasher argument
to the locker's constructor. You can use this to return a less complex version
of the key.

Lockers do not support timeouts. None of the current locking
implementations allow timeouts, so you'll have to rely on a carefully tuned
max_execution_time setting for "safety" in the case of deadlocks. This
may change in future, but cannot change for the existing locker
implementations until platform support improves (which it probably won't).

Cachet supports the notion of cache dependencies - an object implementing
Cachet\Dependency is serialised with your cache value and checked on
retrieval. Any serialisable object can be used as a dependency, so this opens
up a large range of invalidation possibilities beyond what a TTL can
accomplish.

Dependencies can be passed per-item using Cachet\Cache->set($key, $value,
$dependency), or using the Cachet\Cache->set($key, $value, $ttl)
shorthand. The shorthand is equivalent to $cache->set($key, $value, new
Cachet\Dependency\TTL($ttl)).

Without a dependency, a cached item will stay cached until it is removed
manually or until the underlying backend decides to remove it of its own accord.

You can assign a dependency to be used as the default for an entire cache if
none is provided for an item:

<?php
$cache = new Cachet\Cache($name, $backend);

// all items that do not have a dependency will expire after 10 minutes
$cache->dependency = new Cachet\Dependency\TTL(600);

// this item will expire after 10 minutes
$cache->set('foo', 'bar');

// this item will expire after 5 minutes
$cache->set('foo', 'bar', new Cachet\Dependency\TTL(300));

Warning

Just because an item has expired does not mean it has been removed. Expired
items will be removed on retrieval, but garbage collection is a manual
process that should be performed by a separate process.

A cached item will never be expired by Cachet, even if a default dependency
is provided by the Cache. This may be overridden by any environment-specific
backend configuration (for example, the apc.ttl ini setting):

<?php
$cache = new Cachet\Cache($name, $backend);
$cache->dependency = new Cachet\Dependency\TTL(600);

// this item will expire after 10 minutes
$cache->set('foo', 'bar');

// this item will never expire
$cache->set('foo', 'bar', new Cachet\Dependency\Permanent());

This is very similar to the Mtime dependency, only instead of using simple
file mtimes, it uses a secondary cache and checks that the value of a tag has
not changed.

This dependency is slightly more complicated to configure - it requires the
secondary cache to be registered with the primary cache as a service.

<?php
$valueCache = new Cachet\Cache('value', new Cachet\Backend\APCU());
$tagCache = new Cachet\Cache('tag', new Cachet\Backend\APCU());

$tagCacheServiceId = 'tagCache';
$valueCache->services[$tagCacheServiceId] = $tagCache;

// the value at key 'tag' in $tagCache is stored alongside 'foo'=>'bar' in the
// $valueCache. It will be checked against whatever is currently in $tagCache
// on retrieval
$valueCache->set('foo', 'bar', new Cachet\Dependency\CachedTag($tagCacheServiceId, 'tag'));
$valueCache->set('baz', 'qux', new Cachet\Dependency\CachedTag($tagCacheServiceId, 'tag'));

// 'tag' has not changed in $tagCache since we set these values in $valueCache
$valueCache->get('foo'); // returns 'bar'
$valueCache->get('baz'); // returns 'qux'

$tagCache->set('tag', 'something else');

// 'tag' has since changed, so the values coming out of $valueCache are invalidated
$valueCache->get('foo'); // returns null
$valueCache->get('baz'); // returns null

Equality comparison is done in loose mode by default (==). You can enable
strict mode comparison by passing a third boolean argument to the constructor.
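A sketch, assuming the third constructor argument is the strict-mode flag:

```php
<?php
// loose comparison (==) - the default
$dependency = new Cachet\Dependency\CachedTag($tagCacheServiceId, 'tag');

// strict comparison (=== for everything except objects)
$dependency = new Cachet\Dependency\CachedTag($tagCacheServiceId, 'tag', true);
```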

Strict mode uses === for everything except objects, for which it uses
==. This is because === will never be true for objects here, as it
compares object identity only; the values being compared have each been
retrieved from separate caches, so they are highly unlikely to ever share a
reference.

<?php
$backend = new Cachet\Backend\PDO(['dsn' => 'sqlite:/path/to/sessions.sqlite']);
$cache = new Cachet\Cache('session', $backend);

// this must be called before session_start()
Cachet\SessionHandler::register($cache);

session_start();
$_SESSION['foo'] = 'bar';

By default, Cachet\SessionHandler does nothing when the gc (garbage
collect) method is called. This is because cache iteration can't be relied upon
to be performant - this is a backend specific characteristic and can vary wildly
(see the iteration section for more details) and it is up to the developer to
be aware of this when selecting a backend.

For backends that don't use an Iterator for iteration, it is strongly
recommended that you implement garbage collection using a separate process
rather than using PHP's gc probability mechanism.

The following backends should not be used with the SessionHandler:

Cachet\Backend\File

This will raise a warning. I can't see any way that PHP's default file
session mechanism isn't superior to this backend - they essentially do the
same thing only one is implemented in C and seriously battle tested, and the
other is not.

Cachet\Backend\Session

This will raise an exception. You can't use the session for storing
sessions.

Cachet\Backend\Memory

This can't possibly work either - the data will disappear when the request
is complete.

Some backends provide methods for incrementing or decrementing an integer
atomically. Cachet attempts to provide a consistent interface to this
functionality.

Unfortunately, it doesn't always succeed. There are some catches (like always):

In some cases, though the backend's increment and decrement methods work
atomically, they require you to set the value before you can use it, in a way
which is not atomic. The Cachet counter interface allows you to call
increment even if there is no value already set.

Unfortunately, this means that multiple concurrent processes can call
$backend->increment() and see that nothing is there before one of those
processes has a chance to call set to initialise the counter. Counters
that exhibit this behaviour can be passed an optional locker to mitigate this
problem.

All of the backends support decrementing below zero except Memcache.

Several backends have limits on the maximum counter value and will overflow if
this value is reached. There has not been enough testing done yet to determine
what the maximum value for each counter backend is, and it may be platform and
build dependent. An estimate has been provided, but this is based on the ARM
architecture. YMMV.

Counters do not support dependencies, but some counters do allow a single TTL
to be specified for all counters. This is indicated by the presence of a
$backend->counterTTL property.

There does exist the fabled Counter class that is atomic, does not overflow
and supports any type of cache dependency (Cachet\Counter\SafeCache).
Unfortunately, it is slow and it requires a locker. Fast, secure, cheap,
stable, good. Pick two.

Why aren't counters just a part of Cachet\Cache? I tried to do it that way
first, but after spending a bit of time hacking and unable to escape the feeling
that I was wrecking things that were nice and clean to support it, I realised
that it was a separate responsibility deserving its own hierarchy. There also
isn't a clean 1-to-1 relationship between counters and backends.

Counters implement the Cachet\Counter interface and share a small,
consistent API.
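That API can be sketched roughly as follows; the method names are assumptions
inferred from this documentation, so check the Cachet\Counter interface for
the real signatures:

```php
<?php
$counter = new \Cachet\Counter\APCU();

// increment and decrement are atomic (with the caveats noted above);
// the signatures here are hypothetical
$counter->increment('hits');      // by 1
$counter->increment('hits', 10);  // by a custom amount
$counter->decrement('hits');

// set and value round out the interface
$counter->set('hits', 0);
$value = $counter->value('hits');
```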

<?php
$counter = new \Cachet\Counter\APCU();

// Or with optional cache value prefix. Prefix has a forward slash appended.
$counter = new \Cachet\Counter\APCU('myprefix');

// TTL
$counter->counterTTL = 86400;

// If you would like set operations to be atomic, pass a locker to the constructor
// or assign to the ``locker`` property
$counter->locker = new \Cachet\Locker\Semaphore();
$counter = new \Cachet\Counter\APCU('myprefix', new \Cachet\Locker\Semaphore());

<?php
// Construct by passing anything that \Cachet\Connector\Memcache accepts as its first
// constructor argument:
$counter = new \Cachet\Counter\Memcache('127.0.0.1');

// Construct by passing in a connector. This allows you to share a connector instance
// with a cache backend:
$memcache = new \Cachet\Connector\Memcache('127.0.0.1');
$counter = new \Cachet\Counter\Memcache($memcache);
$backend = new \Cachet\Backend\Memcache($memcache);

// Optional cache value prefix. Prefix has a forward slash appended.
$counter = new \Cachet\Counter\Memcache($memcache, 'prefix');

// TTL
$counter->counterTTL = 86400;

// If you would like set operations to be atomic, pass a locker to the constructor
// or assign to the ``locker`` property
$counter->locker = $locker;
$counter = new \Cachet\Counter\Memcache($memcache, 'myprefix', $locker);

Unlike the PDO cache backend, different database engines require very different
queries for counter operations. If your PDO engine is sqlite, use
Cachet\Counter\PDOSQLite. If your PDO engine is MySQL, use
Cachet\Counter\PDOMySQL. PDOSQLite may be compatible with other database
backends (though this is untested), but PDOMySQL uses MySQL-specific
queries.

The table name defaults to cachet_counter for all counters. This can be changed.

Supports counterTTL

no

Atomic

probably (I haven't been able to satisfy myself that I have proven this yet)

Range

-INT64_MAX - 1 to INT64_MAX

Overflow error

no

<?php
// Construct by passing anything that \Cachet\Connector\PDO accepts as its first
// constructor argument:
$counter = new \Cachet\Counter\PDOSQLite('sqlite::memory:');
$counter = new \Cachet\Counter\PDOMySQL(['dsn' => 'mysql:host=localhost', 'user' => 'user', 'password' => 'password']);

// Construct by passing in a connector. This allows you to share a connector instance
// with a cache backend:
$connector = new \Cachet\Connector\PDO('sqlite::memory:');
$counter = new \Cachet\Counter\PDOSQLite($connector);

$connector = new \Cachet\Connector\PDO(['dsn' => 'mysql:host=localhost', ...]);
$counter = new \Cachet\Counter\PDOMySQL($connector);
$backend = new \Cachet\Backend\PDO($connector);

// Use a specific table name
$counter->tableName = 'my_custom_table';
$counter = new \Cachet\Counter\PDOSQLite($connector, 'my_custom_table');
$counter = new \Cachet\Counter\PDOMySQL($connector, 'my_custom_table');

The table needs to be initialised in order to be used. It is not recommended to
do this inside your web application - you should do it as part of your
deployment process or application setup.
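A sketch of that setup step; ensureTableExists() is a hypothetical name for
the real initialisation method, so check the counter class before using it:

```php
<?php
// run once from a deployment or setup script, not on every request
$counter = new \Cachet\Counter\PDOSQLite('sqlite:/path/to/counters.sqlite');
$counter->ensureTableExists();  // hypothetical method name
```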

It is a lot slower than using the APCU or Redis backends, but faster than using
the PDO-based backends (unless, of course, the cache that you use has a
PDO-based backend itself).

<?php
$cache = new \Cachet\Cache('counter', $backend);
$locker = new \Cachet\Locker\Semaphore();
$counter = new \Cachet\Counter\SafeCache($cache, $locker);

// Simulate counterTTL
$cache->dependency = new \Cachet\Dependency\TTL(3600);

// Or use any dependency you like
$cache->dependency = new \Cachet\Dependency\Permanent();

Custom backends are a snap to write - simply implement Cachet\Backend.
Please make sure you follow these guidelines:

Backends aren't meant to be used by themselves - they should be used by an
instance of Cachet\Cache.

It must be possible to use the same backend instance with more than one
instance of Cachet\Cache.

get() must return an instance of Cachet\Item. The backend must not
check whether an item is valid, as Cachet\Cache depends on an item always
being returned.

Make sure you fully implement get(), set() and delete() at
minimum. Anything else is not strictly necessary, though useful.

set() must store enough information so that get() can return a fully
populated instance of Cachet\Item. This usually means that if your backend
can't support PHP objects directly, you should just serialize() the
Cachet\Item directly.
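To make the guidelines concrete, a minimal in-memory backend sketch. The
method signatures and the Cachet\Item property names (cacheId, key) are
assumptions based on the guidelines above, not copied from the real
interface, so check Cachet\Backend before borrowing from this:

```php
<?php
// a minimal sketch of a custom backend; signatures are assumed
class ArrayBackend implements \Cachet\Backend
{
    private $storage = [];

    function get($cacheId, $key)
    {
        // return the stored Cachet\Item as-is; validity checking is
        // the responsibility of Cachet\Cache, not the backend
        $data = isset($this->storage[$cacheId][$key])
            ? $this->storage[$cacheId][$key] : null;
        return $data ? unserialize($data) : null;
    }

    function set(\Cachet\Item $item)
    {
        // serialize the whole item so get() can fully repopulate it
        $this->storage[$item->cacheId][$item->key] = serialize($item);
    }

    function delete($cacheId, $key)
    {
        unset($this->storage[$cacheId][$key]);
    }

    function flush($cacheId)
    {
        unset($this->storage[$cacheId]);
    }
}
```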

You can reduce the size of the data placed into the backend by using
Cachet\Item->compact() and Cachet\Item::uncompact(). This strips much of
the redundant information from the cache item. YMMV - I was surprised to find
that using Cachet\Item->compact() had the effect of increasing the memory
used in APCU.

Dependencies are created by implementing Cachet\Dependency. Dependencies are
serialised and stored in the cache alongside the value. A dependency is always
passed a reference to the current cache when it is used, and care should be
taken never to hold a reference to it, or to any other objects that don't
directly relate to the dependency's data, as they will also be shoved into the
cache, and trust me - you don't want that.

Cachet is exhaustively tested. As all backends and counters are expected to
satisfy the same interface, for all but a very small number of (hopefully)
well-documented exceptions, all of the functional test cases for these classes
extend from Cachet\Test\BackendTestCase and Cachet\Test\CounterTestCase
respectively.

These tests are run from the root of the project by calling phpunit without
arguments.

Some aspects of Cachet cannot be proven to work using simple unit or
functional tests, for example lockers and counter atomicity. These are tested
using a hacky but workable concurrency tester, which is run from the root of the
project. You can get help on all of the available options like so:

php test/concurrent.php -h

Or just call it without arguments to run all of the concurrency tests using the
default settings. It will exit with status 0 if all tests pass, or 1 if
any of them fail.

Some of the tests are designed to fail; these contain broken in their
ID. You can exclude them like so:

php test/concurrent.php -x broken

I have left the broken tests in to demonstrate conditions where the default
behaviour may defy expectations. I am currently looking for a better way of
representing this in the tester.

The concurrency tester has proven to be excellent at finding heisenbugs in
Cachet. For this reason, it should be run many, many times under several
different load conditions and on different architectures before we can decide
that a build is safe to release.