Tuesday, January 12, 2010

Notes on Oracle Coherence

Oracle Coherence is a distributed cache that is functionally comparable to Memcached. On top of the basic cache API, it has some additional capabilities that are attractive for building large-scale enterprise applications.

The API is based on the Java Map (Hashtable) interface, which provides key/value store semantics where the value can be any Java Serializable object. Coherence allows data to be stored in multiple caches, each identified by a unique name (which it calls a "named cache").
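As a sketch of these Map semantics, the following stands a plain java.util.Map in for each named cache. In the real API you would obtain a NamedCache via CacheFactory.getCache("my-cache"), which exposes the same Map operations; the registry class below is hypothetical, for illustration only.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for Coherence's named caches: each cache is
// identified by a unique name and exposes plain Map semantics.
public class NamedCacheRegistry {
    private static final Map<String, Map<String, Object>> CACHES = new ConcurrentHashMap<>();

    // Analogous in spirit to CacheFactory.getCache(name) in the real Coherence API.
    public static Map<String, Object> getCache(String name) {
        return CACHES.computeIfAbsent(name, n -> new ConcurrentHashMap<>());
    }

    public static void main(String[] args) {
        Map<String, Object> orders = getCache("orders");
        orders.put("order-1", "widget x 3");        // value can be any Serializable object
        System.out.println(orders.get("order-1"));  // prints "widget x 3"
    }
}
```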

Oracle Coherence runs on a cluster of identical server machines connected via a network. Within each server, multiple layers of software provide a unified data storage and processing abstraction over a distributed environment.

Smart Data Proxy

The application typically runs within a node of the cluster as well. The cache interface is implemented by a set of smart data proxies that know the location of the master (primary) and slave (backup) copies of data based on its key.
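The routing decision can be sketched as follows. This is a simplified illustration, not Coherence's actual algorithm: the partition count, node count, and modulo-based owner assignment are all assumptions made up for this example, though the general idea of hashing a key into a partition that has a primary and a backup owner matches the description above.

```java
// Hypothetical sketch of key-based routing as a smart data proxy might do it:
// hash the key into one of a fixed number of partitions, then look up which
// node owns the primary and backup copies of that partition.
public class PartitionRouter {
    static final int PARTITIONS = 257;   // assumed partition count
    static final int NODES = 4;          // assumed cluster size

    static int partitionOf(Object key) {
        return Math.floorMod(key.hashCode(), PARTITIONS);
    }

    // Simplistic owner assignment: primary by modulo, backup on the next node,
    // so the two copies never land on the same machine.
    static int primaryNode(Object key) { return partitionOf(key) % NODES; }
    static int backupNode(Object key)  { return (primaryNode(key) + 1) % NODES; }
}
```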

Read-through with a 2-level cache

When the client "reads" data through the proxy, it first tries to find the data in a local cache (also called the "near cache") within the same machine. If it is not found there, the smart proxy will locate the corresponding copy in the distributed cache (also called the L2 cache). Since this is a read, either the master or a slave copy is fine. If the smart proxy cannot find the data in the distributed cache, it will look the data up from the backend DB. The returned data then propagates back to the client, and the caches are populated along the way.
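The lookup order described above can be sketched as a minimal read-through chain. This is not the Coherence implementation; the two backing maps simply stand in for the distributed cache and the backend DB.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal read-through sketch: near cache first, then the distributed (L2)
// cache, then the backend DB, populating each level on the way back.
public class ReadThroughCache {
    private final Map<String, String> nearCache = new HashMap<>(); // local to this JVM
    private final Map<String, String> l2Cache;                     // distributed cache stand-in
    private final Map<String, String> database;                    // backend DB stand-in

    public ReadThroughCache(Map<String, String> l2Cache, Map<String, String> database) {
        this.l2Cache = l2Cache;
        this.database = database;
    }

    public String get(String key) {
        String value = nearCache.get(key);
        if (value != null) return value;              // hit in the near cache
        value = l2Cache.get(key);                     // miss: try the L2 cache
        if (value == null) {
            value = database.get(key);                // miss again: read through to the DB
            if (value != null) l2Cache.put(key, value);
        }
        if (value != null) nearCache.put(key, value); // populate the near cache on the way back
        return value;
    }
}
```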

Master/Slave data partitioning

Updating data (insert, update, delete) is done in the reverse direction. Under the master/slave architecture, all updates go to the master node that owns that piece of data. Coherence supports two modes of update: "write through" and "write behind". "Write through" updates the DB backend immediately after updating the master copy, but before updating the slave copy, and therefore keeps the DB always up to date. "Write behind" updates the slave copy first and then the DB in an asynchronous fashion. Data loss is possible in "write behind" mode, but it has a higher throughput because multiple writes to the same entry can be merged into a single write, resulting in fewer DB writes.
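The coalescing effect of write-behind can be shown with a small sketch. In a real write-behind setup the buffer is flushed asynchronously on a timer; here flush() is called explicitly to keep the example deterministic, and the class itself is hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the write-behind idea: updates are buffered per key, so repeated
// writes to the same key coalesce into a single eventual DB write.
public class WriteBehindBuffer {
    private final Map<String, String> pending = new LinkedHashMap<>();
    private final Map<String, String> database;   // backend DB stand-in
    private int dbWrites = 0;

    public WriteBehindBuffer(Map<String, String> database) { this.database = database; }

    public void put(String key, String value) {
        pending.put(key, value);   // overwrites any earlier pending value for this key
    }

    public void flush() {          // a real implementation would run this asynchronously
        for (Map.Entry<String, String> e : pending.entrySet()) {
            database.put(e.getKey(), e.getValue());
            dbWrites++;
        }
        pending.clear();
    }

    public int dbWrites() { return dbWrites; }
}
```

Three successive puts to the same key result in only one DB write after the flush, which is the throughput gain described above.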

Moving processing logic towards data

While extracting data from the cache into the application is the typical way of processing data, it is not very scalable when a large volume of data needs to be processed. Instead of shipping the data to the processing logic, a much more efficient way is to ship the processing logic to where the data resides. This is exactly why Oracle Coherence provides an InvocableMap interface, where the client can supply a "processor" class that gets shipped to every node, so processing can be conducted against local data. Moving code towards the data distributed across many nodes also enables parallel processing, because every node can now conduct local processing in parallel.
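The shape of this API can be sketched as follows. The real Coherence interface is InvocableMap.EntryProcessor (whose process method returns a result object); this trimmed, single-node version only mirrors the idea of applying a processor to entries in place rather than pulling them out.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified single-node sketch of the InvocableMap idea: a processor object
// is applied where each entry lives, instead of shipping entries to the client.
public class LocalInvocableMap {
    public interface Processor {
        Object process(Map.Entry<String, Integer> entry);  // mirrors EntryProcessor.process
    }

    private final Map<String, Integer> data = new ConcurrentHashMap<>();

    public void put(String key, Integer value) { data.put(key, value); }
    public Integer get(String key) { return data.get(key); }

    // Apply the processor against every local entry, mutating in place.
    public void invokeAll(Processor p) {
        for (Map.Entry<String, Integer> entry : data.entrySet()) {
            p.process(entry);
        }
    }
}
```

In the distributed case, the same processor object would be serialized and shipped to every storage node, each of which runs invokeAll over its own local entries in parallel.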

The processor logic is shipped into the processing queue of the execution node, where an active processor thread dequeues the processor object and executes it. Notice that this execution is performed in a serial manner; in other words, the processor will completely finish one processing job before proceeding to the next. There is no need to worry about multi-threading issues and no need to use locks, and therefore no deadlock issues.
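This per-node serial execution model can be sketched with a single-threaded executor standing in for the node's processing queue; the class is hypothetical and only illustrates why no locking is needed for node-local state.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of the per-node serial execution model: submitted jobs go into a
// queue drained by a single worker thread, so each job finishes before the
// next starts and node-local state needs no locking.
public class SerialProcessingQueue {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final List<String> log = new ArrayList<>();  // safe: only the worker touches it

    public void submit(String jobName) {
        worker.execute(() -> log.add(jobName));  // runs strictly after all earlier jobs
    }

    public List<String> drain() {
        worker.shutdown();
        try {
            worker.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return log;
    }
}
```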

Say, in a situation where a "job" on some data gets distributed across many nodes, how can I ensure that the originally invoking thread won't proceed to the next line of code just because it has already finished with all its own data (the data on this particular node)? I assume InvocableMap works asynchronously, and what I'm asking is how I can implement something like Java's CyclicBarrier.

Hi, I'm using the extended client architecture for CRUD operations. Data addition/modification can happen at any time, and many readers will read data while we are doing an addition/update. Therefore we need to use locking. But for extended clients, Coherence's cache.lock(key, -1) returns true every time, even when another process thread has already acquired the lock. Please share your comments on it.

@Sanjay - The local cache used as the front scheme in the near-cache model does not have primary/backup copies. As the name states, it's only local, and if the local JVM fails, you lose the object... But when you start it back up, the local cache will be re-populated (when you issue the .get(key) method) from the distributed cache in the back scheme. So you have 3 copies, not 4.

The idea is to limit network hops when you know the object is being read/mutated by yourself (I mean your local JVM). The typical situation is when you maintain your HTTP session with a sticky load balancer, where all requests related to the same session always go to the same app server.

So basically Coherence gives you the option to optimize either your network usage or your memory usage. It's your choice. But the bottom line is: you don't need to take care of it while you code, as it just works in the background, transparently.