To enable session replication in Tomcat, you can follow one of three different paths to achieve the same result:

Using session persistence, and saving the session to a shared file system (PersistenceManager + FileStore)

Using session persistence, and saving the session to a shared database (PersistenceManager + JDBCStore)

Using in-memory-replication, using the SimpleTcpCluster that ships with Tomcat 5 (server/lib/catalina-cluster.jar)

In this release of session replication, Tomcat performs an all-to-all replication of session state.
This algorithm is only efficient when the clusters are small. For large clusters, the next
release will support primary-secondary session replication, where the session is stored on only one
or perhaps two backup servers.
Currently you can use the domain worker attribute (mod_jk > 1.2.8) to build cluster partitions,
with the potential of a very scalable cluster solution.
To keep the network traffic down in an all-to-all environment, you can split your cluster
into smaller groups. This is easily achieved by using different multicast addresses for the different groups.
A very simple setup would look like this:
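As a sketch (the multicast addresses and class names follow the Tomcat 5 SimpleTcpCluster conventions; the exact values are illustrative assumptions), a load balancer could distribute requests across two groups of two Tomcats each, where every group establishes its own membership via its own multicast address:

```xml
<!-- Group 1 (Tomcat1 and Tomcat2): members find each other on 228.0.0.4 -->
<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster">
  <Membership className="org.apache.catalina.cluster.mcast.McastService"
              mcastAddr="228.0.0.4" mcastPort="45564"/>
</Cluster>

<!-- Group 2 (Tomcat3 and Tomcat4): a different multicast address keeps
     this membership, and thus the replication traffic, separate -->
<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster">
  <Membership className="org.apache.catalina.cluster.mcast.McastService"
              mcastAddr="228.0.0.5" mcastPort="45564"/>
</Cluster>
```

Sessions are then replicated only within each group, not across the whole farm.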

It is important to mention here that session replication is only the beginning of clustering.
Another popular concept used to implement clusters is farming, i.e., you deploy your app to only one
server, and the cluster distributes the deployment across the entire cluster.
These are capabilities of the FarmWarDeployer (see the cluster example in server.xml).

The next section goes deeper into how session replication works and how to configure it.

To make it easy to understand how clustering works, we are going to take you through a series of scenarios.
In the scenarios we use only two Tomcat instances, TomcatA and TomcatB.
We will cover the following sequence of events:

TomcatA starts up

TomcatB starts up (wait until TomcatA's startup is complete)

TomcatA receives a request, a session S1 is created.

TomcatA crashes

TomcatB receives a request for session S1

TomcatA starts up

TomcatA receives a request, invalidate is called on the session (S1)

TomcatB receives a request, for a new session (S2)

The session S2 expires due to inactivity.

Ok, now that we have a good sequence, we will take you through exactly what happens in the session replication code.

TomcatA starts up

Tomcat starts up using the standard start up sequence. When the Host object is created, a cluster object is associated with it.
When the contexts are parsed, and the distributable element is present in web.xml,
Tomcat asks the Cluster class (in this case SimpleTcpCluster) to create a manager
for the replicated context. So with clustering enabled and distributable set in web.xml,
Tomcat creates a DeltaManager for that context instead of a StandardManager.
The cluster class will start up a membership service (multicast) and a replication service (tcp unicast).
More on the architecture further down in this document.
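For reference, marking a web application as distributable in its web.xml is a single empty element:

```xml
<web-app>
  <!-- Tells Tomcat to give this context a cluster-aware session manager
       (with clustering enabled, a DeltaManager instead of a StandardManager) -->
  <distributable/>
</web-app>
```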

TomcatB starts up

When TomcatB starts up, it follows the same sequence as TomcatA did with one exception.
The cluster is started and will establish a membership (TomcatA,TomcatB).
TomcatB will now request the session state from a server that already exists in the cluster,
in this case TomcatA. TomcatA responds to the request, and before TomcatB starts listening
for HTTP requests, the state has been transferred from TomcatA to TomcatB.
In case TomcatA doesn't respond, TomcatB will time out after 60 seconds, and issue a log
entry. The session state gets transferred for each web application that has distributable in
its web.xml. Note: to use session replication efficiently, all your Tomcat instances should be
configured identically.

TomcatA receives a request, a session S1 is created.

The request coming in to TomcatA is treated exactly the same way as without session replication.
The action happens when the request is completed: the ReplicationValve intercepts
the request before the response is returned to the user.
At this point it finds that the session has been modified, and it uses TCP to replicate the
session to TomcatB. Once the serialized data has been handed off to the operating system's TCP logic,
the request returns to the user, back through the valve pipeline.
For each request the entire session is replicated; this allows code that modifies attributes
in the session without calling setAttribute or removeAttribute to be replicated as well.
The useDirtyFlag configuration parameter can be used to optimize the number of times
a session is replicated.
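As a sketch (attribute placement follows the Tomcat 5 SimpleTcpCluster element; verify against your release), useDirtyFlag is set on the Cluster element:

```xml
<!-- useDirtyFlag="true": replicate only when setAttribute/removeAttribute
     marked the session dirty (fewer replications, but in-place attribute
     mutations are missed).
     useDirtyFlag="false": replicate the entire session after every request. -->
<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
         useDirtyFlag="true"/>
```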

TomcatA crashes

When TomcatA crashes, TomcatB receives a notification that TomcatA has dropped out
of the cluster. TomcatB removes TomcatA from its membership list, and TomcatA will no longer
be notified of any changes that occur in TomcatB.
The load balancer redirects the requests from TomcatA to TomcatB, and all the sessions
are current.

TomcatB receives a request for session S1

Nothing exciting; TomcatB processes the request like any other request.

TomcatA starts up

Upon startup, before TomcatA starts taking new requests and makes itself
available, it follows the startup sequence described above in 1) and 2).
It joins the cluster and contacts TomcatB for the current state of all the sessions.
Once it receives the session state, it finishes loading and opens its HTTP/mod_jk ports.
So no requests make it to TomcatA until it has received the session state from TomcatB.

TomcatA receives a request, invalidate is called on the session (S1)

The invalidate call is intercepted, and the session is queued with the invalidated sessions.
When the request is complete, instead of sending out the changed session, it sends
an "expire" message to TomcatB, and TomcatB invalidates the session as well.

TomcatB receives a request, for a new session (S2)

Same scenario as in step 3)

The session S2 expires due to inactivity.

The invalidate call is intercepted the same way as when a session is invalidated by the user,
and the session is queued with the invalidated sessions.
At this point, the invalidated session will not be replicated across until
another request comes through the system and checks the invalid queue.

Phuuuhh! :)

Membership
Cluster membership is established using very simple multicast pings.
Each Tomcat instance periodically sends out a multicast ping;
in the ping message the instance broadcasts its IP and the TCP listen port
for replication.
If an instance has not received such a ping within a given timeframe, the
member is considered dead. Very simple, and very effective!
Of course, you need to enable multicasting on your system.
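A membership configuration sketch (the values shown are the commonly documented Tomcat 5 defaults, listed here as assumptions rather than authoritative settings):

```xml
<!-- Ping every 500 ms; a member not heard from for 3000 ms is dropped.
     All members of one cluster must share mcastAddr and mcastPort. -->
<Membership className="org.apache.catalina.cluster.mcast.McastService"
            mcastAddr="228.0.0.4"
            mcastPort="45564"
            mcastFrequency="500"
            mcastDropTime="3000"/>
```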

TCP Replication
Once a multicast ping has been received, the member is added to the cluster.
Upon the next replication request, the sending instance uses the host and
port info to establish a TCP socket. Using this socket it sends over the serialized data.
The reason I chose TCP sockets is that TCP has built-in flow control and guaranteed delivery.
So I know that when I send some data, it will make it there :)

Distributed locking and pages using frames
Tomcat does not keep session instances in sync across the cluster.
Implementing such logic would add too much overhead and cause all
kinds of problems. If your client accesses the same session
simultaneously using multiple requests, the last completed request
will override the session copies on the other nodes in the cluster.

The cluster configuration is described in the sample server.xml file.
Worth mentioning is that the attributes starting with mcastXXX
are for the membership multicast ping, and the attributes starting with tcpXXX
are for the actual TCP replication.

Membership is established by all the Tomcat instances sending broadcast messages
on the same multicast IP and port.
The TCP listen port is the port on which session replication is received from other members.

The replication valve is used to find out when the request has been completed and to initiate the
replication.

One of the most important performance considerations is synchronous (pooled or not pooled) versus asynchronous replication
mode. In synchronous replication mode, the request does not return until the replicated session has been
sent over the wire and reinstantiated on all the other cluster nodes.
There are two settings for synchronous replication: pooled or not pooled.
Not pooled (i.e. replicationMode="synchronous") means that all the replication requests are sent over a single
socket.
Synchronous mode can become a bottleneck when many messages are generated.
You can overcome this bottleneck by setting replicationMode="pooled", but then your worker threads block during replication.
What is recommended here is to increase the number of threads that handle
incoming replication requests; this is the tcpThreadCount property in the cluster
section of server.xml. The pooled setting uses multiple sockets and hence increases performance.
Asynchronous replication should be used if you have sticky sessions until failover; then
your replicated data is not time-critical, but the request time is. In this case, leave tcpThreadCount at
number-of-nodes minus 1.
During async replication, the request returns before the data has been replicated. Async replication yields shorter
request times, while synchronous replication guarantees that the session is replicated before the request returns.

The parameter replicationMode has four different settings: "pooled", "synchronous", "asynchronous", and "fastasyncqueue".
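A sketch of a pooled setup with extra receiver threads (element and attribute names follow the Tomcat 5 cluster Sender/Receiver configuration; verify them against your release):

```xml
<!-- Pooled synchronous replication: multiple sockets per member -->
<Sender className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
        replicationMode="pooled"/>

<!-- More threads to handle incoming replication requests -->
<Receiver className="org.apache.catalina.cluster.tcp.ReplicationListener"
          tcpListenAddress="auto"
          tcpListenPort="4001"
          tcpThreadCount="6"/>
```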

The default mode configuration sets up a fastasyncqueue-mode cluster with the following
parameters:

Open the membership receiver at 228.0.0.4 and send to multicast UDP port 8012.

Send a membership heartbeat every 1 sec and drop a member after 30 sec.

Open the message receiver on the default IP interface at the first free port between 8015 and 8019.

Receive messages with a SocketReplicationListener.

Configure a ReplicationTransmitter with fastasyncqueue sender mode.

Add a ClusterSessionListener and a ReplicationValve.

NOTE: Use this configuration when you quickly need a test cluster on your developer
machine. You can change the default attributes of the cluster sub-elements by
using the following cluster attribute prefixes: sender.,
receiver., service., manager., valve. and listener..
Example: configure a cluster on a Windows laptop with a network connection and
change the receiver port range.
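As a sketch of such a configuration (the prefixed attribute names are assumptions derived from the receiver. prefix convention; check them against your Tomcat release):

```xml
<!-- Default-mode cluster; only the receiver port range is overridden,
     everything else keeps the fastasyncqueue defaults -->
<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
         receiver.tcpListenAddress="auto"
         receiver.tcpListenPort="9015"
         receiver.tcpListenMaxPort="9019"/>
```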

WARNING: When you add your own sub-elements, they completely overwrite the defaults.
Example: configure a cluster with failover jsessionid support. In this
case you also need the default-mode cluster listener ClusterSessionListener and the ReplicationValve.

Example: to get a lot of statistics information, wait for ACKs, and
recover after connection failures, wait 5 secs with the attribute recoverTimeout, make 6 trials
with the attribute recoverCounter, and use a 30-sec timeout (mcastDropTime="30000")
at the Service element.
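A sketch of that configuration using the prefixed default-mode attributes (the prefix placement is an assumption based on the sender. and service. prefixes listed above; recoverTimeout, recoverCounter, and mcastDropTime come from the text):

```xml
<!-- Wait for ACKs; retry a failed connection 6 times, 5 secs apart;
     drop silent members after 30 secs -->
<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
         sender.waitForAck="true"
         sender.recoverTimeout="5000"
         sender.recoverCounter="6"
         service.mcastDropTime="30000"/>
```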

stateTransferTimeout
Timeout until the session state transfer is complete. If stateTransferTimeout == -1,
the application waits until the other node has sent the complete session state.
Default: 60 sec

sendAllSessions
Flag: when true, send all sessions in one message; when false, send them as split blocks.
Default: true

sendAllSessionsSize
Number of serialized sessions inside one send-block session message. Only useful when sendAllSessions == false.
Default: 1000

sendAllSessionsWaitTime
Wait time between two session send blocks.
Default: 2000 msec

sendClusterDomainOnly
Send all session messages only to members inside the same cluster domain
(the value of the Membership attribute mcastClusterDomain). Also, don't handle
session messages from other domains.
Default: true

stateTimestampDrop
The DeltaManager queues session messages while it sends GET_ALL_SESSION to another node.
With stateTimestampDrop, all queued messages dated before the state transfer message creation date are dropped.
Only other GET_ALL_SESSION events dated before the state transfer message are still handled.
Default: true

updateActiveInterval
Send a session access message every updateActiveInterval sec.
Default: 60

expireTolerance
Auto-expire a backup session after MaxInactive + expireTolerance sec.
Default: 300

Example: send all sessions in separate blocks, serializing and sending 100 sessions inside one block.
Wait a maximum of two minutes during the Tomcat boot process until the complete backup sessions are loaded.
Between send blocks, wait 5 secs to transfer the session block to the other node. This saves memory
when you use the async modes with queues.
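A sketch of that example using the manager. prefix (attribute names taken from the table above; units follow the table's defaults, so the 5-sec block wait is in msec and the 2-minute state transfer timeout is in sec):

```xml
<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
         manager.sendAllSessions="false"
         manager.sendAllSessionsSize="100"
         manager.sendAllSessionsWaitTime="5000"
         manager.stateTransferTimeout="120"/>
```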

Note:
As Cluster.defaultMode=true, you can configure the manager attributes with the prefix manager..
Note:
With Cluster.setProperty(String, String) you can modify
attributes for all registered managers. The method also exists as an MBean operation.

If you configure more than two nodes in the same cluster for backup, most load balancers
will not send all your requests to the same node after a failover.

The JvmRouteBinderValve handles Tomcat jvmRoute takeover using the mod_jk module after a node
failure. After a node crashes, the next request goes to another cluster node. The JvmRouteBinderValve
detects the takeover and rewrites the jsessionid
information to point to the backup cluster node. After the next response, all client
requests go directly to the backup node. The changed session id is also sent to all
other cluster nodes. Now session stickiness works directly against the
backup node, but traffic does not go back to restarted cluster nodes!
If the jsessionid was created via a cookie, the changed JSESSIONID cookie is resent with the next response.

You must add the JvmRouteBinderValve and the corresponding cluster message listener JvmRouteSessionIDBinderListener.
When you add this new listener you must also add the default ClusterSessionListener, which receives the normal cluster messages.
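A sketch of the required elements (class names follow the org.apache.catalina.cluster packages of Tomcat 5; verify them against your release):

```xml
<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster">
  <!-- Rewrites jsessionid to the backup node's jvmRoute after failover -->
  <Valve className="org.apache.catalina.cluster.session.JvmRouteBinderValve"
         enabled="true"/>
  <!-- The normal replication valve is still needed -->
  <Valve className="org.apache.catalina.cluster.tcp.ReplicationValve"/>
  <!-- Receives the session id change messages -->
  <ClusterListener className="org.apache.catalina.cluster.session.JvmRouteSessionIDBinderListener"/>
  <!-- Receives the normal session replication messages -->
  <ClusterListener className="org.apache.catalina.cluster.session.ClusterSessionListener"/>
</Cluster>
```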

Hint:
With the attribute sessionIdAttribute you can change the request attribute name that contains the old session id.
The default attribute name is org.apache.catalina.cluster.session.JvmRouteOrignalSessionID.

Trick:
You can enable this mod_jk takeover mode via JMX on all backup nodes before you take a node down!
Set enabled to true on all JvmRouteBinderValve backups, disable the worker in mod_jk,
and then take the node down and restart it. Then enable the mod_jk worker and disable the JvmRouteBinderValves again.
With this use case, only the requested sessions are migrated.