To identify listeners, the client needs to be able to generate unique listener ids
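One simple way a client could generate such ids (an assumption, not a protocol-mandated scheme) is to use random UUIDs, which are unique across clients without any coordination:

```java
import java.util.UUID;

// Sketch: UUID-based listener id generation. The class and method names
// are illustrative, not part of the actual client API.
public class ListenerIds {

    // Returns a listener id that is unique with overwhelming probability.
    public static String newListenerId() {
        return UUID.randomUUID().toString();
    }
}
```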

Optional requirements

With the near cache use case in mind and the design above, if a node updates a value in the near cache and has signed up for cache-modified events, as it stands it would also receive a notification from the server for its own modification. However, the node has no way of knowing that the event was caused by its own update, and near caches might want to handle this case differently (i.e. "if I just updated a value, I know the other near caches will be invalidated, but I don't want to invalidate my own copy because I know I have the latest data"). Using something like a channel's id to decide whether a notification is local is not enough, because a single Hot Rod client (e.g. a remote cache store client) might open 20 channels. So it could be useful for the source of the modifications (put, replace, remove) to be identified at a logical level. That way, the server could detect when a modification comes from a particular logical entity and flag the event as "local", which the client could use to act differently. This could be taken further to optimise remote events by sending only one event for all channels belonging to the same logical entity.
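A minimal sketch of this idea, assuming the server keeps a mapping from listener id to the logical client id that registered it, and that each modification arrives tagged with the originating client id (all names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Server-side sketch: remember which logical client registered each
// listener, and flag an event as "local" when the modification came
// from that same logical client.
public class EventSource {

    // listenerId -> logical client id that registered it
    private final Map<String, String> listenerOwners = new HashMap<>();

    public void addListener(String listenerId, String clientId) {
        listenerOwners.put(listenerId, clientId);
    }

    // Called when a modification tagged with sourceClientId occurs;
    // returns true if the event delivered to listenerId should carry
    // the "is local" flag.
    public boolean isLocal(String listenerId, String sourceClientId) {
        return sourceClientId != null
            && sourceClientId.equals(listenerOwners.get(listenerId));
    }
}
```

A near cache receiving an event flagged as local could then skip invalidating its own entry, and the same mapping would let the server collapse events so that only one is sent per logical entity rather than one per channel.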

Be able to sign up for interested events on all keys in all caches in a cache manager (granularity: 0x02)
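To illustrate how the server might match a registration's granularity against an incoming event: only granularity 0x02 (all keys in all caches of a cache manager) appears in the text; the narrower codes below are assumptions added purely for the sake of the example.

```java
// Sketch of granularity-based event matching. MANAGER (0x02) comes from
// the text; KEY and CACHE are assumed codes for narrower scopes.
public class Granularity {
    public static final byte KEY = 0x00;      // assumed: a single key in one cache
    public static final byte CACHE = 0x01;    // assumed: all keys in one cache
    public static final byte MANAGER = 0x02;  // all keys in all caches (from the text)

    // Does an event on (eventCache, eventKey) match a registration made
    // on (regCache, regKey) at the given granularity?
    public static boolean matches(byte granularity, String regCache, String regKey,
                                  String eventCache, String eventKey) {
        switch (granularity) {
            case MANAGER: return true;
            case CACHE:   return regCache.equals(eventCache);
            case KEY:     return regCache.equals(eventCache) && regKey.equals(eventKey);
            default:      return false;
        }
    }
}
```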

At the protocol level, put/replace operations could be combined, or extended, to allow adding remote listeners at the same time as the put/replace is executed, as opposed to calling add-listener and then calling put/replace (or vice versa)
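From the client's point of view, the combined operation might look like the sketch below. The protocol details (opcode, flags) are not specified here, so this only models the intent with in-memory stand-ins for the cache and the listener registry; the `putAndListen` name is hypothetical:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of a combined put + add-listener operation: one logical call
// (ideally one round trip) that both registers interest in the key and
// writes the value.
public class PutWithListener {
    private final Map<String, String> cache = new HashMap<>();
    private final Set<String> listenedKeys = new HashSet<>();

    // Hypothetical combined operation; returns the previous value, like put.
    public String putAndListen(String key, String value) {
        listenedKeys.add(key);        // add-listener half
        return cache.put(key, value); // put half
    }

    public boolean isListening(String key) {
        return listenedKeys.contains(key);
    }
}
```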

Should clients acknowledge receipt of events? Do we want to add event retransmission?

Update (6/11/2012)

- Timing of the events, as highlighted by Vincent, has to be taken into account. If a notification is sent too early, it could lead to retrieving stale data (from a node to which the data has not yet been replicated); if sent too late, the operation might commit on the server but never be reflected on the client. The former is more dangerous, so it is probably better to send notifications after the event. To deal with potential stale data, servers could keep track of which data has been retrieved by which client (something similar is already available in the distributed-mode L1 implementation), and if the notification fails to reach the client, retry it.

- JMS, although it provides a quick fix for remote events, has a few issues: it is limited to Java clients, and it would not work on its own. For example, how would a client know that a notification originated locally? Clients will have to send a client ID of some sort to identify the source of the operation.
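The "notify after the event, retry on failure" idea from the first point above can be sketched as follows. The send callback stands in for the real channel write, and the bounded retry count is an assumption rather than part of the design text:

```java
import java.util.function.Predicate;

// Sketch: deliver a notification after the modification has happened,
// retrying a bounded number of times if delivery fails.
public class NotifierWithRetry {

    // Attempts delivery up to maxAttempts times; returns true on success.
    // sendNotification returns true if the client acknowledged receipt.
    public static boolean notifyWithRetry(String event,
                                          Predicate<String> sendNotification,
                                          int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (sendNotification.test(event)) {
                return true;
            }
        }
        return false; // caller could mark the client for later catch-up
    }
}
```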