Wednesday Mar 02, 2011

GlassFish 3.1 continues to support auto-clustering of both
conventional (service HA only) and enhanced (service and data HA) MQ
clusters in the LOCAL and REMOTE JMS integration modes. In addition, we
have added support for conventional clusters in Embedded mode. In
Embedded mode, the broker is started in the same process space
(JVM) as GlassFish, which eliminates the overhead of multiple
processes. This mode is now the default for both clustered and
stand-alone GlassFish instances. However, while stand-alone
(non-clustered) GlassFish servers in Embedded mode use direct
in-process communication with the MQ broker, clustered instances use
TCP. This communication mode is selected automatically and cannot
be configured by the user. Direct-mode communication uses API calls
and completely bypasses the network stack, which results in
a significant speed-up. It cannot be used for clustered
instances because, when running in a cluster, the instances need to be able
to handle broker or connection failures by failing over to another
broker.
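For reference, the JMS integration mode is the type attribute of the jms-service element in domain.xml, so it can be inspected and changed with asadmin. The dotted path below assumes the default server configuration; for a cluster, substitute that cluster's config name:

    # Show the current JMS integration mode (EMBEDDED, LOCAL or REMOTE)
    asadmin get server-config.jms-service.type

    # Force LOCAL mode, for example if you want a separate broker process
    asadmin set server-config.jms-service.type=LOCAL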

The following MQ clustering modes and JMS integration options are supported:

- MQ conventional cluster with master broker: EMBEDDED and LOCAL modes
- MQ conventional cluster without master broker: EMBEDDED and LOCAL modes
- MQ enhanced cluster: LOCAL mode only

Lazy-initialization of MQ broker in embedded mode

In the Embedded mode, the start-up of the MQ broker is deferred until
it is actually required. A lightweight Grizzly service is configured to
listen on the JMS port. When a request comes in on the JMS port for the
first time, the MQ broker is started up before the request is processed.
The Grizzly service proxies all subsequent requests on this port to the
MQ broker. This behavior is controlled by the lazy-init property of the
default_JMS_host (JmsHost) element in domain.xml. The value is true by
default. To disable lazy initialization, set the lazy-init flag to
false; this disables the Grizzly service and the MQ broker is started
eagerly along with the GlassFish server. A restart of the GlassFish
server is required for a change to the lazy-init property to take effect.
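To turn lazy initialization off, something along the following lines should work; the dotted name of the default JMS host is an assumption, so verify it with asadmin get first:

    # Check the current value of the lazy-init flag on the default JMS host
    asadmin get server-config.jms-service.jms-host.default_JMS_host.lazy-init

    # Disable lazy initialization; the broker will then start eagerly with
    # GlassFish. Restart the server for the change to take effect.
    asadmin set server-config.jms-service.jms-host.default_JMS_host.lazy-init=false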

Support for dynamic cluster changes

In GlassFish v2.1, the MQ broker address list was populated only during
start-up. As a consequence, any changes to the cluster topology at
run-time were not reflected until the entire cluster was restarted.
As an enhancement in GlassFish 3.1, we now support dynamic
changes in cluster topology. The JMS service listens for cluster
change events, and these changes are propagated to the MQ broker
dynamically, eliminating the need for a restart.

Improvements to MQ conventional cluster with master broker

Conventional clusters in MQ have traditionally required configuring a
master broker for certain admin-related operations such as
create/update/delete of durable subscriptions and physical destinations.
MQ broker instances are also required to rendezvous with the master
broker at start-up. This requires the master broker to have been started
before the remaining broker instances can start and function
correctly. There have been several complaints from users running into
"master broker not started" errors when the start-up of the master
broker is delayed. To address this issue, a new broker property,
imq.cluster.nowaitForMasterBrokerTimeoutInSeconds, has been introduced
that can be configured through GlassFish (as a property in the jms-host
element of domain.xml); it defines the timeout interval before the
instances start reporting the error message. This is designed to
make the MQ cluster more tolerant of delays in master broker
start-up.
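As a rough sketch, the property can be added to the default JMS host with asadmin set; the 90-second value is only an example, the dotted path is an assumption, and the dot-escaping shown may need adjusting for your shell:

    # Let clustered brokers wait up to 90 seconds for the master broker
    # before reporting "master broker not started" errors.
    asadmin set server-config.jms-service.jms-host.default_JMS_host.property.imq\\.cluster\\.nowaitForMasterBrokerTimeoutInSeconds=90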

Dynamically changing the master broker

A significant enhancement to the MQ conventional cluster is a new
feature that allows users to dynamically change the master broker
without requiring a cluster restart. In earlier releases, changing the
master broker required the user to follow a manual backup and restore
of the MQ configuration store and a subsequent restart of the whole
cluster. This is now possible by running a single GlassFish command -
change-master-broker. As a consequence of this new feature, a restart
of the cluster is no longer required for this operation. By
default, the first configured instance in the GlassFish instance list
for the cluster is the master broker. This can now be changed to any
other GlassFish instance in the cluster. The only restriction is that
the chosen instance must be part of the cluster. While running this
command, ideally all the instances in the cluster should be running;
at a minimum, the instance associated with the old master broker and
the instance associated with the new master broker must be running.
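A minimal sketch of the invocation, assuming a cluster whose instances include instance1 (the current master) and instance2:

    # Make the broker associated with instance2 the new master broker.
    # At a minimum, the old and new master broker instances must be running.
    asadmin change-master-broker instance2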

MQ conventional cluster of peer brokers

Another significant feature in this release is a new MQ clustering
mode - the MQ conventional cluster of peer brokers. This mode is newly
introduced in MQ 4.5/GlassFish 3.1. In this mode, the earlier
requirement of nominating one of the clustered brokers as a master
broker is done away with and all MQ broker instances are equal peers.
Instead, a user-configured database is used to store the shared
configuration data. This mode can be enabled by using the new CLI
command - configure-jms-cluster (covered in the next section).

New CLI command to switch between JMS clustering modes

A new CLI command, configure-jms-cluster, has been introduced to switch
between the different JMS clustering modes. This command allows users to
configure and switch between the different MQ clustering modes, such as
from conventional to enhanced and vice versa. Note that the database
password needs to be passed in through the password file using the
key AS_ADMIN_JMSDBPASSWORD.
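As an illustration only, here is a sketch of configuring a conventional cluster of peer brokers with a shared database; the option names reflect the 3.1 asadmin reference, but the database details below are placeholders and should be adapted to your environment:

    # Create the cluster first, before any instances are added to it
    asadmin create-cluster myCluster

    # Switch the cluster to a conventional cluster of peer brokers whose
    # shared configuration lives in a database; the DB password is read
    # from the password file under the key AS_ADMIN_JMSDBPASSWORD.
    asadmin --passwordfile /tmp/jmspassword.txt configure-jms-cluster \
        --clustertype=conventional \
        --configstoretype=shareddb \
        --dbvendor=mysql \
        --dbuser=mquser \
        --dburl=jdbc:mysql://dbhost:3306/mqstore \
        myCluster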

The command only handles the configuration change from the existing
clustering mode to the new one. Hence extreme caution should be taken
when running it against an existing cluster where JMS-related
activities have occurred. By JMS-related activity, I am referring to
activities such as the creation of destinations or durable subscriptions
and the exchange of messages. When running against such a cluster, manual
steps need to be followed to back up the config and message stores.
The steps are detailed in the MQ admin guide. The best practice is to
run this command right after you have created a cluster but haven't
added any instances to it. This way you can be sure that no JMS
activities have occurred and the operation is perfectly safe.

Setting arbitrary broker properties

A small but important improvement to GlassFish JMS is the ability to
configure any MQ broker property through GlassFish. These properties can
be configured on either the jms-service or the jms-host element. If the
same property is configured on both jms-service and jms-host, the
property configured on jms-host takes precedence. There are two ways to
configure these properties. The first is by specifying them while using
the create-jms-host command; if you are using the LOCAL or EMBEDDED JMS
integration modes, you will need to set this new jms-host as the
default JMS host. The second way is by using the asadmin set command.
The property names should be fully qualified names, and any '.' should
be escaped with '\\\\'.
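Both approaches might look roughly like this; imq.autocreate.queue is just an example broker property, and the dotted path and dot-escaping for the set command are assumptions to verify against your shell and domain.xml:

    # Option 1: pass broker properties while creating a new JMS host
    # (for LOCAL/EMBEDDED modes, also make it the default JMS host).
    asadmin create-jms-host --mqhost localhost --mqport 7676 \
        --property imq.autocreate.queue=false myJmsHost
    asadmin set server-config.jms-service.default-jms-host=myJmsHost

    # Option 2: set a broker property on an existing jms-host; the dots
    # inside the broker property name must be escaped.
    asadmin set server-config.jms-service.jms-host.default_JMS_host.property.imq\\.autocreate\\.queue=false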