New types of information are displayed by dspmqver to support multiple installations. The changes might affect existing administrative scripts you have written to manage WebSphere® MQ.

The changes in output from dspmqver that might affect existing command scripts are twofold:

1. Version 7.5 has extra -f field options. If you do not specify a -f option, output from all the options is displayed. To restrict the output to the same information that was displayed in earlier releases, set the -f option to a value that was present in the earlier release. Compare the output for dspmqver in Figure 1 and Figure 2 with the output for dspmqver -f 15 in Figure 3.
2. The heading of the build level row has changed from CMVC level: to Level:.

When other installations are present, the dspmqver output also includes the following note:

Note there are a number (1) of other installations, use the '-i' parameter to display them.

Figure 3. dspmqver with option to make WebSphere MQ version 7.5 similar to WebSphere MQ version 7.0.1

dspmqver -f 15
Name:        WebSphere MQ
Version:     7.1.0.0
Level:       p000-L110624
BuildType:   IKAP - (Production)
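As the note suggests, you can display the version details of every installation on the system by adding the -i option:

dspmqver -i

The fields shown above are then repeated for each installation.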

You can also specify the maximum number of active channels, to prevent your system from being overloaded by a large number of starting channels. If you use this method, set the disconnect interval attribute to a low value to allow waiting channels to start as soon as other channels terminate.
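For example, a minimal MQSC sketch of setting a low disconnect interval on a sender channel (the channel name TO.QM2 is illustrative):

* End the channel after 30 seconds with no messages to send, freeing an active slot
ALTER CHANNEL(TO.QM2) CHLTYPE(SDR) DISCINT(30)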

Each time a channel that is retrying attempts to establish connection with its partner, it must become an active channel. If the attempt fails, it remains a current channel that is not active, until it is time for the next attempt. The number of times that a channel will retry, and how often, is determined by the retry count and retry interval channel attributes. There are short and long values for both these attributes. See Channel attributes for more information.
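For example, a sketch in MQSC of tuning these attributes on a sender channel (the channel name and values are illustrative, not recommendations):

* Retry 10 times at 60-second intervals, then indefinitely at 20-minute intervals
ALTER CHANNEL(TO.QM2) CHLTYPE(SDR) SHORTRTY(10) SHORTTMR(60) LONGRTY(999999999) LONGTMR(1200)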

When a channel has to become an active channel (because a START command has been issued, or because it has been triggered, or because it is time for another retry attempt), but is unable to do so because the number of active channels is already at the maximum value, the channel waits until one of the active slots is freed by another channel instance ceasing to be active. If, however, a channel is starting because it is being initiated remotely, and there are no active slots available for it at that time, the remote initiation is rejected.

Whenever a channel, other than a requester channel, is attempting to become active, it goes into the STARTING state. This is true even if there is an active slot immediately available, although in this case it will only be in STARTING state for a very short time. However, if the channel has to wait for an active slot, it is in STARTING state while it is waiting.

Requester channels do not go into STARTING state. If a requester channel cannot start because the number of active channels is already at the limit, the channel ends abnormally.

Whenever a channel, other than a requester channel, is unable to get an active slot, and so waits for one, a message is written to the log or the z/OS console, and an event is generated. When a slot is subsequently freed and the channel is able to acquire it, another message and event are generated. Neither the messages nor the events are generated if the channel is able to acquire a slot straightaway.

If a STOP CHANNEL command is issued while the channel is waiting to become active, the channel goes to STOPPED state. A Channel-Stopped event is raised as usual.

Server-connection channels are included in the maximum number of active channels.

For more information about specifying the maximum number of active channels, see the WebSphere MQ System Administration Guide for WebSphere MQ for UNIX and Windows systems, the WebSphere MQ for iSeries System Administration Guide for WebSphere MQ for iSeries, or the WebSphere MQ Script (MQSC) Command Reference for WebSphere MQ for z/OS.

You can set server-connection channel limits to prevent client applications from exhausting queue manager channel resources and to prevent a single client application from exhausting server-connection channel capacity.

A maximum total number of channels can be active at any time on an individual queue manager, and the total number of server-connection channel instances is included in that maximum number of active channels.

If you do not specify the maximum number of simultaneous instances of a server-connection channel that can be started, it is possible for a single client application, connecting to a single server-connection channel, to exhaust the maximum number of active channels that are available. When the maximum number of active channels is reached, no other channels can be started on the queue manager. To avoid this, you must limit the number of simultaneous instances of an individual server-connection channel that can be started, regardless of which client started them.
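One way to impose this limit is the MAXINST attribute of the server-connection channel, available from WebSphere MQ Version 7.0. For example, in MQSC (the channel name and value are illustrative):

* Allow at most 50 simultaneous instances of this channel, across all clients
ALTER CHANNEL(APP1.SVRCONN) CHLTYPE(SVRCONN) MAXINST(50)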

If the value of the limit is reduced to below the currently running number of instances of the server connection channel, even to zero, then the running channels are not affected. However, new instances cannot be started until sufficient existing instances have ceased to run so that the number of currently running instances is less than the value of the limit.
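Before you lower the limit, you can check how many instances are currently running; each matching line of DISPLAY CHSTATUS output represents one running instance (channel name illustrative):

* List the running instances of the channel
DISPLAY CHSTATUS(APP1.SVRCONN)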

Also, many different client-connection channels can connect to an individual server-connection channel. The limit on the number of simultaneous instances of an individual server-connection channel that can be started, regardless of which client started them, prevents any client from exhausting the maximum active channel capacity of the queue manager. However, if you do not also limit the number of simultaneous instances that can be started from an individual client, a single, faulty client application can open so many connections that it exhausts the channel capacity allocated to an individual server-connection channel, preventing other clients that need to use the channel from connecting to it. To avoid this, you must limit the number of simultaneous instances of an individual server-connection channel that can be started from an individual client.

If the value of the individual client limit is reduced below the number of instances of the server-connection channel that are currently running from individual clients, even to zero, then the running channels are not affected. However, new instances of the server-connection channel cannot be started from an individual client that exceeds the new limit until sufficient existing instances from that client have ceased to run so that the number of currently running instances is less than the value of this parameter.
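The per-client limit is set with the MAXINSTC attribute of the server-connection channel, also available from WebSphere MQ Version 7.0. For example, in MQSC (values illustrative):

* At most 50 instances in total, and at most 5 from any single client
ALTER CHANNEL(APP1.SVRCONN) CHLTYPE(SVRCONN) MAXINST(50) MAXINSTC(5)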

Use the Channels queue manager properties page from the WebSphere® MQ Explorer, or the CHANNELS stanza in the qm.ini file, to specify information about channels.

MaxChannels=100|number

The maximum number of current channels allowed. The default is 100.

MaxActiveChannels=MaxChannels_value

The maximum number of channels allowed to be active at any time. The default is the value specified on the MaxChannels attribute.

MaxInitiators=3|number

The maximum number of initiators. The default and maximum value is 3. Any value greater than 3 is treated as 3.

MQIBindType=FASTPATH|SHARED

The binding for applications:

FASTPATH

Channels connect using MQCONNX FASTPATH; there is no agent process.

SHARED

Channels connect using SHARED.

PipeLineLength=1|number

The maximum number of concurrent threads a channel will use. The default is 1. Any value greater than 1 is treated as 2.

When you use pipelining, configure the queue managers at both ends of the channel to have a PipeLineLength greater than 1.

Note: Pipelining is only effective for TCP/IP channels.

AdoptNewMCA=NO|SVR|SDR|RCVR|CLUSRCVR|ALL|FASTPATH

If WebSphere MQ receives a request to start a channel, but finds that an amqcrsta process already exists for the same channel, the existing process must be stopped before the new one can start. The AdoptNewMCA attribute allows you to control the end of an existing process and the startup of a new one for a specified channel type.

If you specify the AdoptNewMCA attribute for a given channel type, but the new channel fails to start because the channel is already running:

1. The new channel tries to stop the previous one by requesting it to end.

2. If the previous channel server does not respond to this request by the time the AdoptNewMCATimeout wait interval expires, the process (or the thread) for the previous channel server is ended.

3. If the previous channel server has not ended after step 2, and after the AdoptNewMCATimeout wait interval expires for a second time, WebSphere MQ ends the channel with a CHANNEL IN USE error.

Note: AdoptNewMCA is not supported on requester channels.

Specify one or more values, separated by commas or blanks, from the following list:

NO

The AdoptNewMCA feature is not required. This is the default.

SVR

Adopt server channels.

SDR

Adopt sender channels.

RCVR

Adopt receiver channels.

CLUSRCVR

Adopt cluster receiver channels.

ALL

Adopt all channel types except FASTPATH channels.

FASTPATH

Adopt the channel if it is a FASTPATH channel. This happens only if the appropriate channel type is also specified, for example, AdoptNewMCA=RCVR,SVR,FASTPATH.

Attention: The AdoptNewMCA attribute might behave in an unpredictable fashion with FASTPATH channels. Exercise great caution when enabling the AdoptNewMCA attribute for FASTPATH channels.

AdoptNewMCATimeout=60|1 – 3600

The amount of time, in seconds, that the new process waits for the old process to end. Specify a value in the range 1 – 3600. The default value is 60.

AdoptNewMCACheck=QM|ADDRESS|NAME|ALL

The type of checking required when enabling the AdoptNewMCA attribute. If possible, perform all three of the following checks to protect your channels from being shut down, inadvertently or maliciously. At the very least, check that the channel names match.

Specify one or more values, separated by commas or blanks, to tell the listener process to:

QM

Check that the queue manager names match.

ADDRESS

Check the communications source address; for example, the IP address.

NAME

Check that the channel names match.

ALL

Check the queue manager name, the communications source address, and the channel name.
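Taken together, a CHANNELS stanza in qm.ini might look like the following sketch (all values are illustrative, not recommendations):

CHANNELS:
   MaxChannels=200
   MaxActiveChannels=100
   MQIBindType=SHARED
   PipeLineLength=2
   AdoptNewMCA=ALL
   AdoptNewMCATimeout=60
   AdoptNewMCACheck=NAME,ADDRESS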

Figure 1 shows the general layout of the data and log directories associated with a specific queue manager. The directories shown apply to the default installation. If you change the installation, the locations of the files and directories are modified accordingly. For information about the location of the product files, see one of the following:

In Figure 1, the layout is representative of WebSphere® MQ after a queue manager has been in use for some time. The actual structure that you have depends on which operations have occurred on the queue manager.

Figure 1. Default directory structure (UNIX systems) after a queue manager has been started

By default, the following directories and files are located in the directory /var/mqm/qmgrs/qmname/ (where qmname is the name of the queue manager).

Table 1. Default content of a /var/mqm/qmgrs/qmname/ directory on UNIX systems

amqalchk.fil Checkpoint file containing information about the last checkpoint.

auth/ Contained subdirectories and files associated with authority in WebSphere MQ prior to Version 6.0.

authinfo/ Each WebSphere MQ authentication information definition is associated with a file in this directory. The file name matches the authentication information definition name—subject to certain restrictions; see Understanding WebSphere MQ file names.

channel/ Each WebSphere MQ channel definition is associated with a file in this directory. The file name matches the channel definition name—subject to certain restrictions; see Understanding WebSphere MQ file names.

clntconn/ Each WebSphere MQ client connection channel definition is associated with a file in this directory. The file name matches the client connection channel definition name—subject to certain restrictions; see Understanding WebSphere MQ file names.

listener/ Each WebSphere MQ listener definition is associated with a file in this directory. The file name matches the listener definition name—subject to certain restrictions; see Understanding WebSphere MQ file names.

msem/ Directory containing files used internally.

namelist/ Each WebSphere MQ namelist definition is associated with a file in this directory. The file name matches the namelist definition name—subject to certain restrictions; see Understanding WebSphere MQ file names.

plugcomp/ Empty directory reserved for use by installable services.

procdef/ Each WebSphere MQ process definition is associated with a file in this directory. The file name matches the process definition name—subject to certain restrictions; see Understanding WebSphere MQ file names.

qmanager/ Contains the following files:

QMANAGER The queue manager object.
QMQMOBJCAT The object catalog containing the list of all WebSphere MQ objects; used internally.

qm.ini Queue manager configuration file.

queues/ Each queue has a directory in here containing a single file called q.

services/ Each WebSphere MQ service definition is associated with a file in this directory. The file name matches the service definition name—subject to certain restrictions; see Understanding WebSphere MQ file names.

By default, the following directories and files are found in /var/mqm/log/qmname/ (where qmname is the name of the queue manager). The following subdirectories and files exist after you have installed WebSphere MQ, created and started a queue manager, and have been using that queue manager for some time.

amqhlctl.lfh Log control file.

\qmgrs Contains a folder for each queue manager; the contents of these folders are described in Table 2. Also contains the folder \@SYSTEM\errors.

\tivoli Contains the signature file used by Tivoli®.

\tools Contains all the WebSphere MQ sample programs. These are described in WebSphere MQ for Windows Quick Beginnings.

\trace Contains all trace files.

\uninst Contains files necessary to uninstall WebSphere MQ.

Table 2 shows the directory structure for each queue manager in the c:\Program Files\IBM\WebSphere MQ\qmgrs\ folder. The queue manager name might have been transformed, as described in Understanding WebSphere MQ file names.

Table 2. Content of a \queue-manager-name\ folder for WebSphere MQ for Windows

amqalchk.fil Checkpoint file containing information about the last checkpoint.

\authinfo Contains a file for each authentication information object.

\channel Contains a file for each channel object.

\clntconn Contains a file for each client connection channel object.

\errors Contains error log files associated with the queue manager:

AMQERR01.LOG
AMQERR02.LOG
AMQERR03.LOG

AMQERR01.LOG contains the most recent error information.

\listener Contains a file for each listener object.

\namelist Contains a file for each WebSphere MQ namelist.

\Plugcomp Directory reserved for use by WebSphere MQ installable services.

\Procdef Contains a file for each WebSphere MQ process definition. Where possible, the file name matches the associated process definition name, but some characters have to be altered. There might be a directory called @MANGLED here containing process definitions with transformed or mangled names.

\Qmanager Contains the following files:

Qmanager The queue manager object.

QMQMOBJCAT The object catalogue containing the list of all WebSphere MQ objects; used internally. Note: If you are using a FAT system, this name is transformed and a subdirectory is created containing the file with its name transformed.

QAADMIN File used internally for controlling authorizations.

\Queues Each queue has a directory here containing a single file called Q. Where possible, the directory name matches the associated queue name but some characters have to be altered. There might be a directory called @MANGLED here containing queues with transformed or mangled names.

If you run a process in the background, that process can be given a higher nice value (and hence lower priority) by the invoking shell. This might have general WebSphere MQ performance implications. In highly stressed situations, if there are many ready-to-run threads at a higher priority and some at a lower priority, operating system scheduling characteristics can deprive the lower-priority threads of CPU time.

It is strongly recommended that independently started processes associated with queue managers, such as runmqlsr, have the same nice values as the queue manager they are associated with. Ensure the shell does not assign a higher nice value to these background processes. For example, in ksh, use the setting "set +o bgnice" to stop ksh from raising the nice value of background processes. You can verify the nice values of running processes by examining the NI column of a "ps -efl" listing.
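For example, a sketch in ksh of starting a listener without a raised nice value and then verifying it (the queue manager name QM1 and port 1414 are illustrative):

set +o bgnice                      # stop ksh raising the nice value of background jobs
runmqlsr -m QM1 -t tcp -p 1414 &   # the listener inherits the shell's nice value
ps -efl | grep runmqlsr            # check the NI column of the listing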

It is also recommended that you start WebSphere MQ application processes with the same nice value as the queue manager. If they run with different nice values, an application thread might block a queue manager thread, or vice versa, causing performance to degrade.

The AIX® model for System V shared memory differs from other UNIX platforms, in that a 32-bit process can only attach to 10 WebSphere® MQ memory segments concurrently. A typical 32-bit WebSphere MQ application requires two WebSphere MQ memory segments attached for every connected queue manager. Every additional connected queue manager requires one further WebSphere MQ memory segment attached.

Note: During the MQCONN operation an additional shared memory segment is required. In a threaded process where multiple threads are connecting to the same queue manager, you must ensure that an additional memory segment is available for every connected queue manager.

WebSphere MQ Version 5.3 recommended the use of the environment variable EXTSHM to allow 32-bit applications to connect to more than 10 WebSphere MQ memory segments at a time. From WebSphere MQ Version 6, for 32-bit applications to benefit from the EXTSHM facility, both the queue manager and the application must be started with EXTSHM set in the environment.
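For example, a sketch of starting both sides with EXTSHM set (the queue manager name QM1 and the application name are illustrative; EXTSHM=ON is the usual setting on AIX):

export EXTSHM=ON
strmqm QM1    # queue manager started with EXTSHM in its environment
./myapp       # 32-bit application started from the same environment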

When a WebSphere MQ queue manager is ended normally, the queue manager removes the majority of the IPC resources that it was using. A small number of IPC resources remain, and this is by design: some of the IPC resources are intended to persist between queue manager restarts. The number of IPC resources remaining varies to some extent, depending on the operating conditions. There are some situations when a larger proportion of the IPC resources in use by a queue manager might persist after that queue manager has ended:

If applications are connected to the queue manager when it stops (perhaps because the queue manager was shut down using endmqm -i or endmqm -p), the IPC resources used by these applications might not be released.

If the queue manager ends abnormally (for example, if an operator issues the system kill command), some IPC resources might be left allocated after all queue manager processes have terminated.

In these cases, the IPC resources are not released back to the system until you restart (strmqm) or delete (dltmqm) the queue manager.

IPC resources allocated by WebSphere MQ are maintained automatically by the allocating queue managers. You are strongly recommended not to perform manual actions on or remove these IPC resources.

However, if it is necessary to remove IPC resources owned by user mqm, use the following procedure. WebSphere MQ provides a utility to release the residual IPC resources allocated by a queue manager. This utility clears the internal queue manager state at the same time as it removes the corresponding IPC resource, so the queue manager state and the IPC resource allocation are kept in step. To free residual IPC resources, follow these steps:

1. End the queue manager and all connecting applications.

2. Log on as user mqm.

3. Type the following command.

On Solaris, HP-UX, and Linux:

/opt/mqm/bin/amqiclen -x -m QMGR

On AIX:

/usr/mqm/bin/amqiclen -x -m QMGR

This command does not report any status; however, if some WebSphere® MQ-allocated resources could not be freed, the return code is nonzero.

4. Explicitly remove any remaining IPC resources that were created by user mqm.
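For example, a sketch of checking the result on AIX and then looking for anything left behind, where QMGR is the queue manager name placeholder used above (the grep is only a rough way to find candidate resources):

/usr/mqm/bin/amqiclen -x -m QMGR
echo "return code: $?"    # nonzero if some resources could not be freed
ipcs -a | grep mqm        # list IPC resources still owned by user mqm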

Note: If a non-mqm application attempted to connect to WebSphere MQ before any queue managers were started, there might still be some WebSphere MQ IPC resources remaining even after you follow the above steps. These remaining resources were not created by user mqm and there is no straightforward way to reliably recognize them. However, these resources are very small and are reused when WebSphere MQ is next restarted.