In MQ 9.0.1 we introduced the MQ Service Provider for z/OS Connect. This is compatible with MQ for z/OS queue managers that are version 8 or later and is discussed here.

The blog referenced used a stock check workload as an example use of the z/OS Connect feature, but how does this compare with a client performing the same request?

To answer this question, we created two configurations:

A client able to send HTTP requests to a WAS Liberty server configured with a z/OS Connect '2-way' service (in more traditional parlance, a request/reply workload) into a z/OS queue manager whose queues were served by batch server tasks. The partner machine uses curl to drive the HTTP request into the Liberty profile, which connects to the z/OS queue manager in bindings mode and puts the message to a request queue. The request queue is monitored by batch server tasks that get the message, process the data in the request message and generate a reply message on the named reply queue, which is subsequently returned to the partner machine.
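As a sketch, the curl invocation on the partner machine might look like the following. The host name, port, service name and payload are illustrative assumptions, not the measured configuration; z/OS Connect services are typically invoked via the /zosConnect/services path with action=invoke.

```
# Hypothetical host, port, service name and payload; adjust to your Liberty configuration
curl -X POST \
     "http://wlpserver.example.com:9080/zosConnect/services/stockQuery?action=invoke" \
     -H "Content-Type: application/json" \
     -d '{"itemId":"12345"}'
```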

An MQ client application able to connect to a z/OS queue manager via a SVRCONN channel to drive workload on the batch server tasks. The client application is configured to connect to the queue manager, open the request and reply queues, put the request message, get-wait the reply message, close the queues and disconnect from the queue manager.

The MQ Client application is deliberately acting in a less optimal manner to best simulate the lack of a long-term persistent connection between the REST API and Liberty server.
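The per-transaction sequence described above can be sketched as follows. This is illustrative only, not the benchmark code: it assumes the third-party pymqi client bindings, and the queue manager, channel, host and queue names are placeholders.

```python
def run_request_reply(qmgr_name="QM01", channel="SYSTEM.DEF.SVRCONN",
                      conn_info="zoshost.example.com(1414)",
                      req_q="REQUEST.QUEUE", rep_q="REPLY.QUEUE"):
    """One full transaction: connect, open, put, get-wait, close, disconnect."""
    import pymqi  # third-party MQ bindings; an assumption, not the original tool

    qmgr = pymqi.connect(qmgr_name, channel, conn_info)  # MQCONN over SVRCONN
    request_q = pymqi.Queue(qmgr, req_q)                 # MQOPEN request queue
    reply_q = pymqi.Queue(qmgr, rep_q)                   # MQOPEN reply queue

    request_q.put(b"512")                                # request names the desired reply size
    gmo = pymqi.GMO(Options=pymqi.CMQC.MQGMO_WAIT, WaitInterval=5000)
    reply = reply_q.get(None, pymqi.MD(), gmo)           # get-wait the reply

    request_q.close()                                    # MQCLOSE both queues
    reply_q.close()
    qmgr.disconnect()                                    # MQDISC
    return reply
```

Repeating the whole sequence per message, rather than holding the connection open, is what makes this comparable to the stateless REST requests.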

Measurement variation:

The measurements were run in a variety of configurations, which varied:

Variable message sizes:

The request message used was typically 50 bytes or less and contained the size of the desired reply message.

The reply messages generated were small (1KB), medium (64KB) and large (512KB).

Variable number of clients:

Measurements were run with between 10 and 50 requester tasks.

Variable number of queues:

When workload was distributed over multiple sets of queues, there was no significant difference in the performance observed. Typically, the transaction rates were not high enough to reach queue limits. As such, this report discusses the performance when using 1 pair of request and reply queues.

Configuring the MQ Service Provider for z/OS Connect:

In addition, we configured a two-way service in the server.xml by adding a number of workloads with specific queues; for example, workload1 used request queue LQ1001 and reply queue LIXC01, as demonstrated in the snippet below:
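A sketch of what such a two-way service definition can look like in server.xml follows. The element and attribute names are assumptions based on the MQ Service Provider documentation, and the ids, JNDI names and connection factory are placeholders; only the queue names LQ1001 and LIXC01 come from the text above.

```xml
<!-- Sketch only: ids, JNDI names and the connection factory are assumptions -->
<zosConnectService id="workload1Service" serviceName="workload1"
                   serviceRef="mqWorkload1"/>

<mqzOSConnectService id="mqWorkload1" connectionFactory="jms/cf1"
                     destination="jms/LQ1001" replyDestination="jms/LIXC01"/>

<jmsQueue id="jms/LQ1001" jndiName="jms/LQ1001">
    <properties.wmqJms baseQueueName="LQ1001"/>
</jmsQueue>
<jmsQueue id="jms/LIXC01" jndiName="jms/LIXC01">
    <properties.wmqJms baseQueueName="LIXC01"/>
</jmsQueue>
```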

Specialty processors on z/OS:

The WAS Liberty server hosting the MQ Service Provider for z/OS Connect is a Java-based product and as such is able to exploit zIIP specialty processors. The measurements were monitored via RMF to determine how much of the work could be offloaded to zIIP.

The measurements using the WAS Liberty server show costs when:

No zIIP processors are available.

All eligible workload is offloaded to zIIP. The IIPHONORPRIORITY=NO option in the IEAOPTxx parmlib member ensures that zIIP-eligible work runs on the zIIP unless it is necessary to resolve contention for resources with work that is not zIIP eligible.

The RMF Workload report suggested that 98% of the WAS Liberty costs were zIIP-eligible. In reality, unless the IIPHONORPRIORITY=NO option is specified, some work may be run on general purpose processors if there are insufficient zIIP processors available.

Measurements:

The costs reported are primarily for the WAS Liberty server and the MQ channel initiator with the intent to demonstrate the differences. The MQ queue manager and batch application server costs are largely similar regardless of how the message arrives on the request queue.

Small messages

The following table shows the cost of the workload that was run on general purpose processors by the MQ channel initiator or the WAS Liberty server.

Transaction Cost in MQ Channel Initiator v WAS Liberty Server

Requesters | Client connection    | REST API           | REST API
           | MQ Channel Initiator | WAS Liberty Server | WAS Liberty Server
           |                      | (no zIIP offload)  | (zIIP offload)
-----------+----------------------+--------------------+-------------------
    10     |         896          |        8200        |        164
    20     |         940          |        8077        |        162
    30     |         981          |        7954        |        159
    40     |        1010          |        7907        |        158
    50     |        1040          |        8180        |        163

Note: Costs shown are CPU microseconds per transaction, with measurements run on z13 with 3 dedicated processors running z/OS V2R2.

The cost of z/OS Connect when zIIP processors are not available is 8 to 10 times higher than that of the client connection.

This is dramatically reduced when zIIPs are available, with the chargeable cost falling to 15-20% of the client connection. Note that this does mean the zIIP processors themselves will see increased usage.
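These ratios follow directly from the table; for example, at 10 requesters:

```python
client = 896          # CPU microseconds/transaction, client connection
rest_no_ziip = 8200   # REST API, no zIIP offload
rest_ziip = 164       # REST API, chargeable cost with zIIP offload

cost_ratio = rest_no_ziip / client         # within the 8-10x range quoted
chargeable_pct = 100 * rest_ziip / client  # within the 15-20% range quoted
print(round(cost_ratio, 1), round(chargeable_pct))  # 9.2 18
```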

Achieved Transaction Rate

The transaction rate for the z/OS Connect workload increased from 72 to 150 transactions per second as the number of requesters increased from 10 to 50. With more requesters, throughput continued to increase at a similar rate.

By contrast the client performance peaked at 2500 transactions per second with 50 clients.

Medium messages

Transaction Cost in MQ Channel Initiator v WAS Liberty Server

Requesters | Client connection    | REST API           | REST API
           | MQ Channel Initiator | WAS Liberty Server | WAS Liberty Server
           |                      | (no zIIP offload)  | (zIIP offload)
-----------+----------------------+--------------------+-------------------
    10     |         920          |       11415        |        342
    20     |         962          |       10060        |        302
    30     |        1005          |        9338        |        280
    40     |        1026          |        9343        |        280
    50     |        1037          |        9277        |        278

Note: Costs shown are CPU microseconds per transaction, with measurements run on z13 with 3 dedicated processors running z/OS V2R2.

The costs of the client connection increased little between the small and medium sized reply messages, whereas the z/OS Connect workload saw an increase of 13-37%, suggesting the WAS Liberty server is more sensitive to the size of the message payload.

Achieved Transaction Rate

The transaction rate for the z/OS Connect workload increased from 65 to 144 transactions per second as the number of requester tasks increased from 10 to 50.

By contrast the client performance peaked at 1275 transactions per second with 50 clients and was hitting network limits for this workload due to bandwidth and latency.

Large messages

Transaction Cost in MQ Channel Initiator v WAS Liberty Server

Requesters | Client connection    | REST API           | REST API
           | MQ Channel Initiator | WAS Liberty Server | WAS Liberty Server
           |                      | (no zIIP offload)  | (zIIP offload)
-----------+----------------------+--------------------+-------------------
    10     |        1218          |       19620        |        589
    20     |        1233          |       19310        |        579
    30     |        1253          |       19420        |        583
    40     |        1207          |       19515        |        585
    50     |        1256          |       19909        |        597

Note: Costs shown are CPU microseconds per transaction, with measurements run on z13 with 3 dedicated processors running z/OS V2R2.

As with medium sized messages, the use of large messages with the client application saw a small increase. In this configuration the increase was approximately 300 microseconds per transaction.

The z/OS Connect workload cost increased by approximately 8 CPU milliseconds per transaction, which, when offloaded, resulted in a chargeable increase of around 250 microseconds per transaction.

When offloading all possible work to zIIP, the chargeable cost for the z/OS Connect workload was approximately half that of the client workload.

Achieved Transaction Rate

The client connection was rapidly constrained by network bandwidth, even with just 10 clients.

Does Advanced Message Security have an impact?

Enabling the SPLCAP=YES option on a z/OS queue manager, regardless of a policy being applied to the queues, will make a difference to the MQ Client measurement. We observed an increase of approximately 0.3 CPU milliseconds to the cost of each transaction as a result of explicit client look-up for a queue policy.

When the queue manager was configured to have SPLCAP=YES in the z/OS Connect configuration but no policies were defined, the transaction cost was not affected.

If policies were defined, there would be an increase similar to those seen in the MQ for z/OS V9.0.1 performance report.

Conclusions

There will be situations where z/OS Connect is a viable alternative to an MQ client from a usability perspective: in all measurements, the round trips had sub-second response times. Many modern languages with no native MQ client, such as Node.js, Swift and Go, have rich support for REST API processing.

It should be noted that there is certain functionality that the MQ Client possesses that z/OS Connect cannot replicate, such as transactional considerations, which may influence the decision of which configuration to use.

The MQ client application was able to process more transactions per second, scaling well until the network bandwidth limits were approached. The z/OS Connect measurements showed a less aggressive increase in throughput, but continued to scale with more requester tasks until they too hit network constraints.

In an environment where network latency is high, the MQ client performance may drop as there are a number of flows between the client and the server. It may be that the REST API is less impacted by network latency, as there are typically fewer flows between the requester and the WAS Liberty server.

The costs observed in the client configuration can be significantly reduced if the client is able to connect once, potentially open the queues once, then process multiple messages before closing queues and disconnecting. As a guide, more than 70% of the small message cost in the MQ channel initiator is related to MQCONN and MQOPEN – an overhead which rises further when SSL/TLS encryption is used on channels.

Further cost savings can be made in the MQ client configuration by suppressing the CSQX511I and CSQX512I messages using the “SET SYSTEM EXCLMSG(X511,X512)” command. The saving was of the order of 150 microseconds per transaction. To put this into context, if these X511/X512 messages were suppressed, the small message transaction cost reduces to 770 microseconds, compared with the REST API cost of 342 microseconds (based on all eligible code running on zIIP).

Whilst the cost of the z/OS Connect workload when offloaded to zIIP is attractive compared to the MQ client, it does mean additional load on the zIIPs. For example, the small message workload table reports that 342 microseconds was not eligible, meaning that 11703 CPU microseconds was eligible. If all of that eligible work were run on specialty processors, 90 transactions per second would occupy an entire zIIP processor. By contrast, with the MQ client it would take a rate of 1100 transactions per second to fully utilise a single general purpose CPU.
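Using the figures quoted in that paragraph, the capacity arithmetic can be sketched as follows. One processor provides 1,000,000 CPU microseconds per second; the 90 and 1100 transactions-per-second figures in the text are rounded.

```python
ziip_eligible_us = 11703   # zIIP-eligible cost per transaction (microseconds)
client_chinit_us = 896     # client cost per transaction in the channel initiator
us_per_cpu_second = 1_000_000

ziip_saturation_tps = us_per_cpu_second // ziip_eligible_us  # tps that fills one zIIP
cp_saturation_tps = us_per_cpu_second // client_chinit_us    # tps that fills one CP
print(ziip_saturation_tps, cp_saturation_tps)  # 85 1116 (rounded to ~90 and ~1100 in the text)
```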

The queue manager stores client connection channel information in the client channel definition table (CCDT). This table is updated whenever a client connection channel is defined or altered. MQ client applications use the CCDT to determine the channel definitions and authentication information needed to connect to a queue manager. For a client application to use the CCDT, it must either be copied to the client machine or placed in a location the client can access. However, there is no traditional file system available on the appliance, so the file needs to be copied to a directory from which it can be downloaded.

Here are the steps to do it.

Run the below command on the appliance to create the required MQ objects and a user account under which the client connection will be established.

The above command creates a channel definition table file in the MQ data directory. On Linux the default location is /var/mqm and on Windows it is C:\ProgramData\IBM\MQ. This file can then be used for connecting to the appliance queue manager.

MQ client channel definition tables (also known as CCDTs) have been part of MQ for as long as I can remember. They provide a way of configuring advanced client connections to queue managers. The simplest client connections can of course be configured with just a channel name and connection name, for example with the MQSERVER environment variable:

export MQSERVER=SYSTEM.DEF.SVRCONN/TCP/myhost.sample.com(1414)

This provides MQ with enough information for an application to connect over TCP/IP to a single remote queue manager at 'myhost' on port 1414, via channel SYSTEM.DEF.SVRCONN. If you need to go beyond a simple connection such as this, then you'll need a different solution. If any of the following apply:

Need to enable or tweak other non-default settings such as channel compression

Need to provide workload balancing of connections without a network sprayer

Need to define more than one queue manager connection

...then a client channel definition table is one way to meet these requirements.

The client channel definition table is implemented as a file, by default called AMQCLCHL.TAB, held in a binary format. The file would normally be copied from where it was defined to where the client applications are running. Traditionally the client channel definition table has always been generated by the queue manager and updated every time a channel of type CLNTCONN is created, altered or deleted. From MQ V8, the ability to create, alter, delete and view the contents of the file was added to clients through runmqsc running in non-connected mode (runmqsc -n). Although the client-side runmqsc capabilities address some of the issues involved with maintaining client connection definitions, they don't solve one of the major issues: how do you push out channel definition changes to all of the clients that will use them?

Until now, MQ administrators would need to coordinate pushing out new copies of the CCDT to each client, or host the file on a networked filesystem mounted on every client. JMS and fully managed .NET client applications have also been able to locate a CCDT file by specifying a local file or an FTP or HTTP resource using the CCDTUrl property on a connection factory. For all native client applications (C/C++, COBOL, RPG) the file had to be accessible via a local or mounted network filesystem; however, this is no longer true.

MQ V9 adds the same capability to native (C/C++, COBOL & RPG) and unmanaged .NET applications to 'pull' the CCDT from a URL, whether that be a local file, an FTP resource or an HTTP resource. The default caching behavior of MQ clients is that a CCDT file will only be pulled down if the file modification time is different from the last time it was retrieved. As with most client configuration options, there are a variety of ways in which the URL location can be provided:
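For example, a native client can be pointed at an HTTP-hosted CCDT via the MQCCDTURL environment variable (the URL below is a placeholder for wherever you host the file):

```shell
# Placeholder URL; point this at wherever you host AMQCLCHL.TAB
export MQCCDTURL="http://config.example.com/mq/AMQCLCHL.TAB"
echo "$MQCCDTURL"
```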

If you wanted to use this support with ftp or http, then that still means you would need to host the CCDT file on a server, but with the new V9 support this means that all of your client applications could automatically pick up changes to channel definitions without manually pushing out updates or needing to mount a networked filesystem on each client.

Another in the series of bitesize blog posts about features in MQ V8. Check out the whole series here.

Have you ever needed to roll out an MQ client application across your enterprise, or indeed to a business/trading partner or other third party? You'll probably be familiar with the challenges involved in ensuring the MQ application runtime environment can locate the right level of MQ client runtime to operate correctly.

Just as with other software stacks, certification of an MQ application in development/pre-production will generally test a very specific maintenance level of the runtime environment, and it can be difficult to ensure that when the application is deployed into production it uses exactly the same runtime levels that were validated. How can you ensure that the right installation options and levels of MQ are installed in the application environments?

With MQ V8.0.0.4, native redistributable client runtime libraries are provided for Windows and Linux x86-64 platforms to make it simple to distribute both applications and the required MQ runtime libraries. A third package (not specific to platform) is available containing the runtime files required for the Java/JMS applications, including the MQ resource adapter for JMS applications running under an application server.

What is the MQ redistributable client?

The MQ redistributable client is a collection of runtime files from MQ that are provided in a zip/tar file that can be distributed to 3rd parties under redistributable license terms. To put this another way, the MQ redistributable client provides a simple way of distributing your applications and the MQ runtime files that they require in a single package.

What can I do with the MQ redistributable client?

The MQ redistributable client provides all of the runtime files required to run the following:

Native MQ applications using the MQI written in C, C++, COBOL

MQ applications using the Java/JMS classes

MQ applications using managed or unmanaged .NET classes

The runtime supports all of the usual features of an MQ client, including, for example, support for SSL/TLS and AMS. All of the administration and problem determination tooling that you would expect to find in a client install is also provided, for example runmqsc.

You can, if you wish, use the genmqpkg tool (provided in the redistributable client) to create a runtime distribution tailor-made for your application. For example, if your application needs TLS support but not COBOL, C++ or .NET, the genmqpkg tool can help create a compact package with the minimal set of files needed.

What can I not do with the MQ redistributable client?

The MQ redistributable client is a runtime distribution package only; it contains the files that allow your applications to run, together with a handful of useful command-line tools for administration and problem determination. The MQ redistributable client, however, lacks any software development features such as header files, copybooks or sample source code. If you want to write or recompile an MQ application then a full MQ installation is required. The MQ redistributable client does not provide other non-MQ runtime resources, such as the Microsoft Visual Studio runtime, .NET or Java runtime environments.

How do I get a redistributable client?

The MQ redistributable client packages are available from FixCentral here.

Instructions on how to unpack the client libraries are provided in the Knowledge Center here.

MQ V8 introduced a feature called Connection Authentication which allows applications to pass a user ID and password across to the queue manager to be used to authenticate that the application is allowed to connect to the queue manager.

We recognised that many applications would need to be changed to pass in the user ID and password, and so as a short-term mitigation we supplied a client-side security exit called mqccred that could send the user ID and password without any changes being needed to the application. You can read more about this exit in Bitesize Blogging: MQ V8 - mqccred exit.

If you've read my SlideShare presentation on MQ V8 Security you'll know that this no-code-change provision of a user ID and password can be done by either a pre-connect exit or a client-side security exit. If you need to write your own facility rather than using mqccred, perhaps because you want to pull passwords from somewhere other than a flat file on the client machine, then which should you use?

In this post, I want to provide our reasoning for choosing a security exit rather than a pre-connect exit for the exit we supplied to help you make the decision of which to use in your own environment for this, or other reasons in the future.

We compared three options (a pre-connect exit, a 'C' client-side security exit and a Java client-side security exit) against the following requirements:

Able to add/change the contents of the MQCSP

Suitable for use with clients on V7.5 / V7.1

Suitable for use with clients on V7.0 / V6.0

Suitable for use with C clients

Suitable for use with Java clients (Java/JMS)

Suitable for use with non-managed .NET clients

Suitable for use with fully managed .NET clients

So we chose to supply a 'C' language client-side security exit because it covered the most environments with a single item.

Folks, check out the awesome Chris Matthewson showing you how to diagnose SSL client connection failures to an MQ queue manager. It's a great watch and really worthwhile if you are looking to secure your connections with SSL...

Another in the series of bitesize blog posts about features in MQ V8. Check out the whole series here.

If you want to make use of the new User ID and Password Authentication feature in MQ V8 and not all of your client applications send a user ID or password, there is a new security exit shipped with MQ V8 called mqccred that you can use. mqccred provides a user ID and password for a client application that is then sent to MQ and, if configured, authenticated.

Everything you need can be found in <<installation directory>>\Tools\c\Samples\mqccred\ on Windows and <<installation directory>>/samp/mqccred on Unix.

Setting up the user IDs and passwords

The mqccred.ini file contains your user ID and password information. By default it is expected that this file is located in $HOME/.mqs/mqccred.ini. If you would like to locate it elsewhere you can use the environment variable MQCCRED to point at it:

MQCCRED=C:\mydir\mqccred.ini

You can provide a user ID and password for all queue managers or for each individual queue manager. This is an example of an mqccred.ini file:
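A sketch of such a file, using the stanza format described in the mqccred documentation, is shown below; the user names and passwords are placeholders.

```
AllQueueManagers:
   User=user1
   Password=passw0rd1

QueueManager:
   Name=QMA
   User=user2
   Password=passw0rd2

QueueManager:
   Name=QMB
   User=user3
   Password=passw0rd3
   Force=TRUE
```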

The individual queue manager definitions take precedence over the global setting. For a queue manager you can also override a user ID and password that is explicitly supplied by an application by using the Force=TRUE attribute. The default for all queue managers is FALSE.

QueueManager:
   Name=QMB
   User=user3
   Password=passw0rd3
   Force=TRUE

Protecting the mqccred.ini file

Since this file contains password information it should be protected. First you should restrict user access by removing all unnecessary permissions. Next, you can use the runmqccred program to obfuscate the passwords. This will remove the plaintext password attributes and replace them with the OPW attribute.

QueueManager:
   Name=QMA
   User=user2
   OPW=95E485A0FD0CE8AA

If the file permissions are not secure enough runmqccred will produce this message:

Configuration file 'C:\Users\User1\.mqs\mqccred.ini' is not secure.
Other users may be able to read it. No changes have been made to the file.
Use the -p option for runmqccred to bypass this error.

You can bypass this issue with the -p flag but the exit will fail to run when put into production if you have not resolved this issue. When runmqccred runs successfully it will inform you how many passwords have been obfuscated.

Another in the series of bitesize blog posts about features in MQ V8. Check out the whole series here.

With the introduction of OS and LDAP authentication in MQ V8 we now allow MQ administrators to specify whether inbound connections should supply credentials in order to be able to connect. The different levels of security are:

NONE - MQ will not engage with the authentication service and any attempts to supply credentials on connections will be ignored by the queue manager (although still passed to any security exits that you may have).

OPTIONAL - MQ will only verify credentials if they are supplied. If no credentials are supplied then the connection will pass the credential verification stage.

REQDADM - MQ mandates that anyone connecting who is a member of the mqm group must supply a valid user ID and password; if the user connecting is not a member of the mqm group then OPTIONAL rules are applied.

REQUIRED - MQ mandates that all connections must supply a valid user ID and password.

At first glance it seems that the above connection rules are set for all connections to MQ regardless of where a connection was initiated. However, in addition to the CHCKCLNT field on the QMGR's AUTHINFO object there is also a CHCKCLNT field on CHLAUTH rules. By changing the CHCKCLNT value on CHLAUTH rules you are able to control the level of security for inbound client connections depending on which CHLAUTH rule they match.

Note: As CHLAUTH rules only apply to network-attached connections, you can only configure multiple security levels for client applications connecting via the network, not via local bindings. Also, you are only able to raise the security level relative to the QMGR, not lower it.

Using the following command you can specify a different CHCKCLNT value than the QMGR's CHCKCLNT value when creating (or modifying) a CHLAUTH rule.

ASQMGR - Use the same level of security as the QMGR, which must be set to a minimum level of OPTIONAL.

REQDADM - See above description

REQUIRED - See above description
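As a sketch (the channel name and address pattern here are illustrative), such a rule takes the following shape:

```
SET CHLAUTH('CONNECT') TYPE(ADDRESSMAP) ADDRESS('192.168.1.*') USERSRC(CHANNEL) CHCKCLNT(REQUIRED)
```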

By supplying a higher CHCKCLNT value on your CHLAUTH rule than on your QMGR you can specify stricter security levels for different connections. This is useful if you have areas of your network that require additional security. For example, if you have external IP addresses that connect to your internal MQ, you can force those connections to supply valid credentials, adding an extra layer of security to your system while allowing internal connections to connect without credentials.

For more information on OS and LDAP connection authentication see here.

In the example below we will see how to set up an MQ QMGR to force certain connections to supply credentials while allowing other connections to connect without credentials.

Example of how to set up CHLAUTH rules with CHCKCLNT

In this example we will assume that I have installed MQ successfully and I have the necessary permissions to make all changes to MQ. As an MQ administrator I want to ensure that my copy of MQ is secure so that no-one can access it without permission. Let's assume that there are 3 different IP addresses that are authorised to connect to MQ, however one of them requires additional security. For simplicity, let's also assume that IP addresses represent a single entity and cannot be spoofed.

1) First I set my QMGR to use an AUTHINFO that has CHCKCLNT(OPTIONAL) set. (Note: I cannot set this to NONE and then raise the security using CHLAUTH, as this would cause all connections to fail with 2063: by setting a QMGR's CHCKCLNT to NONE you are stating that no credential authentication should be done at all. Setting the QMGR to NONE and then setting a CHLAUTH rule to REQUIRED or REQDADM would create a conflict and therefore a configuration error within MQ.):
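A sketch of the kind of MQSC commands involved in this step follows; the AUTHINFO name below is the default IDPWOS object and yours may differ.

```
ALTER AUTHINFO(SYSTEM.DEFAULT.AUTHINFO.IDPWOS) AUTHTYPE(IDPWOS) CHCKCLNT(OPTIONAL)
ALTER QMGR CONNAUTH(SYSTEM.DEFAULT.AUTHINFO.IDPWOS)
REFRESH SECURITY TYPE(CONNAUTH)
```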

3) Next I create a simple SVRCONN CHANNEL called "CONNECT" which is where I want clients to connect:

> DEFINE CHANNEL('CONNECT') CHLTYPE(SVRCONN)

4) Because I only want connections to come through the CONNECT channel and because I only want connections from certain IP addresses to connect I will create the following CHLAUTH rule to block all connections. (Later we will create the rules to allow access):
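A sketch of such rules, consistent with the addresses used in this example, might look like the following; treat these as illustrative rather than the exact rules used.

```
* Block everything by default
SET CHLAUTH('*') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(NOACCESS)

* Allow the local machine and 192.168.0.7 without credentials
SET CHLAUTH('CONNECT') TYPE(ADDRESSMAP) ADDRESS('127.0.0.1') USERSRC(CHANNEL)
SET CHLAUTH('CONNECT') TYPE(ADDRESSMAP) ADDRESS('192.168.0.7') USERSRC(CHANNEL)

* Allow 192.168.1.567 only with valid credentials
SET CHLAUTH('CONNECT') TYPE(ADDRESSMAP) ADDRESS('192.168.1.567') USERSRC(CHANNEL) CHCKCLNT(REQUIRED)
```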

Now my MQ is secure so that only client connections from the local machine and 192.168.0.7 can connect without credentials, and client connections from 192.168.1.567 can connect as long as they supply valid credentials.

Great news! With the announcement yesterday of WebSphere MQ 7.5 came the news that the WebSphere MQ Extended Transactional Client (XTC) is now provided free of charge. Not only is this true in the new WMQ v7.5, but it applies to all supported versions of WebSphere MQ client across all platforms.

Previously, XTC was an optional component of WebSphere MQ Server and it never had a unique part number. Obtaining the client required purchasing a WebSphere MQ Server license for the server where the client was to be deployed, and then installing just the client components. Since the cost was the same as a full queue manager on the same host, many customers chose to install the full queue manager and skip the client. Although this provides the highest reliability and is a great solution architecturally, it can get a bit expensive if the network was designed to take advantage of the free MQ clients, and so some customers chose to forgo XA functionality altogether.

The new pricing (or lack thereof!) removes cost as a consideration when designing transactional applications based on WebSphere MQ clients. Obviously, continue to use a local queue manager where it makes sense. But in those cases where the client model is the best fit and XA transactions are required, there is no longer any barrier to using the Extended Transactional Client and no need to upgrade to get that functionality.

There was also a bit of confusion over the license terms due to the lack of a unique part number. As a result, some shops, however well intentioned, deployed the XA client out of license compliance. The problem in these cases is finding out about the licensing charge only after deploying an extensive number of clients. The good news here is that the pricing change extends to all versions of the Extended Transactional Client, against all versions of WebSphere MQ, including existing deployments effective yesterday, April 24th.

Obviously, it will take a while for the change to be reflected in packaging. For now continue to install XTC the same as before but keep an eye on the Infocenter for more on this topic as it evolves.