Overview

The kind of application I tend to show with MCollective is very request-response oriented: you request some information from nodes and it shows you the data as they reply. This is not what people typically do with middleware though. Instead they create receivers for event streams, processing those into databases, or they use the middleware as a job queue.

The MCollective libraries can be used to build similar applications and today I’ll show a basic use case. It’s generally quite easy to create a consumer for a job queue using middleware, as covered in my recent series of blog posts. It’s much harder when you want to support multiple middleware brokers, pluggable payload encryption and different serializers; add some Authentication, Authorization and Auditing into the mix and it soon becomes a huge undertaking.

MCollective already has a rich set of plugins for all of this, so it would be great if you could reuse them to save yourself some time.

Request, but reply elsewhere

One of the features we added in 2.0.0 is better awareness, in the core MCollective libraries, of the classical reply-to behaviour common to middleware brokers. Every request now specifies a reply-to target and the nodes send their replies there; this is how we get replies back from nodes, and where the broker supports it this is typically done using temporary private queues.

But it’s not restricted to this; let’s see how you can use this feature from the command line. First we’ll set up a listener on a specific queue using my stomp-irb application.
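A sketch of the raw flow, with an illustrative broker host and port:

% stomp-irb -s localhost -p 6163
>> subscribe :queue, "mcollective.nagios_passive_results"

With that listening, any MCollective client can be told to direct its replies there instead of back to itself; the 2.0.0 client applications expose this as a --reply-to option:

% mco rpc nrpe runcommand command=check_load --reply-to=/queue/mcollective.nagios_passive_results

The client publishes the request and returns immediately since no replies are coming back to it, and the encoded reply payloads show up in the stomp-irb session instead.

Watching raw payloads is fine for a demo, but to do something useful we want to consume the queue through the MCollective libraries so the security plugin can decode and validate every reply. Below is a small receiver that turns MCollective NRPE results into Nagios passive check results: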

require 'mcollective'
require 'pp'

# where the nagios command socket is
NAGIOSCMD = "/var/log/nagios/rw/nagios.cmd"

# to mcollective this is a client, load the client config and
# inform the security system we are a client
MCollective::Applications.load_config
MCollective::PluginManager["security_plugin"].initiated_by = :client

# connect to the middleware and subscribe
connector = MCollective::PluginManager["connector_plugin"]
connector.connect
connector.connection.subscribe("/queue/mcollective.nagios_passive_results")

# consume all the things...
loop do
  # get a mcollective Message object and configure it as a reply
  work = connector.receive
  work.type = :reply

  # decode it, this will go via the MCollective security system
  # and validate SSL etc etc
  work.decode!

  # now we have the NRPE result, just save it to nagios
  result = work.payload
  data = result[:body][:data]

  unless data[:perfdata] == ""
    output = "%s|%s" % [data[:output], data[:perfdata]]
  else
    output = data[:output]
  end

  passive_check = "[%d] PROCESS_SERVICE_CHECK_RESULT;%s;%s;%d;%s" % [result[:msgtime], result[:senderid], data[:command].gsub("check_", ""), data[:exitcode], output]

  begin
    File.open(NAGIOSCMD, "w") {|nagios| nagios.puts passive_check}
  rescue => e
    puts "Could not write to #{NAGIOSCMD}: %s: %s" % [e.class, e.to_s]
  end
end

This code connects to the middleware using the MCollective Connector Plugin, subscribes to the specified queue and consumes the messages.

You’ll note there is very little here that’s actually middleware related; we’re just using the MCollective libraries. The beauty of this code is that if we later wish to employ a different middleware, use a different security system, or configure our ActiveMQ connections to use TLS, nothing has to change here. All the hard stuff is done in the MCollective config and libraries.

In this specific case I am using the SSL security plugin for MCollective, so the messages are signed and no-one can tamper with the results in a MITM attack on the monitoring system. This came for free; I didn’t have to write any code here to get this ability, I just use MCollective.
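Enabling that is purely client configuration, no code changes; a minimal sketch of the relevant SSL security plugin settings, with illustrative key paths:

securityprovider = ssl
plugin.ssl_server_public = /etc/mcollective/ssl/server-public.pem
plugin.ssl_client_private = /home/you/.mcollective.d/you-private.pem
plugin.ssl_client_public = /home/you/.mcollective.d/you-public.pem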

Scheduling Nagios Checks and scaling them with MCollective

Now that we have a way to receive check results from the network, let’s look at how we can initiate checks. I’ll use the very awesome Rufus Scheduler Gem for this; a sketch of the scheduler follows below.
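This is a minimal sketch of the idea rather than final code: the check list and intervals are illustrative, and I’m assuming the reply_to accessor the 2.0.0 RPC client provides for the same purpose as the --reply-to option shown earlier.

require 'rubygems'
require 'mcollective'
require 'rufus/scheduler'

include MCollective::RPC

scheduler = Rufus::Scheduler.start_new

# which NRPE commands to run and how often, in seconds; illustrative only
checks = {"check_load" => 300, "check_swap" => 300, "check_disks" => 300}

nrpe = rpcclient("nrpe")
nrpe.progress = false

# replies must go to the queue our receiver consumes, not back to us
nrpe.reply_to = "/queue/mcollective.nagios_passive_results"

checks.each do |command, interval|
  # splay the first run across the interval to avoid a thundering herd
  scheduler.every interval, :first_in => rand(interval) do
    nrpe.runcommand(:command => command)
  end
end

scheduler.join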

All the checks get loaded and splayed a bit so they don’t cause a thundering herd, and you can see the schedule is honoured. In my Nagios logs I can see the passive results being submitted by the receiver.

MCollective NRPE Scaler

Taking these ideas I’ve knocked up a project that does this with somewhat better code than the above; it’s still in progress and I’ll blog about it later. For now you can check out the code on GitHub. It includes all of the above but better integrated, and should serve as a more complete example than I can realistically fit into a blog post.

There are many advantages to this method that come specifically from combining MCollective and Nagios. The Nagios scheduler visits hosts one by one, so you get a moving view of status over roughly a 5 minute resolution. Using MCollective to request the check on all your hosts at once gives you a 1 second resolution: all the load averages Nagios sees are from the same narrow time period. Receiving results on a queue has scaling benefits, and the MCollective libraries are already multi-broker aware and support failover to standby brokers, so this isn’t a single point of failure; see the connection pool sketch below.
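As an illustration of that last point, a sketch of a broker pool using the ActiveMQ connector’s settings, with hypothetical hostnames:

connector = activemq
plugin.activemq.pool.size = 2
plugin.activemq.pool.1.host = broker1.example.com
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.2.host = broker2.example.com
plugin.activemq.pool.2.port = 61613

If broker1 becomes unavailable both the scheduler and the receiver fail over to broker2 without any change to the application code.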

Conclusion

So we’ve seen that we can reuse much of the MCollective internals and plugin system to set up a dedicated receiver of MCollective-produced data, and I’ve shown a simple use case where we request data from our managed nodes.

What I showed today kept the request-response model but split the traditional MCollective client into two: one part schedules requests and another processes results. These parts could even be on different machines.

We can take this further and simply connect two bits of code together, flowing arbitrary data between them while securing the communications using the MCollective protocol. A follow-up blog post will look at that.
