
In my last post I introduced Duplog, a small tool that basically forms the glue between rsyslog, RabbitMQ, Redis, and Splunk to enable a highly available, redundant syslog service that deduplicates messages along the way. In order for Duplog to serve the needs of a production enterprise, it will need to perform well. I fully expect the deduplication process to take a toll, and in order to find out how much, I devised a simple benchmark.

On one side, fake syslog generators pipe pairs of duplicate messages into the system as fast as they can. On the other, a process reads the messages out as fast as it can. Both sides report the rate of message ingestion and extraction.
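A generator like that can be sketched in a few lines of Python. This is an illustration, not the post's actual benchmark code; the port, priority value, and message format are assumptions (rsyslog would typically listen on UDP 514):

```python
import socket

def send_duplicate_pairs(host, port, count):
    """Send `count` syslog-style messages, each as a pair of
    identical UDP datagrams, simulating duplication at the source."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(count):
        # <13> is a syslog priority header (facility user, severity notice)
        msg = f"<13>benchmark[{i}]: test message {i}".encode()
        sock.sendto(msg, (host, port))  # first copy
        sock.sendto(msg, (host, port))  # duplicate copy
    sock.close()
```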

The Details

3 KVM virtual machines on the same host, each with the same specs:

1 x Intel Core i5-2400 @ 3.10 GHz

1 GB memory

Virtio paravirtualized NIC

OS: Ubuntu Server 12.04.1 LTS 64-bit

RabbitMQ: 2.7.1

Redis: 2.2.12

Java: OpenJDK 6b27

The Results

(My testing wasn’t very scientific so take all of this with a grain of salt.)

The initial results were a little underwhelming: about 750 messages per second through the system. I originally expected that hashing, or the communication with the Redis server, would be the major bottleneck, but those processes were sitting comfortably at about 50% and 20% CPU usage, respectively. It turned out that the RabbitMQ message brokers were the source of the slow performance.

I began trying many different settings for RabbitMQ, starting by disabling the disk-backed queue, which made little difference. In fact, the developers have basically said as much: “In the case of a persistent message in a durable queue, yes, it will also go to disk, but that’s done in an asynchronous manner and is buffered heavily.”

So then I changed the prefetch setting. Rather than fetching and acknowledging one message at a time, going back and forth over the network each time, the message consumers can buffer a configurable number of messages at a time. It is possible to calculate the optimum prefetch count, but without detailed analytics handy, I just picked a prefetch size of 100.
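A common rule of thumb (not from this post; the figures below are made-up illustrations) is that the optimal prefetch count is roughly the fetch/ack round-trip time divided by the per-message processing time, so the consumer always has enough buffered work to cover one round trip:

```python
def optimal_prefetch(round_trip_ms, processing_ms):
    """Estimate a prefetch count: enough buffered messages to keep
    the consumer busy while one fetch/ack round trip is in flight."""
    return max(1, round(round_trip_ms / processing_ms))

# Hypothetical figures: 5 ms round trip, 0.05 ms to hash and check one message.
print(optimal_prefetch(5.0, 0.05))  # 100
```

With numbers like these, a prefetch of 100 is in the right ballpark even without detailed analytics.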

That setting made a huge difference, as you can see in the histogram. Without spending so much time talking to RabbitMQ, consumers were free to spend more time calculating hashes, and that left the RabbitMQ message brokers more free to consume messages from rsyslog.

Another suggestion on the internet was to batch message acknowledgements. That added another modest gain in performance.

Finally, I tried enabling an unlimited prefetch count size. It is clear that caching as many messages as possible does improve performance, but it comes at the cost of fairness and adaptability. Luckily, neither of those characteristics is important for this application, so I’ve left it that way, along with re-enabling queue durability, whose performance hit, I think, is a fair trade-off for message persistence. I also reconfigured acknowledgements to fire every second rather than every 50 messages. Not only does that guarantee that successfully processed messages will get acknowledged sooner or later, it spaces out ACKs even more under normal operation, which boosted the performance yet again to around 6,000 messages per second.
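The time-based acknowledgement scheme can be sketched as follows. This is a hypothetical reconstruction, not Duplog's actual code; `ack_fn` stands in for a RabbitMQ multi-ack, and the clock is injectable so the logic can be tested without waiting:

```python
import time

class AckBatcher:
    """Acknowledge processed messages at most once per interval.

    `ack_fn(tag)` would wrap a RabbitMQ multiple-ack (acknowledging
    every delivery up to `tag`); here it is an injected callback.
    """
    def __init__(self, ack_fn, interval=1.0, clock=time.monotonic):
        self.ack_fn = ack_fn
        self.interval = interval
        self.clock = clock
        self.pending = 0
        self.last_ack = clock()

    def processed(self, delivery_tag):
        self.pending += 1
        now = self.clock()
        if now - self.last_ack >= self.interval:
            self.ack_fn(delivery_tag)  # multi-ack everything up to this tag
            self.pending = 0
            self.last_ack = now
```

Every message processed before a crash but after the last ack would be redelivered, which is safe here: redelivered messages just hit the deduplication step again.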

So is 6,000 messages per second any good? Well, if you just throw a bunch of UDP datagrams at rsyslog (installed on the same servers as above), it turns out that it can take in about 25,000 messages per second without doing any processing. It is definitely reasonable to expect that the additional overhead of queueing and hashing in Duplog will have a significant impact. It is also important to note that these numbers are sustained while RabbitMQ is being written to and read from simultaneously. If you slow down or stop message production, Duplog is able to burst closer to 10,000 messages per second. The queueing component makes the whole system fairly tolerant of sudden spikes in logging.

For another perspective: if each syslog message averages 128 bytes (a reasonable, if small, estimate), then 6,000 messages per second works out to 66 GB per day. For comparison, I've calculated that all of the enterprise Unix systems in my group at UMD produce only 3 GB per day.
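The arithmetic behind that daily figure:

```python
msgs_per_sec = 6_000
bytes_per_msg = 128          # assumed average syslog message size
seconds_per_day = 86_400

bytes_per_day = msgs_per_sec * bytes_per_msg * seconds_per_day
print(bytes_per_day / 1e9)   # ~66.4 GB per day
```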

So as it stands now I think that Duplog performs well enough to work in many environments. I do expect to see better numbers on better hardware. I also think that there is plenty of optimization that can still be done. And in the worst case, I see no reason why this couldn’t be trivially scaled out: divide messages among multiple RabbitMQ queues to better take advantage of SMP. This testing definitely leaves me feeling optimistic.

I am working on a new Splunk deployment at work, and as part of that project, I have to build a centralized syslog server. The server will collect logs from all of our systems and a forwarder will pass them along to Splunk to be indexed. That alone would be easy enough, but I think that logs are too important to leave to just one syslog server. Sending copies of the log data to two destinations may allow you to sustain outages in half of the log infrastructure while still getting up-to-the-minute logs in Splunk. I think duplicating log messages at the source is a fundamental aspect of a highly available, redundant syslog service when using the traditional UDP protocol.
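In rsyslog's classic selector syntax, duplicating every message to two collectors is a two-line change (hostnames are placeholders; a single `@` means UDP, `@@` would mean TCP):

```
*.* @syslog-a.example.com:514
*.* @syslog-b.example.com:514
```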

That said, you don’t want to have Splunk index all of that data twice. That’ll cost you in licenses. But you also don’t want to just pick a copy of the logs to index—how would you know if the copy you pick is true and complete? Maybe the other copy is more complete. Or maybe both copies are incomplete in some way (for example, if routers were dropping some of those unreliable syslog datagrams). I think the best you can do is to take both copies of the log data, merge them together somehow, remove the duplicate messages, and hope that, between the two copies, you’re getting the complete picture.

I initially rejected the idea of syslog deduplication thinking it to be too complicated and fragile, but the more I looked into it, the more possible it seemed. When I came across Beetle, a highly available, deduplicating message queue, I knew it would be doable.

Beetle Architecture

Beetle itself wouldn’t work for what I had in mind (it will deduplicate redundant messages from single sources; I want to deduplicate messages across streams from multiple sources), but I could take its component pieces and build my own system. I started hacking on some code a couple of days ago to get messages from rsyslog to RabbitMQ and then from RabbitMQ to some other process which could handle deduplication. It quickly turned into a working prototype that I’ve been calling Duplog. Duplog looks like this:

Duplog Architecture

At its core, Duplog sits and reads messages out of redundant RabbitMQ queues, hashes them, and uses two constant-time Redis operations to deduplicate them. RabbitMQ makes the whole process fairly fault tolerant and was a great discovery for me (I can imagine many potential use cases for it besides this). Redis is a very flexible key-value store that I’ve configured to act as a least-recently-used cache. I can throw hashes at it all day and let it worry about expiring them.
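Configuring Redis to behave as an LRU cache takes two directives in redis.conf (the memory limit here is an arbitrary example):

```
maxmemory 256mb
maxmemory-policy allkeys-lru
```

Once memory fills, Redis evicts the least-recently-used keys on its own, so old hashes age out without any bookkeeping in Duplog.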

One important design consideration for me was the ability to maintain duplicate messages within a single stream. Imagine you have a high-traffic web server. That server may be logging many identical HTTP requests at the same time. Those duplicates are important to capture in Splunk for reporting. My deduplication algorithm maintains them.
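The post doesn't spell out the two Redis operations, but one way to get both properties — cross-stream deduplication while preserving within-stream duplicates — is to count occurrences of each hash. Every logical message arrives exactly twice (once per redundant stream), so odd counts are originals and even counts are the redundant copies; a genuine within-stream duplicate arrives four times and is correctly emitted twice. This counting scheme is my reconstruction, not necessarily Duplog's exact algorithm, and the dict below stands in for Redis (where this would be an INCR plus an EXPIRE):

```python
import hashlib
from collections import defaultdict

counts = defaultdict(int)  # stand-in for Redis; real code would INCR with an expiry

def deduplicate(message):
    """Return True if `message` should be passed along to Splunk."""
    digest = hashlib.sha1(message.encode()).hexdigest()
    counts[digest] += 1
    return counts[digest] % 2 == 1  # odd: original; even: redundant copy
```

If one stream drops a message, its hash is only seen once and it still gets emitted, which is exactly the failure mode the redundancy exists to cover.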

Looking at the architecture again, you will see that almost everything is redundant. You can take down almost any piece and still maintain seamless operation without dealing with failovers. The one exception is Redis. While it does have some high availability capability, it relies on failover, which I don't like. Instead, I've written Duplog to degrade gracefully: when it can't connect to Redis, it will allow duplicate messages to pass through. A few duplicate messages aren't the end of the world.
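Failing open like that can be as simple as treating any Redis error as "not a duplicate". A minimal sketch, where `redis_check` is a hypothetical stand-in for the real deduplication call:

```python
def should_emit(message, redis_check):
    """Pass the message through whenever deduplication is impossible.

    `redis_check(message)` returns True if the message is new; any
    connection error means we fail open and allow duplicates rather
    than drop log messages.
    """
    try:
        return redis_check(message)
    except ConnectionError:
        return True  # Redis down: duplicates beat data loss
```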

Feel free to play around with the code, but know that it is definitely a prototype. For now I’ve been working with the RabbitMQ and Redis default settings, but there is probably a lot that should be tuned, particularly timeouts, to make the whole system more solid. I also intend to do some benchmarking of the whole stack (that’ll probably be the next post), but initial tests on very modest hardware indicate that it will handle thousands of messages per second without breaking a sweat.