Saturday, December 31, 2011

MongoDB's Write Lock

MongoDB, as some of you may know, has a process-wide write lock. This has caused some degree of ridicule from database purists when they discover such a primitive locking model. Now per-database and per-collection locking is on the roadmap for MongoDB, but it's not here yet. What was announced in MongoDB version 2.0 was locking-with-yield. I was curious about the performance impact of the write lock and the improvement of lock-with-yield, so I decided to do a little benchmark, MongoDB 1.8 versus MongoDB 2.0.

Page Faults and Working Sets

MongoDB uses the operating system's virtual memory system to handle writing data back to disk. The way this works is that MongoDB maps the entire database into the OS's virtual memory and lets the OS decide which pages are 'hot' and need to be in physical memory, as well as which pages are 'cold' and can be swapped out to disk. MongoDB also has a periodic process that makes sure that the database on disk is sync'd up with the database in memory. (MongoDB also includes a journalling system to make sure you don't lose data, but it's beyond the scope of this article.)
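As a rough illustration of the approach (a toy sketch using Python's mmap module, not MongoDB's actual code), an update to a memory-mapped file is just a write to RAM, and a later flush() plays the role of the periodic sync:

```python
import mmap
import os
import tempfile

# Create a small data file to stand in for a MongoDB data file.
path = os.path.join(tempfile.mkdtemp(), "collection.0")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

with open(path, "r+b") as f:
    # Map the file into virtual memory, as MongoDB does with its data files.
    mm = mmap.mmap(f.fileno(), 4096)
    # An "update" is just a write to RAM; the OS pages it out lazily.
    mm[0:5] = b"hello"
    # The periodic sync corresponds to msync(): force dirty pages to disk.
    mm.flush()
    mm.close()

with open(path, "rb") as f:
    print(f.read(5))  # b'hello'
```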

What this means is that you have very different behavior depending on whether the particular page you're reading or writing is in memory or not. If it is not in memory, this is called a page fault and the query or update has to wait for the very slow operation of loading the necessary data from disk. If the data is in memory (the OS has decided it's a 'hot' page), then an update is literally just writing to system RAM, a very fast operation. This is why 10gen (the makers of MongoDB) recommend that, if at all possible, you have sufficient RAM on your server to keep the entire working set (the set of data you access frequently) in memory at all times.

If you are able to do this, it turns out that the global write lock really doesn't affect you. Blocking reads for a few nanoseconds while a write completes turns out to be a non-issue. (I have not measured this, but I suspect that the acquisition of the global write lock takes significantly longer than the actual write.) But this is all kind of hand-waving at this point. I did end up measuring what happens to your query rate as you do more and more (non-faulting) writes and found very little effect.

2.0 to the Rescue?

So the real problem happens when you're doing a write, you acquire the lock, and then you try to write and fault while holding the lock. Since a page fault can take around 40,000 times longer than a nonfaulting memory operation, this obviously causes problems. In MongoDB version 2.0 and higher, this is addressed by detecting the likelihood of a page fault and releasing the lock before faulting. This allows other reads and writes to proceed while the data is brought in from disk. This is what I'm really trying to measure here; what is the effect of a so-called "write lock with yield" as opposed to a plain old "write lock?"
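The idea can be sketched in a few lines of Python (a simplified model of my own devising, not MongoDB's implementation: a threading.Lock stands in for the global write lock, and a set of resident pages stands in for the OS page cache):

```python
import threading

db_lock = threading.Lock()   # stands in for the global write lock
in_memory_pages = set()      # pages currently resident in RAM (toy model)

def load_from_disk(page):
    # Slow path: simulate servicing a page fault, done *without* the lock.
    in_memory_pages.add(page)

def update(page, apply_write):
    # Sketch of lock-with-yield: if the target page would fault, fetch it
    # before acquiring the lock so readers aren't blocked during disk I/O.
    if page not in in_memory_pages:
        load_from_disk(page)  # yield point: the lock is not held here
    with db_lock:
        # By now the page is (almost certainly) resident, so the critical
        # section is just a fast write to RAM.
        apply_write()
```

In 1.8, by contrast, the equivalent of load_from_disk() happens inside the critical section, so every reader waits out the disk seek.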

Benchmark Setup

First off, I didn't want all the stuff I normally run on my laptop to get in the way, and I wanted to make the benchmark as reproducible as possible, so I decided to run it on Amazon EC2. (I have made the code I used available here - particularly fill_data.py, lib.py, and combined_benchmark.py if you'd like to try to replicate the results. Or you can just grab my image ami-63ca1e0a off AWS.) I used an m1.large instance running Ubuntu Server, with ephemeral storage used for the database. I also reformatted the ephemeral disk from ext3 to ext4 and mounted it with noatime for performance. MongoDB was installed from the 10gen .deb repositories, as version 1.8.4 (mongodb18-10gen) and version 2.0.2 (mongodb-10gen).

The database used in the test contains one collection of 150 million 112-byte documents. This was chosen to give a total size significantly in excess of the 7.5 GB of physical RAM on an m1.large instance. (The collection with its index takes up 21.67 GB of virtual memory.) Journalling was turned *off* in both 1.8 and 2.0.
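A quick back-of-the-envelope check on that sizing (treating 112 bytes as the raw per-document cost; padding and index overhead, which I don't model here, account for the larger mapped size):

```python
# Back-of-the-envelope check of the data-set sizing used in the benchmark.
n_docs = 150_000_000
doc_bytes = 112  # raw document size; ignores padding and index overhead

data_gb = n_docs * doc_bytes / 2**30
print(f"raw documents: {data_gb:.2f} GB")  # ~15.65 GB

ram_gb = 7.5  # physical RAM on an m1.large
# With the _id index and storage overhead on top, the mapped size comes to
# 21.67 GB -- roughly 3x the instance's RAM, so most random writes must fault.
assert data_gb > ram_gb
```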

What I wanted to measure is the effect of writes on reads that are occurring simultaneously. In order to simulate such a load, the Python benchmark code uses the gevent library to create two greenlets. One is reading randomly from its working set of 10,000 documents as fast as possible, intended to show non-faulting read performance. The second greenlet attempts to write to a random document of the 150 million total documents at a specified rate. I then measure the achieved write and read rate and create a scatter plot of the results.
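The structure of the benchmark looks roughly like this (a simplified sketch: stdlib threads instead of gevent greenlets, and an in-memory dict standing in for the MongoDB collection, so it runs anywhere; the real code is in combined_benchmark.py):

```python
import random
import threading
import time

# Toy stand-in for the collection: document id -> counter.
fake_db = {i: 0 for i in range(10_000)}
stats = {"reads": 0, "writes": 0}
stop = threading.Event()

def reader():
    # Read random documents from the 10,000-doc working set as fast as possible.
    while not stop.is_set():
        _ = fake_db[random.randrange(10_000)]
        stats["reads"] += 1

def writer(rate_per_sec=50):
    # Issue writes at a fixed target rate, as the benchmark's second
    # greenlet does against the full 150M-document collection.
    interval = 1.0 / rate_per_sec
    while not stop.is_set():
        fake_db[random.randrange(10_000)] += 1
        stats["writes"] += 1
        time.sleep(interval)

threads = [threading.Thread(target=reader), threading.Thread(target=writer)]
for t in threads:
    t.start()
time.sleep(0.5)
stop.set()
for t in threads:
    t.join()
print(stats["reads"], stats["writes"])
```

The achieved read and write rates are what get plotted against each other in the graphs below.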

Results

First, I wanted to show the effect of non-faulting writes on non-faulting reads. For this, I restrict writes to occur only in the first 10,000 documents (the same set that's being read). As expected, the write lock doesn't degrade performance much, and there's little difference between 1.8 and 2.0 in this case.

The next test shows the difference between 1.8 and 2.0 in fairly stark contrast. We can see an immediate drop-off in read performance in the presence of faulting writes, with virtually no reads getting through when we have 48 (!) writes per second. In contrast, 2.0 is able to maintain around 60% of its peak performance at that write level. (Going much beyond 48 faulting writes per second makes things difficult to measure, as you're using a significant percentage of the disk's total capacity just to service your page faults).

Conclusion

The global write lock in MongoDB will certainly not be mourned when it's gone, but as I've shown here, its effect on MongoDB's performance is significantly less than you might expect. In the case of non-faulting reads and writes (the working set fits in RAM), the write lock degrades performance only slightly as the number of writes increases, and in the presence of faulting writes, the lock-with-yield approach of 2.0 mitigates most of the performance impact of the occasional faulting write.

Regarding "hot" and "cold" pages, you guys could really improve performance on FreeBSD if you'd stop using msync() and start using fsync(). A few weeks ago I took part in a discussion with a user complaining about horrible MongoDB performance on FreeBSD, which resulted in an analysis of your use of msync() -- a usage evidently written with Linux in mind. Given that you call msync() on the entire mapped region, you're effectively calling fsync() -- and the user changed your code to do exactly that and saw a tremendous speed improvement. Kernel developers commented as well. I urge you to consider improving this too. The thread in question:

http://www.mail-archive.com/freebsd-stable@freebsd.org/msg118225.html

And the performance discovery after I recommended the user try using fsync() instead (but I recommend you read the entire thread):

@charsyam - no, the ephemeral disk is the non-EBS volume. I used it so that network latency wouldn't affect the benchmarks, as there have been reports of EBS volumes giving inconsistent performance. In a real MongoDB deployment, you'd need to either a) use EBS or b) use replica sets to achieve durability on EC2, but for the benchmark I thought it was the best option available.

@felipe: If you can keep your working set in RAM, then MongoDB should have no problem at all scaling writes. One way you can do this is by using MongoDB's auto-sharding feature, specifically built to scale writes. In that case, you have a write lock per *shard*, not per *database*, plus you have more RAM (split over your shard servers), so I'd say you should be able to scale an eBay-like site. You might want to get a consultation with 10gen to make sure you size everything appropriately, though.

Very good and interesting article. The comments have added more insight to this. One question: I know MongoDB supports different write consistency models, like fire-and-forget, safe, and replica-safe. On which model were these tests carried out? (I understand it may not be replica-safe.) What will happen if the write consistency model is changed?

Since I was trying to measure the effect of writes on reads, I used the 'fire and forget' model. I did, however, call getLastError after the last write to make sure they had all completed, and the time elapsed from the beginning of the writes to the last one completed was used in the 'writes per second' calculation.

Great data, this was very interesting. But please note the different scales of the X-axis between the two graphs! That's a bit misleading.

Before this change Mongo was mandatory READ-ONLY for data sets larger than RAM. (Note the drops to 0 queries/sec). After the change, you now see "just" a 30% performance loss due to 50 write-faults / sec.

It's only an 'improvement' relative to what existed before. 50 writes / sec is a very low load to trade for 1000 reads/sec. This means Mongo is still essentially READ-ONLY for data sets larger than RAM.

Sorry, didn't mean to mislead with the graphs -- I just wanted to show what the effect of the write lock was in the absence of faulting for the first graph.

As to calling it "read-only", that's a little misleading. Looking at this post http://victortrac.com/EC2_Ephemeral_Disks_vs_EBS_Volumes we see that a 4x RAID-0 array on AWS ephemeral storage gets approx. 165 random seeks per second. If we scale that down to 1 ephemeral disk (which I was using), we are looking at 40-50 random seeks per second (and my benchmark was designed to cause a random seek during a page fault for most of the writes). This means that the *theoretical* maximum number of random page faults per second is 40-50 for the hardware I was using.
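The scaling here is simple arithmetic:

```python
# Scaling the RAID-0 random-seek figure down to a single ephemeral disk.
raid0_seeks_per_sec = 165   # 4-disk RAID-0 figure from the linked post
disks_in_array = 4

single_disk_seeks = raid0_seeks_per_sec / disks_in_array
print(single_disk_seeks)  # 41.25, i.e. ~40-50 random seeks/sec

# Each faulting write in the benchmark costs about one random seek, so the
# disk itself caps the faulting-write rate near the 48/sec used in the test.
assert 40 <= single_disk_seeks <= 50
```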

In other words, we were saturating the disk bandwidth due to a pathological benchmark setup. The point was to see how this "worst-case scenario" would affect read throughput, and as the graphs show, there was indeed a problem in 1.8 since the write lock was being held during all those expensive random seeks.

tl;dr: the disk can't handle >50 random seeks per second, so that's the max write fault rate we can even consider for the benchmark.

PS: Your comment about "READ-ONLY for data sets larger than RAM" is only true if you have writes faulting a majority of the time. In most real-world scenarios, your working set (data you frequently access) is *significantly* smaller than your total data set.

The only thing we really care about here is the working set, so it's true that MongoDB becomes *much* slower if your RAM is insufficient to hold your working set, but then again, that's the case with any SQL-based solution as well. And 10gen has *always* recommended that your RAM be sufficient to contain your working set. And if it does, your performance looks like the first graph, regardless of whether your *total* data set fits in RAM or not.

What we have found is that if you do a bulk update of documents in a collection, or update a single document with an array of considerable size where you have an index on that array, the write lock is held 99.9% of the time and never yielded -- this affects read performance.

I don't know whether this kind of scenario was tried in this test to show the write lock's woes.

This scenario wasn't tried, and it really falls outside of the problem that the 2.0 write lock improvements address, as most of the documents you were updating were likely already in RAM.

I will point out, however, that during bulk updates the write lock *is* yielded, which is why you saw 99.9% write lock and not 100.0%. So reads were making *some* progress.

I would be interested in seeing a detailed analysis of your scenario so we can quantify just how much bulk updates *do* slow reads (and also to put it in context with other SQL and NoSQL databases). If you have code that can reproduce that behavior, I'd love to see it.
