
MongoDB Monitoring: Keep It in RAM

MongoDB ships with a number of built-in tools and commands for extracting important operational information. But because the database is relatively new, it can be difficult to know what you need to be doing from an operational perspective to ensure everything runs smoothly.

The first and most obvious thing to note is that keeping everything in RAM is faster. But what does that actually mean and how do you know when something is in RAM?

In every case, having something in memory is going to be faster than not. However, that’s not always feasible if you have massive data sets. Instead, you want to make sure you always have enough RAM to store all the indexes.

The MongoDB console provides an easy way to look at data and index sizes. The db.stats() command analyses the database and returns a range of statistics, with dataSize and indexSize reported in bytes. On large databases this command may take a few seconds to return, although in the most recent versions of MongoDB it no longer blocks.

As an example, one of our databases has around 51GB of data and 19GB of indexes. This means we'd need at least 20GB of RAM for just the indexes, and around 70GB of RAM for both data and indexes.
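As a rough sketch of the arithmetic, here is how those byte counts translate into GB. The numbers below are illustrative stand-ins for db.stats() output, chosen to match figures like those above:

```python
# Hypothetical byte counts in the shape db.stats() reports
# (dataSize and indexSize are in bytes); the values are illustrative,
# corresponding to ~51GB of data and ~19GB of indexes.
stats = {
    "dataSize": 54760833024,
    "indexSize": 20401094656,
}

GB = 1024 ** 3

data_gb = stats["dataSize"] / GB
index_gb = stats["indexSize"] / GB

print(f"data: {data_gb:.0f} GB, indexes: {index_gb:.0f} GB, "
      f"both: {data_gb + index_gb:.0f} GB")
# → data: 51 GB, indexes: 19 GB, both: 70 GB
```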

For larger data sets like this, a good rule is to ensure you have enough memory for the working set. You define your own working set by looking at the collections you know you want to be kept in RAM and ensuring that there is sufficient RAM for them. You can use the db.collectionName.stats() command on each individual collection to determine its total size.
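To make that concrete, here is a minimal sketch of sizing a working set. In the shell you would run db.collectionName.stats() for each collection; the collection names and byte counts below are hypothetical stand-ins for that output:

```python
# Sketch: sizing a working set from per-collection totals.
# The byte counts are hypothetical stand-ins for db.<collection>.stats().
collection_sizes = {
    "users":    2 * 1024**3,    # 2 GB (data + indexes)
    "sessions": 512 * 1024**2,  # 512 MB
    "archive":  40 * 1024**3,   # 40 GB, rarely queried
}

working_set = ["users", "sessions"]  # what we need kept in RAM

needed = sum(collection_sizes[c] for c in working_set)
print(f"RAM needed for working set: {needed / 1024**3:.1f} GB")
# → RAM needed for working set: 2.5 GB
```

The rarely queried "archive" collection is deliberately left out: it can page from disk without hurting the queries you care about.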

There’s no way to tell MongoDB which collections to prioritise for memory, but it is smart about memory management and will keep commonly accessed data in RAM where possible.

How you’ll know – 1) slow queries

A slow query doesn’t always mean insufficient memory, but it is a common symptom. You may simply not have the optimal indexes for the query; if indexes are being used and it’s still slow, the cause could be a disk i/o bottleneck because the data isn’t in RAM. Running an explain on the query will show you which indexes it is using.
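As an illustration of reading explain output, here is a sketch. The document below is a simplified, hypothetical version of the older explain format, where "cursor" reads "BasicCursor" for a full collection scan and "BtreeCursor <index name>" when an index is used:

```python
# Sketch: checking a (simplified, hypothetical) explain document for
# index use. nscanned is documents examined, n is documents returned.
explain_output = {
    "cursor": "BasicCursor",
    "nscanned": 1000000,
    "n": 12,
}

uses_index = explain_output["cursor"].startswith("BtreeCursor")
scan_ratio = explain_output["nscanned"] / max(explain_output["n"], 1)

if not uses_index:
    print("no index used: full collection scan")
elif scan_ratio > 100:
    print("index used, but scanning far more documents than returned")
```

A high nscanned-to-n ratio is worth watching even when an index is used: it means the query is touching far more data than it returns, which is exactly the pattern that drags cold data through RAM.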

How you’ll know – 2) cursor timeouts

cursor timed out (20000 ms)

These slow queries will obviously slow your app down, but they may also cause timeouts. In the PHP driver, a cursor times out after 20,000ms by default, although this is configurable.

How you’ll know – 3) disk i/o spikes

You’ll see write spikes during normal operations because MongoDB syncs data to disk periodically, but read spikes can indicate that MongoDB is having to read the data files rather than accessing data from memory. Be careful, though: this won’t distinguish between data and indexes, or even other server activity. Read spikes can also occur with little or no read activity of your own if the mongod is part of a cluster and the slaves are reading from its oplog.

Monitoring disk i/o is easy with a tool like iostat or our own server monitoring service, Server Density.
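If you’re curious where those numbers come from, iostat reads the same counters the Linux kernel exposes in /proc/diskstats. A minimal parsing sketch (columns are major, minor, device name, then I/O counters; sectors read is the 6th column and sectors written the 10th):

```python
# Sketch: parsing /proc/diskstats lines (the counters iostat reports).
# Column layout: major, minor, device, then I/O counters; index 5 is
# sectors read, index 9 is sectors written.
def parse_diskstats(text):
    stats = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) < 14:
            continue  # skip short/malformed lines
        stats[fields[2]] = {
            "sectors_read": int(fields[5]),
            "sectors_written": int(fields[9]),
        }
    return stats

# A made-up sample line in /proc/diskstats format:
sample = "   8       0 sda 120 0 3000 50 200 10 9000 120 0 90 170"
print(parse_diskstats(sample))
```

Sampling these counters twice and diffing gives you the read/write rate over the interval, which is what you’d graph to spot the read spikes described above.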
