Overview

When you host your MongoDB database with mLab, you have access to special tools and processes that help you manage routine MongoDB tasks. Some of these tasks are described here, with detailed instructions and links for further reading.

MongoDB version management

As MongoDB Inc. continues to roll out new versions of MongoDB, it’s important to keep your database up-to-date so as to take advantage of new features, bug fixes, and more.

Versions currently available at mLab

The version of MongoDB that mLab uses by default is currently MongoDB version 3.4; however, you have the option of selecting other versions.

Plan Type           Supported Versions
Sandbox             3.4.x
For-pay Shared      3.4.x, 3.2.x, 3.0.x
For-pay Dedicated   3.4.x, 3.2.x, 3.0.x, 2.6.x

Determining your current MongoDB version

Follow these steps to see which version of MongoDB your deployment is currently running:

Navigate to the MongoDB deployment whose version you want to determine.

At the top of the screen, you will see a box with the connection information; the MongoDB version is indicated in the lower right-hand corner of this box.

Alternatively, you can use the db.version() method via the mongo shell to see which version your deployment is running.

> db.version()
3.0.7

How to change MongoDB versions

Not available for Sandbox databases

If you have a for-pay deployment, you can upgrade (or change) the version of MongoDB you are running directly from the mLab management portal. The process is seamless if you are making a replica set connection to one of our Cluster plans.

Prerequisites

We strongly recommend reviewing MongoDB, Inc.’s release notes and compatibility changes for the target version before a release (major) upgrade.

In the mLab management portal, select the desired version in the drop-down menu that appears below “This deployment is running MongoDB version…”

Read the instructions and requirements carefully before clicking the “Upgrade to…” or “Patch to…” or “Downgrade to…” button.

What to expect

If you have a replica set cluster plan, the entire process should take just a few minutes to complete, although some exceptions[1] may apply.

We will first restart your non-primary nodes (e.g., arbiter and secondary node(s)) with the binaries for the new version. Then we will intentionally fail you over in order to upgrade your original primary. Finally, we will fail you back over so that your original primary is once again primary. You should experience no downtime if your drivers and client code have been properly configured with a replica set connection. Note that during failover, it may take 5-30 seconds for a new member to be elected primary.
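
If you would like to follow along, you can watch each member’s state from the mongo shell while the rolling upgrade proceeds; the host names and output below are illustrative only:

> rs.status().members.forEach(function (m) { print(m.name, m.stateStr) })
ds012345-a0.mlab.com:12345 PRIMARY
ds012345-a1.mlab.com:12345 SECONDARY
ds012345-arb.mlab.com:12345 ARBITER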

If you are on a single-node plan, your database server will be restarted which usually involves approximately 20 seconds of downtime.

If your Dedicated plan deployment is currently running 3.0.x with the MMAPv1 storage engine, note that an upgrade to 3.2.x with the WiredTiger storage engine will also automatically initiate a rolling node replacement process that will seamlessly migrate your deployment to the WiredTiger storage engine over the course of several hours or even days. Read about mLab’s rolling node replacement process below.
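
If you are unsure which storage engine a node is currently running, one quick check from the mongo shell is the serverStatus command; the output below is what an MMAPv1 node would report:

> db.serverStatus().storageEngine.name
mmapv1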

Frequently asked questions

Q. Are for-pay deployments automatically upgraded when mLab supports a new MongoDB version?

Maintenance (minor) versions

We do not automatically patch any for-pay deployments to the latest maintenance/minor version (e.g., 3.2.11 to 3.2.12). Instead, we send out email notifications if maintenance releases have very important bug fixes in them.

That being said, if there were any truly critical issues (e.g., one that would result in data loss), it’s likely that we would automatically patch and send a notification.

Release (major) versions

The only time we will automatically upgrade the MongoDB version on a for-pay deployment is when we de-support the currently-running version.

We typically support at least two release (major) versions on our for-pay Shared plans and three release versions on our Dedicated plans (listed above). Eventually, as release versions are de-supported, an upgrade will be necessary. In those cases, we send multiple notifications well in advance of the mandatory upgrade. If you do not perform the upgrade at your convenience by the stated deadline, we will automatically upgrade your deployment to our minimum supported version.

Because our Sandbox databases are running on server processes shared by multiple users, version changes are not possible. All Sandbox plans are automatically upgraded to the latest MongoDB version we support. To run on a specific version of MongoDB, you will need to upgrade to one of our for-pay plans which provide your own mongod server process and the flexibility of making version changes at your convenience.

Q. How do I test a specific maintenance (minor) version?

We do not offer the ability to change to a specific maintenance (minor) version. Maintenance versions are supposed to contain only bug fixes and patches and as a result, we don’t consider it necessary to treat these versions (e.g., 3.2.11 and 3.2.12) differently. At any given time, we offer only the latest maintenance version of each release.

That being said, if you are upgrading to a different release (major) version (e.g., 2.6.x vs. 3.0.x), we highly recommend thorough testing in a Staging environment.

Viewing and killing current operations

Not available for Sandbox databases

Although there can be many reasons for unresponsiveness, we sometimes find that particularly long-running and/or blocking operations (either initiated by a human or an application) are the culprit. Some examples of common operations that can bog down the database are foreground index builds and queries on unindexed fields, especially against large collections.

If you have a Dedicated plan, you can get a report on these current operations directly by running the db.currentOp() method in the mongo shell. In addition, you can use the db.killOp() helper method in the mongo shell to terminate a currently running operation. To do this, pass the value of the opid field as an argument to db.killOp().
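
For example, the following shell session lists operations that have been running for more than 60 seconds and then kills one of them; the filter and opid value shown are illustrative only:

> // report operations that have been running for more than 60 seconds
> db.currentOp({ "secs_running": { $gt: 60 } })
> // terminate a specific operation using the opid field from the output above
> db.killOp(679231)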

Don’t hesitate to contact support@mlab.com for help. If you have a Dedicated plan and are in an emergency situation, use the emergency email that we provided to you.

Restarting your database deployment

Follow the instructions in the “Warning” window to confirm the restart, then click the “Restart” button.

If you have a replica set cluster with auto-failover, we will use MongoDB’s replSetFreeze command to ensure that your current primary remains primary during the restart. Then we will restart each of your nodes in turn. The entire process could take a few minutes, but you should only lose access to your primary for about 20 seconds.
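
For reference, the mongo shell helper for replSetFreeze is rs.freeze(); run against a secondary, the following (illustrative) command prevents that member from seeking election for the next 120 seconds:

> rs.freeze(120)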

If you are on a single-node plan, your database server will be restarted which typically involves approximately 20 seconds of downtime.

Sandbox database limitations
Because our Sandbox databases are shared by multiple users, restarting MongoDB on-demand is not possible. If you suspect a restart is required, contact support@mlab.com.

Compacting your database deployment

Sometimes it’s necessary to compact your database in order to reclaim disk space (e.g., if you are quickly approaching your storage limits) and/or reduce fragmentation. When you compact your database, you are effectively reducing its file size.

Understanding file size vs. data size

The fileSize metric is reported under the “Size on Disk” heading in our management console. It is only relevant for deployments running the MMAPv1 storage engine.

Deployments running the MMAPv1 storage engine (including Sandbox and Shared plans) use the fileSize (as opposed to dataSize) value from the output of the dbStats command as the basis for determining whether you are nearing your storage quota. However, when you compare the two metrics, you’ll notice that fileSize is often a much larger value. This is because when MongoDB deletes objects, deletes collections, or moves objects due to a change in size, it leaves “holes” in the data files. MongoDB does try to re-use these holes, but the freed space is not released back to the operating system.
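
You can compare the two metrics yourself by running the dbStats command from the mongo shell; the values below are illustrative only:

> var stats = db.stats()
> stats.fileSize
1610612736
> stats.dataSize
536870912
> // a fileSize much larger than dataSize suggests that a compaction may help
> stats.fileSize / stats.dataSize
3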

How to compact your database(s)

If you are on a multi-node, highly available replica set cluster plan (Shared or Dedicated) and would like to try to reclaim disk space, you can do so while still using your database. However, compacting a Sandbox or any single-node plan will require downtime while the compaction is taking place.

Compacting Sandbox and single-node plan deployments

If you are on a Sandbox or single-node plan and would like to try to reclaim disk space, you can use MongoDB’s repairDatabase command.

If your fileSize or “Size on Disk” is under 1.5 GB, you can run this repair command directly through our UI by visiting the page for your database, clicking on the “Tools” tab and selecting “repairDatabase” from the drop-down list. Otherwise, you can run db.repairDatabase() after connecting to your database using the mongo shell.
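
For example, from the mongo shell (keep in mind that the repair blocks the database until it completes and requires free disk space roughly equal to the size of your current data set):

> db.repairDatabase()
{ "ok" : 1 }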

We would also be happy to run this command for you - send your request to support@mlab.com.

The repairDatabase command is a blocking operation. Your database will be unavailable until the repair is complete.

Compacting Shared Cluster plan deployments

The process we use for compactions on Shared Cluster plans is to resync each node from scratch. This is a better method of reclaiming disk space than db.repairDatabase() because while a secondary member of your replica set is resyncing, the primary member remains available to your application.

On the “Databases” tab, note the values in the “Size” and “Size on Disk” columns.

If your database’s “Size on Disk” is only a little larger than its “Size”, a compaction will have little or no effect. A good rule of thumb is that a compaction is only likely to be effective if the “Size on Disk” is more than 30% larger than the “Size” value.

Navigate to the “Servers” tab.

First click “resync” on the node that’s currently in the state of SECONDARY.

Once the sync is complete, click “step down (fail over)” on the node that’s currently in the state of PRIMARY.

Finally click “resync” on the node that was primary but is now in the state of SECONDARY.

Your deployment will not have the same level of availability during the maintenance because the node being synced will be unavailable. In addition, backups could be delayed or cancelled while the sync is in progress.

Compacting Dedicated Cluster plan deployments

The process we use to compact a Dedicated Cluster plan deployment is our seamless rolling node replacement process. This is the best method of reclaiming disk space because your deployment will maintain the same level of availability during the process. Read about mLab’s rolling node replacement process below.

Select the failover option in the drop-down menu that appears at the bottom of the “Failover Preference” section.

Click the “Compact” button and confirm that you want to proceed. This will automatically initiate a rolling node replacement to compact your deployment.

Initiating a failover for your cluster

If you would like to force your current primary to step down, you can do so through the mLab management portal. The following instructions are the equivalent of running the rs.stepDown() function in the mongo shell:

From your account’s Home page, navigate to the deployment that needs a failover.

Click the “Servers” tab.

Click the “step down (fail over)” link that appears under the “Manage” column in the row for your current primary.

In the dialog box that appears, click the “Step down” button.
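
For reference, the equivalent operation from the mongo shell, run while connected to the current primary, looks like this; the argument is the number of seconds the stepped-down member will not seek re-election (60 is the default):

> rs.stepDown(60)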

mLab’s rolling node replacement process

If you are on a replica set cluster plan with auto-failover, mLab’s rolling node replacement process will allow you to maintain high availability and keep your existing connection string during scheduled maintenance. If your application/driver is properly configured for replica set connections, you should experience no downtime during this process except during failover.
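
For example, a replica set connection string typically lists every electable member and names the replica set explicitly; the credentials, host names, port, and replica set name below are placeholders only:

mongodb://dbuser:dbpassword@ds012345-a0.mlab.com:12345,ds012345-a1.mlab.com:12345/mydb?replicaSet=rs-ds012345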

A Dedicated Cluster plan cannot be downgraded to a Shared Cluster plan using the rolling node replacement process. However, a downgrade from one Dedicated Cluster plan to another Dedicated Cluster plan using this process is both possible and recommended.

What is this process used for?

The rolling node replacement process is most commonly used for:

Upgrading or downgrading plans

Migrating to the WiredTiger storage engine

Compactions

Steps to replace multiple nodes in a cluster

Replacing all electable nodes in a cluster:

For every node to be replaced, mLab will add a new, hidden node to the existing replica set.

To expedite the process, we will use a recent block storage snapshot whenever possible as the basis for this new node.

mLab will wait for the new node to be in the SECONDARY state and in sync with the primary (i.e., no replication lag); you can verify this yourself as shown in the example after these steps.

If the node being replaced is currently primary, either you or mLab will intentionally initiate a failover so that your current primary becomes secondary.

mLab will swap out your existing node with the new node, updating DNS records to preserve the connection string.
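
If you want to confirm on your own that a newly added node has caught up before the swap (as mentioned in the steps above), you can check replication lag from the mongo shell; the host name and timestamp in the output are illustrative:

> rs.printSlaveReplicationInfo()
source: ds012345-a1.mlab.com:12345
    syncedTo: Mon Jan 01 2018 12:00:00 GMT+0000 (UTC)
    0 secs (0 hrs) behind the primary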

Expected impact on running applications

The rolling node replacement process is mostly seamless. However, be aware that:

If MongoDB’s initial sync process is necessary for the maintenance event (e.g., during a compaction), the syncing process will add read load during the initial clone phase of the sync.

During a failover it may take 5-30 seconds for a primary to be elected. If your application has not been configured with a replica set connection that can handle failover, writes will continue to fail after the new primary is elected. As such, mLab will coordinate with you for the required failover unless you explicitly tell us it’s not necessary (see next section).

MongoDB’s replica set reconfiguration command, replSetReconfig, will be run twice during this process. While this command can sever existing connections and temporarily cause errors in driver logs, these types of disconnects usually have minimal effect on applications/drivers that have been configured properly.

Notification and coordination

Swapping out a current secondary:

We will notify you when we swap out your current secondary (or secondaries) with replacement nodes.

Swapping out your current primary:

When your current primary is ready to be swapped out, we will coordinate with you so that you can initiate the required failover at the time that makes the most sense for you and/or your team.

If you know that your application has no trouble handling failover, let us know, and we can initiate the required failover on your behalf immediately before we swap out your current primary.

Additional charges

The extra virtual machines that are used during a rolling node replacement process in order to maintain the same level of availability may incur charges.

[1] Database server processes for deployments with a large number of databases can take significantly longer to start.