Atlas can perform a live migration of a source sharded cluster to an
Atlas sharded cluster, keeping the cluster in sync with the remote
source until you cut your applications over to the Atlas cluster.
Once you reach the cutover step in the following procedure, you should
stop writes to the source cluster by stopping your application instances,
pointing them to the Atlas cluster, and restarting them.

Note

You cannot target a Global Cluster as the
destination for Live Migration.

To begin, click on the ellipsis … button and choose
Migrate Data to this Cluster from the dropdown menu.

Note

On the Cluster list, the ellipsis … button appears beneath the
cluster name. When you view a cluster’s details, the ellipsis
… appears on the right-hand side of the screen, next to the
Connect and Configuration buttons.

Atlas Live Migration process streams data through a
MongoDB-controlled application server. Atlas provides the IP ranges
of the MongoDB Live Migration servers during the Live Migration
process. Grant these IP ranges access to your source cluster to allow
connectivity to the MongoDB Live Migration server.

Atlas only allows connections to a cluster from entries in the project’s
whitelist. You must manually add the IP addresses of hosts, such as your
application servers, to the project whitelist before beginning
the migration procedure.

Atlas temporarily adds the IP addresses of the Atlas migration
servers to the project whitelist. During the migration procedure, you
cannot edit or delete these entries. Atlas removes the entries
automatically once the procedure completes.

The source cluster must have the same feature compatibility version
and major MongoDB version as the destination cluster. The major
MongoDB version is the first two digits of the full version,
e.g. 3.2.x, 3.4.x, or 3.6.x.
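As a sketch, the major version check amounts to comparing the first two
components of each full version string (the version numbers below are
illustrative):

```python
def major_version(full_version: str) -> str:
    """Return the major MongoDB version, e.g. "3.6" for "3.6.8"."""
    return ".".join(full_version.split(".")[:2])

# Source and destination clusters must share a major version.
print(major_version("3.6.8") == major_version("3.6.2"))   # True: both 3.6.x
print(major_version("3.4.19") == major_version("3.6.8"))  # False: 3.4 vs 3.6
```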

To check the feature compatibility version of a host in the
source cluster, run the following command from
the mongo shell:
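The command uses the server's getParameter command, run against the
admin database, to report the feature compatibility version:

```javascript
db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
```

Compare the featureCompatibilityVersion value reported by the source
cluster with that of the destination cluster.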

If your MongoDB deployment contains indexes with keys which exceed the
index key limit, you must
set the MongoDB server parameter failIndexKeyTooLong
to false before starting the Live Migration procedure.
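For example, the parameter can be changed at runtime with the
setParameter command, issued against the admin database on each
mongod in the source cluster:

```javascript
db.adminCommand( { setParameter: 1, failIndexKeyTooLong: false } )
```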

Note

Modifying indexes so that they contain no oversized keys is
preferable to setting the failIndexKeyTooLong server
parameter to false. See the server manual
for strategies on dealing with oversized index keys.

When configuring the destination Atlas cluster, consider the
following:

The live migration process streams data through a MongoDB-managed
application server. Each server runs on infrastructure hosted in the
nearest region to the source cluster. The following regions are
available:

Europe

Ireland

Frankfurt

London

Americas

Eastern US

Western US

APAC

Sydney

Due to network latency, the live migration process may not be able to
keep up with a source cluster that has an extremely heavy write load.
In this situation, you can still migrate directly from the source
cluster by pointing the mongomirror
tool to the destination Atlas cluster.
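A sketch of such a mongomirror invocation, run once per shard (the
hostnames, replica set name, user names, and passwords below are
illustrative; check the mongomirror documentation for the exact flags
your version supports):

```shell
mongomirror \
  --host "shard0/shard0-a.example.net:27017,shard0-b.example.net:27017" \
  --username migrationUser \
  --password 'sourcePassword' \
  --authenticationDatabase admin \
  --destination "atlas-shard0.example.mongodb.net:27017" \
  --destinationUsername atlasAdmin \
  --destinationPassword 'atlasPassword'
```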

The live migration process may not be able to keep up with a source
cluster whose write workload is greater than what can be transferred
and applied to the destination cluster. You may need to scale the
destination cluster up to an instance with more processing power,
bandwidth or disk IO.

You cannot target a Global Cluster as the
destination for Live Migration.

Important

You cannot modify the destination Atlas cluster once you
start the live migration procedure. If you need to scale up
the destination cluster, first
cancel the live
migration procedure, then scale up the cluster and restart
the live migration procedure.

Atlas does not migrate any user or role data to the destination
cluster.

If the source cluster enforced authentication, you must re-create the
credentials used by your applications on the destination Atlas
cluster. Atlas uses
SCRAM for user
authentication. See Configure MongoDB Users for a tutorial
on creating MongoDB users in Atlas.

You can cancel the process at any time by clicking Cancel.
Atlas displays the
Sharded Cluster Live Import in Progress
message for the destination cluster until the cluster is ready for
normal access.

If you cancel the live migration procedure before completion,
Atlas does not remove any data migrated up to that point.
If you restart the live migration procedure using the same
Atlas cluster as the destination, Atlas wipes all data
from the cluster.

Consider performing a partial live migration procedure first to
create a staging environment before repeating the procedure to create
your production environment. The procedure documented below provides
a callout for the appropriate time to cancel the procedure and create
a staging environment.

Use the staging environment to test
application behavior and performance using the latest
driver version that
supports the MongoDB version of the destination Atlas cluster.
Then, repeat the live migration procedure in full to transition
your applications from your source cluster to the Atlas
destination cluster.

Important

Avoid making changes to the source cluster configuration while the
Live Migration procedure runs, such as removing replica set members
or modifying mongod runtime settings
like featureCompatibilityVersion.

Click the ellipsis … button for the destination
Atlas cluster. On the Cluster list, the ellipsis
… button appears beneath the cluster name.
When you view a cluster’s details, the ellipsis … appears
on the right-hand side of the screen, next to the Connect and
Configuration buttons.

Click Migrate Data to this Cluster.

Atlas displays a walk-through screen with instructions on
how to proceed with the live migration. Prepare the information as stated
in the walk-through screen, then click I’m Ready To Migrate.

Atlas displays a walk-through screen that collects information
required to connect to the source cluster.

Atlas displays the IP address of the MongoDB
application server responsible for your live migration at the
top of the walk-through screen. Configure your source cluster
firewall to grant access to the displayed IP address.

Enter the hostname and port of any mongos
of the source sharded cluster
into the provided text box. For example,
mongos.example.net:27017.

If the source cluster enforces authentication, enter a username and
password into the provided text boxes.

If the source cluster uses TLS/SSL and is not using a public
Certificate Authority (CA), copy the contents of the source cluster’s
CA file into the provided text box.

Click Validate to confirm that Atlas can connect to the
source cluster.

If validation fails, check that:

You have granted the Live Migration servers
network access
on your source cluster firewall.

The provided user credentials, if any, exist on the source cluster
and have the required permissions.

The SSL toggle is enabled only if the source cluster
requires it.

The CA file provided, if any, is valid and correct.

The provided hostnames are valid and reachable over the public
internet.
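When validation fails on reachability, a quick check from a host
outside your network can confirm that the hostname resolves and the
port accepts connections (the hostname below is illustrative):

```shell
# Confirm DNS resolution of the mongos hostname.
nslookup mongos.example.net

# Confirm the mongos port accepts TCP connections.
nc -vz mongos.example.net 27017
```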

Click Start Migration to start the migration process.

Atlas displays the live migration progress in the UI. During
live migration, you cannot view metrics or access data for the
destination cluster.

Atlas displays the progress of live
migration, including the time remaining for the destination cluster
to catch up to the source cluster.

Click View Progress per Shard to view the sync progress
and migration time remaining per shard. If the initial sync
process for a given shard fails, you can try to restart the
sync by clicking Restart.

When the migration timer and the Start Cutover button
turn green, proceed to the next step.

When Atlas detects that the source and destination clusters are nearly
in sync, it starts an extendable 72-hour timer for you to begin the cutover
procedure. If the 72-hour period passes, Atlas stops synchronizing with
the source cluster. You can extend the time remaining by 24 hours by
clicking the Extend time hyperlink below the <time>
left to cut over timer.

Important

The cutover procedure requires stopping your application and
all writes to the source cluster. Consider scheduling and
announcing a maintenance period to minimize interruption of service
on the dependent applications.

Once you are prepared to cut your applications over to the
destination Atlas cluster, click Start Cutover.

Atlas displays a walk-through screen with instructions
on how to proceed with the cutover. The optime gap
displays how far behind the destination cluster is compared to the
source cluster. You must stop your application and all writes
to the source cluster to allow the destination cluster to close
the optime gap.

Perform the steps described in the walk-through screen to cut over
your applications to the Atlas cluster. The walk-through screen
provides the cluster connection string your applications must
use to connect to the Atlas cluster.

Staging Migration

If you are creating a staging environment for the purpose of
testing your applications against, note the
optime gap to identify how far behind your
staging environment will be compared to your source cluster.

Click Cancel to cancel the live migration.
Atlas terminates the migration at that point in time,
leaving any migrated data in place. Atlas displays the
Sharded Cluster Live Import in Progress
message for the destination cluster until the cluster is ready
for normal access. See Canceling Live Migration for
more information on canceling a live migration procedure.

Once the cancellation is complete, you can test your staging
application against the partially migrated data.

Click I’m Done when you have completed the cutover
sequence and updated your applications to point at the
destination Atlas cluster. The optime gap must be 0:00 before
you can complete the procedure.

Atlas automatically prepares the Atlas cluster once
you complete the cutover sequence. During this time, you cannot
access the Atlas cluster. Atlas displays the status of the
cluster configuration in the UI.

Once Atlas displays the cluster as active and ready, you can
point your applications at the Atlas cluster and begin
performing write operations.

Important

Write operations issued to the source cluster after the
cutover sequence are not mirrored to the destination
Atlas cluster. Double check that your applications
are pointed at the new Atlas cluster before restarting them.