You can only modify the cloud provider backing your cluster
when you upgrade from an Atlas M0 Free Tier or M2/M5 Shared
Tier cluster to a larger cluster. Transitioning to a different provider
changes your cluster connection string. Consider scheduling a
maintenance window to update your applications with the new connection
string to resume connectivity to the cluster. Atlas migrates data to
the new cluster.

Note

You cannot modify the cloud provider of M10 or larger dedicated
clusters. If you wish to use a different cloud provider for a dedicated
cluster, create a new cluster and use the
Atlas Live Migration service to migrate data from the original
cluster to the new one.

The time required for an initial sync and resynchronizing data
across storage volumes increases linearly with the amount of data in
the cluster. Making changes to a cluster often requires migrating to
new servers and storage volumes, which can take a long time. However,
cluster changes can be made more quickly in some cases. For example,
changing the instance size of a cluster on AWS does not require an
initial sync.

Changes to Free/Shared Tier clusters result in limited downtime

All changes to M0, M2, and M5 clusters require 7-10 minutes of downtime.

To maximize availability:

For a replica set, Atlas migrates one node at a time, starting
with the secondary nodes first and then the primary.

For a sharded cluster, Atlas performs the migration of the shards
independently of each other. For each shard (i.e. replica set),
Atlas migrates one node at a time, starting with the secondary
nodes first and then the primary.

Retryable writes should prevent any
write errors during the election of a new primary. On average, an
election can take five seconds.
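Retryable writes are enabled through the connection string. A minimal sketch (the cluster host below is a made-up placeholder, not a real Atlas hostname):

```python
# Hypothetical Atlas SRV connection string; the host is an assumption.
# retryWrites=true lets the driver retry a failed write exactly once,
# covering the brief window of a primary election during node migration.
uri = (
    "mongodb+srv://user:password@cluster0.example.mongodb.net/"
    "?retryWrites=true&w=majority"
)

# With PyMongo the option can also be passed explicitly:
# client = MongoClient(uri, retryWrites=True)
print("retryWrites=true" in uri)  # True
```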

Migration can affect performance if your primary is already reaching
operational capacity: each newly migrated replica set node must
perform an initial sync from the primary, adding to the
operational load. Migrations can also affect performance if
read preferences are set to read from
secondaries: the replica set is down one secondary during the
migration.

If the workload on the Atlas cluster is such that it impedes
operations, including the ability to scale, MongoDB Atlas may, in
some situations, create indexes in your cluster as a safeguard.

Atlas does not guarantee that host names remain consistent with
respect to node types during topology changes.

Example

If you have a cluster named foo123 containing an analytics
node foo123-shard-00-03-a1b2c.mongodb.net:27017, Atlas does
not guarantee that specific host name will continue to refer to an
analytics node after a topology change, such as
scaling a cluster to modify its
number of nodes or regions.

The number of availability zones, zones, or fault domains in a region has no effect on the number of MongoDB
nodes Atlas can deploy. MongoDB Atlas clusters are always made up of
replica sets with a minimum of three MongoDB nodes.

The choice of cloud provider and region affects the
configuration options for the available instance sizes, network
latency for clients accessing your cluster, and the
cost of running the cluster.

To configure additional cluster options, toggle
Select Multi-Region, Workload Isolation, and Replication Options (M10+ clusters)
to Yes. Use these options to add cluster nodes in
different geographic regions with different workload priorities,
and direct application queries to the appropriate cluster nodes.

AWS Only

If this is the first M10+ dedicated paid cluster for the
selected region or regions and you plan on creating one or more
VPC peering connections, please review the documentation
on VPC Peering Connections before
continuing.

The following options are available when configuring cross-region
clusters:

The first row lists the Highest Priority region.
Atlas prioritizes nodes in this region for primary
eligibility. For more information on priority in replica
set elections, see Member Priority.

Click Add a region to add a new row for region
selection and select the region from the dropdown. Specify the
desired number of Nodes for the region. The total
number of electable nodes across all regions in the cluster must
be 3, 5, or 7.

Backup Data Center Location

If this is the first cluster in the project and you
intend to enable
continuous snapshot backups,
Atlas selects the backup data center location for the
project based on the geographical location of the
cluster’s Highest Priority region. To learn more
about how Atlas creates the backup data center, see
Fully Managed Backup Service.

When selecting a Region, regions marked as
Recommended provide higher availability compared to
other regions. For more information, see:

Each node in the selected regions can participate in replica set
elections, and can become the primary
as long as the majority of nodes in the replica set are available.

You can improve the replication factor of single-region clusters
by increasing the number of Nodes for your
Highest Priority region. You do not have to add
additional regions to modify the replication factor of your
Highest Priority region.

To remove a region, click the trash icon next to that
region. You cannot remove the Highest Priority region.

Atlas provides checks for whether your selected cross-regional
configuration provides availability during partial or whole
regional outages. To ensure availability during a full region
outage, you need at least one node in three different regions. To
ensure availability during a partial region outage, you must have
at least 3 electable nodes in a Recommended region
or at least 3 electable nodes across at least 2 regions.
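The two availability rules above can be sketched as a small validation helper. This is an illustrative sketch, not an Atlas API; the region names and the set of Recommended regions are assumptions:

```python
def survives_full_region_outage(nodes_per_region):
    """At least one electable node in three different regions."""
    return sum(1 for n in nodes_per_region.values() if n >= 1) >= 3

def survives_partial_region_outage(nodes_per_region, recommended_regions):
    """At least 3 electable nodes in a Recommended region, or at
    least 3 electable nodes spread across at least 2 regions."""
    if any(nodes_per_region.get(r, 0) >= 3 for r in recommended_regions):
        return True
    regions_used = [r for r, n in nodes_per_region.items() if n >= 1]
    return sum(nodes_per_region.values()) >= 3 and len(regions_used) >= 2

config = {"us-east-1": 2, "us-west-2": 2, "eu-west-1": 1}
print(survives_full_region_outage(config))            # True: 3 regions used
print(survives_partial_region_outage(config, set()))  # True: 5 nodes, 3 regions
```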

Read-only nodes for optimal local reads

Use read-only nodes to optimize local reads in
the nodes’ respective service areas.

Click Add a region to select a region in which to
deploy read-only nodes. Specify the desired number of
Nodes for the region.

Read-only nodes cannot provide high availability because they
cannot participate in elections, or become the
primary for their cluster. Read-only nodes have
distinct read preference tags
that allow you to direct queries to desired regions.

To remove a read-only region, click the trash icon
next to that region.

Analytics nodes for workload isolation

Use analytics nodes to isolate
queries which you do not wish to contend with your operational
workload. Analytics nodes are useful
for handling data analysis operations, such as reporting queries from
BI Connector for Atlas. Analytics nodes have distinct
read preference tags which allow you
to direct queries to desired regions.
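A driver matches a query's read preference tags against each member's tag set and routes only to members that match. A sketch of that matching, using the nodeType:ANALYTICS tag convention Atlas documents for analytics nodes (hostnames and the ELECTABLE tag value here are illustrative assumptions):

```python
# Each replica set member carries a tag set; the driver selects only
# members whose tags match every key/value pair the query asks for.
members = [
    {"host": "foo123-shard-00-00.example.net", "tags": {"nodeType": "ELECTABLE"}},
    {"host": "foo123-shard-00-01.example.net", "tags": {"nodeType": "ELECTABLE"}},
    {"host": "foo123-shard-00-03.example.net", "tags": {"nodeType": "ANALYTICS"}},
]

def eligible(members, tag_set):
    return [m["host"] for m in members
            if all(m["tags"].get(k) == v for k, v in tag_set.items())]

print(eligible(members, {"nodeType": "ANALYTICS"}))
# ['foo123-shard-00-03.example.net']
```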

Click Add a region to select a region in which to
deploy analytics nodes. Specify the desired number of
Nodes for the region.

Analytics nodes cannot participate in
elections, or become the primary for
their cluster.

To remove an analytics node, click the trash icon
next to that region.

Note

Having a large number of regions or having nodes spread across
long distances may lead to long election times or replication lag.

Important

For a given region in an Atlas project with multi-region clusters
or clusters in multiple regions, you cannot have more than 40 MongoDB
nodes on all other regions in that project. This limit applies
across all cloud service providers.

For example, if an Atlas project has 20 nodes in RegionA and 20 nodes
in RegionB, you can deploy no more than 20 additional nodes in that
project in any given region. This limit applies even if RegionA and
RegionB are backed by different cloud service providers.

For Atlas projects where every cluster is deployed to a single region, you
cannot create a multi-region cluster in that project if there are already 40
or more nodes in that single region.

Sandbox replica set clusters for getting started with MongoDB.
These instances deploy to a shared environment with access to a
subset of Atlas features and functionality. For complete
documentation on shared cluster limits and restrictions,
see Atlas M0 (Free Tier), M2, and M5 Limitations.

Atlas provides an option to deploy
one M0 Free Tier replica set per project. You can
upgrade an M0 Free Tier cluster to an
M2+ paid cluster at any time.

M2 and M5 instances are low-cost shared starter clusters
with the same features and functionality as M0, but with
increased storage and the ability to deploy into a subset of
regions on Amazon Web Service (AWS), Google Cloud Platform (GCP),
and Microsoft Azure.

Beta

Support for M2 and M5 clusters is available as a Beta
feature. These clusters do not yet support backups.

Atlas supports shared cluster deployment in a subset of
Cloud Providers and Regions. Atlas greys out any
shared cluster instance sizes not supported by the selected
cloud service provider and region. For a complete list of
regions that support shared cluster deployments, see:

Instances that support development environments and low-traffic
applications.

These instances support replica set deployments only, but otherwise
provide full access to Atlas features and functionality.

Dedicated Production Clusters

Instances that support production environments with high traffic
applications and large datasets.

These instances support replica set and sharded cluster
deployments with full access to Atlas features and
functionality.

Some instances have variants, denoted by the ❯ character.
When you select these instances, Atlas lists the variants
and tags each instance to distinguish their key characteristics.

NVMe Storage on AWS

For applications which require low-latency and high-throughput IO,
Atlas offers storage options on AWS which leverage
locally attached ephemeral NVMe SSDs. The following instance sizes
have an NVMe option, with the size fixed at the cluster tier:

For cluster tiers up to and including M40, Atlas enforces a
50:1 ratio of disk storage to RAM to facilitate consistent
performance of clusters with large datasets. For cluster tiers of
M50 and higher, the enforced ratio is 100:1.

Example

To support 3 TB of disk storage you must select a cluster tier
with at least 32 GB of RAM (M50 or higher).
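The ratio rule works out as simple arithmetic; a quick sketch (the tier boundary flag is the only input, the RAM figures per tier are not modeled here):

```python
def min_ram_gb(disk_gb, m50_or_higher):
    # Atlas enforces 50:1 disk-to-RAM up to and including M40,
    # and 100:1 for M50 and higher.
    ratio = 100 if m50_or_higher else 50
    return disk_gb / ratio

# 3 TB (3072 GB) on an M50+ tier needs 3072 / 100 = 30.72 GB of RAM,
# so the smallest qualifying tier has 32 GB of RAM (M50 or higher).
print(min_ram_gb(3072, True))  # 30.72
```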

Atlas clusters running MongoDB 3.2 must upgrade to MongoDB 3.4
before upgrading to MongoDB 3.6.

See Atlas Major Version Change Procedure for the
MongoDB-recommended procedure for a major version change. This
procedure includes creating a staging cluster for the purpose of
testing and validating application and cluster performance and
functionality on the new MongoDB version.

Atlas deploys each shard
as a three-node replica set, where each node deploys using the
configured Cloud Provider & Region,
Cluster Tier, and Additional Settings.
Atlas deploys one mongod per shard node.

For cross-region clusters, the number of nodes per shard
is equal to the total number of electable and read-only nodes across
all configured regions. Atlas distributes the shard nodes across
the selected regions.

Atlas deploys the config servers
as a three-node replica set. The config servers run on
M30 instances.

For cross-region clusters, Atlas distributes the config server
replica set nodes to ensure optimal availability. For example,
Atlas might deploy the config servers across three distinct
availability zones and three distinct regions if supported by
the selected cloud service provider and region configuration.

Atlas deploys one mongos router for each
node in each shard. For cross-region clusters, this allows clients
using a MongoDB driver to connect to the geographically “nearest”
mongos.

To calculate the number of mongos
routers in a cluster, multiply the number of shards by the number of
replica set nodes per shard.
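The calculation above can be sketched as:

```python
def mongos_count(num_shards, nodes_per_shard):
    # Atlas deploys one mongos router per node in each shard.
    return num_shards * nodes_per_shard

# e.g. 4 shards, each deployed as a 3-node replica set:
print(mongos_count(4, 3))  # 12 mongos routers
```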

You cannot convert a sharded cluster deployment to a replica set
deployment.

For details on how the number of server instances affects cost, see
Number of Servers.

For more information on sharded clusters, see Sharding
in the MongoDB manual.

Atlas only allows one backup method per project. Once you
select a backup method for a cluster in a project, Atlas
locks the backup service to the chosen method for all subsequent
clusters in that project.

For example, in a project where one or more clusters use
continuous backups, you cannot enable cloud provider snapshots for any cluster
in that project.

To change the backup method for the project, disable backups for all
clusters in the project, then re-enable backups using your preferred
backup methodology. Atlas deletes any stored snapshots when you
disable backup for a cluster.

To enable backups for the Atlas cluster, toggle
Turn on Backup (M10 and up) to Yes.
If enabled, Atlas takes snapshots of your databases at
regular intervals and retains them according to your project’s
retention policy.

The backup option chosen for the first cluster in a project
dictates the backup option for all other subsequent clusters in the
project. See Fully Managed Backup Service for more information.

Atlas takes incremental snapshots of data in your cluster
and allows you to restore from stored
snapshots or from a selected point in time within the last 24
hours. You can also
query a continuous backup snapshot.

Each project has one backup data center location dictated by
the first backup-enabled cluster created in that project. See
Snapshot Storage Location for more
information.

Atlas takes full copy snapshots of data in your cluster
and allows you to restore from those snapshots. Atlas
stores snapshots in the same cloud provider region as the
replica set member targeted for snapshots.

You can disable backups for the cluster by toggling this option to
No. Once you disable backup, Atlas immediately
deletes any backup snapshots for the cluster. See
Fully Managed Backup Service for more information.

When using a readPreference of "analytics",
Atlas places BI Connector for Atlas on the same hardware
as the analytics nodes from which BI Connector for Atlas reads.

By isolating electable data-bearing nodes from the
BI Connector for Atlas, electable nodes do not compete for resources
with BI Connector for Atlas, thus improving cluster reliability
and performance.

For high traffic production environments, connecting to the
Secondary Node(s) or Analytics Node(s) may
be preferable to connecting to the Primary Node.

For clusters with one or more
analytics nodes, select
Analytics Node to isolate BI Connector for Atlas queries from
your operational workload and read from dedicated, read-only
analytics nodes. With this option, electable nodes do not compete
for resources with BI Connector for Atlas, thus improving cluster reliability and
performance.

The BI Connector generates a relational schema by
sampling data from MongoDB. The
following sampling settings are configurable:

Schema Sample Size (integer)

Optional. The number of documents that the BI Connector
samples for each database when gathering schema information.
For more information, see the
BI Connector documentation.

Sample Refresh Interval (integer)

Optional. The frequency, in seconds, at which the BI
Connector re-samples data to recreate the schema. For more
information, see the
BI Connector documentation.

If you want to switch from one Encryption at Rest provider on your
cluster to another, you must first disable Encryption at Rest for
your cluster, then re-enable it with your desired Encryption at
Rest provider. See Encryption at Rest Using Your Key Management.

Atlas encrypts all cluster storage and snapshot volumes,
ensuring the security of all cluster data at rest
(Encryption at Rest). Atlas
Project Owners can configure
an additional layer of encryption on their data at rest using the
MongoDB
Encrypted Storage Engine
and their Atlas-compatible Encryption at Rest provider.

To enable Atlas Encryption at Rest for this cluster,
toggle Encryption At Rest with WiredTiger Encrypted Storage Engine (M10 and up)
to Yes.

Atlas Encryption at Rest using your Key Management supports
M10 or greater replica set clusters backed by
AWS or Azure only. Support for clusters deployed
on Google Cloud Platform (GCP) is in development. Atlas Encryption
at Rest supports encrypting Cloud Provider Snapshots only.
You cannot enable Encryption at Rest on a cluster using
Continuous Backups.

Atlas clusters using Encryption at Rest using your Key Management
incur an increase to their hourly run cost. For more information on
Atlas billing for advanced security features, see
Advanced Security.

Important

If Atlas cannot access the Atlas project key management
provider or the encryption key used to encrypt a cluster, then
that cluster becomes inaccessible and unrecoverable. Exercise
extreme caution before modifying, deleting, or disabling an
encryption key or key management provider credentials used by
Atlas.

You can configure the following mongod runtime options
on M10+ paid tier clusters:

Set Oplog Size *

Modify the oplog size of the cluster. For sharded
cluster deployments, this modifies the oplog size of each
shard in the cluster. This option corresponds to modifying
the replication.oplogSizeMB
configuration file option for each mongod in the
cluster.

You can check the oplog size by connecting to your cluster
via the mongo shell and authenticating as a user
with the Atlas admin role. Run the
rs.printReplicationInfo()
method to view the current oplog size and time.

Warning

Reducing the size of the oplog requires removing data from the
oplog. Atlas cannot access or restore any oplog
entries removed as a result of oplog reduction. Consider the
ramifications of this data loss before reducing the oplog.

Enforce Index Key Limit

Enable or disable enforcement of the 1024-byte index key limit.
Documents can only be updated or inserted if, for all
indexed fields on the target collection, the corresponding index
entries do not exceed 1024 bytes. If disabled,
mongod writes documents that breach the limit
but does not index them. This option corresponds to
modifying the
failIndexKeyTooLong
parameter via the setParameter command for each
mongod in the cluster.

Allow Server-Side JavaScript

Enable or disable execution of operations that perform server-side
execution of JavaScript. This option corresponds to modifying
the security.javascriptEnabled configuration file option
for each mongod in the cluster.

Set Minimum TLS Protocol Version *

Sets the minimum TLS version the cluster accepts for incoming
connections. This option corresponds to configuring the
net.ssl.disabledProtocols configuration file option
for each mongod in the cluster.

TLS 1.0 Deprecation

For users considering this option as a method for enabling the
deprecated Transport Layer Security (TLS) 1.0 protocol version, please read
What versions of TLS does Atlas support? before proceeding. Atlas
deprecation of TLS 1.0 improves your security of data-in-transit
and aligns with industry best practices. Enabling TLS 1.0 for any
Atlas cluster carries security risks. Consider enabling TLS
1.0 only for as long as required to update your application stack
to support TLS 1.1 or later.

Require Indexes for All Queries

Enable or disable the execution of queries that require a collection
scan to return results. This option corresponds to modifying the
notablescan parameter via the
setParameter command for each mongod in
the cluster.
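The runtime options above map onto mongod configuration file settings and parameters. A hedged sketch of the equivalents (Atlas applies these on your behalf; you never edit the configuration file directly, and the values shown are illustrative):

```yaml
# Illustrative mongod settings corresponding to the Atlas cluster options
replication:
  oplogSizeMB: 2048                      # Set Oplog Size
security:
  javascriptEnabled: false               # Allow Server-Side JavaScript
net:
  ssl:
    disabledProtocols: "TLS1_0,TLS1_1"   # Set Minimum TLS Protocol Version
setParameter:
  failIndexKeyTooLong: false             # Enforce Index Key Limit
  notablescan: true                      # Require Indexes for All Queries
```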

* Atlas uses a rolling deployment process
to apply modifications to these options. For sharded clusters,
this involves a rolling restart of all shards and the config
server replica set. To learn more about how Atlas supports
high availability during maintenance operations, see
How does MongoDB Atlas deliver high availability?.

For replica sets, the data-bearing servers are the servers hosting the
replica set nodes. For sharded clusters, the data-bearing servers are the
servers hosting the shards. For sharded clusters, Atlas also deploys
servers for the config servers; these are
charged at a rate separate from the instance costs.