This page describes common administrative procedures related
to balancing. For an introduction to balancing, see
Sharded Cluster Balancer. For lower level information on balancing, see
Cluster Balancer.

New in version 3.0.0: You can also see if the balancer is enabled using
sh.status(). The
currently-enabled field indicates whether
the balancer is enabled, while the
currently-running field indicates if
the balancer is currently running.

The default chunk size for a sharded cluster is 64 megabytes. In most
situations, the default size is appropriate for splitting and migrating
chunks. For information on how chunk size affects deployments, see
Chunk Size.

Changing the default chunk size affects chunks that are processed during
migrations and auto-splits but does not retroactively affect all chunks.

In some situations, particularly when your data set grows slowly and a
migration can impact performance, it is useful to ensure
that the balancer is active only at certain times. The following
procedure specifies the activeWindow,
which is the timeframe during which the balancer will
be able to migrate chunks:
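From a mongo shell connected to a mongos, a sketch of the update, which stores the window in the settings collection of the config database:

```javascript
use config
db.settings.update(
   { _id: "balancer" },
   { $set: { activeWindow: { start: "<start-time>", stop: "<end-time>" } } },
   { upsert: true }
)
```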

Replace <start-time> and <end-time> with time values using
two digit hour and minute values (i.e. HH:MM) that specify the
beginning and end boundaries of the balancing window.

For HH values, use hour values ranging from 00 - 23.

For MM values, use minute values ranging from 00 - 59.

MongoDB evaluates the start and stop times relative to the time zone
of each individual mongos instance in the sharded
cluster. If your mongos instances are physically located
in different time zones, set the time zone on each server to UTC+00:00
so that the balancer window is uniformly interpreted.

Note

The balancer window must be sufficient to complete the migration
of all data inserted during the day.

As data insert rates can change based on activity and usage
patterns, it is important to ensure that the balancing window you
select will be sufficient to support the needs of your deployment.

If MongoDB migrates a chunk during a backup, you can end up with an inconsistent snapshot
of your sharded cluster. Never run a backup while the balancer is
active. To ensure that the balancer is inactive during your backup
operation:

Set the balancing window
so that the balancer is inactive during the backup. Ensure that the
backup can complete while you have the balancer disabled.

If you turn the balancer off while it is in the middle of a balancing round,
the shutdown is not instantaneous. The balancer completes the chunk
move in-progress and then ceases all further balancing rounds.
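For example, to turn the balancer off ahead of a backup, a sketch using the sh helper (sh.stopBalancer() disables the balancer and waits for any in-progress balancing round to finish):

```javascript
// Disable the balancer before starting the backup.
sh.stopBalancer()
```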

Before starting a backup operation, confirm that the balancer is not
active. You can use the following command to determine if the balancer
is active:
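A sketch of such a check, run from a mongo shell connected to a mongos:

```javascript
// Returns true only when balancing is fully stopped: the balancer
// is disabled and no balancing round is currently in progress.
!sh.getBalancerState() && !sh.isBalancerRunning()
```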

You can disable balancing for a specific collection with the
sh.disableBalancing() method. You may want to disable the
balancer for a specific collection to support maintenance operations or
atypical workloads, for example, during data ingestions or data exports.
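For example, to disable balancing for a sharded collection (the students.grades namespace here is illustrative):

```javascript
// Disable balancing for the students.grades collection only;
// other collections continue to balance normally.
sh.disableBalancing("students.grades")
```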

When you disable balancing on a collection, MongoDB does not interrupt
in-progress migrations.

When you enable balancing for a collection, MongoDB does not immediately
begin balancing data. However, if the data in your sharded collection is
not balanced, MongoDB may begin distributing the data more evenly.
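To re-enable balancing, use the sh.enableBalancing() method with the same namespace (students.grades here is illustrative):

```javascript
// Re-enable balancing for the students.grades collection.
sh.enableBalancing("students.grades")
```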

Changed in version 3.0.0: The balancer configuration document added configurable
writeConcern to control the semantics of the
_secondaryThrottle option.

The _secondaryThrottle parameter of the balancer and the
moveChunk command affects the replication behavior during
chunk migration. By default,
_secondaryThrottle is true, which means each document move
during chunk migration propagates to at least one secondary before the
balancer proceeds with the next document: this is equivalent to a write
concern of {w:2}.

You can also configure the writeConcern for the
_secondaryThrottle operation to control how migrations wait for
replication to complete. For more information on the
replication behavior during various steps of chunk migration,
see Chunk Migration and Replication.

To change the balancer’s _secondaryThrottle and writeConcern
values, connect to a mongos instance and directly update
the _secondaryThrottle value in the settings
collection of the config database. For
example, from a mongo shell connected to a
mongos, issue the following command:
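A sketch of that update; the writeConcern value shown ({ w: "majority" }) is one possible choice:

```javascript
use config
db.settings.update(
   { "_id": "balancer" },
   { $set: { "_secondaryThrottle": true,
             "writeConcern": { "w": "majority" } } },
   { upsert: true }
)
```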

The effects of changing the _secondaryThrottle and
writeConcern values may not be
immediate. To ensure an immediate effect, stop and restart the balancer
to enable the selected value of _secondaryThrottle. See
Manage Sharded Cluster Balancer for details.

The _waitForDelete setting of the balancer and the
moveChunk command affects how the balancer migrates
multiple chunks from a shard. By default, the balancer does not wait
for the on-going migration’s delete phase to complete before starting
the next chunk migration. To have the delete phase block the start
of the next chunk migration, you can set the _waitForDelete to
true.
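The _waitForDelete setting lives in the same balancer document in the settings collection of the config database; a sketch of enabling it from a mongo shell connected to a mongos:

```javascript
use config
db.settings.update(
   { "_id": "balancer" },
   { $set: { "_waitForDelete": true } },
   { upsert: true }
)
```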

By default, shards have no constraint on storage size. However, you can set a
maximum storage size for a given shard in the sharded cluster. When
selecting potential destination shards, the balancer ignores shards
where a migration would exceed the configured maximum storage size.

To limit the storage size for a given shard, use the
db.collection.updateOne() method with the $set operator to
create the maxSize field and assign it an integer value. The
maxSize field represents the maximum storage size for the shard in
megabytes.

The following operation sets a maximum size of 1024 megabytes on a shard:
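A sketch of that operation, run against the shards collection in the config database; the shard _id shard0000 is illustrative:

```javascript
use config
db.shards.updateOne(
   { "_id": "shard0000" },
   { $set: { "maxSize": 1024 } }
)
```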

This value includes the mapped size of all data files on the
shard, including the local and admin databases.

By default, maxSize is not specified, allowing shards to consume the
total amount of available space on their machines if necessary.

You can also set maxSize when adding a shard.

To set maxSize when adding a shard, set the addShard
command’s maxSize parameter to the maximum size in megabytes. The
following command run in the mongo shell adds a shard with a
maximum size of 125 megabytes:
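A sketch of that command; the hostname example.net:34008 is a placeholder:

```javascript
db.runCommand( { addShard: "example.net:34008", maxSize: 125 } )
```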