Starting in MongoDB 3.2, 32-bit binaries are deprecated and will be
unavailable in future releases.

Although the 32-bit builds exist for Linux and Windows, they are unsuitable for
production deployments. 32-bit builds also do not support the
WiredTiger storage engine. For more information, see the 32-bit
limitations page.

Changed in version 3.0: Beginning with MongoDB 3.0, MMAPv1 provides
collection-level locking: each collection has its own
readers-writer lock, which allows multiple clients to modify documents
in different collections at the same time.

For the MongoDB 2.2 through 2.6 series, each database has a
readers-writer lock that allows concurrent read access to a
database, but gives exclusive access to a single write operation per
database. See the Concurrency page for
more information. In earlier versions of MongoDB, all write
operations contended for a single readers-writer lock for the entire
mongod instance.

WiredTiger supports concurrent access by
readers and writers to the documents in a collection. Clients can read
documents while write operations are in progress, and multiple threads
can modify different documents in a collection at the same time.

MongoDB uses write-ahead logging to an on-disk journal.
Journaling guarantees that MongoDB can quickly recover write
operations that were written to the journal
but not written to data files in cases where mongod
terminated due to a crash or other serious failure.

Leave journaling enabled to ensure that mongod can recover
its data files and keep them in a valid
state following a crash. See Journaling for
more information.

Write concern describes the level of
acknowledgement requested from MongoDB for write operations. The level
of write concern affects how quickly the write operation returns.
When write operations have a weak write concern, they return quickly.
With stronger write concerns, clients must wait after sending a write
operation until MongoDB confirms the write operation at the requested
write concern level. With insufficient write concerns, write operations
may appear to a client to have succeeded, but may not persist in some
cases of server failure.

See the Write Concern document for more
information about choosing an appropriate write concern level for your
deployment.

Always run MongoDB in a trusted environment, with network rules that
prevent access from all unknown machines, systems, and networks. As
with any sensitive system that is dependent on network access, your
MongoDB deployment should only be accessible to specific systems that
require access, such as application servers, monitoring services, and
other MongoDB components.

MongoDB provides an HTTP interface to check the status of the server
and, optionally, run queries. The HTTP interface is disabled by default. Do
not enable the HTTP interface in production environments.

Avoid overloading the connection resources of a mongod or
mongos instance by adjusting the connection pool size to suit
your use case. Start at 110-115% of the typical number of concurrent database
requests, and modify the connection pool size as needed. Refer to the
Connection Pool Options for adjusting the connection pool size.

The connPoolStats command returns information regarding
the number of open connections to the current database for
mongos and mongod instances in sharded clusters.
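For example, you might run the command from the mongo shell against a running deployment (the host and port here are illustrative; this sketch assumes the default port 27017):

```shell
# Illustrative: run connPoolStats against the admin database of a
# mongos or mongod; requires a reachable, running instance.
mongo --eval 'printjson(db.adminCommand({ connPoolStats: 1 }))'
```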

MongoDB is designed specifically with commodity hardware in mind and
has few hardware requirements or limitations. MongoDB’s core components
run on little-endian hardware, primarily x86/x86_64 processors. Client
libraries (i.e. drivers) can run on big or little endian systems.

The WiredTiger storage engine is multithreaded and can take advantage
of additional CPU cores. Specifically, the total number of active threads
(i.e. concurrent operations) relative to the number of available CPUs can impact
performance:

Throughput increases as the number of concurrent active operations
increases up to the number of CPUs.

Throughput decreases as the number of concurrent active operations
exceeds the number of CPUs by some threshold amount.

The threshold depends on your application. You can determine the
optimum number of concurrent active operations for your application by
experimenting and measuring throughput. The output from
mongostat provides statistics on the number of active
reads/writes in the (ar|aw) column.
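A sketch of sampling those counters with mongostat (flags as documented for the tool; requires a reachable, running instance):

```shell
# Illustrative: print five rows of statistics, one per second; the
# ar|aw column shows active readers and active writers.
mongostat --rowcount 5 1
```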

With WiredTiger, MongoDB utilizes both the WiredTiger internal cache
and the filesystem cache.

Changed in version 3.2: Starting in MongoDB 3.2, the WiredTiger internal cache, by
default, will use the larger of either:

60% of RAM minus 1 GB, or

1 GB.

For systems with up to 10 GB of RAM, the new default setting is
less than or equal to the 3.0 default setting (in MongoDB 3.0,
the WiredTiger internal cache uses the larger of 1 GB or half of the
installed physical RAM).

For systems with more than 10 GB of RAM, the new default setting
is greater than the 3.0 setting.
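The 3.2 default described above can be sketched as a small calculation (illustrative only; mongod computes this internally, and the 16 GB figure is just an example):

```shell
# Illustrative: the MongoDB 3.2 default WiredTiger internal cache is
# the larger of (60% of RAM minus 1 GB) and 1 GB, computed here in MB.
ram_mb=16384                              # example machine: 16 GB of RAM
candidate=$(( ram_mb * 60 / 100 - 1024 )) # 60% of RAM minus 1 GB
if [ "$candidate" -gt 1024 ]; then
  cache_mb=$candidate
else
  cache_mb=1024                           # floor of 1 GB
fi
echo "${cache_mb} MB"                     # 8806 MB for a 16 GB machine
```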

Via the filesystem cache, MongoDB automatically uses all free memory
that is not used by the WiredTiger cache or by other processes. Data
in the filesystem cache is compressed.

The storage.wiredTiger.engineConfig.cacheSizeGB setting limits the size of the WiredTiger internal
cache. The operating system will use the available free memory
for filesystem cache, which allows the compressed MongoDB data
files to stay in memory. In addition, the operating system will
use any free RAM to buffer file system blocks and file system
cache.

To accommodate the additional consumers of RAM, you may have to
decrease the WiredTiger internal cache size.

The default WiredTiger internal cache size value assumes that there is a
single mongod instance per machine. If a single machine
contains multiple MongoDB instances, then you should decrease the setting to
accommodate the other mongod
instances.

If you run mongod in a container (e.g. lxc,
cgroups, Docker, etc.) that does not have access to all of the
RAM available in a system, you must set storage.wiredTiger.engineConfig.cacheSizeGB to a value less
than the amount of RAM available in the container. The exact amount
depends on the other processes running in the container.
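For instance, in a container granted 1 GB of RAM, you might cap the cache well below that limit; the value and paths below are illustrative, not a recommendation:

```shell
# Illustrative: start mongod in a 1 GB container, leaving headroom for
# other processes by capping the WiredTiger internal cache at 0.25 GB.
mongod --wiredTigerCacheSizeGB 0.25 --dbpath /data/db
```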

When using encryption, CPUs equipped with AES-NI instruction-set
extensions show significant performance advantages.
If you are using MongoDB Enterprise with the
Encrypted Storage Engine, choose a CPU that supports AES-NI for
better performance.

MongoDB has good results and a good price-performance ratio with
SATA SSD (Solid State Disk).

Use SSD if available and economical. Spinning disks can be
performant, but SSDs’ capacity for random I/O operations works well
with the update model of MMAPv1.

Commodity (SATA) spinning drives are often a good option, as the
random I/O performance increase with more expensive spinning drives
is not that dramatic (only on the order of 2x). Using SSDs or
increasing RAM may be more effective in increasing I/O throughput.

Running MongoDB on a system with Non-Uniform Memory Access (NUMA) can
cause a number of operational problems, including slow performance for
periods of time and high system process usage.

When running MongoDB servers and clients on NUMA hardware, you should configure
a memory interleave policy so that the host behaves in a non-NUMA fashion.
MongoDB checks NUMA settings on startup when deployed on Linux (since version
2.0) and Windows (since version 2.6) machines. If the
NUMA configuration could degrade performance, MongoDB prints a warning.

See also

The MySQL “swap insanity” problem and the effects of NUMA
post, which describes the effects of
NUMA on databases. The post introduces NUMA and its goals, and
illustrates how these goals are not compatible with production
databases. Although the blog post addresses the impact of NUMA for
MySQL, the issues for MongoDB are similar.

On NUMA hardware, use numactl to start your
mongod instances, including the config servers, mongos instances, and any clients.
If you do not have the numactl command, refer to the documentation for
your operating system to install the numactl package.

The following operation demonstrates how to start a MongoDB instance
using numactl:
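A sketch of such an invocation (the binary path and config file location are illustrative; substitute your own):

```shell
# Interleave memory allocations across all NUMA nodes, then start
# mongod under that policy (paths shown are illustrative).
numactl --interleave=all /usr/bin/mongod --config /etc/mongod.conf
```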

For the MMAPv1 storage engine, the method mongod uses
to map files to memory ensures that the operating system will never
store MongoDB data in swap space. On Windows systems, using MMAPv1
requires extra swap space due to commitment limits. For details,
see MongoDB on Windows.

For the WiredTiger storage engine, given sufficient memory pressure,
WiredTiger may store data in swap space.

RAID-5 and RAID-6 do not typically provide sufficient performance to
support a MongoDB deployment.

Avoid RAID-0 with MongoDB deployments. While RAID-0 provides good write
performance, it also provides limited availability and can lead to
reduced performance on read operations, particularly when using
Amazon’s EBS volumes.

With the MMAPv1 storage engine, the Network File System protocol (NFS)
is not recommended as you may see performance problems when both the
data files and the journal files are hosted on NFS. You may experience
better performance if you place the journal on local or iSCSI
volumes.

With the WiredTiger storage engine, WiredTiger objects may be stored on
remote file systems if the remote file system conforms to ISO/IEC
9945-1:1996 (POSIX.1). Because remote file systems are often slower
than local file systems, using a remote file system for storage may
degrade performance.

If you decide to use NFS, add the following NFS options to your
/etc/fstab file: bg, nolock, and noatime.
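An illustrative /etc/fstab entry with those options (the server export and mount point are placeholders):

```
nfs-server:/export/mongodb  /data/db  nfs  bg,nolock,noatime  0 0
```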

For improved performance, consider separating your database’s data,
journal, and logs onto different storage devices, based on your application’s
access and write pattern. Mount the components as separate filesystems
and use symbolic links to map each component’s path to the device
storing it.

For local block devices attached to a virtual machine instance via
the hypervisor or hosted by a cloud hosting provider, the guest operating system
should use a noop scheduler for best performance. The
noop scheduler allows the operating system to defer I/O scheduling to
the underlying hypervisor.

For physical servers, the operating system should use a deadline
scheduler. The deadline scheduler caps maximum latency per request
and maintains a good disk throughput that is best for disk-intensive
database applications.
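On Linux, you can inspect and change the scheduler per block device through sysfs; the device name sda below is illustrative, and writing the setting requires root:

```shell
# Show the available schedulers; the active one appears in brackets.
cat /sys/block/sda/queue/scheduler

# Switch schedulers (requires root; not persistent across reboots).
echo noop > /sys/block/sda/queue/scheduler      # virtualized guests
echo deadline > /sys/block/sda/queue/scheduler  # physical servers
```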

MongoDB uses the
GNU C Library
(glibc) if available on a system.
MongoDB requires at least glibc-2.12-1.2.el6 to avoid a known bug
in earlier versions. For best results, use at least version 2.13.

When running MongoDB in production on Linux, you should use Linux
kernel version 2.6.36 or later, with either the XFS or EXT4 filesystem.
If possible, use XFS as it generally performs better with MongoDB.

With the WiredTiger storage engine, use of
XFS is strongly recommended to avoid performance issues that may
occur when using EXT4 with WiredTiger.

With the MMAPv1 storage engine, MongoDB
preallocates its database files before using them and often creates
large files. As such, you should use the XFS or EXT4 file systems. If
possible, use XFS as it generally performs better with MongoDB.

In general, if you use the XFS file system, use at least version
2.6.25 of the Linux Kernel.

If you use the EXT4 file system, use at least version
2.6.28 of the Linux Kernel.

On Red Hat Enterprise Linux and CentOS, use at least version
2.6.18-194 of the Linux kernel.

Set the file descriptor limit, -n, and the user process limit
(ulimit), -u, above 20,000, according to the suggestions in the
ulimit reference. A low ulimit will affect
MongoDB when under heavy use and can produce errors and lead to
failed connections to MongoDB processes and loss of service.
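To check the limits in effect for the shell that will start mongod (raise them with, for example, `ulimit -n 64000`, which may require root or an /etc/security/limits.conf entry):

```shell
# Inspect current per-process limits; both values should be above the
# recommended 20,000 floor before starting mongod from this shell.
ulimit -n   # open file descriptors
ulimit -u   # max user processes
```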

In general, set the readahead setting to 0 unless testing shows a
measurable, repeatable, and reliable benefit in a higher readahead value.
MongoDB Professional Support can provide
advice and guidance on non-zero readahead configurations.

For the MMAPv1 storage engine:

Ensure that readahead settings for the block devices that store the
database files are appropriate. For random access use patterns, set
low readahead values. A readahead of 32 (16 kB) often works well.

For a standard block device, you can run sudo blockdev --report
to get the readahead settings and sudo blockdev --setra <value> <device>
to change the readahead settings. Refer to your specific
operating system manual for more information.

<path to TLS/SSL libs>/libssl.so.<version>: no version information available (required by /usr/bin/mongod)
<path to TLS/SSL libs>/libcrypto.so.<version>: no version information available (required by /usr/bin/mongod)

These warnings indicate that the system’s TLS/SSL libraries are different
from the TLS/SSL libraries that the mongod was compiled against.
Typically these messages do not require intervention; however, you can
use the following operations to determine the symbol versions that
mongod expects:
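One way to inspect this is to dump the dynamic symbol table of the binary (the path to mongod is illustrative):

```shell
# List the versioned OpenSSL symbols the mongod binary was linked
# against (binary path is illustrative; adjust for your install).
objdump -T /usr/bin/mongod | grep " SSL_"
objdump -T /usr/bin/mongod | grep " CRYPTO_"
```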

Microsoft has released a hotfix for Windows 7 and Windows Server 2008
R2, KB2731284, that repairs a bug
in these operating systems’ use of memory-mapped files that adversely affects
the performance of MongoDB using the MMAPv1 storage engine.

Install this hotfix to obtain significant performance improvements on MongoDB
2.6.6 and later releases in the 2.6 series, which use MMAPv1 exclusively,
and on 3.0 and later when using MMAPv1 as the storage engine.

Configure the page file such that the minimum and maximum page file
size are equal and at least 32 GB. Use a multiple of this size if,
during peak usage, you expect concurrent writes to many databases or
collections. However, the page file size does not need to exceed the
maximum size of the database.

A large page file is needed as Windows requires enough space to
accommodate all regions of memory-mapped files made writable during
peak usage, regardless of whether writes actually occur.

The page file is not used for database storage and will not receive
writes during normal MongoDB operation. As such, the page file will not
affect performance, but it must exist and be large enough to
accommodate Windows’ commitment rules during peak database use.

Note

Dynamic page file sizing is too slow to accommodate the rapidly
fluctuating commit charge of an active MongoDB deployment. This can
result in transient overcommitment situations that may lead to
abrupt server shutdown with a VirtualProtect error 1455.

Use Premium Storage.
Microsoft Azure offers two general types of storage:
Standard storage, and Premium storage. MongoDB on Azure has better
performance when using Premium storage than it does with Standard
storage.

For all MMAPv1 MongoDB deployments using Azure,
you must mount the volume
that hosts the mongod instance’s dbPath
with the Host Cache Preference READ/WRITE.
This applies to all Azure deployments running MMAPv1, using any guest operating
system.

If your volumes have inappropriate cache settings, MongoDB may
eventually shut down with the following error:

These shutdowns do not cause data loss when
storage.journal.enabled is set to true. You can safely
restart mongod at any time following this event.

The performance characteristics of MongoDB may change with
READ/WRITE caching enabled.

The TCP keepalive on the Azure load balancer is 240 seconds by
default, which can cause it to silently drop connections if the TCP
keepalive on your Azure systems is greater than this value. You
should set tcp_keepalive_time to 120 to ameliorate this problem.

On Linux systems:

To view the keepalive setting, you can use one of the following
commands:
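Either of these reports the current value in seconds, and sysctl can also lower it (writing the setting requires root and applies to the running system only):

```shell
# View the current TCP keepalive, in seconds:
sysctl net.ipv4.tcp_keepalive_time
cat /proc/sys/net/ipv4/tcp_keepalive_time

# Set it to 120 seconds (requires root; not persistent across reboots):
sudo sysctl -w net.ipv4.tcp_keepalive_time=120
```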

VMware supports memory overcommitment, where you can assign more memory
to your virtual machines than the physical machine has available. When
memory is overcommitted, the hypervisor reallocates memory between the
virtual machines. VMware’s balloon driver (vmmemctl) reclaims the
pages that are considered least valuable. The balloon driver resides
inside the guest operating system. When the balloon driver expands,
it may induce the guest operating system to reclaim memory from guest
applications, which can interfere with MongoDB’s memory management and
affect MongoDB’s performance.

You can disable the balloon driver and VMware’s memory overcommitment
feature to mitigate these problems. However, disabling the balloon driver
can cause the hypervisor to use its swap, as there is no other available
mechanism to perform the memory reclamation. Accessing data in swap
is much slower than accessing data in memory, which can in turn affect
performance. Instead of disabling the balloon driver and memory
overcommitment features, map and reserve the full amount of memory for
the virtual machine running MongoDB. This ensures that the balloon
will not be inflated in the local operating system if there is memory
pressure in the hypervisor due to an overcommitted configuration.

When using MongoDB with VMware, ensure that the CPU reservation does not
exceed 2 virtual CPUs per physical core.

It is possible to clone a virtual machine running MongoDB.
You might use this capability to
spin up a new virtual host to add as a member of a replica
set. If you clone a VM with journaling enabled, the clone snapshot will
be valid. If not using journaling, first stop mongod,
then clone the VM, and finally, restart mongod.

KVM supports memory overcommitment, where you can assign more memory
to your virtual machines than the physical machine has available. When
memory is overcommitted, the hypervisor reallocates memory between the
virtual machines. KVM’s balloon driver reclaims the
pages that are considered least valuable. The balloon driver resides
inside the guest operating system. When the balloon driver expands,
it may induce the guest operating system to reclaim memory from guest
applications, which can interfere with MongoDB’s memory management and
affect MongoDB’s performance.

You can disable the balloon driver and KVM’s memory overcommitment
feature to mitigate these problems. However, disabling the balloon driver
can cause the hypervisor to use its swap, as there is no other available
mechanism to perform the memory reclamation. Accessing data in swap
is much slower than accessing data in memory, which can in turn affect
performance. Instead of disabling the balloon driver and memory
overcommitment features, map and reserve the full amount of memory for
the virtual machine running MongoDB. This ensures that the balloon
will not be inflated in the local operating system if there is memory
pressure in the hypervisor due to an overcommitted configuration.

When using MongoDB with KVM, ensure that the CPU reservation does not
exceed 2 virtual CPUs per physical core.