Field Names

Avoid the following characters in field names because they require extra
escaping:

. period

[ left bracket

] right bracket

* asterisk

` backtick

Indexes

Avoid using too many indexes. An excessive number of indexes increases
write latency and increases storage costs for index entries.

Be aware that indexing fields with monotonically increasing values, such as
timestamps, can create hotspots, which impact latency for applications with
high read and write rates.

Index exemptions

For most apps, you can rely on automatic indexing and the index-creation links
in error messages to manage your indexes. However, you may want to add
single-field exemptions
in the following cases:

Large string fields: if you have a string field that often holds long string
values that you don't use for querying, you can cut storage costs by exempting
the field from indexing.

High write rates to a collection containing documents with sequential values:
if you index a field that increases or decreases sequentially between
documents in a collection, like a timestamp, then the maximum write rate to the
collection is 500 writes per second. If you don't query based on the field with
sequential values, you can exempt the field from indexing to bypass this limit.
In an IoT use case with a high write rate, for example, a collection containing
documents with a timestamp field might approach the 500 writes per second limit.

Large array or map fields: large array or map fields can approach the limit of
20,000 index entries per document. If you are not querying based on a large
array or map field, you should exempt it from indexing.

Read and write operations

Use batch operations
for your writes and deletes instead of single operations.
Batch operations are more efficient because they perform multiple operations
with the same overhead as a single operation.
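A batched write in Firestore accepts at most 500 operations per commit, so large write sets must be split into chunks. The sketch below shows a chunking helper; the commented `db.batch()` calls indicate where the google-cloud-firestore client would commit each chunk, and are assumptions about your surrounding code rather than a complete implementation.

```python
from typing import List, TypeVar

T = TypeVar("T")

BATCH_LIMIT = 500  # a Firestore WriteBatch accepts at most 500 operations

def chunk(items: List[T], size: int = BATCH_LIMIT) -> List[List[T]]:
    """Split a list of pending writes into batch-sized chunks."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# With the google-cloud-firestore client, each chunk would be committed as:
#
#   batch = db.batch()
#   for doc_ref, data in group:
#       batch.set(doc_ref, data)
#   batch.commit()

groups = chunk(list(range(1200)))
print([len(g) for g in groups])  # -> [500, 500, 200]
```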

Use asynchronous calls where available instead of synchronous calls.
Asynchronous calls minimize latency impact. For example, consider an application
that needs the result of a document lookup and the results of a query before
rendering a response. If the lookup and the query do not have a data dependency,
there is no need to synchronously wait until the lookup completes before
initiating the query.
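The lookup-plus-query scenario above can be sketched with `asyncio.gather`. The two coroutines here are stand-ins for calls to an async Firestore client (for example, `AsyncClient` in google-cloud-firestore); the point is that independent operations run concurrently rather than back to back.

```python
import asyncio

# Stand-ins for a document lookup and a query; in a real app these would be
# awaitable calls on an async Firestore client.
async def lookup_document() -> dict:
    await asyncio.sleep(0.05)
    return {"id": "user-1"}

async def run_query() -> list:
    await asyncio.sleep(0.05)
    return [{"id": "post-1"}, {"id": "post-2"}]

async def render_response() -> tuple:
    # No data dependency between the two calls, so start them concurrently:
    # the total wait is roughly max(0.05, 0.05), not 0.05 + 0.05.
    doc, results = await asyncio.gather(lookup_document(), run_query())
    return doc, results

doc, results = asyncio.run(render_response())
print(doc["id"], len(results))
```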

Do not use offsets. Instead, use
cursors. Using an offset only avoids
returning the skipped documents to your application, but these documents are
still retrieved internally. The skipped documents affect the latency of the
query, and your application is billed for the read operations required to
retrieve them.
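The difference can be illustrated with an in-memory stand-in for an ordered collection. Firestore's cursor methods (such as `start_after`) resume a query at a position in the index; the linear scan below only emulates that positioning for the sketch, whereas an offset would force the backend to read every skipped document.

```python
# In-memory stand-in for a collection ordered by document ID.
docs = [{"id": f"doc{i:03d}", "n": i} for i in range(10)]

def query_page(after_id=None, limit=4):
    """Cursor-style paging: resume strictly after the last seen document ID.

    An offset-based query would retrieve (and bill for) all skipped
    documents; a cursor lets the query start at the right position."""
    start = 0
    if after_id is not None:
        # Firestore's start_after positions the query within the ordered
        # index; here we emulate it with a scan over the sorted list.
        start = next(i for i, d in enumerate(docs) if d["id"] == after_id) + 1
    return docs[start:start + limit]

page1 = query_page()
page2 = query_page(after_id=page1[-1]["id"])
print([d["id"] for d in page2])  # -> ['doc004', 'doc005', 'doc006', 'doc007']
```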

Realtime updates

For the best snapshot listener performance, keep your
documents small and control the read rate of your clients. The following
recommendations provide guidelines for maximizing performance. Exceeding
these recommendations can result in increased notification latency.

Reduce snapshot listener churn rate: no more than 1 additional snapshot
listener every 30 seconds.

Ideally, your application should set up all the required
snapshot listeners soon after opening a connection to
Cloud Firestore. After setting up your initial snapshot
listeners, you should avoid quickly adding or removing
snapshot listeners in the same connection.

To ensure data consistency, Cloud Firestore needs to prime
each new snapshot listener from its source data and then catch up to
new changes. Depending on your database's write rate, this can
be an expensive operation.

Your snapshot listeners can experience increased latency if you add
or remove snapshot listeners faster than 1 every 30 seconds.

Limit snapshot listeners per client: keep the number of snapshot listeners
per client under 100.

Limit the collection write rate: keep the rate of write operations for an
individual collection under 1,000 operations/second.

Limit the individual client push rate: keep the rate of documents the database
pushes to an individual client under 1 document/second.

Limit the global client push rate: keep the rate of documents the database
pushes to all clients under 1,000,000 documents/second.

Limit the individual document payload: keep the maximum document size
downloaded by an individual client under 10 KiB/second.

Limit the global document payload: keep the maximum document size downloaded
across all clients under 1 GiB/second.

Designing for scale

The following best practices describe how to avoid situations that
create contention issues.

Updates to a single document

You should not update a single document more than once per second. If you update
a document too quickly, then your application will experience contention,
including higher latency, timeouts, and other errors.

Note: Write rates to a single document can sometimes
exceed the one-per-second limit, so load tests might not reveal this problem.

High read, write, and delete rates to a narrow document range

Avoid high read or write rates to lexicographically close documents, or your
application will experience contention errors. This issue is known as
hotspotting, and your application can experience hotspotting if it does any of
the following:

Creates new documents at a very high rate and allocates its own
monotonically increasing IDs.

Cloud Firestore allocates document IDs using a scatter algorithm. You
should not encounter hotspotting on writes if you create new documents using
automatic document IDs.

Creates new documents at a high rate in a collection with few documents.

Creates new documents with a monotonically increasing field, like a
timestamp, at a very high rate.

Deletes documents in a collection at a high rate.

Writes to the database at a very high rate
without gradually increasing traffic.
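As noted above, automatic document IDs avoid write hotspots because they scatter writes across the key range. The sketch below mimics the random-ID scheme used by the Firestore client libraries (20 characters drawn from letters and digits); the exact alphabet and length are an assumption and may differ between library versions.

```python
import secrets
import string

# Characters assumed for auto-IDs: ASCII letters plus digits.
ALPHABET = string.ascii_letters + string.digits

def auto_id(length: int = 20) -> str:
    """Generate a scattered, Firestore-style random document ID."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Random IDs spread writes across the key range, unlike sequential IDs
# such as user0001, user0002, ... which concentrate writes in one spot.
doc_id = auto_id()
print(len(doc_id))  # -> 20
```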

Ramping up traffic

You should gradually ramp up traffic to new collections or lexicographically
close documents to give Cloud Firestore sufficient time to prepare
documents for increased traffic. We recommend starting with a maximum of 500
operations per second to a new collection and then increasing traffic by 50%
every 5 minutes; this is called the "500/50/5" rule. For instance, following
this ramp-up schedule, you can grow your read traffic to about 740K operations
per second after 90 minutes. You can similarly ramp up your write traffic, but
keep in mind the Cloud Firestore Standard Limits. Be sure that operations are
distributed relatively evenly throughout the key range.
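The 500/50/5 rule is straightforward to compute: the allowed rate grows by a factor of 1.5 every 5 minutes. This small helper reproduces the 90-minute figure quoted above.

```python
def ramp_schedule(minutes: int, base: int = 500, growth: float = 1.5,
                  interval: int = 5) -> int:
    """Maximum operations/second allowed by the 500/50/5 rule.

    Start at `base` ops/s and grow by 50% every `interval` minutes."""
    steps = minutes // interval
    return int(base * growth ** steps)

for m in (0, 30, 60, 90):
    print(m, ramp_schedule(m))
# After 90 minutes: 18 increases of 50%, 500 * 1.5**18 ~= 738,945 ops/s
```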

Migrating traffic to a new collection

Gradual ramp up is particularly important if you migrate app traffic from one
collection to another. A simple way to handle this migration is to read from the
old collection, and if the document does not exist, then read from the new
collection. However, this could cause a sudden increase of traffic to
lexicographically close documents in the new collection. Cloud Firestore
may be unable to efficiently prepare the new collection for increased traffic,
especially when it contains few documents.

A similar problem can occur if you change the document IDs of many documents
within the same collection.

The best strategy for migrating traffic to a new collection depends on your data
model. Below is an example strategy known as parallel reads. You will need to
determine whether this strategy is effective for your data; an
important consideration will be the cost impact of parallel operations during
the migration.

Parallel reads

To implement parallel reads as you migrate traffic to a new collection, read
from the old collection first. If the document is missing, then read from the
new collection. A high rate of reads of non-existent documents can lead to
hotspotting, so be sure to gradually increase load to the new
collection. A better strategy is to copy the old document to the new collection
and then delete the old document. Ramp up parallel reads gradually to ensure that
Cloud Firestore can handle traffic to the new collection.
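The read-with-fallback logic can be sketched with plain dictionaries standing in for the two collections; in production, each lookup would be a `collection.document(id).get()` call against Firestore.

```python
# Fake document stores standing in for two collections during a migration.
old_collection = {"u1": {"name": "Ada"}, "u2": {"name": "Lin"}}
new_collection = {"u3": {"name": "Grace"}}

def parallel_read(doc_id):
    """Read from the old collection first; fall back to the new one."""
    doc = old_collection.get(doc_id)
    if doc is not None:
        return doc
    # A high rate of misses here reads non-existent documents in the new
    # collection, which is why load must be ramped up gradually.
    return new_collection.get(doc_id)

print(parallel_read("u1"))  # -> {'name': 'Ada'}   (still in the old collection)
print(parallel_read("u3"))  # -> {'name': 'Grace'} (already migrated)
```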

A possible strategy for gradually ramping up reads or writes to a new collection
is to use a deterministic hash of the user ID to select a random percentage of
users attempting to write new documents. Be sure that the result of the user
ID hash is not skewed either by your function or by user behavior.
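One way to implement such a deterministic selection is to hash the user ID into a fixed bucket in [0, 100) and admit users whose bucket falls below the current rollout percentage. A stable hash such as SHA-256 is used rather than Python's built-in `hash()`, which is randomized per process; raising the percentage only ever adds users, so a user admitted at 10% stays admitted at 20%.

```python
import hashlib

def in_rollout(user_id: str, percent: int) -> bool:
    """Deterministically place a user in the first `percent`% of a rollout."""
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:8], "big") % 100  # stable bucket in [0, 100)
    return bucket < percent

# Ramp up by raising `percent` over time; each user's bucket never changes.
users = [f"user-{i}" for i in range(1000)]
share = sum(in_rollout(u, 10) for u in users) / len(users)
print(round(share, 2))  # roughly 0.10 for a well-distributed hash
```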

Meanwhile, run a batch job that copies all your data from the old documents to
the new collection. Your batch job should avoid writes to sequential document
IDs in order to prevent hotspots. When the batch job finishes, you can read only
from the new collection.

A refinement of this strategy is to migrate small batches of users at a time.
Add a field to the user document which tracks migration status of that user.
Select a batch of users to migrate based on a hash of the user ID. Use
a batch job to migrate documents for that batch of users, and use
parallel reads for users in the
middle of migration.

Note that you cannot easily roll back unless you do dual writes of both the old
and new entities during the migration phase. This would increase your
Cloud Firestore costs.