You're viewing the legacy docs. They are deprecated as of May 18, 2016.

REST Guide

Understanding Data

It's a JSON Tree

All data is stored as JSON objects. There are no tables or records.
When we add data to the JSON tree, it becomes a key in the existing
JSON structure. For example, if we added a child named widgets
under users/mchen/, our data looks as follows:
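As a sketch of the resulting structure — the widget values here are hypothetical, chosen only to illustrate the shape of the tree:

```javascript
// Hypothetical contents of the database after adding a "widgets"
// child under users/mchen/. The widget entries are illustrative.
const data = {
  users: {
    mchen: {
      name: "Mary Chen",
      widgets: {
        one: "the first widget",
        two: "the second widget"
      }
    }
  }
};
```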

Referencing a Firebase URL

To read and write Firebase data through the REST API, we include a URL to our Firebase database in the cURL request. This URL is where all of our data is stored. In this example, we'll use the URL https://docs-examples.firebaseio.com/rest/data.

Firebase also provides an admin interface, which displays a visual representation of the
data and provides tools for simple administrative tasks. This is referred to as the App Dashboard.
All the data in this guide is stored in the docs-examples database; a read-only version of the
App Dashboard for docs-examples can be viewed by opening the database URL in a browser.

It's possible to directly access child nodes in the data as well. For example, to point to
Mary Chen's name, simply append users/mchen/name to the URL:
https://docs-examples.firebaseio.com/rest/data/users/mchen/name
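As a sketch of how such child URLs are formed — note that REST request URLs additionally end in a .json suffix:

```javascript
// Build a REST request URL for a child node by appending its path to
// the database URL, plus the .json suffix the REST API expects.
function childUrl(base, path) {
  return base.replace(/\/$/, '') + '/' + path + '.json';
}

childUrl('https://docs-examples.firebaseio.com/rest/data', 'users/mchen/name');
// → 'https://docs-examples.firebaseio.com/rest/data/users/mchen/name.json'
```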

However, to help people who are storing arrays in the database, when data is read using val()
or via the REST API, if the data looks like
an array, the server will render it as an array. Specifically, if all of the keys are integers,
and more than half of the keys between 0 and the maximum key in the object have non-empty values,
then the data is treated as an array. This latter condition is important to keep in mind.
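The heuristic above can be sketched as a function — a simplified model for intuition, not Firebase's actual implementation:

```javascript
// Simplified model of the server's array-coercion rule: if every key
// is a non-negative integer, and more than half of the slots between
// 0 and the maximum key are filled, the object is rendered as an array.
function looksLikeArray(obj) {
  const keys = Object.keys(obj);
  if (keys.length === 0) return false;
  if (!keys.every(k => /^\d+$/.test(k))) return false;
  const maxKey = Math.max(...keys.map(Number));
  // more than half of the (maxKey + 1) slots must be occupied
  return keys.length > (maxKey + 1) / 2;
}

looksLikeArray({0: 'a', 1: 'b', 2: 'c', 3: 'd', 4: 'e'}); // true
looksLikeArray({2: 'c', 4: 'e'});                         // false
```

This matches the example below: deleting a, b, and d leaves only two of the five keys between 0 and 4, so the object is no longer rendered as an array.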

// we send this
['a', 'b', 'c', 'd', 'e']
// Firebase stores this
{0: 'a', 1: 'b', 2: 'c', 3: 'd', 4: 'e'}
// since the keys are numeric and sequential,
// if we query the data, we get this
['a', 'b', 'c', 'd', 'e']
// however, if we then delete a, b, and d,
// only two of the five keys between 0 and 4 remain, so
// we do not get back an array
{2: 'c', 4: 'e'}

It's not currently possible to change or prevent this behavior. Hopefully understanding it will make it
easier to see what one can and can't do when storing array-like data.

Why not just provide full array support? Since array indices are not permanent, unique IDs,
concurrent real-time editing will always be problematic.

Consider, for example, three users simultaneously updating an array on a remote service. If user A
attempts to move the record at key 2, user B attempts to remove it, and user C attempts to change
its value, the results could be disastrous. Among the many ways this could fail, here's one:

// starting data
['a', 'b', 'c', 'd', 'e']
// record at key 2 moved to position 5 by user A
// record at key 2 is removed by user B
// record at key 2 is updated by user C to foo
// what ideally should have happened
['a', 'b', 'd', 'e']
// what actually happened
['a', 'c', 'foo', 'b']

So when is it okay to use an array? If all of the following are true, it's okay to store the
array in Firebase:

Only one client is capable of writing to the data at a time

To remove keys, we save the entire array instead of using .remove()

We take extra care when referring to anything by array index (a mutable key)
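As a sketch of the second point — removing an element by rewriting the whole array, rather than deleting a single key. The PUT itself is elided; only the local bookkeeping is shown:

```javascript
// Remove an element by index locally, then save the ENTIRE resulting
// array back to the database (e.g. with an HTTP PUT), instead of
// deleting one key and leaving a gap in the indices.
function removeAt(arr, index) {
  return arr.filter((_, i) => i !== index);
}

removeAt(['a', 'b', 'c', 'd', 'e'], 2);
// → ['a', 'b', 'd', 'e'] — this whole array is what gets saved
```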

Backups and Restores

Firebase performs automated backups of all Firebase databases daily. The backups are stored for 60 days at an
off-site facility. Since these backups are done at the hardware level, they do not affect your bandwidth usage
or performance. These backups are primarily for disaster recovery, but can be made available to developers on
a case-by-case basis for purposes of emergency restores.

Firebase also offers optional, private backups
to a Google Cloud Storage (GCS) bucket or an Amazon Simple Storage Service (S3) bucket for databases which have
upgraded to the Bonfire, Blaze, or Inferno plan. Since these backups are done at the hardware level, they do
not count against your bandwidth usage and do not affect the performance of the database. Email
firebase-support@google.com to enable this feature for your database.

It is also possible to create manual backups via the REST API. For databases with less than 200MB of data, this can
be done by simply requesting the entire database using the root URL. For larger instances, you should break up
your data by path or by key and retrieve it in smaller chunks.

Keep in mind that backing up data via the REST API does count against your bandwidth usage, and it can affect
performance. Backups of large data (gigabytes) should be spread over a large time frame to reduce the impact on
clients connected to the database.

For more information about chunking data and creating backups of big data, see the REST API's
query parameters.
Used together, the startAt, limitToFirst, and
shallow=true parameters make it possible to
index keys for any amount of data and retrieve it in manageable segments.
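As a sketch of one possible chunking workflow, the hypothetical helpers below only build the request URLs: one shallow request to list top-level keys, then paginated requests for bounded slices of data. orderBy="$key" is an assumption here, since the REST API expects an orderBy parameter whenever startAt is used:

```javascript
// Hypothetical URL builders for chunked backup via the REST API.
// shallowUrl lists only the top-level keys at a location; chunkUrl
// fetches a bounded slice of children starting at a given key.
function shallowUrl(base) {
  return base.replace(/\/$/, '') + '/.json?shallow=true';
}

function chunkUrl(base, startKey, limit) {
  return base.replace(/\/$/, '') + '/.json' +
    '?orderBy=' + encodeURIComponent('"$key"') +
    '&startAt=' + encodeURIComponent('"' + startKey + '"') +
    '&limitToFirst=' + limit;
}

shallowUrl('https://docs-examples.firebaseio.com/rest/data');
// → 'https://docs-examples.firebaseio.com/rest/data/.json?shallow=true'
```

The last key of each chunk's response seeds the next request's startAt, so arbitrarily large data can be walked in fixed-size segments.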