Abstract

This document defines APIs for a database of records holding
simple values and hierarchical objects. Each record consists of a key
and some value. Moreover, the database maintains indexes over records
it stores. An application developer directly uses an API to locate
records either by their key or by using an index. A query language can
be layered on this API. An indexed database can be implemented using a
persistent B-tree data structure.

Status of this document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

This document was published by the Web Platform Working Group as an Editor's Draft. This document is intended to become a W3C Recommendation.

Publication as an Editor's Draft does not imply endorsement by the W3C Membership. This is a draft document and may
be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite
this document as other than work in progress.

This is the Third Edition of Indexed Database API.
The First Edition became a W3C Recommendation on 8 January 2015.
The Second Edition became a W3C Recommendation on 30 January 2018.

1. Introduction

User agents need to store large numbers of objects locally in order to
satisfy off-line data requirements of Web applications. [WEBSTORAGE] is useful for storing pairs of keys and their corresponding values.
However, it does not provide in-order retrieval of keys, efficient
searching over values, or storage of duplicate values for a key.

This specification provides a concrete API to perform advanced
key-value data management that is at the heart of most sophisticated
query processors. It does so by using transactional databases to store
keys and their corresponding values (one or more per key), and
providing a means of traversing keys in a deterministic order. This is
often implemented through the use of persistent B-tree data structures
that are considered efficient for insertion and deletion as well as
in-order traversal of very large numbers of data records.

In the following example, the API is used to access a "library"
database that holds books stored by their "isbn" attribute.
Additionally, an index is maintained on the "title" attribute of the
objects stored in the object store. This index can be used to look up
books by title, and enforces a uniqueness constraint. Another index is
maintained on the "author" attribute of the objects, and can be used
to look up books by author.

A connection to the database is opened. If the "library" database did
not already exist, it is created and an event handler creates the
object store and indexes. Finally, the opened connection is saved for
use in subsequent examples.
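The code for this example is not reproduced in this copy; a sketch consistent with the description follows. The object store name "books" and the index names "by_title" and "by_author" are illustrative choices, not requirements of the API.

```javascript
const request = indexedDB.open("library");
let db;

request.onupgradeneeded = function () {
  // The database did not previously exist, so create the object store and indexes.
  const db = request.result;
  const store = db.createObjectStore("books", { keyPath: "isbn" });
  store.createIndex("by_title", "title", { unique: true });
  store.createIndex("by_author", "author");
};

request.onsuccess = function () {
  // Save the opened connection for use in subsequent examples.
  db = request.result;
};
```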

A single database can be used by multiple clients (pages and workers)
simultaneously; transactions ensure they don’t clash while reading and writing.
If a new client wants to upgrade the database (via the upgradeneeded event), it cannot do so until all other clients close their connections to the
current version of the database.

To avoid blocking a new client from upgrading, clients can listen for the versionchange event. This fires when another client wants to upgrade the
database. To allow the upgrade to continue, react to the versionchange event by doing
something that ultimately closes this client’s connection to the database.

One way of doing this is to reload the page:

db.onversionchange = function () {
  // First, save any unsaved data:
  saveUnsavedData().then(function () {
    // If the document isn’t being actively used, it could be appropriate to reload
    // the page without the user’s interaction.
    if (!document.hasFocus()) {
      // Reloading will close the database, and also reload with the new JavaScript
      // and database definitions.
      location.reload();
    } else {
      // If the document has focus, it can be too disruptive to reload the page.
      // Maybe ask the user to do it manually:
      displayMessage("Please reload this page for the latest version.");
    }
  });
};

function saveUnsavedData() {
  // How you do this depends on your app.
}

function displayMessage() {
  // Show a non-modal message to the user.
}

Another way is to call the connection's close() method. However, you need to make
sure your app is aware of this, as subsequent attempts to access the database
will fail.

db.onversionchange = function () {
  saveUnsavedData().then(function () {
    db.close();
    stopUsingTheDatabase();
  });
};

function stopUsingTheDatabase() {
  // Put the app into a state where it no longer uses the database.
}

The new client (the one attempting the upgrade) can use the blocked event to
detect if other clients are preventing the upgrade from happening. The blocked event fires if other clients still hold a connection to the database after their versionchange events have fired.
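For example, the upgrading client might register a handler like this (displayMessage is the helper assumed in the earlier example):

```javascript
const request = indexedDB.open("library", 4);

request.onblocked = function () {
  // Another client did not close its connection after its versionchange event.
  displayMessage("This app needs to update; please close its other open tabs.");
};
```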

The user will only see the above message if another client fails to disconnect
from the database. Ideally the user will never see this.

2. Constructs

A name is a string equivalent to a DOMString;
that is, an arbitrary sequence of 16-bit code units of any length,
including the empty string. Names are always compared as
opaque sequences of 16-bit code units.

As a result, name comparison is sensitive to variations in case
as well as other minor variations such as normalization form, the
inclusion or omission of controls, and other variations in Unicode
text. [Charmod-Norm]

If an implementation uses a storage mechanism which does not support
arbitrary strings, the implementation can use an escaping mechanism
or something similar to map the provided name to a string that it
can store.

A sorted name list is a list containing names sorted in ascending order by 16-bit code unit.


This matches the default sort order of Array.prototype.sort on an Array of Strings. This ordering compares the 16-bit code units in each
string, producing a highly efficient, consistent, and deterministic
sort order. The resulting list will not match any particular
alphabet or lexicographical order, particularly for code points
represented by a surrogate pair.
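This ordering can be observed directly. In the snippet below, U+1F600 sorts before U+FF5E even though its code point is higher, because it is encoded as the surrogate pair 0xD83D 0xDE00 and 0xD83D is less than 0xFF5E:

```javascript
// Default sort compares strings by 16-bit code units, not code points.
const names = ["\uFF5E", "b", "\u{1F600}", "a"];
names.sort();
// names is now ["a", "b", "\u{1F600}", "\uFF5E"]: the emoji (code point
// U+1F600) sorts before U+FF5E because its first code unit is 0xD83D.
```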

2.1.1. Database Connection

Script does not interact with databases directly. Instead,
script has indirect access via a connection.
A connection object can be used to manipulate the objects of
that database. It is also the only way to obtain a transaction for that database.

When a connection is initially created it is in an opened
state. The connection can be closed through several means.
If the execution context where the connection was created is
destroyed (for example due to the user navigating away from that
page), the connection is closed. The connection can also be closed
explicitly using the steps to close a database connection. When
the connection is closed the close pending flag is always set if
it hasn’t already been.

A connection may be closed by a user agent in exceptional
circumstances, for example due to loss of access to the file system, a
permission change, or clearing of the origin’s storage. If this occurs
the user agent must run the steps to close a database
connection with the connection and with the forced flag set.

A versionchange event will be fired at an open connection if an attempt is made to upgrade or delete the database. This gives the connection the opportunity to close
to allow the upgrade or delete to proceed.

2.2. Object Store

An object store is the primary storage mechanism for
storing data in a database.

An object store has a list of records which hold the
data stored in the object store. Each record consists of a key and a value. The list is sorted according to key in ascending order. There can never be multiple records in a given object
store with the same key.

An object store has a name, which is a name.
At any one time, the name is unique
within the database to which it belongs.

An object store optionally has a key path. If the
object store has a key path it is said to use in-line keys.
Otherwise it is said to use out-of-line keys.

2.3. Values

Each record is associated with a value. User agents must
support any serializable object. This includes simple types
such as String primitive values and Date objects as well as Object and Array instances, File objects, Blob objects, ImageData objects, and so on. Record values are
stored and retrieved by value rather than by reference; later changes
to a value have no effect on the record stored in the database.

2.4. Keys

In order to efficiently retrieve records stored in an indexed
database, each record is organized according to its key.

A key has an associated type which is one of: number, date, string, binary,
or array.

A key also has an associated value, which will
be either:
an unrestricted double if type is number or date,
a DOMString if type is string,
a list of octets if type is binary,
or a list of other keys if type is array.

As a result of the above rules, negative infinity is the lowest
possible value for a key. Number keys are less than date keys. Date keys are less than string keys. String keys are less than binary keys. Binary keys are less than array keys.
There is no highest possible key value.
This is because an array of any candidate highest key followed by another key is even higher. Members of binary keys are compared as unsigned octet values
(in the range [0, 255]) rather than signed byte values (in the range
[-128, 127]).
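The ordering rules above can be sketched as a comparison function. This is an illustrative model, not the specification's algorithm; representing keys as plain {type, value} objects is an assumption of the sketch.

```javascript
// Illustrative model: a key is {type, value}; types are ordered
// number < date < string < binary < array.
const typeOrder = { number: 0, date: 1, string: 2, binary: 3, array: 4 };

function compareKeys(a, b) {
  if (a.type !== b.type) {
    return typeOrder[a.type] < typeOrder[b.type] ? -1 : 1;
  }
  if (a.type === "array") {
    // Arrays compare element by element; a shorter prefix sorts first.
    const len = Math.min(a.value.length, b.value.length);
    for (let i = 0; i < len; i++) {
      const c = compareKeys(a.value[i], b.value[i]);
      if (c !== 0) return c;
    }
    return Math.sign(a.value.length - b.value.length);
  }
  if (a.type === "binary") {
    // Octets compare as unsigned values in [0, 255].
    const len = Math.min(a.value.length, b.value.length);
    for (let i = 0; i < len; i++) {
      if (a.value[i] !== b.value[i]) return a.value[i] < b.value[i] ? -1 : 1;
    }
    return Math.sign(a.value.length - b.value.length);
  }
  // number, date, and string values compare directly.
  return a.value < b.value ? -1 : a.value > b.value ? 1 : 0;
}
```

Under this model there is no highest key: for any key k, the array key consisting of k followed by another key sorts higher still.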

2.5. Key Path

A key path is a string or list of strings
that defines how to extract a key from a value. A valid key path is one of:

An empty string.

An identifier, which is a string matching the IdentifierName production from the ECMAScript Language
Specification [ECMA-262].

A string consisting of two or more identifiers separated
by periods (U+002E FULL STOP).

A non-empty list containing only strings
conforming to the above requirements.
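As a sketch (a hypothetical helper; the specification's full algorithm also handles special cases such as the built-in properties of Blob and Array), evaluating a single-string key path amounts to splitting on periods and walking the value:

```javascript
// Hypothetical helper: evaluate a string key path against a value.
// Returns undefined if any identifier along the path is missing.
function evaluateKeyPath(value, keyPath) {
  if (keyPath === "") return value; // an empty key path yields the value itself
  let current = value;
  for (const identifier of keyPath.split(".")) {
    if (current === null || current === undefined) return undefined;
    current = Object(current)[identifier];
  }
  return current;
}
```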

2.6. Index

An index is a specialized persistent key-value storage and has a referenced object store. The
index has a list of records which hold the data stored in
the index. The records in an index are automatically populated
whenever records in the referenced object store are inserted,
updated or deleted. There can be several indexes referencing the
same object store, in which case changes to the object store cause all
such indexes to get updated.

The values in the index’s records are always values of keys in the index’s referenced object store. The keys are derived from
the referenced object store’s values using a key path.
If a given record with key X in the object store referenced by
the index has the value A, and evaluating the index’s key path on A yields the result Y, then the index will contain a record
with key Y and value X.

For example, if an index’s referenced object store contains a
record with the key 123 and the value { name: "Alice", title: "CEO" }, and the index’s key path is "name" then the index would contain a record with
the key "Alice" and the value 123.

Records in an index are said to have a referenced value.
This is the value of the record in the index’s referenced object store
which has a key equal to the index’s record’s value. So in the example
above, the record in the index whose key is Y and value is X has a referenced value of A.

In the preceding example, the record in the index with key
"Alice" and value 123 would have a referenced value of { name: "Alice", title: "CEO" }. Each record in an index references one and only one record in the
index’s referenced object store. However there can be multiple
records in an index which reference the same record in the object
store. And there can also be no records in an index which reference
a given record in an object store.

The records in an index are always sorted according to the record's key. However unlike object stores, a given index can
contain multiple records with the same key. Such records are
additionally sorted according to the index's record's value
(meaning the key of the record in the referenced object store).

An index has a unique flag. When this flag is
set, the index enforces that no two records in the index have
the same key. If an attempt is made to insert or modify a record in
the index’s referenced object store such that evaluating the
index’s key path on the record’s new value yields a result which
already exists in the index, then the attempted modification to the
object store fails.

2.7. Transactions

A transaction is used to interact
with the data in a database. Whenever data is read or written
to the database it is done by using a transaction.

Transactions offer some protection from application and system
failures. A transaction may be used to store multiple data
records or to conditionally modify certain data records. A transaction represents an atomic and durable set of data access
and data mutation operations.

All transactions are created through a connection, which is the
transaction’s connection.

A transaction has a scope that determines the object stores with which the transaction may interact. A
transaction’s scope remains fixed for the lifetime of that
transaction.

A transaction has a mode that determines which types
of interactions can be performed upon that transaction. The mode is set when the transaction is created and remains fixed for the life
of the transaction. A transaction's mode is one of the
following:

"readonly"

The transaction is only allowed to read data. No modifications can
be done by this type of transaction. This has the advantage that
several read-only transactions can run at the same time even
if their scopes are overlapping, i.e. if they are using the
same object stores. This type of transaction can be created any
time once a database has been opened.

"readwrite"

The transaction is allowed to read, modify and delete data from
existing object stores. However object stores and indexes can’t be
added or removed. Multiple "readwrite" transactions
can’t run at the same time if their scopes are overlapping
since that would mean that they can modify each other’s data in
the middle of the transaction. This type of transaction can be
created any time once a database has been opened.

"versionchange"

The transaction is allowed to read, modify and delete data from
existing object stores, and can also create and remove object
stores and indexes. It is the only type of transaction that can do
so. This type of transaction can’t be manually created, but
instead is created automatically when an upgradeneeded event is fired.
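For illustration, assuming an open connection db with object stores named "books" and "authors", the first two modes are requested when creating a transaction:

```javascript
// A read-only transaction over one object store:
const readTx = db.transaction("books", "readonly");

// A read/write transaction whose scope covers two object stores:
const writeTx = db.transaction(["books", "authors"], "readwrite");

// A "versionchange" transaction cannot be created directly; it is supplied
// to the upgradeneeded handler, e.g. via the open request's transaction.
```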

A transaction has an active flag, which determines
if new requests can be made against the transaction. A
transaction is said to be active if its active flag is set.

A transaction is created with a scope and a mode.
When a transaction is created its active flag is initially set.

The implementation must allow requests to be placed against the transaction whenever the active flag is set. This
is the case even if the transaction has not yet been started.
Until the transaction is started the implementation must not
execute these requests; however, the implementation must keep
track of the requests and their order. Requests may be placed
against a transaction only while that transaction is active.
If an attempt is made to place a request against a transaction
when that transaction is not active, the implementation must
reject the attempt by throwing a "TransactionInactiveError" DOMException.

Once an implementation is able to enforce the constraints defined
for the transaction scope and mode, defined below, the
implementation must queue a task to start the transaction asynchronously.

Once the transaction has been started the implementation can
start executing the requests placed against the transaction.
Unless otherwise defined, requests must be executed in the order
in which they were made against the transaction. Likewise, their
results must be returned in the order the requests were placed
against a specific transaction. There is no guarantee about the
order that results from requests in different transactions are
returned. Similarly, the transaction modes ensure that two
requests placed against different transactions can execute in any
order without affecting what resulting data is stored in the
database.

A transaction can be aborted at any time before it is finished, even if the transaction
isn’t currently active or hasn’t yet started. When a
transaction is aborted the implementation must undo (roll back)
any changes that were made to the database during that
transaction. This includes both changes to the contents of object stores as well as additions and removals of object
stores and indexes.

A transaction can fail for reasons not tied to a particular request, for example due to IO errors when committing the
transaction, or due to running into a quota limit where the
implementation can’t tie exceeding the quota to a particular
request. In this case the implementation must run the steps to abort a transaction using the transaction as transaction and the appropriate error type as error. For example, if quota
was exceeded then a "QuotaExceededError" DOMException should be used as error, and if an IO error happened, an "UnknownError" DOMException should be
used as error.

When a transaction has been started and it can no longer become active, the implementation must attempt to commit it, as long as the
transaction has not been aborted. This usually happens after
all requests placed against the transaction have been executed and
their returned results handled, and no new requests have been
placed against the transaction. When a transaction is committed,
the implementation must atomically write any changes to the database made by requests placed against the transaction. That
is, either all of the changes must be written, or if an error
occurs, such as a disk write error, the implementation must not
write any of the changes to the database. If such an error occurs,
the implementation must abort the transaction by following the
steps to abort a transaction, otherwise it must commit the transaction by following the steps to commit a transaction.

When a transaction is committed or aborted, it
is said to be finished. If a
transaction can’t be finished, for example due to the
implementation crashing or the user taking some explicit action to
cancel it, the implementation must abort the transaction.

Any number of read-only transactions are allowed to run
concurrently, even if the transaction’s scope overlap and
include the same object stores. As long as a read-only
transaction is running, the data that the implementation returns
through requests created with that transaction must remain
constant. That is, two requests to read the same piece of data
must yield the same result both for the case when data is found
and the result is that data, and for the case when data is not
found and a lack of data is indicated.

Similarly, implementations must ensure that a read/write
transaction is only affected by changes to object
stores that are made using the transaction itself. For
example, the implementation must ensure that another transaction
does not modify the contents of object stores in the read/write transaction’s scope. The implementation
must also ensure that if the read/write transaction completes successfully, the changes written to object
stores using the transaction can be committed to the database without merge conflicts. An implementation must
not abort a transaction due to merge conflicts.

If multiple read/write transactions are attempting to access
the same object store (i.e. if they have overlapping scope),
the transaction that was created first must be the transaction
which gets access to the object store first. Due to the
requirements in the previous paragraph, this also means that it is
the only transaction which has access to the object store until
the transaction is finished.

User agents must ensure a reasonable level of fairness across
transactions to prevent starvation. For example, if multiple read-only transactions are started one after another the
implementation must not indefinitely prevent a pending read/write transaction from starting.

To clean up Indexed Database transactions, run these steps.
They will return true if any transactions were cleaned up, or false otherwise.

This behavior is invoked by [HTML]. It ensures that transactions created by a script call
to transaction() are deactivated once the task that
invoked the script has completed. The steps are run at most once for
each transaction.

When a request is made, a new request is returned with its done
flag unset. If a request completes successfully, the done flag is set, the result is set to the result of the request,
and an event with type success is fired at the request.

If an error occurs while performing the operation, the done flag is set, the error is set to the error, and an event with
type error is fired at the request.

Requests are not typically re-used, but there are exceptions. When a cursor is iterated, the success of the iteration is reported
on the same request object used to open the cursor. And when
an upgrade transaction is necessary, the same open
request is used for both the upgradeneeded event and final result of the open operation itself. In some cases,
the request’s done flag will be unset then set again, and the result can change or error could be set instead.

Open requests are processed in a connection queue.
The queue contains all open requests associated with an origin and a name. Requests added to the connection queue are processed in order, and each request must run
to completion before the next request is processed. An open request
may be blocked on other connections, requiring those
connections to close before the request can complete and allow
further requests to be processed.

A cursor has a direction that determines whether it
moves in monotonically increasing or decreasing order of the record keys when iterated, and if it skips duplicated values
when iterating indexes. The direction of a cursor also determines if
the cursor initial position is at the start of its source or at its end. A cursor’s direction is one of the following:

"nextunique"

This direction causes the cursor to be opened at the start of the source. When iterated, the cursor should not yield
records with the same key, but otherwise yield all records, in
monotonically increasing order of keys. For every key with
duplicate values, only the first record is yielded. When the source is an object store or an index with
the unique flag set, this direction has exactly the same
behavior as "next".

"prevunique"

This direction causes the cursor to be opened at the end of the source. When iterated, the cursor should not
yield records with the same key, but otherwise yield all records,
in monotonically decreasing order of keys. For every key with
duplicate values, only the first record is yielded. When the source is an object store or an index with the unique flag set, this direction
has exactly the same behavior as "prev".

A cursor has a position within its range. It is
possible for the list of records which the cursor is iterating over to
change before the full range of the cursor has been iterated.
In order to handle this, cursors maintain their position not as
an index, but rather as a key of the previously returned
record. For a forward iterating cursor, the next time the cursor is
asked to iterate to the next record it returns the record with the
lowest key greater than the one previously
returned. For a backwards iterating cursor, the situation is the opposite,
and it returns the record with the highest key less
than the one previously returned.

For cursors iterating indexes the situation is a little bit more
complicated since multiple records can have the same key and are
therefore also sorted by value. When iterating indexes the cursor also has an object store position, which
indicates the value of the previously found record in
the index. Both position and the object store position are used when finding the next appropriate record.

A cursor has a got value flag. When this flag is unset,
the cursor is either in the process of loading the next value or it
has reached the end of its range. When it is set, it indicates
that the cursor is currently holding a value and that it is ready to
iterate to the next one.

Every object store that uses key generators uses a separate
generator. That is, interacting with one object store never affects
the key generator of any other object store.

Modifying a key generator’s current number is considered part
of a database operation. This means that if the operation fails
and the operation is reverted, the current number is
reverted to the value it had before the operation started. This
applies both to modifications that happen due to the current
number getting increased by 1 when the key generator is used,
and to modifications that happen due to a record being
stored with a key value specified in the call to store the record.

The current number for a key generator never decreases, other
than as a result of database operations being reverted. Deleting a record from an object store never affects the
object store’s key generator. Even clearing all records from an
object store, for example using the clear() method, does not
affect the current number of the object store’s key
generator.

When a record is stored and a key is not specified
in the call to store the record, a key is generated.

A key can be specified both for object stores which use in-line keys, by setting the property on the stored value
which the object store’s key path points to,
and for object stores which use out-of-line keys, by passing
a key argument to the call to store the record.

Only specified keys of type number can affect the current number of the key generator. Keys of type date, array (regardless of the other keys they
contain), binary, or string (regardless of whether
they could be parsed as numbers) have no effect on the current
number of the key generator. Keys of type number with value less than 1 do not affect the current number since they are always lower than the current number.

When the current number of a key generator reaches above the
value 2^53 (9007199254740992), any subsequent attempts to use the
key generator to generate a new key will result in a
"ConstraintError" DOMException. It is still possible to insert records into the object store by specifying an explicit
key, however the only way to use a key generator again for such records
is to delete the object store and create a new one.

This limit arises because integers greater than 9007199254740992
cannot be uniquely represented as ECMAScript Numbers.
As an example, 9007199254740992 + 1 === 9007199254740992 in ECMAScript.
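The boundary can be checked directly in ECMAScript; Number.MAX_SAFE_INTEGER is defined as 2^53 - 1:

```javascript
const limit = 2 ** 53;                    // 9007199254740992
const collision = (limit + 1 === limit);  // true: precision is lost past 2^53
const maxSafe = Number.MAX_SAFE_INTEGER;  // 9007199254740991, i.e. limit - 1
```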

As long as key generators are used in a normal fashion this limit will
not be a problem. If you generate a new key 1000 times per
second, day and night, you won’t run into this limit for over
285000 years.

A practical result of this is that the first key generated for an
object store is always 1 (unless a higher numeric key is inserted
first) and the key generated for an object store is always a positive
integer higher than the highest numeric key in the store. The same key
is never generated twice for the same object store unless a
transaction is rolled back.

Inserting an item with an explicit key affects the key generator if,
and only if, the key is numeric and higher than the last generated
key.
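These rules can be modeled with a small sketch (a hypothetical helper, not part of the API): generating a key returns the current number and increments it, while storing an explicit numeric key k with k greater than or equal to the current number sets the current number to floor(k) + 1.

```javascript
// Hypothetical model of a key generator's current number.
function makeKeyGenerator() {
  let current = 1;
  return {
    generate() {             // used when no key is specified
      const key = current;
      current += 1;
      return key;
    },
    maybeUpdate(key) {       // used when an explicit key is specified
      if (typeof key === "number" && key >= current) {
        current = Math.floor(key) + 1;
      }
    },
  };
}
```

This reproduces the behavior shown in the example that follows: after storing an explicit key of 6.00001, the next generated key is 7.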

store = db.createObjectStore("store1", { autoIncrement: true });
store.put("a");           // will get key 1
store.put("b", 3);        // will use key 3
store.put("c");           // will get key 4
store.put("d", -10);      // will use key -10
store.put("e");           // will get key 5
store.put("f", 6.00001);  // will use key 6.00001
store.put("g");           // will get key 7
store.put("f", 8.9999);   // will use key 8.9999
store.put("g");           // will get key 9
store.put("h", "foo");    // will use key "foo"
store.put("i");           // will get key 10
store.put("j", [1000]);   // will use key [1000]
store.put("k");           // will get key 11

// All of these would behave the same if the object store used a
// keyPath and the explicit key was passed inline in the object.

Aborting a transaction rolls back any increases to the key generator
which happened during the transaction. This is to make all rollbacks
consistent, since rollbacks that happen due to a crash never have a chance
to commit the increased key generator value.

Then the value provided by the key generator is used to populate
the key value. In the example below the key path for
the object store is "foo.bar". The actual object has no
value for the bar property, { foo: {} }.
When the object is saved in the object store the bar property is assigned a value of 1 because that is the next key generated by the key generator.
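A sketch of this scenario (run inside an upgradeneeded handler where db is the connection, since object stores can only be created there):

```javascript
const store = db.createObjectStore("store", { keyPath: "foo.bar", autoIncrement: true });

// The value has no foo.bar property, so the generated key fills it in:
store.put({ foo: {} });
// stored value: { foo: { bar: 1 } }, key: 1
```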

Then the value associated with the key path property is used. The auto-generated key is not used. In the
example below the key path for the object
store is "foo.bar". The actual object has a value of
10 for the bar property, { foo: { bar: 10}
}. When the object is saved in the object store the bar property keeps its value of 10, because that is the
key value.
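A sketch of this case, under the same assumptions as the previous example:

```javascript
const store = db.createObjectStore("store", { keyPath: "foo.bar", autoIncrement: true });

// The value already has foo.bar, so that value is used as the key:
store.put({ foo: { bar: 10 } });
// stored value: { foo: { bar: 10 } }, key: 10
```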

The following example illustrates the scenario when the specified
in-line key is defined through a key
path but there is no property matching it. The value provided by
the key generator is then used to populate the key value and
the system is responsible for creating as many properties as it
requires to satisfy the property dependencies on the hierarchy chain.
In the example below the key path for the object store is "foo.bar.baz". The actual
object has no value for the foo property, { zip: {}
}. When the object is saved in the object store the foo, bar, and baz properties are created each as a child of the other until a value for foo.bar.baz can be assigned. The value for foo.bar.baz is the next key generated by the object
store.
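A sketch of this case:

```javascript
const store = db.createObjectStore("store", { keyPath: "foo.bar.baz", autoIncrement: true });

// No foo property exists, so foo, bar, and baz are created to hold the key:
store.put({ zip: {} });
// stored value: { zip: {}, foo: { bar: { baz: 1 } } }, key: 1
```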

Attempting to store a property on a primitive value will fail and
throw an error. In the first example below the key
path for the object store is "foo". The actual object
is a primitive with the value, 4. Trying to define a
property on that primitive value fails. The same is true for arrays.
Properties are not allowed on an array. In the second example below,
the actual object is an array, [10]. Trying to define a
property on the array fails.

var store = db.createObjectStore("store", { keyPath: "foo", autoIncrement: true });

// The key generation will attempt to create and store the key path
// property on this primitive.
store.put(4);     // will throw DataError

// The key generation will attempt to create and store the key path
// property on this array.
store.put([10]);  // will throw DataError

3. Exceptions

Each of the exceptions used in this document is a DOMException with a specific type. The exception types and
properties such as legacy code value are defined in [WEBIDL].

The table below lists the DOMExceptions used in this
document along with a description of the exception type’s
usage.

"VersionError"

An attempt was made to open a database using a lower version
than the existing version.

Given that multiple Indexed DB operations can throw the same type of
error, and that even a single operation can throw the same type of
error for multiple reasons, implementations are encouraged to
provide more specific messages to enable developers to identify the
cause of errors.

4. API

The API methods return without blocking the calling thread. All
asynchronous operations immediately return an IDBRequest instance. This object does not initially contain any information about
the result of the operation. Once information becomes available, an
event is fired on the request and the information becomes available
through the properties of the IDBRequest instance.

Every method for making asynchronous requests returns an IDBRequest object that communicates back to the requesting
application through events. This design means that any number of
requests can be active on any database at a time.

In the following example, we open a database asynchronously.
Various event handlers are registered for responding to various
situations.
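A sketch of such an example follows, with the factory passed in explicitly (in a page this would be the global indexedDB). The database name "library", version 2, store name "books", and the helper name openLibraryDatabase are illustrative, not taken from this specification:

```javascript
function openLibraryDatabase(factory, onReady) {
  const request = factory.open("library", 2);

  request.onupgradeneeded = function (event) {
    // Runs when the database is created or opened with a higher version.
    const db = request.result;
    if (event.oldVersion < 1) {
      db.createObjectStore("books", { keyPath: "isbn" });
    }
  };

  request.onblocked = function () {
    // Another connection to an older version has not yet closed in
    // response to its versionchange event.
  };

  request.onsuccess = function () {
    onReady(null, request.result); // the result is the connection
  };

  request.onerror = function () {
    onReady(request.error, null); // e.g. a "VersionError" DOMException
  };

  return request;
}
```

Because the events are delivered from tasks queued after the current script completes, calling openLibraryDatabase(indexedDB, callback) registers all four handlers before any of them can fire.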

Database objects are accessed through methods on the IDBFactory interface. A single object implementing this
interface is present in the global scope of environments that support
Indexed DB operations.

Attempts to open a connection to the named database with the specified version. If the database already exists
with a lower version and there are open connections that don’t close in response to a versionchange event, the request will be
blocked until they all close, then an upgrade
will occur. If the database already exists with a higher
version the request will fail. If the request is
successful, the request’s result will
be the connection.

Attempts to delete the named database. If the
database already exists and there are open connections that don’t close in response to a versionchange event, the request will be
blocked until they all close. If the request
is successful, the request’s result will be null.

Returns a promise which resolves to a list of objects giving a snapshot
of the names and versions of databases within the origin.

This API is intended for web applications to introspect the use of databases,
for example to clean up from earlier versions of a site’s code. Note that
the result is a snapshot; there are no guarantees about the sequencing of the
collection of the data or the delivery of the response with respect to requests
to create, upgrade, or delete databases by this context or others.

Let result be the result of running the steps to open a database, with origin, name, version if given and undefined
otherwise, and request.

What happens if version is not given?
If version is not given and a database with that name already exists, a connection will be opened
without changing the version. If version is not given and no database with
that name exists, a new database will be created with version equal to 1.

If the steps above resulted in an upgrade
transaction being run, these steps will run after
that transaction finishes. This ensures that in the
case where another version upgrade is about to happen,
the success event is fired on the connection first so
that the script gets a chance to register a listener
for the versionchange event.

Why aren’t the steps to fire a success event or fire an error event used?
There is no transaction associated with the request (at
this point), so those steps — which activate an
associated transaction before dispatch and deactivate
the transaction after dispatch — do not apply.

Why aren’t the steps to fire a success event or fire an error event used?
There is no transaction associated with the request, so
those steps — which activate an associated
transaction before dispatch and deactivate the
transaction after dispatch — do not apply.

In some implementations it is possible to run into problems after
queuing a task to create the object store once the createObjectStore() method has returned, for example in
implementations where metadata about the newly created object
store is inserted into the database asynchronously, or where the
implementation might need to ask the user for permission for quota
reasons. Such implementations must still create and return an IDBObjectStore object, and once the implementation determines that
creating the object store has failed, it must abort the
transaction using the steps to abort a transaction with the
appropriate error. For example, if creating the object store failed due to quota reasons, a "QuotaExceededError" DOMException must be used as
the error.

The deleteObjectStore(name) method, when invoked, must run these steps:

The returned value is not the same instance that was used when the object store was created. However, if this attribute returns
an object (specifically an Array), it returns the same object
instance every time it is inspected. Changing the properties of the
object has no effect on the object store.

Why create a copy of the value?
The value is serialized when stored. Treating it as a copy
here allows other algorithms in this specification to treat it as
an ECMAScript value, but implementations can optimize this
if the difference in behavior is not observable.

Why create a copy of the value?
The value is serialized when stored. Treating it as a copy
here allows other algorithms in this specification to treat it as
an ECMAScript value, but implementations can optimize this
if the difference in behavior is not observable.

The query parameter may be a key or an IDBKeyRange identifying the record to be retrieved. If a
range is specified, the method retrieves the first existing value in
that range.

This method produces the same result if a record with the given key
doesn’t exist as when a record exists, but has undefined as value.
If you need to tell the two situations apart, you can use openCursor() with the same key. This will return
a cursor with undefined as value if a record exists, or no cursor if
no such record exists.
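The technique in the note above can be sketched as a hypothetical helper (the name getWithPresence is illustrative; store is assumed to be an IDBObjectStore):

```javascript
function getWithPresence(store, key, callback) {
  const request = store.openCursor(key);
  request.onsuccess = function () {
    const cursor = request.result;
    if (!cursor) {
      callback({ present: false });                       // no record with this key
    } else {
      callback({ present: true, value: cursor.value });   // value may be undefined
    }
  };
}
```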

The query parameter may be a key or an IDBKeyRange identifying the records to be retrieved. If null or not given,
an unbounded key range is used. If count is specified and
there are more than count records in range, only the first count will be retrieved.

The getAllKeys(query, count) method, when invoked, must run these steps:

The query parameter may be a key or an IDBKeyRange identifying the keys of the records to be retrieved. If null or not
given, an unbounded key range is used. If count is specified
and there are more than count keys in range, only the first count will be retrieved.

The index that is requested to be created can contain constraints on
the data allowed in the index’s referenced object store, such
as requiring uniqueness of the values referenced by the index’s key path. If the referenced object store already contains data
which violates these constraints, this must not cause the
implementation of createIndex() to throw an
exception or affect what it returns. The implementation must still
create and return an IDBIndex object, and the implementation must queue a task to abort the upgrade transaction which was
used for the createIndex() call.

This method synchronously modifies the indexNames property on the IDBObjectStore instance on which it was called.
Although this method does not return an IDBRequest object, the
index creation itself is processed as an asynchronous request within
the upgrade transaction.

In some implementations it is possible for the implementation to
asynchronously run into problems creating the index after the
createIndex() method has returned, for example in implementations where
metadata about the newly created index is queued up to be inserted
into the database asynchronously, or where the implementation might
need to ask the user for permission for quota reasons. Such
implementations must still create and return an IDBIndex object,
and once the implementation determines that creating the index has
failed, it must abort the transaction using the steps to abort
a transaction with an appropriate error. For example,
if creating the index failed due to quota reasons,
a "QuotaExceededError" DOMException must be used as the error, and if the index can’t be
created due to unique flag constraints, a "ConstraintError" DOMException must be used as the error.

The asynchronous creation of indexes is observable in the following example:
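A sketch of such an example, wrapped in a function for illustration (populate() is assumed to run inside an upgradeneeded handler, where db is the connection and an upgrade transaction is active; the store and index names are hypothetical):

```javascript
function populate(db) {
  const store = db.createObjectStore("store");
  // Both puts are queued as asynchronous requests; neither has
  // executed when createIndex() is called below.
  store.put({ name: "bob" }, 1);
  store.put({ name: "bob" }, 2);
  // Index creation is also processed asynchronously within the
  // upgrade transaction. The second record duplicates "bob", so once
  // the index is created the uniqueness constraint fails and the
  // transaction is aborted; the put requests themselves do not fail.
  store.createIndex("index", "name", { unique: true });
}
```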

At the point where createIndex() is called, neither of the requests has executed. When the second request executes, a
duplicate name is created. Since the index creation is considered an
asynchronous request, the index’s uniqueness constraint does not cause the second request to fail. Instead, the transaction will
be aborted when the index is created and the constraint
fails.

This method synchronously modifies the indexNames property on the IDBObjectStore instance on which it was called.
Although this method does not return an IDBRequest object, the
index destruction itself is processed as an asynchronous request
within the upgrade transaction.

The returned value is not the same instance that was used when the index was created. However, if this attribute returns an
object (specifically an Array), it returns the same object
instance every time it is inspected. Changing the properties of the
object has no effect on the index.

The query parameter may be a key or an IDBKeyRange identifying the record to be retrieved. If a
range is specified, the method retrieves the first existing record in
that range.

This method produces the same result if a record with the given key
doesn’t exist as when a record exists, but has undefined as value.
If you need to tell the two situations apart, you can use openCursor() with the same key. This will return a
cursor with undefined as value if a record exists, or no cursor if
no such record exists.

The query parameter may be a key or an IDBKeyRange identifying the records to be retrieved. If null or not given,
an unbounded key range is used. If count is specified and
there are more than count records in range, only the first count will be retrieved.

The getAllKeys(query, count) method, when invoked, must run these steps:

The query parameter may be a key or an IDBKeyRange identifying the keys of the records to be retrieved. If null or not
given, an unbounded key range is used. If count is specified
and there are more than count keys in range, only the first count will be retrieved.

The source attribute’s getter must
return the source of this cursor. This
attribute never returns null or throws an exception, even if the
cursor is currently being iterated, has iterated past its end, or its transaction is not active.

The key attribute’s getter must
return the result of running the steps to convert a key to a
value with the cursor’s current key. Note that
if this property returns an object (e.g. a Date or Array), it returns the same object instance every time it is
inspected, until the cursor’s key is changed. This
means that if the object is modified, those modifications will be seen
by anyone inspecting the value of the cursor. However modifying such
an object does not modify the contents of the database.

The primaryKey attribute’s getter
must return the result of running the steps to convert a key to a
value with the cursor’s current effective key. Note that if
this property returns an object (e.g. a Date or Array),
it returns the same object instance every time it is inspected, until
the cursor’s effective key is changed. This means that if the
object is modified, those modifications will be seen by anyone
inspecting the value of the cursor. However modifying such an object
does not modify the contents of the database.

The following methods advance a cursor. Once the cursor has
advanced, a success event will be fired at the
same IDBRequest returned when the cursor was opened. The result will be the same cursor if a record was
in range, or undefined otherwise.
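This enables the usual iteration pattern, sketched here as a hypothetical helper (the name collectAll is illustrative): each success event is fired at the same request returned by openCursor(), and its result holds the cursor while records remain.

```javascript
function collectAll(store, done) {
  const results = [];
  const request = store.openCursor();
  request.onsuccess = function () {
    const cursor = request.result;
    if (cursor) {
      results.push(cursor.value);
      cursor.continue(); // queues another success event on the same request
    } else {
      done(results);     // the cursor has iterated past its end
    }
  };
}
```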

Calling this method more than once before new cursor data has been
loaded - for example, calling advance() twice from the
same onsuccess handler - results in an "InvalidStateError" DOMException being thrown on the second call because the cursor’s got value flag has been unset.

Calling this method more than once before new cursor data has been
loaded - for example, calling continue() twice from the
same onsuccess handler - results in an "InvalidStateError" DOMException being thrown on the second call because the cursor’s got value flag has been unset.

The continuePrimaryKey(key, primaryKey) method, when invoked, must run these steps:

Why create a copy of the value?
The value is serialized when stored. Treating it as a copy
here allows other algorithms in this specification to treat it as
an ECMAScript value, but implementations can optimize this
if the difference in behavior is not observable.

The value attribute’s
getter must return the cursor’s current value. Note
that if this property returns an object, it returns the same object
instance every time it is inspected, until the cursor’s value is changed. This means that if the object is
modified, those modifications will be seen by anyone inspecting the
value of the cursor. However modifying such an object does not modify
the contents of the database.

The contents of each list returned by this attribute do not
change, but subsequent calls to this attribute during an upgrade
transaction can return lists with different contents as object stores are created and deleted.

5. Algorithms

5.1. Opening a database

The steps to open a database are as follows.
The algorithm in these steps takes four arguments:
the origin which requested the database to be opened, a
database name, a database version, and a request.

The close event only fires if the connection closes
abnormally, e.g. if the origin’s storage is cleared, or there is
corruption or an I/O error. If close() is called explicitly
the event does not fire.

Even if an exception is thrown from one of the event handlers of
this event, the transaction is still committed since writing the
database changes happens before the event takes place. Only
after the transaction has been successfully written is the complete event fired.

This does not always result in error events being fired;
for example, if a transaction is aborted due to an
error while committing the transaction,
or if it was the last remaining request that failed.

5.7. Running an upgrade transaction

The steps to run an upgrade transaction are as
follows. This algorithm takes three arguments: a connection object
which is used to update the database, a new version to be set
for the database, and a request.

For each index handle handle associated with transaction,
including those for indexes that were created or deleted
during transaction:

If handle’s index was not newly created
during transaction, set handle’s name to
its index's name.

This reverts the value of name returned by related IDBIndex objects. How is this observable?
Although script cannot access an index by using the index() method on an IDBObjectStore instance after the transaction is aborted, it can still have references to IDBIndex instances where the name property can
be queried.

This means that if an error event is fired and any of the event
handlers throw an exception, transaction’s error property is set to an AbortError rather than request’s error, even if preventDefault() is never called.

If the no-overwrite flag was given to these steps and is set, and
a record already exists in store with its key equal to key, then this operation failed with a "ConstraintError" DOMException.
Abort this algorithm without taking any further steps.

If index’s multiEntry flag is unset, or if index key is not an array key, then store a record in index containing index key as its key and key as its value. The
record is stored in index’s list of records such that the list is sorted primarily on the records' keys,
and secondarily on the records' values, in ascending order.

If index’s multiEntry flag is set and index key is
an array key, then for each subkey of the subkeys of index key, store a record in index containing subkey as its key and key as its value. The
records are stored in index’s list of
records such that the list is sorted primarily on the
records' keys, and secondarily on the records' values, in ascending order.

It is valid for there to be no subkeys. In this case
no records are added to the index. Even if any member of subkeys is itself an array key,
the member is used directly as the key for the index record.
Nested array keys are not flattened or "unpacked" to
produce multiple rows; only the outer-most array key is.

Return key.
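The no-overwrite check in these steps can be sketched over a plain Map standing in for the store's list of records (the name storeRecord is illustrative, and a generic Error with its name set stands in for a real DOMException): with the flag set, as for add(), a duplicate key fails with a "ConstraintError"; without it, as for put(), the existing record is overwritten.

```javascript
function storeRecord(records, key, value, noOverwrite) {
  if (noOverwrite && records.has(key)) {
    // A record already exists with this key: the operation fails.
    const err = new Error("A record already exists with this key.");
    err.name = "ConstraintError";
    throw err;
  }
  records.set(key, value); // otherwise store (or overwrite) the record
  return key;
}
```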

6.2. Object Store Retrieval Operations

The steps to retrieve a value from an object store with targetRealm, store and range are as follows:

7. ECMAScript binding

This section defines how key values defined in this specification
are converted to and from ECMAScript values, and how they may be
extracted from and injected into ECMAScript values using key
paths. This section references types and algorithms and uses some
algorithm conventions from the ECMAScript Language Specification. [ECMA-262] Conversions not detailed here are defined in [WEBIDL].

7.1. Extract a key from a value

The steps to extract a key from a value using a key path with value, keyPath and an optional multiEntry flag are as
follows. The result of these steps is a key, invalid, or
failure, or the steps may throw an exception.

7.4. Convert a value to a key

The steps to convert a value to a key are as follows. These
steps take two arguments, an ECMAScript value input, and an optional
set seen. The result of these steps is a key or invalid, or the
steps may throw an exception.

The steps to convert a value to a multiEntry key are as
follows. These steps take one argument, an ECMAScript value input.
The result of these steps is a key or invalid, or the
steps may throw an exception.

Return a new array key with value set to
a list of the members of keys.

Otherwise, return the result of running the steps to convert a
value to a key with argument input.
Rethrow any exceptions.

These steps are similar to those to convert a value to a key, but if the top-level value is an Array then members which
cannot be converted to keys are ignored, and duplicates are removed.

For example, the value [10, 20, null, 30, 20] is
converted to an array key with subkeys 10, 20, 30.
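A simplified sketch of this conversion, restricted to number members for brevity (the real conversion also covers strings, dates, binary data, and nested array keys, which are kept as subkeys rather than flattened):

```javascript
function toMultiEntryKey(input) {
  if (!Array.isArray(input)) {
    // Non-array values fall through to ordinary key conversion;
    // modeled here only for numbers.
    return typeof input === "number" && !Number.isNaN(input) ? input : "invalid";
  }
  const keys = [];
  for (const member of input) {
    // Members that cannot be converted to keys are ignored...
    if (typeof member !== "number" || Number.isNaN(member)) continue;
    // ...and duplicates are removed.
    if (!keys.includes(member)) keys.push(member);
  }
  return keys;
}

toMultiEntryKey([10, 20, null, 30, 20]); // → [10, 20, 30]
```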

8. Privacy Considerations

This section is non-normative.

8.1. User tracking

A third-party host (or any object capable of getting content
distributed to multiple sites) could use a unique identifier stored in
its client-side database to track a user across multiple sessions,
building a profile of the user’s activities. In conjunction with a
site that is aware of the user’s real identity (for example an
e-commerce site that requires authenticated credentials), this could
allow oppressive groups to target individuals with greater accuracy
than in a world with purely anonymous Web usage.

There are a number of techniques that can be used to mitigate the risk
of user tracking:

Blocking third-party storage

User agents may restrict access to the database objects
to scripts originating at the domain of the top-level document of
the browsing context, for instance denying access to
the API for pages from other domains running in iframes.

Expiring stored data

User agents may automatically delete stored data after a period of
time.

This can restrict the ability of a site to track a user, as the site
would then only be able to track the user across multiple sessions
when she authenticates with the site itself (e.g. by making a purchase
or logging in to a service).

However, this also puts the user’s data at risk.

Treating persistent storage as cookies

User agents should present the database feature to the user in a way
that associates them strongly with HTTP session cookies. [COOKIES]

This might encourage users to view such storage with healthy
suspicion.

Site-specific safe-listing of access to databases

User agents may require the user to authorize access to databases
before a site can use the feature.

Origin-tracking of stored data

User agents may record the origins of sites that contained content
from third-party origins that caused data to be stored.

If this information is then used to present the view of data
currently in persistent storage, it would allow the user to make
informed decisions about which parts of the persistent storage to
prune. Combined with a blocklist ("delete this data and prevent
this domain from ever storing data again"), the user can restrict
the use of persistent storage to sites that she trusts.

This would allow communities to act together to protect their
privacy.

While these suggestions prevent trivial use of this API for user
tracking, they do not block it altogether. Within a single domain, a
site can continue to track the user during a session, and can then
pass all this information to the third party along with any
identifying information (names, credit card numbers, addresses)
obtained by the site. If a third party cooperates with multiple
sites to obtain such information, a profile can still be
created.

However, user tracking is to some extent possible even with no
cooperation from the user agent whatsoever, for instance by using
session identifiers in URLs, a technique already commonly used for
innocuous purposes but easily repurposed for user tracking (even
retroactively). This information can then be shared with other
sites, using visitors' IP addresses and other user-specific
data (e.g. user-agent headers and configuration settings) to combine
separate sessions into coherent user profiles.

8.2. Cookie resurrection

If the user interface for persistent storage presents data in the
persistent storage features described in this specification separately
from data in HTTP session cookies, then users are likely to delete
data in one and not the other. This would allow sites to use the two
features as redundant backup for each other, defeating a user’s
attempts to protect his privacy.

8.3. Sensitivity of data

User agents should treat persistently stored data as potentially
sensitive; it is quite possible for e-mails, calendar appointments,
health records, or other confidential documents to be stored in this
mechanism.

To this end, user agents should ensure that when deleting data,
it is promptly deleted from the underlying storage.

9. Security Considerations

9.1. DNS spoofing attacks

Because of the potential for DNS spoofing attacks, one cannot
guarantee that a host claiming to be in a certain domain really is
from that domain. To mitigate this, pages can use TLS. Pages using TLS
can be sure that only pages using TLS that have certificates
identifying them as being from the same domain can access their
databases.

9.2. Cross-directory attacks

Different authors sharing one host name, for example users hosting
content on geocities.com, all share one set of databases.

There is no feature to restrict the access by pathname. Authors on
shared hosts are therefore recommended to avoid using these features,
as it would be trivial for other authors to read the data and
overwrite it.

Even if a path-restriction feature was made available, the usual DOM
scripting security model would make it trivial to bypass this
protection and access the data from any path.

9.3. Implementation risks

The two primary risks when implementing these persistent storage
features are letting hostile sites read information from other
domains, and letting hostile sites write information that is then read
from other domains.

Letting third-party sites read data that is not supposed to be read
from their domain causes information leakage. For example, a
user’s shopping wish list on one domain could be used by another
domain for targeted advertising; or a user’s work-in-progress
confidential documents stored by a word-processing site could be
examined by the site of a competing company.

Letting third-party sites write data to the persistent storage of
other domains can result in information spoofing, which is
equally dangerous. For example, a hostile site could add records to a
user’s wish list; or a hostile site could set a user’s session
identifier to a known ID that the hostile site can then use to track
the user’s actions on the victim site.

Thus, strictly following the origin model described in
this specification is important for user security.

If origins or database names are used to construct paths for
persistence to a file system they must be appropriately escaped to
prevent an adversary from accessing information from other origins
using relative paths such as "../".
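One way to do such escaping can be sketched as follows; the encoding scheme here is illustrative, not mandated by the specification. Every character outside a safe set is percent-encoded, so inputs such as "../" cannot traverse out of the per-origin directory:

```javascript
function toSafePathComponent(name) {
  return Array.from(name, ch =>
    /[A-Za-z0-9_-]/.test(ch)
      ? ch
      : "%" + ch.codePointAt(0).toString(16).padStart(2, "0")
  ).join("");
}

toSafePathComponent("https://example.com"); // → "https%3a%2f%2fexample%2ecom"
toSafePathComponent("../evil");             // → "%2e%2e%2fevil"
```

Since "%" itself is escaped to "%25", no path separators or traversal sequences survive the mapping.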

9.4. Persistence risks

Practical implementations will persist data to a non-volatile storage
medium. Data will be serialized when stored and deserialized when
retrieved, although the details of the serialization format will be
user-agent specific. User agents are likely to change their
serialization format over time. For example, the format may be updated
to handle new data types, or to improve performance. To satisfy the
operational requirements of this specification, implementations must
therefore handle older serialization formats in some way. Improper
handling of older data can result in security issues. In addition to
basic serialization concerns, serialized data could encode assumptions
which are not valid in newer versions of the user agent.

A practical example of this is the RegExp type. The StructuredSerializeForStorage operation allows serializing RegExp objects. A typical user agent will compile a regular expression into
native machine instructions, with assumptions about how the input data
is passed and results returned. If this internal state was serialized
as part of the data stored to the database, various problems could
arise when the internal representation was later deserialized. For
example, the means by which data was passed into the code could have
changed. Security bugs in the compiler output could have been
identified and fixed in updates to the user agent, but remain in the
serialized internal state.

User agents must identify and handle older data appropriately. One
approach is to include version identifiers in the serialization
format, and to reconstruct any internal state from script-visible
state when older data is encountered.
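The version-identifier approach can be illustrated with a hypothetical envelope around each stored value (JSON stands in for a real user-agent serialization format; the names and the migration logic are assumptions for the sketch):

```javascript
const FORMAT_VERSION = 2;

function serializeForStorage(value) {
  // Record the format version next to the payload.
  return JSON.stringify({ v: FORMAT_VERSION, data: value });
}

function deserializeFromStorage(raw) {
  const envelope = JSON.parse(raw);
  if (envelope.v < FORMAT_VERSION) {
    // Older format detected: reconstruct any internal state from
    // script-visible state rather than trusting the stored bytes.
    return migrateFromOlderFormat(envelope);
  }
  return envelope.data;
}

function migrateFromOlderFormat(envelope) {
  // Hypothetical migration: in this sketch the older format stored
  // the payload in the same field, so only re-normalization is needed.
  return envelope.data;
}
```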

Conformance

Document conventions

Conformance requirements are expressed with a combination of
descriptive assertions and RFC 2119 terminology. The key words “MUST”,
“MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”,
“RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this
document are to be interpreted as described in RFC 2119.
However, for readability, these words do not appear in all uppercase
letters in this specification.

All of the text of this specification is normative except sections
explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words “for example”
or are set apart from the normative text with class="example",
like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the
normative text with class="note", like this:

Note, this is an informative note.

Conformant Algorithms

Requirements phrased in the imperative as part of algorithms (such as
"strip any leading space characters" or "return false and abort these
steps") are to be interpreted with the meaning of the key word ("must",
"should", "may", etc) used in introducing the algorithm.

Conformance requirements phrased as algorithms or specific steps can be
implemented in any manner, so long as the end result is equivalent. In
particular, the algorithms defined in this specification are intended to
be easy to understand and are not intended to be performant. Implementers
are encouraged to optimize.