This chapter contains procedures that may be generally useful to
the Oracle NoSQL Database administrator.

Note

Oracle NoSQL Database Storage Nodes and Admins make use of an embedded
database (Oracle Berkeley DB, Java Edition). You should never
directly manipulate the files maintained by this database. In
general, do not move, delete, or modify the files
and directories located under KVROOT unless you are asked to do
so by Oracle Customer Support. In particular,
never move or delete any file ending with
a .jdb suffix. These files all reside in an
env directory somewhere under KVROOT.
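
For illustration only, in a hypothetical store named mystore, such
files might appear at paths like the following (the store, Storage
Node, and Replication Node names are all assumptions):

KVROOT/mystore/sn1/rg1-rn1/env/00000000.jdb
KVROOT/mystore/sn1/rg1-rn1/env/00000001.jdb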

Backing Up the Store

To back up the KVStore, you take snapshots of nodes in the store
and copy the resulting snapshots to a safe location.
Note that the distributed nature and scale of Oracle NoSQL Database make it
unlikely that a single machine can hold the backup for the
entire store. These instructions do not address where and how
snapshots are stored.
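
The snapshot commands shown in the following sections are issued
from the administrative CLI. As a minimal sketch, a CLI session
might be started as follows, where the KVHOME location, host name
(node01), and registry port (5000) are assumptions for illustration:

java -jar KVHOME/lib/kvstore.jar runadmin -host node01 -port 5000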

Taking a Snapshot

A snapshot provides consistency across all records
within the same shard, but not across partitions in
independent shards. The underlying snapshot
operations are performed in parallel to the extent
possible in order to minimize any potential inconsistencies.

To take a snapshot from the admin CLI,
use the snapshot create command:

kv-> snapshot create -name <snapshot name>

Using the snapshot create command, you create a named
snapshot; the snapshot name is provided using the
-name parameter. The companion snapshot remove command removes a
single named snapshot or, when given the -all flag, removes every
snapshot currently held in the store.

kv-> snapshot create -name thursday
Created snapshot named 110915-153700-thursday on all 3 nodes
kv-> snapshot create -name later
Created snapshot named 110915-153710-later on all 3 nodes
kv-> snapshot remove -all
Removed all snapshots
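
To remove a single named snapshot, pass its full generated name to
snapshot remove. A sketch, using the first snapshot created above
(the output line is representative, not verbatim):

kv-> snapshot remove -name 110915-153700-thursday
Removed snapshot 110915-153700-thursday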

Note

Do not take snapshots while any configuration
(topological) changes are being made; the resulting
snapshot might be inconsistent and unusable. At the time
you take the snapshot, run the ping command and save the output
that identifies the current masters, for later use during a load
or restore (a sketch of doing so follows this note).
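
As a minimal sketch of capturing that ping output, the CLI can run
a single command non-interactively and the result can be redirected
to a file. The KVHOME location, host name (node01), port (5000),
and file name are all assumptions for illustration:

java -jar KVHOME/lib/kvstore.jar runadmin -host node01 -port 5000 \
    ping > snapshot-masters.txt

For more information, see Snapshot Management.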

Snapshot Management

When you run a snapshot, data is collected from every
Replication Node in the system, including both masters and
replicas. If the snapshot does not succeed on at least
one of the nodes in a shard, the entire operation fails.

If you decide to create an off-store copy of the snapshot,
you should copy the snapshot data for only one of the nodes in
each shard. If possible, copy the snapshot data
taken from the node that was serving as the master at the
time the snapshot was taken.
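
As a sketch of making that off-store copy, the snapshot directory
for a shard's master might be copied to a backup host as follows.
The store name (mystore), host names, destination path, and on-disk
snapshot layout are all assumptions for illustration:

scp -r node01:KVROOT/mystore/sn1/rg1-rn1/snapshots/110915-153700-thursday \
    backuphost:/backups/mystore/rg1/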

At the time of the snapshot, you can use the ping command to
identify which nodes are currently running as masters. There is one
master for each shard in the store, and each is identified
by the keyword MASTER. In the following
example, replication node rg1-rn1, running on Storage Node sn1,
is the current master: