Instance Method Details

The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.

A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem returns a partial result if the response size limit is exceeded, the table's provisioned throughput is exceeded, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the operation starting with the next item to get.

If you request more than 100 items, BatchGetItem returns a ValidationException with the message "Too many items requested for the BatchGetItem call."

For example, if you ask to retrieve 100 items, but each individual item is 300 KB in size, the system returns 52 items (so as not to exceed the 16 MB limit). It also returns an appropriate UnprocessedKeys value so you can get the next page of results. If desired, your application can include its own logic to assemble the pages of results into one dataset.

If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchGetItem returns a ProvisionedThroughputExceededException. If at least one of the items is successfully processed, then BatchGetItem completes successfully, while returning the keys of the unread items in UnprocessedKeys.

If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.
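The retry-with-backoff pattern can be sketched in plain Ruby. Here `fetch` is a stand-in for a real `client.batch_get_item` call (a hypothetical hook, used so the loop can be shown without network access), and the base delay and cap are illustrative assumptions, not SDK defaults:

```ruby
# Capped exponential backoff: 50 ms, 100 ms, 200 ms, ... up to 5 s.
def backoff_delay(attempt, base: 0.05, cap: 5.0)
  [base * (2**attempt), cap].min
end

# Retry until UnprocessedKeys is empty or attempts run out.
# `fetch` stands in for client.batch_get_item and returns a response-like
# hash with :responses and :unprocessed_keys.
def batch_get_with_retry(request_items, fetch:, max_attempts: 5)
  items = []
  max_attempts.times do |attempt|
    resp = fetch.call(request_items)
    items.concat(resp[:responses])
    request_items = resp[:unprocessed_keys] || {}
    break if request_items.empty?
    # sleep backoff_delay(attempt)   # back off before retrying
  end
  items
end
```

Production code would also add random jitter to the delay and defer to the SDK's built-in retry configuration where possible.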

By default, BatchGetItem performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set ConsistentRead to true for any or all tables.

In order to minimize response latency, BatchGetItem retrieves items in parallel.

When designing your application, keep in mind that DynamoDB does not return items in any particular order. To help parse the response by item, include the primary key values for the items in your request in the ProjectionExpression parameter.

If a requested item does not exist, it is not returned in the result. Requests for nonexistent items consume the minimum read capacity units according to the type of read. For more information, see Working with Tables in the Amazon DynamoDB Developer Guide.

A map of one or more table names and, for each table, a map that
describes one or more items to retrieve from that table. Each table name
can be used only once per BatchGetItem request.

Each element in the map of items to retrieve consists of the following:

ConsistentRead - If true, a strongly consistent read is used; if
false (the default), an eventually consistent read is used.

ExpressionAttributeNames - One or more substitution tokens for
attribute names in the ProjectionExpression parameter. The following
are some use cases for using ExpressionAttributeNames:

To access an attribute whose name conflicts with a DynamoDB reserved
word.

To create a placeholder for repeating occurrences of an attribute
name in an expression.

To prevent special characters in an attribute name from being
misinterpreted in an expression.

Use the # character in an expression to dereference an attribute
name. For example, consider the following attribute name:

Percentile

The name of this attribute conflicts with a reserved word, so it
cannot be used directly in an expression. (For the complete list of
reserved words, see Reserved Words in the Amazon DynamoDB
Developer Guide). To work around this, you could specify the
following for ExpressionAttributeNames:

`{"#P":"Percentile"}`

You could then use this substitution in an expression, as in this
example:

`#P = :val`

Tokens that begin with the : character are expression attribute
values, which are placeholders for the actual value at runtime.

Keys - An array of primary key attribute values that define specific
items in the table. For each primary key, you must provide all of
the key attributes. For example, with a simple primary key, you only
need to provide the partition key value. For a composite key, you must
provide both the partition key value and the sort key value.

ProjectionExpression - A string that identifies one or more
attributes to retrieve from the table. These attributes can include
scalars, sets, or elements of a JSON document. The attributes in the
expression must be separated by commas.

If no attribute names are specified, then all attributes are returned.
If any of the requested attributes are not found, they do not appear
in the result.
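Putting the elements above together, a `request_items` argument might look like the following sketch. The table and attribute names are hypothetical, and the Ruby SDK accepts snake_case option keys:

```ruby
# Hypothetical "Music" table with a composite primary key (Artist, SongTitle).
# "#P" substitutes for "Percentile", whose name conflicts with a reserved word.
request_items = {
  "Music" => {
    consistent_read: true,
    projection_expression: "Artist, SongTitle, #P",
    expression_attribute_names: { "#P" => "Percentile" },
    keys: [
      { "Artist" => "No One You Know", "SongTitle" => "Call Me Today" },
      { "Artist" => "Acme Band",       "SongTitle" => "Happy Day" },
    ],
  },
}
```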

The BatchWriteItem operation puts or deletes multiple items in one or more tables. A single call to BatchWriteItem can write up to 16 MB of data, which can comprise as many as 25 put or delete requests. Individual items to be written can be as large as 400 KB.

The individual PutItem and DeleteItem operations specified in BatchWriteItem are atomic; however BatchWriteItem as a whole is not. If any requested operations fail because the table's provisioned throughput is exceeded or an internal processing failure occurs, the failed operations are returned in the UnprocessedItems response parameter. You can investigate and optionally resend the requests. Typically, you would call BatchWriteItem in a loop. Each iteration would check for unprocessed items and submit a new BatchWriteItem request with those unprocessed items until all items have been processed.

If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchWriteItem returns a ProvisionedThroughputExceededException.

If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.
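The UnprocessedItems loop can be sketched as follows. `send_batch` is a stand-in for a real `client.batch_write_item` call (a hypothetical hook for illustration), and the backoff values are assumptions:

```ruby
# Resubmit unprocessed items until none remain or attempts run out.
# Returns whatever is still unprocessed after max_attempts.
def batch_write_with_retry(request_items, send_batch:, max_attempts: 5)
  max_attempts.times do |attempt|
    resp = send_batch.call(request_items)
    request_items = resp[:unprocessed_items] || {}
    break if request_items.empty?
    # sleep [0.05 * (2**attempt), 5.0].min   # exponential backoff between retries
  end
  request_items
end
```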

With BatchWriteItem, you can efficiently write or delete large amounts of data, such as from Amazon EMR, or copy data from another database into DynamoDB. In order to improve performance with these large-scale operations, BatchWriteItem does not behave in the same way as individual PutItem and DeleteItem calls would. For example, you cannot specify conditions on individual put and delete requests, and BatchWriteItem does not return deleted items in the response.

If you use a programming language that supports concurrency, you can use threads to write items in parallel. Your application must include the necessary logic to manage the threads. With languages that don't support threading, you must update or delete the specified items one at a time. In both situations, BatchWriteItem performs the specified put and delete operations in parallel, giving you the power of the thread pool approach without having to introduce complexity into your application.

Parallel processing reduces latency, but each specified put and delete request consumes the same number of write capacity units whether it is processed in parallel or not. Delete operations on nonexistent items consume one write capacity unit.

If one or more of the following is true, DynamoDB rejects the entire batch write operation:

One or more tables specified in the BatchWriteItem request do not exist.

Primary key attributes specified on an item in the request do not match those in the corresponding table's primary key schema.

You try to perform multiple operations on the same item in the same BatchWriteItem request. For example, you cannot put and delete the same item in the same BatchWriteItem request.

Your request contains at least two items with identical hash and range keys (which essentially amounts to two put operations on the same item).

A map of one or more table names and, for each table, a list of
operations to be performed (DeleteRequest or PutRequest). Each
element in the map consists of the following:

DeleteRequest - Perform a DeleteItem operation on the specified
item. The item to be deleted is identified by a Key subelement:

Key - A map of primary key attribute values that uniquely identify
the item. Each entry in this map consists of an attribute name and
an attribute value. For each primary key, you must provide all of
the key attributes. For example, with a simple primary key, you only
need to provide a value for the partition key. For a composite
primary key, you must provide values for both the partition key
and the sort key.

PutRequest - Perform a PutItem operation on the specified item.
The item to be put is identified by an Item subelement:

Item - A map of attributes and their values. Each entry in this
map consists of an attribute name and an attribute value. Attribute
values must not be null; string and binary type attributes must have
lengths greater than zero; and set type attributes must not be
empty. Requests that contain empty values are rejected with a
ValidationException exception.

If you specify any attributes that are part of an index key, then
the data types for those attributes must match those of the schema
in the table's attribute definition.
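A `request_items` argument combining both request types might look like this sketch (hypothetical table and attribute names, snake_case keys per the Ruby SDK):

```ruby
# One put and one delete against the same hypothetical "Music" table.
request_items = {
  "Music" => [
    { put_request: { item: {
        "Artist"     => "Acme Band",
        "SongTitle"  => "Happy Day",
        "AlbumTitle" => "Songs About Life" } } },
    { delete_request: { key: {
        "Artist"    => "No One You Know",
        "SongTitle" => "Call Me Today" } } },
  ],
}
```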

:return_consumed_capacity(String)
—

Determines the level of detail about provisioned throughput consumption that is returned in the response:

INDEXES - The response includes the aggregate ConsumedCapacity for the operation, together with ConsumedCapacity for each table and secondary index that was accessed.

Note that some operations, such as GetItem and BatchGetItem, do not access any indexes at all. In these cases, specifying INDEXES will only return ConsumedCapacity information for table(s).

TOTAL - The response includes only the aggregate ConsumedCapacity for the operation.

NONE - No ConsumedCapacity details are included in the response.

:return_item_collection_metrics(String)
—

Determines whether item collection metrics are returned. If set to
SIZE, the response includes statistics about item collections, if any,
that were modified during the operation. If set to NONE (the default),
no statistics are returned.

Specifies the attributes that make up the primary key for a table or an
index. The attributes in KeySchema must also be defined in the
AttributeDefinitions array. For more information, see Data Model
in the Amazon DynamoDB Developer Guide.

Each KeySchemaElement in the array is composed of:

AttributeName - The name of this key attribute.

KeyType - The role that the key attribute will assume:

HASH - partition key

RANGE - sort key

The partition key of an item is also known as its hash attribute. The
term "hash attribute" derives from DynamoDB's usage of an internal
hash function to evenly distribute data items across partitions, based
on their partition key values.

The sort key of an item is also known as its range attribute. The term
"range attribute" derives from the way DynamoDB stores items with the
same partition key physically close together, in sorted order by the
sort key value.

For a simple primary key (partition key), you must provide exactly one
element with a KeyType of HASH.

For a composite primary key (partition key and sort key), you must
provide exactly two elements, in this order: The first element must have
a KeyType of HASH, and the second element must have a KeyType of
RANGE.
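For example, a composite primary key for a hypothetical Music table would be expressed as follows (snake_case keys, per the Ruby SDK):

```ruby
key_schema = [
  { attribute_name: "Artist",    key_type: "HASH"  },  # partition key: must come first
  { attribute_name: "SongTitle", key_type: "RANGE" },  # sort key: must come second
]

# Every key attribute must also appear in the AttributeDefinitions array.
attribute_definitions = [
  { attribute_name: "Artist",    attribute_type: "S" },
  { attribute_name: "SongTitle", attribute_type: "S" },
]
```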

One or more local secondary indexes (the maximum is 5) to be created on
the table. Each index is scoped to a given partition key value. There is
a 10 GB size limit per partition key value; otherwise, the size of a
local secondary index is unconstrained.

Each local secondary index in the array includes the following:

IndexName - The name of the local secondary index. Must be unique
only for this table.

KeySchema - Specifies the key schema for the local secondary index.
The key schema must begin with the same partition key as the table.

Projection - Specifies attributes that are copied (projected) from
the table into the index. These are in addition to the primary key
attributes and index key attributes, which are automatically
projected. Each attribute specification is composed of:

ProjectionType - One of the following:

KEYS_ONLY - Only the index and primary keys are projected into
the index.

INCLUDE - Only the specified table attributes are projected into
the index. The list of projected attributes is in
NonKeyAttributes.

ALL - All of the table attributes are projected into the index.

NonKeyAttributes - A list of one or more non-key attribute names
that are projected into the secondary index. The total count of
attributes provided in NonKeyAttributes, summed across all of the
secondary indexes, must not exceed 100. If you project the same
attribute into two different indexes, this counts as two distinct
attributes when determining the total.
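A local secondary index definition combining the elements above might look like this sketch (hypothetical index and attribute names; the table's partition key is assumed to be Artist):

```ruby
local_secondary_indexes = [
  {
    index_name: "AlbumTitleIndex",
    key_schema: [
      { attribute_name: "Artist",     key_type: "HASH"  },  # same partition key as the table
      { attribute_name: "AlbumTitle", key_type: "RANGE" },  # alternative sort key
    ],
    projection: {
      projection_type:    "INCLUDE",
      non_key_attributes: ["Genre"],   # counts toward the 100-attribute total
    },
  },
]
```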

One or more global secondary indexes (the maximum is 20) to be created
on the table. Each global secondary index in the array includes the
following:

IndexName - The name of the global secondary index. Must be unique
only for this table.

KeySchema - Specifies the key schema for the global secondary index.

Projection - Specifies attributes that are copied (projected) from
the table into the index. These are in addition to the primary key
attributes and index key attributes, which are automatically
projected. Each attribute specification is composed of:

ProjectionType - One of the following:

KEYS_ONLY - Only the index and primary keys are projected into
the index.

INCLUDE - Only the specified table attributes are projected into
the index. The list of projected attributes is in
NonKeyAttributes.

ALL - All of the table attributes are projected into the index.

NonKeyAttributes - A list of one or more non-key attribute names
that are projected into the secondary index. The total count of
attributes provided in NonKeyAttributes, summed across all of the
secondary indexes, must not exceed 100. If you project the same
attribute into two different indexes, this counts as two distinct
attributes when determining the total.
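A global secondary index definition follows the same shape; unlike a local secondary index, its key schema need not share the table's partition key. The names below are hypothetical:

```ruby
global_secondary_indexes = [
  {
    index_name: "GenreIndex",
    key_schema: [
      # A GSI key can differ entirely from the table's primary key.
      { attribute_name: "Genre", key_type: "HASH" },
    ],
    projection: { projection_type: "ALL" },
    # In provisioned billing mode, each GSI also requires its own
    # provisioned_throughput settings; omitted here for brevity.
  },
]
```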

Returns a Collection of Table
resources. No API requests are made until you call an enumerable method on the
collection. Client#list_tables will be called multiple times until every
Table has been yielded.
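The lazy, paginated behavior can be illustrated with a plain-Ruby sketch. No SDK calls are made here; `pages` stands in for successive Client#list_tables responses:

```ruby
# Yield table names page by page; nothing is consumed until an
# enumerable method is called on the result.
def each_table(pages)
  Enumerator.new do |y|
    pages.each { |page| page[:table_names].each { |name| y << name } }
  end.lazy
end

pages = [
  { table_names: %w[Users Orders], last_evaluated_table_name: "Orders" },
  { table_names: %w[Products],     last_evaluated_table_name: nil },
]

names = each_table(pages)   # no iteration has happened yet
first = names.first(2)      # consumes only as much as needed
```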