Amazon Kinesis is a fully managed streaming data service. You can continuously add various types of data such as clickstreams, application logs, and social media to an Amazon Kinesis stream from hundreds of thousands of sources. Within seconds, the data will be available for your Amazon Kinesis Applications to read and process from the stream.

Q: What does Amazon Kinesis manage on my behalf?

Amazon Kinesis manages the infrastructure, storage, networking, and configuration needed to stream your data at the level of your data throughput. You do not have to worry about provisioning, deployment, or ongoing maintenance of hardware, software, or other services for your data streams. In addition, Amazon Kinesis synchronously replicates data across three facilities in an AWS Region, providing high availability and data durability.

Q: What can I do with Amazon Kinesis?

Amazon Kinesis is useful for rapidly moving data off data producers and then continuously processing the data, whether to transform it before emitting to a data store, run real-time metrics and analytics, or derive more complex data streams for further processing. The following are typical scenarios for using Amazon Kinesis:

Accelerated log and data feed intake: Instead of waiting to batch up the data, you can have your data producers push data to an Amazon Kinesis stream as soon as the data is produced, preventing data loss in case of data producer failures. For example, system and application logs can be continuously added to a stream and be available for processing within seconds.

Real-time metrics and reporting: You can extract metrics and generate reports from Amazon Kinesis stream data in real-time. For example, your Amazon Kinesis Application can work on metrics and reporting for system and application logs as the data is streaming in, rather than wait to receive data batches.

Real-time data analytics: With Amazon Kinesis, you can run real-time streaming data analytics. For example, you can add clickstreams to your Amazon Kinesis stream and have your Amazon Kinesis Application run analytics in real-time, enabling you to gain insights from your data on a time scale of minutes instead of hours or days.

Complex stream processing: You can create Directed Acyclic Graphs (DAGs) of Amazon Kinesis Applications and data streams. In this scenario, one or more Amazon Kinesis Applications can add data to another Amazon Kinesis stream for further processing, enabling successive stages of stream processing.

Q: How do I use Amazon Kinesis?

After you sign up for Amazon Web Services, you can start using Amazon Kinesis by:

Creating an Amazon Kinesis stream through either the Amazon Kinesis Management Console or the CreateStream operation.

Configuring your data producers to continuously add data to your stream.

Building Amazon Kinesis Applications to read and process data from your stream.

Q: What are the limits of Amazon Kinesis?

The throughput of an Amazon Kinesis stream is designed to scale without limits by increasing the number of shards within a stream. However, there are certain limits you should keep in mind while using Amazon Kinesis:

Records of a stream are accessible for up to 24 hours from the time they are added to the stream.

The maximum size of a data blob (the data payload before Base64-encoding) within one record is 50 kilobytes (KB).

Q: When should I use Amazon Kinesis, and when should I use Amazon SQS?

We recommend Amazon Kinesis for use cases with requirements that are similar to the following:

Routing related records to the same record processor (as in streaming MapReduce). For example, counting and aggregation are simpler when all records for a given key are routed to the same record processor.

Ordering of records. For example, you want to transfer log data from the application host to the processing/archival host while maintaining the order of log statements.

Ability for multiple applications to consume the same stream concurrently. For example, you have one application that updates a real-time dashboard and another that archives data to Amazon Redshift. You want both applications to consume data from the same stream concurrently and independently.

Ability to consume records in the same order a few hours later. For example, you have a billing application and an audit application that runs a few hours behind the billing application. Because Amazon Kinesis stores data for up to 24 hours, you can run the audit application up to 24 hours behind the billing application.

We recommend Amazon SQS for use cases with requirements that are similar to the following:

Messaging semantics (such as message-level ack/fail) and visibility timeout. For example, you have a queue of work items and want to track the successful completion of each item independently. Amazon SQS tracks the ack/fail, so the application does not have to maintain a persistent checkpoint/cursor. Amazon SQS will delete acked messages and redeliver failed messages after a configured visibility timeout.

Individual message delay. For example, you have a job queue and need to schedule individual jobs with a delay. With Amazon SQS, you can configure individual messages to have a delay of up to 15 minutes.

Dynamically increasing concurrency/throughput at read time. For example, you have a work queue and want to add more readers until the backlog is cleared. With Amazon Kinesis, you can scale up to a sufficient number of shards (note, however, that you'll need to provision enough shards ahead of time).

Leveraging Amazon SQS’s ability to scale transparently. For example, you buffer requests and the load changes as a result of occasional load spikes or the natural growth of your business. Because each buffered request can be processed independently, Amazon SQS can scale transparently to handle the load without any provisioning instructions from you.

Q: What is a shard?

A shard is the base throughput unit of an Amazon Kinesis stream. One shard provides a capacity of 1MB/sec data input and 2MB/sec data output. One shard can support up to 1000 PUT records per second. You specify the number of shards needed when you create a stream. For example, you can create a stream with two shards. This stream has a throughput of 2MB/sec data input and 4MB/sec data output, and allows up to 2000 PUT records per second. You can dynamically add or remove shards from your stream as your data throughput changes via Amazon Kinesis Resharding.
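
To illustrate how these per-shard figures combine, here is a small plain-Python sketch that computes a stream's aggregate capacity from its shard count, using the per-shard limits quoted above:

    # Per-shard limits quoted above: 1 MB/sec in, 2 MB/sec out, 1000 PUT records/sec.
    SHARD_INPUT_MB_PER_SEC = 1
    SHARD_OUTPUT_MB_PER_SEC = 2
    SHARD_PUT_RECORDS_PER_SEC = 1000

    def stream_capacity(shard_count):
        """Aggregate capacity of a stream with the given number of shards."""
        return {
            "input_mb_per_sec": shard_count * SHARD_INPUT_MB_PER_SEC,
            "output_mb_per_sec": shard_count * SHARD_OUTPUT_MB_PER_SEC,
            "put_records_per_sec": shard_count * SHARD_PUT_RECORDS_PER_SEC,
        }

    # A two-shard stream, as in the example above:
    print(stream_capacity(2))
    # {'input_mb_per_sec': 2, 'output_mb_per_sec': 4, 'put_records_per_sec': 2000}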

Q: What is a record?

A record is the unit of data stored in an Amazon Kinesis stream. A record is composed of a sequence number, partition key, and data blob. A data blob is the data of interest your data producer adds to a stream. The maximum size of a data blob (the data payload before Base64-encoding) is 50 kilobytes (KB).

Q: What is a partition key?

A partition key is used to segregate and route records to different shards of a stream. A partition key is specified by your data producer while adding data to an Amazon Kinesis stream. For example, assume you have a stream with two shards (shard 1 and shard 2). You can configure your data producer to use two partition keys (key A and key B) so that all records with key A are added to shard 1 and all records with key B are added to shard 2. For more information about partition keys, see Partition Keys.
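
A minimal producer sketch with the AWS SDK for Python (boto3); the stream name, region, and keys are placeholders. In practice, Amazon Kinesis maps a partition key to a shard by hashing the key, so what you control directly is which records share a key (and therefore a shard):

    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    # Records sharing a partition key are routed to the same shard,
    # because Kinesis hashes the key to pick the shard.
    kinesis.put_record(StreamName="my-stream", Data=b"event-1", PartitionKey="keyA")
    kinesis.put_record(StreamName="my-stream", Data=b"event-2", PartitionKey="keyB")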

Q: What is a sequence number?

A sequence number is a unique identifier for each record. A sequence number is assigned by Amazon Kinesis when a data producer calls the PutRecord or PutRecords operation to add data to an Amazon Kinesis stream. Sequence numbers for the same partition key generally increase over time; the longer the time period between PutRecord or PutRecords requests, the larger the sequence numbers become. For more information about sequence numbers, see Sequence Numbers.

Q: How do I create an Amazon Kinesis stream?

After you sign up for Amazon Web Services, you can create an Amazon Kinesis stream through either the Amazon Kinesis Management Console or the CreateStream operation.
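
For illustration, creating a two-shard stream with the AWS SDK for Python (boto3); the stream name and region are placeholders:

    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    # Create a stream with two shards (2MB/sec data input and
    # 4MB/sec data output, as noted above).
    kinesis.create_stream(StreamName="my-stream", ShardCount=2)

    # Stream creation is asynchronous; wait until the stream becomes
    # ACTIVE before writing data to it.
    kinesis.get_waiter("stream_exists").wait(StreamName="my-stream")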

Q: How do I decide the throughput of my Amazon Kinesis stream?

The throughput of an Amazon Kinesis stream is determined by the number of shards within the stream. Follow the steps below to estimate the initial number of shards your stream needs. Note that you can dynamically adjust the number of shards within your stream via Amazon Kinesis Resharding after the stream is created.

Estimate the average size of the record written to the stream in kilobytes (KB), rounded up to the nearest 1 KB. (average_data_size_in_KB)

Estimate the number of records written to the stream per second. (number_of_records_per_second)

Decide the number of Amazon Kinesis Applications that will consume data concurrently and independently from the stream. (number_of_consumers)

Calculate the incoming write bandwidth in KB (incoming_write_bandwidth_in_KB), which is equal to average_data_size_in_KB multiplied by number_of_records_per_second, and the outgoing read bandwidth in KB (outgoing_read_bandwidth_in_KB), which is equal to incoming_write_bandwidth_in_KB multiplied by number_of_consumers.

The initial number of shards your stream needs is then the larger of incoming_write_bandwidth_in_KB/1000 and outgoing_read_bandwidth_in_KB/2000, because one shard provides 1MB/sec (1000 KB/sec) data input and 2MB/sec (2000 KB/sec) data output.
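
As a rough sketch, the steps above can be combined in a few lines of Python; the function name and the round-up-to-at-least-one-shard behavior are choices of this example, while the 1000 and 2000 KB/sec divisors are the per-shard input and output limits quoted in this FAQ:

    import math

    def estimate_initial_shards(average_data_size_in_KB,
                                number_of_records_per_second,
                                number_of_consumers):
        """Estimate the initial shard count for a stream (see steps above)."""
        incoming_write_bandwidth_in_KB = (average_data_size_in_KB *
                                          number_of_records_per_second)
        outgoing_read_bandwidth_in_KB = (incoming_write_bandwidth_in_KB *
                                         number_of_consumers)
        # One shard: 1000 KB/sec input, 2000 KB/sec output; need at least one.
        return max(math.ceil(incoming_write_bandwidth_in_KB / 1000.0),
                   math.ceil(outgoing_read_bandwidth_in_KB / 2000.0),
                   1)

    # Example: 4 KB records at 500 records/sec, read by two applications.
    print(estimate_initial_shards(4, 500, 2))  # 2 shards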

Q: What is the minimum throughput I can request for my Amazon Kinesis stream?

The throughput of an Amazon Kinesis stream scales by unit of shard. A single shard is the smallest throughput of a stream, providing 1MB/sec data input and 2MB/sec data output.

Q: What is the maximum throughput I can request for my Amazon Kinesis stream?

The throughput of an Amazon Kinesis stream is designed to scale without limits. By default, each account can provision 10 shards per region. You can use the Amazon Kinesis Limits form to request more than 10 shards within a single region.

Q: How can record size affect the throughput of my Amazon Kinesis stream?

A shard provides 1MB/sec data input rate and supports up to 1000 PUT records per second. Therefore, if the record size is less than 1KB, the actual data input rate of a shard will be less than 1MB/sec, limited by the maximum number of PUT records per second. For example, with 512-byte records, a shard tops out at 1000 records/sec × 0.5KB = 0.5MB/sec of data input.

Q: How do I add data to my Amazon Kinesis stream?

The Amazon Kinesis PutRecord and PutRecords operations are used for adding data to an Amazon Kinesis stream. After you create a stream, you need to configure your data producers to continuously call the PutRecord or PutRecords operation. For more information about these operations, see PutRecord and PutRecords.
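
A hedged sketch of a batch put with the AWS SDK for Python (boto3); the stream name and payloads are placeholders. Note that a PutRecords call can partially fail, so it is worth checking FailedRecordCount in the response:

    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    # Add several records in one PutRecords call (hypothetical payloads).
    response = kinesis.put_records(
        StreamName="my-stream",
        Records=[
            {"Data": b"log-line-1", "PartitionKey": "host-1"},
            {"Data": b"log-line-2", "PartitionKey": "host-2"},
        ],
    )

    # PutRecords can succeed partially; failed entries carry an ErrorCode.
    if response["FailedRecordCount"] > 0:
        failed = [r for r in response["Records"] if "ErrorCode" in r]
        print("retry these records:", failed)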

Q: What programming languages or platforms can I use to access Amazon Kinesis API?

The Amazon Kinesis API is available in the Amazon Web Services SDKs. For a list of programming languages and platforms covered by the Amazon Web Services SDKs, see Tools for Amazon Web Services.

Q: What happens if the capacity limits of an Amazon Kinesis stream are exceeded while the data producer adds data to the stream?

The capacity limits of an Amazon Kinesis stream are defined by the number of shards within the stream. The limits can be exceeded by either data throughput or the number of PUT records. While the capacity limits are exceeded, the put data call will be rejected with a ProvisionedThroughputExceeded exception. If this is due to a temporary rise of the stream’s input data rate, retry by the data producer will eventually lead to completion of the requests. If this is due to a sustained rise of the stream’s input data rate, you should increase the number of shards within your stream to provide enough capacity for the put data calls to consistently succeed. In both cases, Amazon CloudWatch metrics allow you to learn about the change of the stream’s input data rate and the occurrence of ProvisionedThroughputExceeded exceptions.
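
For the temporary-spike case, one plausible producer-side retry pattern, sketched with the AWS SDK for Python (boto3); the backoff delays and attempt count are illustrative choices, not prescriptions:

    import time
    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    def put_with_retry(stream_name, data, partition_key, max_attempts=5):
        """Retry a put while the stream's capacity limits are exceeded."""
        for attempt in range(max_attempts):
            try:
                return kinesis.put_record(StreamName=stream_name,
                                          Data=data,
                                          PartitionKey=partition_key)
            except kinesis.exceptions.ProvisionedThroughputExceededException:
                # Back off exponentially (0.1s, 0.2s, 0.4s, ...) and retry.
                time.sleep(0.1 * (2 ** attempt))
        raise RuntimeError("put_record kept exceeding provisioned throughput; "
                           "consider adding shards to the stream")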

Q: What data is counted against the data throughput of an Amazon Kinesis stream during a PutRecord or PutRecords call?

Your data blob, partition key, and stream name are required parameters of a PutRecord or PutRecords call. The size of your data blob (before Base64 encoding) and partition key will be counted against the data throughput of your Amazon Kinesis stream, which is determined by the number of shards within the stream.

Q: What is Amazon Kinesis Client Library (KCL)?

Amazon Kinesis Client Library (KCL), available for Java, Python, and Ruby, is a pre-built library that helps you easily build Amazon Kinesis Applications for reading and processing data from an Amazon Kinesis stream. KCL handles complex issues such as adapting to changes in stream volume, load-balancing streaming data, coordinating distributed services, and processing data with fault tolerance. KCL enables you to focus on business logic while building applications.

Q: What is Amazon Kinesis Connector Library?

Amazon Kinesis Connector Library is a pre-built library that helps you easily integrate Amazon Kinesis with other AWS services and third-party tools. Amazon Kinesis Client Library (KCL) is required for using Amazon Kinesis Connector Library. The current version of this library provides connectors to Amazon DynamoDB, Amazon Redshift, Amazon S3, and Elasticsearch. The library also includes sample connectors of each type, plus Apache Ant build files for running the samples.

Q: What is Amazon Kinesis Storm Spout?

Amazon Kinesis Storm Spout is a pre-built library that helps you easily integrate Amazon Kinesis with Apache Storm. The current version of Amazon Kinesis Storm Spout fetches data from an Amazon Kinesis stream and emits it as tuples. You add the spout to your Storm topology to leverage Amazon Kinesis as a reliable, scalable, stream capture, storage, and replay service.

Q: Do I have to use Amazon Kinesis Client Library (KCL) for my Amazon Kinesis Application?

No, you can also use the Amazon Kinesis API to build your Amazon Kinesis Application. However, we recommend using Amazon Kinesis Client Library (KCL) if applicable, because it performs the heavy-lifting tasks associated with distributed stream processing, making application development more productive.

Q: What are a worker and a record processor generated by Amazon Kinesis Client Library (KCL)?

An Amazon Kinesis Application can have multiple application instances and a worker is the processing unit that maps to each application instance. A record processor is the processing unit that processes data from a shard of an Amazon Kinesis stream. One worker maps to one or more record processors. One record processor maps to one shard and processes records from that shard.

Q: How does my Amazon Kinesis Application use Amazon Kinesis Client Library (KCL)?

At startup, the application calls into Amazon Kinesis Client Library (KCL) to instantiate a worker. This call provides KCL with configuration information for the application, such as the stream name and AWS credentials. This call also passes a reference to an IRecordProcessorFactory implementation. KCL uses this factory to create new record processors as needed to process data from the stream. KCL communicates with these record processors through the IRecordProcessor interface.
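
The IRecordProcessorFactory and IRecordProcessor names above are the Java interfaces. As a rough illustration in Python, here is a record processor sketched against the KCL for Python (the amazon_kclpy package, which runs under KCL's MultiLangDaemon); treat the exact class and method names as assumptions to verify against the KCL documentation:

    from amazon_kclpy import kcl

    class LogProcessor(kcl.RecordProcessorBase):
        """Processes records from one shard of an Amazon Kinesis stream."""

        def initialize(self, shard_id):
            # Called once when this record processor is bound to a shard.
            self.shard_id = shard_id

        def process_records(self, records, checkpointer):
            # Called with each batch of records read from the shard.
            for record in records:
                pass  # application-specific processing goes here
            # Record progress; KCL stores checkpoints in the DynamoDB
            # table described below.
            checkpointer.checkpoint()

        def shutdown(self, checkpointer, reason):
            if reason == "TERMINATE":
                # The shard was closed (e.g., by resharding); checkpoint
                # so processing can move on to the child shards.
                checkpointer.checkpoint()

    if __name__ == "__main__":
        kcl.KCLProcess(LogProcessor()).run()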

Q: What is the Amazon DynamoDB table that Amazon Kinesis Client Library (KCL) creates for my application?

Amazon Kinesis Client Library (KCL) automatically creates an Amazon DynamoDB table for each Amazon Kinesis Application to track and maintain state information such as resharding events and sequence number checkpoints. The DynamoDB table shares its name with the application, so you need to make sure your application name doesn't conflict with any existing DynamoDB table under the same account within the same region.

All workers associated with the same application name are assumed to be working together on the same Amazon Kinesis stream. If you run an additional instance of the same application code, but with a different application name, KCL treats the second instance as an entirely separate application also operating on the same stream.

Please note that your account will be charged for the costs associated with the Amazon DynamoDB table in addition to the costs associated with Amazon Kinesis.

Q: How can I automatically scale up the processing capacity of my Amazon Kinesis Application using Amazon Kinesis Client Library (KCL)?

You can create multiple instances of your Amazon Kinesis Application and have these application instances run across a set of Amazon EC2 instances that are part of an Auto Scaling group. As processing demand increases, an Amazon EC2 instance running your application instance is automatically instantiated. Amazon Kinesis Client Library (KCL) generates a worker for this new instance and automatically moves record processors from overloaded existing instances to the new instance.

Q: Why does a GetRecords call return an empty result while there is data within my Amazon Kinesis stream?

One possible reason is that there is no record at the position specified by the current shard iterator. This can happen even if you are using TRIM_HORIZON as the shard iterator type. An Amazon Kinesis stream represents a continuous stream of data. You should call the GetRecords operation in a loop; records will be returned when the shard iterator advances to the position where they are stored.
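
A minimal polling loop along those lines, sketched with the AWS SDK for Python (boto3); the stream name and shard ID are placeholders:

    import time
    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    # Start reading from the oldest available record in one shard.
    iterator = kinesis.get_shard_iterator(
        StreamName="my-stream",
        ShardId="shardId-000000000000",
        ShardIteratorType="TRIM_HORIZON",
    )["ShardIterator"]

    while True:
        response = kinesis.get_records(ShardIterator=iterator, Limit=100)
        for record in response["Records"]:
            print(record["PartitionKey"], record["Data"])
        # An empty Records list is normal; keep looping with the
        # NextShardIterator until records appear at the new position.
        iterator = response["NextShardIterator"]
        time.sleep(1)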

Q: What happens if the capacity limits of an Amazon Kinesis stream are exceeded while Amazon Kinesis Application reads data from the stream?

The capacity limits of an Amazon Kinesis stream are defined by the number of shards within the stream. The limits can be exceeded by either data throughput or the number of read data calls. While the capacity limits are exceeded, the read data call will be rejected with a ProvisionedThroughputExceeded exception. If this is due to a temporary rise of the stream’s output data rate, retry by the Amazon Kinesis Application will eventually lead to completion of the requests. If this is due to a sustained rise of the stream’s output data rate, you should increase the number of shards within your stream to provide enough capacity for the read data calls to consistently succeed. In both cases, Amazon CloudWatch metrics allow you to learn about the change of the stream’s output data rate and the occurrence of ProvisionedThroughputExceeded exceptions.

Q: How do I change the throughput of my Amazon Kinesis stream?

You can change the throughput of an Amazon Kinesis stream by adjusting the number of shards within the stream (resharding). There are two types of resharding operations: shard split and shard merge. In a shard split, a single shard is divided into two shards, which increases the throughput of the stream. In a shard merge, two shards are merged into a single shard, which decreases the throughput of the stream. For more information about resharding, see Resharding a Stream.
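
As a hedged sketch with the AWS SDK for Python (boto3), the two operations look like the following; the stream name, shard IDs, and hash key are placeholders you would derive from the DescribeStream output:

    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    # Shard split: divide one shard in two at a chosen hash key.
    # (A real split point comes from the shard's HashKeyRange in
    # the DescribeStream output; this value is the midpoint of the
    # full hash key space.)
    kinesis.split_shard(
        StreamName="my-stream",
        ShardToSplit="shardId-000000000000",
        NewStartingHashKey="170141183460469231731687303715884105728",
    )

    # Shard merge: combine two adjacent shards into one.
    kinesis.merge_shards(
        StreamName="my-stream",
        ShardToMerge="shardId-000000000001",
        AdjacentShardToMerge="shardId-000000000002",
    )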

Q: How often can I change the throughput of my Amazon Kinesis stream, and how long does it take?

A resharding operation such as shard split or shard merge takes a few seconds. You can only perform one resharding operation at a time. Therefore, for an Amazon Kinesis stream with only one shard, it takes a few seconds to double the throughput by splitting one shard. For a stream with 1000 shards, it takes 30K seconds (8.3 hours) to double the throughput by splitting 1000 shards. We recommend increasing the throughput of your stream ahead of the time when extra throughput is needed.

Q: Does Amazon Kinesis remain available when I change the throughput of my Amazon Kinesis stream via resharding?

Yes. You can continue adding data to and reading data from your Amazon Kinesis stream while resharding is in progress.

Q: How do I monitor the operations and performance of my Amazon Kinesis stream?

Amazon Kinesis provides Amazon CloudWatch metrics for your streams. As noted above, these metrics let you track measures such as the stream’s input and output data rates and the occurrence of ProvisionedThroughputExceeded exceptions.

Q: How do I effectively manage my Amazon Kinesis streams and the costs associated with these streams?

Amazon Kinesis allows you to tag your Amazon Kinesis streams for easier resource and cost management. A tag is a user-defined label expressed as a key-value pair that helps organize AWS resources. For example, you can tag your streams by cost centers so that you can categorize and track your Amazon Kinesis costs based on cost centers. For more information about Amazon Kinesis tagging, see Tagging Your Amazon Kinesis Streams.
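
For example, a brief sketch of tagging a stream with a cost center using the AWS SDK for Python (boto3); the stream name and tag values are hypothetical:

    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    # Tag the stream with a cost center so its charges can be
    # categorized and tracked per team.
    kinesis.add_tags_to_stream(
        StreamName="my-stream",
        Tags={"CostCenter": "analytics-1234"},
    )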

Q: Does my PUT Record cost change by using PutRecords operation instead of PutRecord operation?

PUT Record charge is calculated based on the number of records added to your Amazon Kinesis stream. The PUT Record cost is the same whether you use the PutRecords operation or the PutRecord operation. For example, you can add 10 records to your stream by using either one PutRecords call or 10 PutRecord calls; the PUT Record cost is the same in both cases.

Q: Other than Amazon Kinesis costs, are there any other costs that might be incurred by my Amazon Kinesis usage?

Yes. Other AWS services used alongside Amazon Kinesis are charged separately, such as the Amazon EC2 instances that run your Amazon Kinesis Application and the Amazon DynamoDB table that Amazon Kinesis Client Library (KCL) creates to track application state.