Using the Client

To contact Amazon Kinesis Data Firehose with the SDK, use the New function to
create a new service client. With that client you can make API requests to the
service. These clients are safe to use concurrently.
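
For example, a minimal sketch of constructing a client from a shared session;
the region shown is illustrative:

    package main

    import (
        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/firehose"
    )

    func main() {
        // session.Must panics if the session cannot be created, which is
        // acceptable during program initialization.
        sess := session.Must(session.NewSession(&aws.Config{
            Region: aws.String("us-west-2"), // illustrative region
        }))

        // New returns a service client; it is safe for concurrent use.
        svc := firehose.New(sess)
        _ = svc
    }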

const (
    // ErrCodeConcurrentModificationException for service response error code
    // "ConcurrentModificationException".
    //
    // Another modification has already happened. Fetch VersionId again and use
    // it to update the destination.
    ErrCodeConcurrentModificationException = "ConcurrentModificationException"

    // ErrCodeInvalidArgumentException for service response error code
    // "InvalidArgumentException".
    //
    // The specified input parameter has a value that is not valid.
    ErrCodeInvalidArgumentException = "InvalidArgumentException"

    // ErrCodeLimitExceededException for service response error code
    // "LimitExceededException".
    //
    // You have already reached the limit for a requested resource.
    ErrCodeLimitExceededException = "LimitExceededException"

    // ErrCodeResourceInUseException for service response error code
    // "ResourceInUseException".
    //
    // The resource is already in use and not available for this operation.
    ErrCodeResourceInUseException = "ResourceInUseException"

    // ErrCodeResourceNotFoundException for service response error code
    // "ResourceNotFoundException".
    //
    // The specified resource could not be found.
    ErrCodeResourceNotFoundException = "ResourceNotFoundException"

    // ErrCodeServiceUnavailableException for service response error code
    // "ServiceUnavailableException".
    //
    // The service is unavailable. Back off and retry the operation. If you continue
    // to see the exception, throughput limits for the delivery stream may have
    // been exceeded. For more information about limits and how to request an increase,
    // see Amazon Kinesis Data Firehose Limits (http://docs.aws.amazon.com/firehose/latest/dev/limits.html).
    ErrCodeServiceUnavailableException = "ServiceUnavailableException"
)

type BufferingHints struct {
    // Buffer incoming data for the specified period of time, in seconds, before
    // delivering it to the destination. The default value is 300.
    IntervalInSeconds *int64 `min:"60" type:"integer"`

    // Buffer incoming data to the specified size, in MBs, before delivering it
    // to the destination. The default value is 5.
    //
    // We recommend setting this parameter to a value greater than the amount of
    // data you typically ingest into the delivery stream in 10 seconds. For example,
    // if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
    SizeInMBs *int64 `min:"1" type:"integer"`
    // contains filtered or unexported fields
}

Describes hints for the buffering to perform before delivering data to the
destination. These options are treated as hints, and therefore Kinesis Data
Firehose might choose to use different values when it is optimal.
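
A short sketch of supplying hints, reusing the aws and firehose imports from
the client example above; the values are illustrative:

    hints := &firehose.BufferingHints{
        // Deliver after at most 60 seconds of buffering ...
        IntervalInSeconds: aws.Int64(60),
        // ... or once 10 MB has accumulated, whichever comes first.
        SizeInMBs: aws.Int64(10),
    }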

type CopyCommand struct {
    // Optional parameters to use with the Amazon Redshift COPY command. For more
    // information, see the "Optional Parameters" section of Amazon Redshift COPY
    // command (http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html). Some
    // possible examples that would apply to Kinesis Data Firehose are as follows:
    //
    // delimiter '\t' lzop; - fields are delimited with "\t" (TAB character) and
    // compressed using lzop.
    //
    // delimiter '|' - fields are delimited with "|" (this is the default delimiter).
    //
    // delimiter '|' escape - the delimiter should be escaped.
    //
    // fixedwidth 'venueid:3,venuename:25,venuecity:12,venuestate:2,venueseats:6'
    // - fields are fixed width in the source, with each width specified after every
    // column in the table.
    //
    // JSON 's3://mybucket/jsonpaths.txt' - data is in JSON format, and the path
    // specified is the format of the data.
    //
    // For more examples, see Amazon Redshift COPY command examples (http://docs.aws.amazon.com/redshift/latest/dg/r_COPY_command_examples.html).
    CopyOptions *string `type:"string"`

    // A comma-separated list of column names.
    DataTableColumns *string `type:"string"`

    // The name of the target table. The table must already exist in the database.
    //
    // DataTableName is a required field
    DataTableName *string `min:"1" type:"string" required:"true"`
    // contains filtered or unexported fields
}

type CreateDeliveryStreamInput struct {
    // The name of the delivery stream. This name must be unique per AWS account
    // in the same AWS Region. If the delivery streams are in different accounts
    // or different Regions, you can have multiple delivery streams with the same
    // name.
    //
    // DeliveryStreamName is a required field
    DeliveryStreamName *string `min:"1" type:"string" required:"true"`

    // The delivery stream type. This parameter can be one of the following values:
    //
    //    * DirectPut: Provider applications access the delivery stream directly.
    //
    //    * KinesisStreamAsSource: The delivery stream uses a Kinesis data stream
    //    as a source.
    DeliveryStreamType *string `type:"string" enum:"DeliveryStreamType"`

    // The destination in Amazon ES. You can specify only one destination.
    ElasticsearchDestinationConfiguration *ElasticsearchDestinationConfiguration `type:"structure"`

    // The destination in Amazon S3. You can specify only one destination.
    ExtendedS3DestinationConfiguration *ExtendedS3DestinationConfiguration `type:"structure"`

    // When a Kinesis data stream is used as the source for the delivery stream,
    // a KinesisStreamSourceConfiguration containing the Kinesis data stream Amazon
    // Resource Name (ARN) and the role ARN for the source stream.
    KinesisStreamSourceConfiguration *KinesisStreamSourceConfiguration `type:"structure"`

    // The destination in Amazon Redshift. You can specify only one destination.
    RedshiftDestinationConfiguration *RedshiftDestinationConfiguration `type:"structure"`

    // [Deprecated] The destination in Amazon S3. You can specify only one destination.
    //
    // Deprecated: S3DestinationConfiguration has been deprecated
    S3DestinationConfiguration *S3DestinationConfiguration `deprecated:"true" type:"structure"`

    // The destination in Splunk. You can specify only one destination.
    SplunkDestinationConfiguration *SplunkDestinationConfiguration `type:"structure"`

    // A set of tags to assign to the delivery stream. A tag is a key-value pair
    // that you can define and assign to AWS resources. Tags are metadata. For example,
    // you can add friendly names and descriptions or other types of information
    // that can help you distinguish the delivery stream. For more information about
    // tags, see Using Cost Allocation Tags (https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html)
    // in the AWS Billing and Cost Management User Guide.
    //
    // You can specify up to 50 tags when creating a delivery stream.
    Tags []*Tag `min:"1" type:"list"`
    // contains filtered or unexported fields
}

type DataFormatConversionConfiguration struct {
    // Defaults to true. Set it to false if you want to disable format conversion
    // while preserving the configuration details.
    Enabled *bool `type:"boolean"`

    // Specifies the deserializer that you want Kinesis Data Firehose to use to
    // convert the format of your data from JSON.
    InputFormatConfiguration *InputFormatConfiguration `type:"structure"`

    // Specifies the serializer that you want Kinesis Data Firehose to use to convert
    // the format of your data to the Parquet or ORC format.
    OutputFormatConfiguration *OutputFormatConfiguration `type:"structure"`

    // Specifies the AWS Glue Data Catalog table that contains the column information.
    SchemaConfiguration *SchemaConfiguration `type:"structure"`
    // contains filtered or unexported fields
}

Specifies that you want Kinesis Data Firehose to convert data from the JSON
format to the Parquet or ORC format before writing it to Amazon S3. Kinesis
Data Firehose uses the serializer and deserializer that you specify, in addition
to the column information from the AWS Glue table, to deserialize your input
data from JSON and then serialize it to the Parquet or ORC format. For more
information, see Kinesis Data Firehose Record Format Conversion (https://docs.aws.amazon.com/firehose/latest/dev/record-format-conversion.html).
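
The following sketch wires these pieces together: JSON read with the OpenX
SerDe, Parquet written on output, and column information drawn from a Glue
table. The database, table, region, and role values are placeholders:

    conv := &firehose.DataFormatConversionConfiguration{
        Enabled: aws.Bool(true),
        InputFormatConfiguration: &firehose.InputFormatConfiguration{
            Deserializer: &firehose.Deserializer{
                OpenXJsonSerDe: &firehose.OpenXJsonSerDe{},
            },
        },
        OutputFormatConfiguration: &firehose.OutputFormatConfiguration{
            Serializer: &firehose.Serializer{
                ParquetSerDe: &firehose.ParquetSerDe{},
            },
        },
        SchemaConfiguration: &firehose.SchemaConfiguration{
            // Placeholder Glue catalog coordinates.
            DatabaseName: aws.String("my_database"),
            TableName:    aws.String("my_table"),
            Region:       aws.String("us-west-2"),
            RoleARN:      aws.String("arn:aws:iam::123456789012:role/firehose-glue-role"),
        },
    }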

type DeliveryStreamDescription struct {
    // The date and time that the delivery stream was created.
    CreateTimestamp *time.Time `type:"timestamp"`

    // The Amazon Resource Name (ARN) of the delivery stream. For more information,
    // see Amazon Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).
    //
    // DeliveryStreamARN is a required field
    DeliveryStreamARN *string `min:"1" type:"string" required:"true"`

    // Indicates the server-side encryption (SSE) status for the delivery stream.
    DeliveryStreamEncryptionConfiguration *DeliveryStreamEncryptionConfiguration `type:"structure"`

    // The name of the delivery stream.
    //
    // DeliveryStreamName is a required field
    DeliveryStreamName *string `min:"1" type:"string" required:"true"`

    // The status of the delivery stream.
    //
    // DeliveryStreamStatus is a required field
    DeliveryStreamStatus *string `type:"string" required:"true" enum:"DeliveryStreamStatus"`

    // The delivery stream type. This can be one of the following values:
    //
    //    * DirectPut: Provider applications access the delivery stream directly.
    //
    //    * KinesisStreamAsSource: The delivery stream uses a Kinesis data stream
    //    as a source.
    //
    // DeliveryStreamType is a required field
    DeliveryStreamType *string `type:"string" required:"true" enum:"DeliveryStreamType"`

    // The destinations.
    //
    // Destinations is a required field
    Destinations []*DestinationDescription `type:"list" required:"true"`

    // Indicates whether there are more destinations available to list.
    //
    // HasMoreDestinations is a required field
    HasMoreDestinations *bool `type:"boolean" required:"true"`

    // The date and time that the delivery stream was last updated.
    LastUpdateTimestamp *time.Time `type:"timestamp"`

    // If the DeliveryStreamType parameter is KinesisStreamAsSource, a SourceDescription
    // object describing the source Kinesis data stream.
    Source *SourceDescription `type:"structure"`

    // Each time the destination is updated for a delivery stream, the version ID
    // is changed, and the current version ID is required when updating the destination.
    // This is so that the service knows it is applying the changes to the correct
    // version of the delivery stream.
    //
    // VersionId is a required field
    VersionId *string `min:"1" type:"string" required:"true"`
    // contains filtered or unexported fields
}

type DeliveryStreamEncryptionConfiguration struct {
    // For a full description of the different values of this status, see StartDeliveryStreamEncryption
    // and StopDeliveryStreamEncryption.
    Status *string `type:"string" enum:"DeliveryStreamEncryptionStatus"`
    // contains filtered or unexported fields
}

Indicates the server-side encryption (SSE) status for the delivery stream.

type DescribeDeliveryStreamInput struct {
    // The name of the delivery stream.
    //
    // DeliveryStreamName is a required field
    DeliveryStreamName *string `min:"1" type:"string" required:"true"`

    // The ID of the destination to start returning the destination information.
    // Kinesis Data Firehose supports one destination per delivery stream.
    ExclusiveStartDestinationId *string `min:"1" type:"string"`

    // The limit on the number of destinations to return. You can have one destination
    // per delivery stream.
    Limit *int64 `min:"1" type:"integer"`
    // contains filtered or unexported fields
}

type Deserializer struct {
    // The native Hive / HCatalog JsonSerDe. Used by Kinesis Data Firehose for deserializing
    // data, which means converting it from the JSON format in preparation for serializing
    // it to the Parquet or ORC format. This is one of two deserializers you can
    // choose, depending on which one offers the functionality you need. The other
    // option is the OpenX SerDe.
    HiveJsonSerDe *HiveJsonSerDe `type:"structure"`

    // The OpenX SerDe. Used by Kinesis Data Firehose for deserializing data, which
    // means converting it from the JSON format in preparation for serializing it
    // to the Parquet or ORC format. This is one of two deserializers you can choose,
    // depending on which one offers the functionality you need. The other option
    // is the native Hive / HCatalog JsonSerDe.
    OpenXJsonSerDe *OpenXJsonSerDe `type:"structure"`
    // contains filtered or unexported fields
}

type ElasticsearchBufferingHints struct {
    // Buffer incoming data for the specified period of time, in seconds, before
    // delivering it to the destination. The default value is 300 (5 minutes).
    IntervalInSeconds *int64 `min:"60" type:"integer"`

    // Buffer incoming data to the specified size, in MBs, before delivering it
    // to the destination. The default value is 5.
    //
    // We recommend setting this parameter to a value greater than the amount of
    // data you typically ingest into the delivery stream in 10 seconds. For example,
    // if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
    SizeInMBs *int64 `min:"1" type:"integer"`
    // contains filtered or unexported fields
}

Describes the buffering to perform before delivering data to the Amazon ES
destination.

type ElasticsearchDestinationConfiguration struct {
    // The buffering options. If no value is specified, the default values for
    // ElasticsearchBufferingHints are used.
    BufferingHints *ElasticsearchBufferingHints `type:"structure"`

    // The Amazon CloudWatch logging options for your delivery stream.
    CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"`

    // The ARN of the Amazon ES domain. The IAM role must have permissions for DescribeElasticsearchDomain,
    // DescribeElasticsearchDomains, and DescribeElasticsearchDomainConfig after
    // assuming the role specified in RoleARN. For more information, see Amazon
    // Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).
    //
    // DomainARN is a required field
    DomainARN *string `min:"1" type:"string" required:"true"`

    // The Elasticsearch index name.
    //
    // IndexName is a required field
    IndexName *string `min:"1" type:"string" required:"true"`

    // The Elasticsearch index rotation period. Index rotation appends a timestamp
    // to the IndexName to facilitate the expiration of old data. For more information,
    // see Index Rotation for the Amazon ES Destination (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#es-index-rotation).
    // The default value is OneDay.
    IndexRotationPeriod *string `type:"string" enum:"ElasticsearchIndexRotationPeriod"`

    // The data processing configuration.
    ProcessingConfiguration *ProcessingConfiguration `type:"structure"`

    // The retry behavior in case Kinesis Data Firehose is unable to deliver documents
    // to Amazon ES. The default value is 300 (5 minutes).
    RetryOptions *ElasticsearchRetryOptions `type:"structure"`

    // The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data
    // Firehose for calling the Amazon ES Configuration API and for indexing documents.
    // For more information, see Grant Kinesis Data Firehose Access to an Amazon
    // S3 Destination (http://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html#using-iam-s3)
    // and Amazon Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).
    //
    // RoleARN is a required field
    RoleARN *string `min:"1" type:"string" required:"true"`

    // Defines how documents should be delivered to Amazon S3. When it is set to
    // FailedDocumentsOnly, Kinesis Data Firehose writes any documents that could
    // not be indexed to the configured Amazon S3 destination, with elasticsearch-failed/
    // appended to the key prefix. When set to AllDocuments, Kinesis Data Firehose
    // delivers all incoming records to Amazon S3, and also writes failed documents
    // with elasticsearch-failed/ appended to the prefix. For more information,
    // see Amazon S3 Backup for the Amazon ES Destination (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#es-s3-backup).
    // Default value is FailedDocumentsOnly.
    S3BackupMode *string `type:"string" enum:"ElasticsearchS3BackupMode"`

    // The configuration for the backup Amazon S3 location.
    //
    // S3Configuration is a required field
    S3Configuration *S3DestinationConfiguration `type:"structure" required:"true"`

    // The Elasticsearch type name. For Elasticsearch 6.x, there can be only one
    // type per index. If you try to specify a new type for an existing index that
    // already has another type, Kinesis Data Firehose returns an error during run
    // time.
    //
    // TypeName is a required field
    TypeName *string `min:"1" type:"string" required:"true"`
    // contains filtered or unexported fields
}

type ElasticsearchRetryOptions struct {
    // After an initial failure to deliver to Amazon ES, the total amount of time
    // during which Kinesis Data Firehose retries delivery (including the first
    // attempt). After this time has elapsed, the failed documents are written to
    // Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero)
    // results in no retries.
    DurationInSeconds *int64 `type:"integer"`
    // contains filtered or unexported fields
}

type ExtendedS3DestinationConfiguration struct {
    // The ARN of the S3 bucket. For more information, see Amazon Resource Names
    // (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).
    //
    // BucketARN is a required field
    BucketARN *string `min:"1" type:"string" required:"true"`

    // The buffering option.
    BufferingHints *BufferingHints `type:"structure"`

    // The Amazon CloudWatch logging options for your delivery stream.
    CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"`

    // The compression format. If no value is specified, the default is UNCOMPRESSED.
    CompressionFormat *string `type:"string" enum:"CompressionFormat"`

    // The serializer, deserializer, and schema for converting data from the JSON
    // format to the Parquet or ORC format before writing it to Amazon S3.
    DataFormatConversionConfiguration *DataFormatConversionConfiguration `type:"structure"`

    // The encryption configuration. If no value is specified, the default is no
    // encryption.
    EncryptionConfiguration *EncryptionConfiguration `type:"structure"`

    // A prefix that Kinesis Data Firehose evaluates and adds to failed records
    // before writing them to S3. This prefix appears immediately following the
    // bucket name.
    ErrorOutputPrefix *string `type:"string"`

    // The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered
    // Amazon S3 files. You can specify an extra prefix to be added in front of
    // the time format prefix. If the prefix ends with a slash, it appears as a
    // folder in the S3 bucket. For more information, see Amazon S3 Object Name
    // Format (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#s3-object-name)
    // in the Amazon Kinesis Data Firehose Developer Guide.
    Prefix *string `type:"string"`

    // The data processing configuration.
    ProcessingConfiguration *ProcessingConfiguration `type:"structure"`

    // The Amazon Resource Name (ARN) of the AWS credentials. For more information,
    // see Amazon Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).
    //
    // RoleARN is a required field
    RoleARN *string `min:"1" type:"string" required:"true"`

    // The configuration for backup in Amazon S3.
    S3BackupConfiguration *S3DestinationConfiguration `type:"structure"`

    // The Amazon S3 backup mode.
    S3BackupMode *string `type:"string" enum:"S3BackupMode"`
    // contains filtered or unexported fields
}

type ExtendedS3DestinationDescription struct {
    // The ARN of the S3 bucket. For more information, see Amazon Resource Names
    // (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).
    //
    // BucketARN is a required field
    BucketARN *string `min:"1" type:"string" required:"true"`

    // The buffering option.
    //
    // BufferingHints is a required field
    BufferingHints *BufferingHints `type:"structure" required:"true"`

    // The Amazon CloudWatch logging options for your delivery stream.
    CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"`

    // The compression format. If no value is specified, the default is UNCOMPRESSED.
    //
    // CompressionFormat is a required field
    CompressionFormat *string `type:"string" required:"true" enum:"CompressionFormat"`

    // The serializer, deserializer, and schema for converting data from the JSON
    // format to the Parquet or ORC format before writing it to Amazon S3.
    DataFormatConversionConfiguration *DataFormatConversionConfiguration `type:"structure"`

    // The encryption configuration. If no value is specified, the default is no
    // encryption.
    //
    // EncryptionConfiguration is a required field
    EncryptionConfiguration *EncryptionConfiguration `type:"structure" required:"true"`

    // A prefix that Kinesis Data Firehose evaluates and adds to failed records
    // before writing them to S3. This prefix appears immediately following the
    // bucket name.
    ErrorOutputPrefix *string `type:"string"`

    // The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered
    // Amazon S3 files. You can specify an extra prefix to be added in front of
    // the time format prefix. If the prefix ends with a slash, it appears as a
    // folder in the S3 bucket. For more information, see Amazon S3 Object Name
    // Format (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#s3-object-name)
    // in the Amazon Kinesis Data Firehose Developer Guide.
    Prefix *string `type:"string"`

    // The data processing configuration.
    ProcessingConfiguration *ProcessingConfiguration `type:"structure"`

    // The Amazon Resource Name (ARN) of the AWS credentials. For more information,
    // see Amazon Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).
    //
    // RoleARN is a required field
    RoleARN *string `min:"1" type:"string" required:"true"`

    // The configuration for backup in Amazon S3.
    S3BackupDescription *S3DestinationDescription `type:"structure"`

    // The Amazon S3 backup mode.
    S3BackupMode *string `type:"string" enum:"S3BackupMode"`
    // contains filtered or unexported fields
}

Creates a Kinesis Data Firehose delivery stream.

This is an asynchronous operation that immediately returns. The initial status
of the delivery stream is CREATING. After the delivery stream is created, its
status is ACTIVE and it now accepts data. Attempts to send data to a delivery
stream that is not in the ACTIVE state cause an exception. To check the state
of a delivery stream, use DescribeDeliveryStream.

A Kinesis Data Firehose delivery stream can be configured to receive records
directly from providers using PutRecord or PutRecordBatch, or it can be configured
to use an existing Kinesis stream as its source. To specify a Kinesis data
stream as input, set the DeliveryStreamType parameter to KinesisStreamAsSource,
and provide the Kinesis stream Amazon Resource Name (ARN) and role ARN in
the KinesisStreamSourceConfiguration parameter.

A delivery stream is configured with a single destination: Amazon S3, Amazon
ES, Amazon Redshift, or Splunk. You must specify only one of the following
destination configuration parameters: ExtendedS3DestinationConfiguration,
S3DestinationConfiguration, ElasticsearchDestinationConfiguration, RedshiftDestinationConfiguration,
or SplunkDestinationConfiguration.

When you specify S3DestinationConfiguration, you can also provide the following
optional values: BufferingHints, EncryptionConfiguration, and CompressionFormat.
By default, if no BufferingHints value is provided, Kinesis Data Firehose
buffers data up to 5 MB or for 5 minutes, whichever condition is satisfied
first. BufferingHints is a hint, so there are some cases where the service
cannot adhere to these conditions strictly. For example, record boundaries
might be such that the size is a little over or under the configured buffering
size. By default, no encryption is performed. We strongly recommend that
you enable encryption to ensure secure data storage in Amazon S3.

A few notes about Amazon Redshift as a destination:

* An Amazon Redshift destination requires an S3 bucket as an intermediate
location. Kinesis Data Firehose first delivers data to Amazon S3 and then
uses COPY syntax to load data into an Amazon Redshift table. This is specified
in the RedshiftDestinationConfiguration.S3Configuration parameter.
* The compression formats SNAPPY or ZIP cannot be specified in RedshiftDestinationConfiguration.S3Configuration
because the Amazon Redshift COPY operation that reads from the S3 bucket
doesn't support these compression formats.
* We strongly recommend that you use the user name and password you provide
exclusively with Kinesis Data Firehose, and that the permissions for the
account are restricted to Amazon Redshift INSERT permissions.

Kinesis Data Firehose assumes the IAM role that is configured as part of
the destination. The role should allow the Kinesis Data Firehose principal
to assume the role, and the role should have permissions that allow the service
to deliver the data. For more information, see Grant Kinesis Data Firehose
Access to an Amazon S3 Destination (http://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html#using-iam-s3)
in the Amazon Kinesis Data Firehose Developer Guide.
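
As a sketch, a DirectPut stream with an extended S3 destination might be
created as follows; the stream name and ARNs are hypothetical, and fmt is
assumed to be imported:

    func createStream(svc *firehose.Firehose) error {
        out, err := svc.CreateDeliveryStream(&firehose.CreateDeliveryStreamInput{
            DeliveryStreamName: aws.String("my-stream"), // hypothetical name
            DeliveryStreamType: aws.String(firehose.DeliveryStreamTypeDirectPut),
            ExtendedS3DestinationConfiguration: &firehose.ExtendedS3DestinationConfiguration{
                BucketARN: aws.String("arn:aws:s3:::my-bucket"),                       // hypothetical bucket
                RoleARN:   aws.String("arn:aws:iam::123456789012:role/firehose-role"), // hypothetical role
            },
        })
        if err != nil {
            return err
        }
        fmt.Println(aws.StringValue(out.DeliveryStreamARN))
        return nil
    }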

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

* ErrCodeInvalidArgumentException "InvalidArgumentException"
The specified input parameter has a value that is not valid.
* ErrCodeLimitExceededException "LimitExceededException"
You have already reached the limit for a requested resource.
* ErrCodeResourceInUseException "ResourceInUseException"
The resource is already in use and not available for this operation.
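
A sketch of inspecting these codes with a type assertion after a call such as
CreateDeliveryStream above, assuming the awserr and log packages are imported:

    if err != nil {
        if aerr, ok := err.(awserr.Error); ok {
            switch aerr.Code() {
            case firehose.ErrCodeLimitExceededException:
                // Too many delivery streams; delete one or request a limit increase.
                log.Println("limit exceeded:", aerr.Message())
            case firehose.ErrCodeResourceInUseException:
                log.Println("resource in use:", aerr.Message())
            default:
                log.Println(aerr.Code(), aerr.Message())
            }
        }
    }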

CreateDeliveryStreamRequest generates a "aws/request.Request" representing the
client's request for the CreateDeliveryStream operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See CreateDeliveryStream for more information on using the CreateDeliveryStream
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.
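
For example, a sketch of the request form that injects a custom header before
sending; the header name is illustrative, and svc, input, and fmt are assumed
from the examples above:

    req, out := svc.CreateDeliveryStreamRequest(input)
    // Inject a custom header into the underlying HTTP request before sending.
    req.HTTPRequest.Header.Set("X-Example-Audit-Id", "42") // illustrative header
    if err := req.Send(); err == nil {
        fmt.Println(out)
    }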

CreateDeliveryStreamWithContext is the same as CreateDeliveryStream with the addition of
the ability to pass a context and additional request options.

See CreateDeliveryStream for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil, a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.
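
A sketch of bounding the call with a deadline via the standard context and
time packages:

    func createWithTimeout(svc *firehose.Firehose, input *firehose.CreateDeliveryStreamInput) error {
        // Give up if the request has not completed within 30 seconds.
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        // A canceled or expired context surfaces as an error from the call.
        _, err := svc.CreateDeliveryStreamWithContext(ctx, input)
        return err
    }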

Deletes a delivery stream and its data.

You can delete a delivery stream only if it is in the ACTIVE or DELETING state,
and not in the CREATING state. While the deletion request is in process, the
delivery stream is in the DELETING state.

To check the state of a delivery stream, use DescribeDeliveryStream.

While the delivery stream is in the DELETING state, the service might continue
to accept records, but it doesn't make any guarantees with respect to delivering
the data. Therefore, as a best practice, you should first stop any applications
that are sending records before deleting a delivery stream.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

* ErrCodeResourceInUseException "ResourceInUseException"
The resource is already in use and not available for this operation.
* ErrCodeResourceNotFoundException "ResourceNotFoundException"
The specified resource could not be found.

DeleteDeliveryStreamRequest generates a "aws/request.Request" representing the
client's request for the DeleteDeliveryStream operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See DeleteDeliveryStream for more information on using the DeleteDeliveryStream
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

DeleteDeliveryStreamWithContext is the same as DeleteDeliveryStream with the addition of
the ability to pass a context and additional request options.

See DeleteDeliveryStream for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil, a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Describes the specified delivery stream and gets the status. For example,
after your delivery stream is created, call DescribeDeliveryStream to see
whether the delivery stream is ACTIVE and therefore ready for data to be
sent to it.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

DescribeDeliveryStreamRequest generates a "aws/request.Request" representing the
client's request for the DescribeDeliveryStream operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See DescribeDeliveryStream for more information on using the DescribeDeliveryStream
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

DescribeDeliveryStreamWithContext is the same as DescribeDeliveryStream with the addition of
the ability to pass a context and additional request options.

See DescribeDeliveryStream for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil, a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Lists your delivery streams.

The number of delivery streams might be too large to return using a single
call to ListDeliveryStreams. You can limit the number of delivery streams
returned using the Limit parameter. To determine whether there are more
delivery streams to list, check the value of HasMoreDeliveryStreams in the
output. If there are more delivery streams to list, you can request them
by calling this operation again and setting the ExclusiveStartDeliveryStreamName
parameter to the name of the last delivery stream returned in the last call.
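
A sketch of that pagination loop:

    func listAllStreams(svc *firehose.Firehose) ([]string, error) {
        var names []string
        input := &firehose.ListDeliveryStreamsInput{Limit: aws.Int64(10)}
        for {
            out, err := svc.ListDeliveryStreams(input)
            if err != nil {
                return nil, err
            }
            for _, n := range out.DeliveryStreamNames {
                names = append(names, aws.StringValue(n))
            }
            if !aws.BoolValue(out.HasMoreDeliveryStreams) || len(out.DeliveryStreamNames) == 0 {
                return names, nil
            }
            // Resume after the last name returned by the previous call.
            input.ExclusiveStartDeliveryStreamName =
                out.DeliveryStreamNames[len(out.DeliveryStreamNames)-1]
        }
    }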

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

ListDeliveryStreamsRequest generates a "aws/request.Request" representing the
client's request for the ListDeliveryStreams operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See ListDeliveryStreams for more information on using the ListDeliveryStreams
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

ListDeliveryStreamsWithContext is the same as ListDeliveryStreams with the addition of
the ability to pass a context and additional request options.

See ListDeliveryStreams for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil, a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Lists the tags for the specified delivery stream. This operation has a limit
of five transactions per second per account.

* ErrCodeResourceNotFoundException "ResourceNotFoundException"
The specified resource could not be found.
* ErrCodeInvalidArgumentException "InvalidArgumentException"
The specified input parameter has a value that is not valid.
* ErrCodeLimitExceededException "LimitExceededException"
You have already reached the limit for a requested resource.

ListTagsForDeliveryStreamRequest generates a "aws/request.Request" representing the
client's request for the ListTagsForDeliveryStream operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See ListTagsForDeliveryStream for more information on using the ListTagsForDeliveryStream
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

ListTagsForDeliveryStreamWithContext is the same as ListTagsForDeliveryStream with the addition of
the ability to pass a context and additional request options.

See ListTagsForDeliveryStream for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil, a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Writes a single data record into an Amazon Kinesis Data Firehose delivery
stream. To write multiple data records into a delivery stream, use PutRecordBatch.
Applications using these operations are referred to as producers.

By default, each delivery stream can take in up to 2,000 transactions per
second, 5,000 records per second, or 5 MB per second. If you use PutRecord
and PutRecordBatch, the limits are an aggregate across these two operations
for each delivery stream. For more information about limits and how to request
an increase, see Amazon Kinesis Data Firehose Limits (http://docs.aws.amazon.com/firehose/latest/dev/limits.html).

You must specify the name of the delivery stream and the data record when
using PutRecord. The data record consists of a data blob that can be up to
1,000 KB in size, and any kind of data. For example, it can be a segment
from a log file, geographic location data, website clickstream data, and
so on.

Kinesis Data Firehose buffers records before delivering them to the destination.
To disambiguate the data blobs at the destination, a common solution is to
use delimiters in the data, such as a newline (\n) or some other character
unique within the data. This allows the consumer application to parse individual
data items when reading the data from the destination.
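
For example, a sketch that appends a newline to each record before sending;
the stream name is hypothetical:

    func putLine(svc *firehose.Firehose, line string) (string, error) {
        out, err := svc.PutRecord(&firehose.PutRecordInput{
            DeliveryStreamName: aws.String("my-stream"), // hypothetical stream
            Record: &firehose.Record{
                // Append a newline so consumers can split records at the destination.
                Data: []byte(line + "\n"),
            },
        })
        if err != nil {
            return "", err
        }
        // RecordId uniquely identifies the record, e.g. for audit trails.
        return aws.StringValue(out.RecordId), nil
    }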

The PutRecord operation returns a RecordId, which is a unique string assigned
to each record. Producer applications can use this ID for purposes such as
auditability and investigation.

If the PutRecord operation throws a ServiceUnavailableException, back off
and retry. If the exception persists, it is possible that the throughput
limits have been exceeded for the delivery stream.

Data records sent to Kinesis Data Firehose are stored for 24 hours from the
time they are added to a delivery stream as it tries to send the records
to the destination. If the destination is unreachable for more than 24 hours,
the data is no longer available.

Don't concatenate two or more base64 strings to form the data fields of your
records. Instead, concatenate the raw data, then perform base64 encoding.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

* ErrCodeResourceNotFoundException "ResourceNotFoundException"
The specified resource could not be found.
* ErrCodeInvalidArgumentException "InvalidArgumentException"
The specified input parameter has a value that is not valid.
* ErrCodeServiceUnavailableException "ServiceUnavailableException"
The service is unavailable. Back off and retry the operation. If you continue
to see the exception, throughput limits for the delivery stream may have
been exceeded. For more information about limits and how to request an increase,
see Amazon Kinesis Data Firehose Limits (http://docs.aws.amazon.com/firehose/latest/dev/limits.html).

Writes multiple data records into a delivery stream in a single call, which
can achieve higher throughput per producer than when writing single records.
To write single data records into a delivery stream, use PutRecord. Applications
using these operations are referred to as producers.

By default, each delivery stream can take in up to 2,000 transactions per
second, 5,000 records per second, or 5 MB per second. If you use PutRecord
and PutRecordBatch, the limits are an aggregate across these two operations
for each delivery stream. For more information about limits, see Amazon Kinesis
Data Firehose Limits (http://docs.aws.amazon.com/firehose/latest/dev/limits.html).

Each PutRecordBatch request supports up to 500 records. Each record in the
request can be as large as 1,000 KB (before base64 encoding), up to a limit
of 4 MB for the entire request. These limits cannot be changed.

You must specify the name of the delivery stream and the data record when
using PutRecordBatch. The data record consists of a data blob that can be
up to 1,000 KB in size, and any kind of data. For example, it could be a
segment from a log file, geographic location data, website clickstream data,
and so on.

Kinesis Data Firehose buffers records before delivering them to the destination.
To disambiguate the data blobs at the destination, a common solution is to
use delimiters in the data, such as a newline (\n) or some other character
unique within the data. This allows the consumer application to parse individual
data items when reading the data from the destination.

The PutRecordBatch response includes a count of failed records, FailedPutCount,
and an array of responses, RequestResponses. Even if the PutRecordBatch call
succeeds, the value of FailedPutCount may be greater than 0, indicating that
there are records for which the operation didn't succeed. Each entry in the
RequestResponses array provides additional information about the processed
record. It directly correlates with a record in the request array using the
same ordering, from the top to the bottom. The response array always includes
the same number of records as the request array. RequestResponses includes
both successfully and unsuccessfully processed records. Kinesis Data Firehose
tries to process all records in each PutRecordBatch request. A single record
failure does not stop the processing of subsequent records.

A successfully processed record includes a RecordId value, which is unique
for the record. An unsuccessfully processed record includes ErrorCode and
ErrorMessage values. ErrorCode reflects the type of error, and is one of
the following values: ServiceUnavailableException or InternalFailure. ErrorMessage
provides more detailed information about the error.

If there is an internal server error or a timeout, the write might have completed
or it might have failed. If FailedPutCount is greater than 0, retry the request,
resending only those records that might have failed processing. This minimizes
the possible duplicate records and also reduces the total bytes sent (and
corresponding charges). We recommend that you handle any duplicates at the
destination.
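
A sketch of that selective retry: collect only the entries in RequestResponses
that carry an error code, and let the caller back off before resending them:

    func putBatchOnce(svc *firehose.Firehose, stream string, records []*firehose.Record) ([]*firehose.Record, error) {
        out, err := svc.PutRecordBatch(&firehose.PutRecordBatchInput{
            DeliveryStreamName: aws.String(stream),
            Records:            records,
        })
        if err != nil {
            return records, err // the whole call failed; everything is a retry candidate
        }
        if aws.Int64Value(out.FailedPutCount) == 0 {
            return nil, nil
        }
        // RequestResponses is ordered like the request, so index i in the
        // response corresponds to records[i] in the request.
        var failed []*firehose.Record
        for i, resp := range out.RequestResponses {
            if resp.ErrorCode != nil {
                failed = append(failed, records[i])
            }
        }
        return failed, nil // caller backs off, then resends only these
    }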

If PutRecordBatch throws ServiceUnavailableException, back off and retry.
If the exception persists, it is possible that the throughput limits have
been exceeded for the delivery stream.

Data records sent to Kinesis Data Firehose are stored for 24 hours from the
time they are added to a delivery stream as it attempts to send the records
to the destination. If the destination is unreachable for more than 24 hours,
the data is no longer available.

Don't concatenate two or more base64 strings to form the data fields of your
records. Instead, concatenate the raw data, then perform base64 encoding.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

* ErrCodeResourceNotFoundException "ResourceNotFoundException"
The specified resource could not be found.
* ErrCodeInvalidArgumentException "InvalidArgumentException"
The specified input parameter has a value that is not valid.
* ErrCodeServiceUnavailableException "ServiceUnavailableException"
The service is unavailable. Back off and retry the operation. If you continue
to see the exception, throughput limits for the delivery stream may have
been exceeded. For more information about limits and how to request an increase,
see Amazon Kinesis Data Firehose Limits (http://docs.aws.amazon.com/firehose/latest/dev/limits.html).

PutRecordBatchRequest generates a "aws/request.Request" representing the
client's request for the PutRecordBatch operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See PutRecordBatch for more information on using the PutRecordBatch
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

PutRecordBatchWithContext is the same as PutRecordBatch with the addition of
the ability to pass a context and additional request options.

See PutRecordBatch for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil, a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

PutRecordRequest generates a "aws/request.Request" representing the
client's request for the PutRecord operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See PutRecord for more information on using the PutRecord
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

PutRecordWithContext is the same as PutRecord with the addition of
the ability to pass a context and additional request options.

See PutRecord for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil, a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Enables server-side encryption (SSE) for the delivery stream.

This operation is asynchronous. It returns immediately. When you invoke it,
Kinesis Data Firehose first sets the status of the stream to ENABLING, and
then to ENABLED. You can continue to read and write data to your stream while
its status is ENABLING, but the data is not encrypted. It can take up to
5 seconds after the encryption status changes to ENABLED before all records
written to the delivery stream are encrypted. To find out whether a record
or a batch of records was encrypted, check the response elements PutRecordOutput$Encrypted
and PutRecordBatchOutput$Encrypted, respectively.

To check the encryption state of a delivery stream, use DescribeDeliveryStream.

You can only enable SSE for a delivery stream that uses DirectPut as its
source.

The StartDeliveryStreamEncryption and StopDeliveryStreamEncryption operations
have a combined limit of 25 calls per delivery stream per 24 hours. For example,
you reach the limit if you call StartDeliveryStreamEncryption 13 times and
StopDeliveryStreamEncryption 12 times for the same delivery stream in a 24-hour
period.
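
A sketch of starting encryption and then checking the status, assuming only
DeliveryStreamName is set on the input; fmt is assumed to be imported:

    func enableSSE(svc *firehose.Firehose, stream string) error {
        if _, err := svc.StartDeliveryStreamEncryption(&firehose.StartDeliveryStreamEncryptionInput{
            DeliveryStreamName: aws.String(stream),
        }); err != nil {
            return err
        }
        out, err := svc.DescribeDeliveryStream(&firehose.DescribeDeliveryStreamInput{
            DeliveryStreamName: aws.String(stream),
        })
        if err != nil {
            return err
        }
        if enc := out.DeliveryStreamDescription.DeliveryStreamEncryptionConfiguration; enc != nil {
            // Expect ENABLING immediately after the call, then ENABLED.
            fmt.Println("encryption status:", aws.StringValue(enc.Status))
        }
        return nil
    }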

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

* ErrCodeResourceNotFoundException "ResourceNotFoundException"
The specified resource could not be found.
* ErrCodeResourceInUseException "ResourceInUseException"
The resource is already in use and not available for this operation.
* ErrCodeInvalidArgumentException "InvalidArgumentException"
The specified input parameter has a value that is not valid.
* ErrCodeLimitExceededException "LimitExceededException"
You have already reached the limit for a requested resource.

StartDeliveryStreamEncryptionRequest generates a "aws/request.Request" representing the
client's request for the StartDeliveryStreamEncryption operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See StartDeliveryStreamEncryption for more information on using the StartDeliveryStreamEncryption
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

StartDeliveryStreamEncryptionWithContext is the same as StartDeliveryStreamEncryption with the addition of
the ability to pass a context and additional request options.

See StartDeliveryStreamEncryption for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil, a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Disables server-side encryption (SSE) for the delivery stream.

This operation is asynchronous. It returns immediately. When you invoke it,
Kinesis Data Firehose first sets the status of the stream to DISABLING, and
then to DISABLED. You can continue to read and write data to your stream
while its status is DISABLING. It can take up to 5 seconds after the encryption
status changes to DISABLED before all records written to the delivery stream
are no longer subject to encryption. To find out whether a record or a batch
of records was encrypted, check the response elements PutRecordOutput$Encrypted
and PutRecordBatchOutput$Encrypted, respectively.

To check the encryption state of a delivery stream, use DescribeDeliveryStream.

The StartDeliveryStreamEncryption and StopDeliveryStreamEncryption operations
have a combined limit of 25 calls per delivery stream per 24 hours. For example,
you reach the limit if you call StartDeliveryStreamEncryption 13 times and
StopDeliveryStreamEncryption 12 times for the same delivery stream in a 24-hour
period.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

* ErrCodeResourceNotFoundException "ResourceNotFoundException"
The specified resource could not be found.
* ErrCodeResourceInUseException "ResourceInUseException"
The resource is already in use and not available for this operation.
* ErrCodeInvalidArgumentException "InvalidArgumentException"
The specified input parameter has a value that is not valid.
* ErrCodeLimitExceededException "LimitExceededException"
You have already reached the limit for a requested resource.

StopDeliveryStreamEncryptionRequest generates a "aws/request.Request" representing the
client's request for the StopDeliveryStreamEncryption operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See StopDeliveryStreamEncryption for more information on using the StopDeliveryStreamEncryption
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

StopDeliveryStreamEncryptionWithContext is the same as StopDeliveryStreamEncryption with the addition of
the ability to pass a context and additional request options.

See StopDeliveryStreamEncryption for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil, a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Adds or updates tags for the specified delivery stream. A tag is a key-value
pair that you can define and assign to AWS resources. If you specify a tag
that already exists, the tag value is replaced with the value that you specify
in the request. Tags are metadata. For example, you can add friendly names
and descriptions or other types of information that can help you distinguish
the delivery stream. For more information about tags, see Using Cost Allocation
Tags (https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html)
in the AWS Billing and Cost Management User Guide.

Each delivery stream can have up to 50 tags.

This operation has a limit of five transactions per second per account.
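
A sketch of adding a tag; the key and value are illustrative:

    func tagStream(svc *firehose.Firehose, stream string) error {
        _, err := svc.TagDeliveryStream(&firehose.TagDeliveryStreamInput{
            DeliveryStreamName: aws.String(stream),
            Tags: []*firehose.Tag{
                // Illustrative key-value pair; a stream can carry up to 50 tags.
                {Key: aws.String("environment"), Value: aws.String("production")},
            },
        })
        return err
    }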

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

* ErrCodeResourceNotFoundException "ResourceNotFoundException"
The specified resource could not be found.
* ErrCodeResourceInUseException "ResourceInUseException"
The resource is already in use and not available for this operation.
* ErrCodeInvalidArgumentException "InvalidArgumentException"
The specified input parameter has a value that is not valid.
* ErrCodeLimitExceededException "LimitExceededException"
You have already reached the limit for a requested resource.

TagDeliveryStreamRequest generates a "aws/request.Request" representing the
client's request for the TagDeliveryStream operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See TagDeliveryStream for more information on using the TagDeliveryStream
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

TagDeliveryStreamWithContext is the same as TagDeliveryStream with the addition of
the ability to pass a context and additional request options.

See TagDeliveryStream for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil, a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Removes tags from the specified delivery stream. Removed tags are deleted,
and you can't recover them after this operation successfully completes. If
you specify a tag that doesn't exist, the operation ignores it.

* ErrCodeResourceNotFoundException "ResourceNotFoundException"
The specified resource could not be found.
* ErrCodeResourceInUseException "ResourceInUseException"
The resource is already in use and not available for this operation.
* ErrCodeInvalidArgumentException "InvalidArgumentException"
The specified input parameter has a value that is not valid.
* ErrCodeLimitExceededException "LimitExceededException"
You have already reached the limit for a requested resource.

UntagDeliveryStreamRequest generates a "aws/request.Request" representing the
client's request for the UntagDeliveryStream operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See UntagDeliveryStream for more information on using the UntagDeliveryStream
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

UntagDeliveryStreamWithContext is the same as UntagDeliveryStream with the addition of
the ability to pass a context and additional request options.

See UntagDeliveryStream for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil, a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Updates the specified destination of the specified delivery stream.

Use this operation to change the destination type (for example, to replace
the Amazon S3 destination with Amazon Redshift) or change the parameters
associated with a destination (for example, to change the bucket name of
the Amazon S3 destination). The update might not occur immediately. The target
delivery stream remains active while the configurations are updated, so data
writes to the delivery stream can continue during this process. The updated
configurations are usually effective within a few minutes.

Switching between Amazon ES and other services is not supported. For an Amazon
ES destination, you can only update to another Amazon ES destination.

If the destination type is the same, Kinesis Data Firehose merges the configuration
parameters specified with the destination configuration that already exists
on the delivery stream. If any of the parameters are not specified in the
call, the existing values are retained. For example, in the Amazon S3 destination,
if EncryptionConfiguration is not specified, then the existing EncryptionConfiguration
is maintained on the destination.

If the destination type is not the same, for example, changing the destination
from Amazon S3 to Amazon Redshift, Kinesis Data Firehose does not merge any
parameters. In this case, all parameters must be specified.

Kinesis Data Firehose uses CurrentDeliveryStreamVersionId to avoid race conditions
and conflicting merges. This is a required field, and the service updates
the configuration only if the existing configuration has a version ID that
matches. After the update is applied successfully, the version ID is updated,
and can be retrieved using DescribeDeliveryStream. Use the new version ID
to set CurrentDeliveryStreamVersionId in the next call.
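
A sketch of that read-describe-update flow; the extended S3 update shown is
illustrative and assumes the stream has at least one destination:

    func updateBucket(svc *firehose.Firehose, stream, newBucketARN string) error {
        desc, err := svc.DescribeDeliveryStream(&firehose.DescribeDeliveryStreamInput{
            DeliveryStreamName: aws.String(stream),
        })
        if err != nil {
            return err
        }
        d := desc.DeliveryStreamDescription
        // Echo back the current version ID so the service can detect
        // conflicting concurrent updates.
        _, err = svc.UpdateDestination(&firehose.UpdateDestinationInput{
            DeliveryStreamName:             aws.String(stream),
            CurrentDeliveryStreamVersionId: d.VersionId,
            DestinationId:                  d.Destinations[0].DestinationId,
            ExtendedS3DestinationUpdate: &firehose.ExtendedS3DestinationUpdate{
                BucketARN: aws.String(newBucketARN), // the new bucket
            },
        })
        return err
    }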

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

* ErrCodeInvalidArgumentException "InvalidArgumentException"
The specified input parameter has a value that is not valid.
* ErrCodeResourceInUseException "ResourceInUseException"
The resource is already in use and not available for this operation.
* ErrCodeResourceNotFoundException "ResourceNotFoundException"
The specified resource could not be found.
* ErrCodeConcurrentModificationException "ConcurrentModificationException"
Another modification has already happened. Fetch VersionId again and use
it to update the destination.

UpdateDestinationRequest generates a "aws/request.Request" representing the
client's request for the UpdateDestination operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See UpdateDestination for more information on using the UpdateDestination
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle, such as custom headers or retry logic.

UpdateDestinationWithContext is the same as UpdateDestination with the addition of
the ability to pass a context and additional request options.

See UpdateDestination for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil, a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

type HiveJsonSerDe struct {
// Indicates how you want Kinesis Data Firehose to parse the date and timestamps// that may be present in your input data JSON. To specify these format strings,// follow the pattern syntax of JodaTime's DateTimeFormat format strings. For// more information, see Class DateTimeFormat (https://www.joda.org/joda-time/apidocs/org/joda/time/format/DateTimeFormat.html).// You can also use the special value millis to parse timestamps in epoch milliseconds.// If you don't specify a format, Kinesis Data Firehose uses java.sql.Timestamp::valueOf// by default.
TimestampFormats []*string `type:"list"`
// contains filtered or unexported fields
}

The native Hive / HCatalog JsonSerDe. Used by Kinesis Data Firehose for deserializing
data, which means converting it from the JSON format in preparation for serializing
it to the Parquet or ORC format. This is one of two deserializers you can
choose, depending on which one offers the functionality you need. The other
option is the OpenX SerDe.

type InputFormatConfiguration struct {
// Specifies which deserializer to use. You can choose either the Apache Hive// JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects// the request.
Deserializer *Deserializer `type:"structure"`
// contains filtered or unexported fields
}

Specifies the deserializer you want to use to convert the format of the input
data.
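
A minimal sketch of selecting the Hive JSON SerDe (the millis value comes
from the HiveJsonSerDe field documentation above):

inputFormat := &firehose.InputFormatConfiguration{
	Deserializer: &firehose.Deserializer{
		// Set exactly one of HiveJsonSerDe or OpenXJsonSerDe; setting both
		// causes the server to reject the request.
		HiveJsonSerDe: &firehose.HiveJsonSerDe{
			TimestampFormats: []*string{aws.String("millis")},
		},
	},
}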

type ListDeliveryStreamsInput struct {
// The delivery stream type. This can be one of the following values://// * DirectPut: Provider applications access the delivery stream directly.//// * KinesisStreamAsSource: The delivery stream uses a Kinesis data stream// as a source.//// This parameter is optional. If this parameter is omitted, delivery streams// of all types are returned.
DeliveryStreamType *string `type:"string" enum:"DeliveryStreamType"`
// The list of delivery streams returned by this call to ListDeliveryStreams// will start with the delivery stream whose name comes alphabetically immediately// after the name you specify in ExclusiveStartDeliveryStreamName.
ExclusiveStartDeliveryStreamName *string `min:"1" type:"string"`
// The maximum number of delivery streams to list. The default value is 10.
Limit *int64 `min:"1" type:"integer"`
// contains filtered or unexported fields
}
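
A hedged pagination sketch using ExclusiveStartDeliveryStreamName (the output
fields DeliveryStreamNames and HasMoreDeliveryStreams are assumed from the
corresponding ListDeliveryStreamsOutput type):

var names []*string
in := &firehose.ListDeliveryStreamsInput{Limit: aws.Int64(10)}
for {
	out, err := svc.ListDeliveryStreams(in)
	if err != nil {
		log.Fatal(err)
	}
	names = append(names, out.DeliveryStreamNames...)
	if !aws.BoolValue(out.HasMoreDeliveryStreams) {
		break
	}
	// Resume immediately after the last name returned.
	in.ExclusiveStartDeliveryStreamName = names[len(names)-1]
}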

type ListTagsForDeliveryStreamInput struct {
// The name of the delivery stream whose tags you want to list.//// DeliveryStreamName is a required field
DeliveryStreamName *string `min:"1" type:"string" required:"true"`
// The key to use as the starting point for the list of tags. If you set this// parameter, ListTagsForDeliveryStream gets all tags that occur after ExclusiveStartTagKey.
ExclusiveStartTagKey *string `min:"1" type:"string"`
// The number of tags to return. If this number is less than the total number// of tags associated with the delivery stream, HasMoreTags is set to true in// the response. To list additional tags, set ExclusiveStartTagKey to the last// key in the response.
Limit *int64 `min:"1" type:"integer"`
// contains filtered or unexported fields
}

type ListTagsForDeliveryStreamOutput struct {
// If this is true in the response, more tags are available. To list the remaining// tags, set ExclusiveStartTagKey to the key of the last tag returned and call// ListTagsForDeliveryStream again.//// HasMoreTags is a required field
HasMoreTags *bool `type:"boolean" required:"true"`
// A list of tags associated with DeliveryStreamName, starting with the first// tag after ExclusiveStartTagKey and up to the specified Limit.//// Tags is a required field
Tags []*Tag `type:"list" required:"true"`
// contains filtered or unexported fields
}
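
A short sketch of that tag pagination loop (the stream name is illustrative):

in := &firehose.ListTagsForDeliveryStreamInput{
	DeliveryStreamName: aws.String("example-stream"), // hypothetical name
}
for {
	out, err := svc.ListTagsForDeliveryStream(in)
	if err != nil {
		log.Fatal(err)
	}
	for _, t := range out.Tags {
		fmt.Println(aws.StringValue(t.Key))
	}
	if !aws.BoolValue(out.HasMoreTags) {
		break
	}
	// Continue after the last key returned.
	in.ExclusiveStartTagKey = out.Tags[len(out.Tags)-1].Key
}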

type OpenXJsonSerDe struct {
// When set to true, which is the default, Kinesis Data Firehose converts JSON// keys to lowercase before deserializing them.
CaseInsensitive *bool `type:"boolean"`
// Maps column names to JSON keys that aren't identical to the column names.// This is useful when the JSON contains keys that are Hive keywords. For example,// timestamp is a Hive keyword. If you have a JSON key named timestamp, set// this parameter to {"ts": "timestamp"} to map this key to a column named ts.
ColumnToJsonKeyMappings map[string]*string `type:"map"`
// When set to true, specifies that the names of the keys include dots and that// you want Kinesis Data Firehose to replace them with underscores. This is// useful because Apache Hive does not allow dots in column names. For example,// if the JSON contains a key whose name is "a.b", you can define the column// name to be "a_b" when using this option.//// The default is false.
ConvertDotsInJsonKeysToUnderscores *bool `type:"boolean"`
// contains filtered or unexported fields
}

The OpenX SerDe. Used by Kinesis Data Firehose for deserializing data, which
means converting it from the JSON format in preparation for serializing it
to the Parquet or ORC format. This is one of two deserializers you can choose,
depending on which one offers the functionality you need. The other option
is the native Hive / HCatalog JsonSerDe.
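
For example, a sketch that applies the mappings described above (the ts
mapping mirrors the field documentation):

openx := &firehose.OpenXJsonSerDe{
	CaseInsensitive: aws.Bool(true),
	// Map the Hive keyword "timestamp" in the JSON to a column named "ts".
	ColumnToJsonKeyMappings: map[string]*string{
		"ts": aws.String("timestamp"),
	},
	// Rewrite keys such as "a.b" to "a_b", since Hive disallows dots.
	ConvertDotsInJsonKeysToUnderscores: aws.Bool(true),
}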

type OrcSerDe struct {
// The Hadoop Distributed File System (HDFS) block size. This is useful if you// intend to copy the data from Amazon S3 to HDFS before querying. The default// is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value// for padding calculations.
BlockSizeBytes *int64 `min:"67108864" type:"integer"`
// The column names for which you want Kinesis Data Firehose to create bloom// filters. The default is null.
BloomFilterColumns []*string `type:"list"`
// The Bloom filter false positive probability (FPP). The lower the FPP, the// bigger the Bloom filter. The default value is 0.05, the minimum is 0, and// the maximum is 1.
BloomFilterFalsePositiveProbability *float64 `type:"double"`
// The compression code to use over data blocks. The default is SNAPPY.
Compression *string `type:"string" enum:"OrcCompression"`
// Represents the fraction of the total number of non-null rows. To turn off// dictionary encoding, set this fraction to a number that is less than the// number of distinct keys in a dictionary. To always use dictionary encoding,// set this threshold to 1.
DictionaryKeyThreshold *float64 `type:"double"`
// Set this to true to indicate that you want stripes to be padded to the HDFS// block boundaries. This is useful if you intend to copy the data from Amazon// S3 to HDFS before querying. The default is false.
EnablePadding *bool `type:"boolean"`
// The version of the file to write. The possible values are V0_11 and V0_12.// The default is V0_12.
FormatVersion *string `type:"string" enum:"OrcFormatVersion"`
// A number between 0 and 1 that defines the tolerance for block padding as// a decimal fraction of stripe size. The default value is 0.05, which means// 5 percent of stripe size.//// For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the// default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB// for padding within the 256 MiB block. In such a case, if the available size// within the block is more than 3.2 MiB, a new, smaller stripe is inserted// to fit within that space. This ensures that no stripe crosses block boundaries// and causes remote reads within a node-local task.//// Kinesis Data Firehose ignores this parameter when OrcSerDe$EnablePadding// is false.
PaddingTolerance *float64 `type:"double"`
// The number of rows between index entries. The default is 10,000 and the minimum// is 1,000.
RowIndexStride *int64 `min:"1000" type:"integer"`
// The number of bytes in each stripe. The default is 64 MiB and the minimum// is 8 MiB.
StripeSizeBytes *int64 `min:"8388608" type:"integer"`
// contains filtered or unexported fields
}

A serializer to use for converting data to the ORC format before storing
it in Amazon S3. For more information, see Apache ORC (https://orc.apache.org/docs/).

type OutputFormatConfiguration struct {
// Specifies which serializer to use. You can choose either the ORC SerDe or// the Parquet SerDe. If both are non-null, the server rejects the request.
Serializer *Serializer `type:"structure"`
// contains filtered or unexported fields
}

Specifies the serializer that you want Kinesis Data Firehose to use to convert
the format of your data before it writes it to Amazon S3.
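
A minimal sketch choosing the Parquet SerDe (SNAPPY is the documented
default, shown explicitly here for illustration):

outputFormat := &firehose.OutputFormatConfiguration{
	Serializer: &firehose.Serializer{
		// Set exactly one of OrcSerDe or ParquetSerDe; setting both causes
		// the server to reject the request.
		ParquetSerDe: &firehose.ParquetSerDe{
			Compression: aws.String("SNAPPY"),
		},
	},
}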

type ParquetSerDe struct {
// The Hadoop Distributed File System (HDFS) block size. This is useful if you// intend to copy the data from Amazon S3 to HDFS before querying. The default// is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value// for padding calculations.
BlockSizeBytes *int64 `min:"67108864" type:"integer"`
// The compression code to use over data blocks. The possible values are UNCOMPRESSED,// SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression// speed. Use GZIP if the compression ratio is more important than speed.
Compression *string `type:"string" enum:"ParquetCompression"`
// Indicates whether to enable dictionary compression.
EnableDictionaryCompression *bool `type:"boolean"`
// The maximum amount of padding to apply. This is useful if you intend to copy// the data from Amazon S3 to HDFS before querying. The default is 0.
MaxPaddingBytes *int64 `type:"integer"`
// The Parquet page size. Column chunks are divided into pages. A page is conceptually// an indivisible unit (in terms of compression and encoding). The minimum value// is 64 KiB and the default is 1 MiB.
PageSizeBytes *int64 `min:"65536" type:"integer"`
// Indicates the version of row format to output. The possible values are V1// and V2. The default is V1.
WriterVersion *string `type:"string" enum:"ParquetWriterVersion"`
// contains filtered or unexported fields
}

type PutRecordBatchInput struct {
// The name of the delivery stream.//// DeliveryStreamName is a required field
DeliveryStreamName *string `min:"1" type:"string" required:"true"`
// One or more records.//// Records is a required field
Records []*Record `min:"1" type:"list" required:"true"`
// contains filtered or unexported fields
}

type PutRecordBatchOutput struct {
// Indicates whether server-side encryption (SSE) was enabled during this operation.
Encrypted *bool `type:"boolean"`
// The number of records that might have failed processing. This number might// be greater than 0 even if the PutRecordBatch call succeeds. Check FailedPutCount// to determine whether there are records that you need to resend.//// FailedPutCount is a required field
FailedPutCount *int64 `type:"integer" required:"true"`
// The results array. For each record, the index of the response element is// the same as the index used in the request array.//// RequestResponses is a required field
RequestResponses []*PutRecordBatchResponseEntry `min:"1" type:"list" required:"true"`
// contains filtered or unexported fields
}

PutRecordBatchResponseEntry contains the result for an individual record from
a PutRecordBatch request.
If the record is successfully added to your delivery stream, it receives
a record ID. If the record fails to be added to your delivery stream, the
result includes an error code and an error message.
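
A hedged resend sketch built on FailedPutCount (records is assumed to be a
prepared []*firehose.Record; backoff is left as a comment):

out, err := svc.PutRecordBatch(&firehose.PutRecordBatchInput{
	DeliveryStreamName: aws.String("example-stream"), // hypothetical name
	Records:            records,
})
if err == nil && aws.Int64Value(out.FailedPutCount) > 0 {
	var retry []*firehose.Record
	for i, r := range out.RequestResponses {
		if r.ErrorCode != nil { // entry indexes match the request array
			retry = append(retry, records[i])
		}
	}
	// Back off, then resend retry with another PutRecordBatch call.
}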

type PutRecordInput struct {
// The name of the delivery stream.//// DeliveryStreamName is a required field
DeliveryStreamName *string `min:"1" type:"string" required:"true"`
// The record.//// Record is a required field
Record *Record `type:"structure" required:"true"`
// contains filtered or unexported fields
}

type Record struct {
// The data blob, which is base64-encoded when the blob is serialized. The maximum// size of the data blob, before base64-encoding, is 1,000 KiB.//// Data is automatically base64 encoded/decoded by the SDK.//// Data is a required field
Data []byte `type:"blob" required:"true"`
// contains filtered or unexported fields
}
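
A minimal PutRecord sketch (the stream name and payload are illustrative;
the SDK base64-encodes Data automatically):

_, err := svc.PutRecord(&firehose.PutRecordInput{
	DeliveryStreamName: aws.String("example-stream"), // hypothetical name
	Record: &firehose.Record{
		// Raw bytes; a trailing newline is a common record delimiter.
		Data: []byte(`{"ticker":"AMZN","price":1815.8}` + "\n"),
	},
})
if err != nil {
	log.Println(err)
}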

type RedshiftRetryOptions struct {
// The length of time during which Kinesis Data Firehose retries delivery after// a failure, starting from the initial request and including the first attempt.// The default value is 3600 seconds (60 minutes). Kinesis Data Firehose does// not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery// attempt takes longer than the current value.
DurationInSeconds *int64 `type:"integer"`
// contains filtered or unexported fields
}

type S3DestinationConfiguration struct {
// The ARN of the S3 bucket. For more information, see Amazon Resource Names// (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).//// BucketARN is a required field
BucketARN *string `min:"1" type:"string" required:"true"`
// The buffering option. If no value is specified, BufferingHints object default// values are used.
BufferingHints *BufferingHints `type:"structure"`
// The CloudWatch logging options for your delivery stream.
CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"`
// The compression format. If no value is specified, the default is UNCOMPRESSED.//// The compression formats SNAPPY or ZIP cannot be specified for Amazon Redshift// destinations because they are not supported by the Amazon Redshift COPY operation// that reads from the S3 bucket.
CompressionFormat *string `type:"string" enum:"CompressionFormat"`
// The encryption configuration. If no value is specified, the default is no// encryption.
EncryptionConfiguration *EncryptionConfiguration `type:"structure"`
// A prefix that Kinesis Data Firehose evaluates and adds to failed records// before writing them to S3. This prefix appears immediately following the// bucket name.
ErrorOutputPrefix *string `type:"string"`
// The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered// Amazon S3 files. You can specify an extra prefix to be added in front of// the time format prefix. If the prefix ends with a slash, it appears as a// folder in the S3 bucket. For more information, see Amazon S3 Object Name// Format (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#s3-object-name)// in the Amazon Kinesis Data Firehose Developer Guide.
Prefix *string `type:"string"`
// The Amazon Resource Name (ARN) of the AWS credentials. For more information,// see Amazon Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).//// RoleARN is a required field
RoleARN *string `min:"1" type:"string" required:"true"`
// contains filtered or unexported fields
}

type S3DestinationDescription struct {
// The ARN of the S3 bucket. For more information, see Amazon Resource Names// (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).//// BucketARN is a required field
BucketARN *string `min:"1" type:"string" required:"true"`
// The buffering option. If no value is specified, BufferingHints object default// values are used.//// BufferingHints is a required field
BufferingHints *BufferingHints `type:"structure" required:"true"`
// The Amazon CloudWatch logging options for your delivery stream.
CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"`
// The compression format. If no value is specified, the default is UNCOMPRESSED.//// CompressionFormat is a required field
CompressionFormat *string `type:"string" required:"true" enum:"CompressionFormat"`
// The encryption configuration. If no value is specified, the default is no// encryption.//// EncryptionConfiguration is a required field
EncryptionConfiguration *EncryptionConfiguration `type:"structure" required:"true"`
// A prefix that Kinesis Data Firehose evaluates and adds to failed records// before writing them to S3. This prefix appears immediately following the// bucket name.
ErrorOutputPrefix *string `type:"string"`
// The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered// Amazon S3 files. You can specify an extra prefix to be added in front of// the time format prefix. If the prefix ends with a slash, it appears as a// folder in the S3 bucket. For more information, see Amazon S3 Object Name// Format (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#s3-object-name)// in the Amazon Kinesis Data Firehose Developer Guide.
Prefix *string `type:"string"`
// The Amazon Resource Name (ARN) of the AWS credentials. For more information,// see Amazon Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).//// RoleARN is a required field
RoleARN *string `min:"1" type:"string" required:"true"`
// contains filtered or unexported fields
}

type S3DestinationUpdate struct {
// The ARN of the S3 bucket. For more information, see Amazon Resource Names// (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).
BucketARN *string `min:"1" type:"string"`
// The buffering option. If no value is specified, BufferingHints object default// values are used.
BufferingHints *BufferingHints `type:"structure"`
// The CloudWatch logging options for your delivery stream.
CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"`
// The compression format. If no value is specified, the default is UNCOMPRESSED.//// The compression formats SNAPPY or ZIP cannot be specified for Amazon Redshift// destinations because they are not supported by the Amazon Redshift COPY operation// that reads from the S3 bucket.
CompressionFormat *string `type:"string" enum:"CompressionFormat"`
// The encryption configuration. If no value is specified, the default is no// encryption.
EncryptionConfiguration *EncryptionConfiguration `type:"structure"`
// A prefix that Kinesis Data Firehose evaluates and adds to failed records// before writing them to S3. This prefix appears immediately following the// bucket name.
ErrorOutputPrefix *string `type:"string"`
// The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered// Amazon S3 files. You can specify an extra prefix to be added in front of// the time format prefix. If the prefix ends with a slash, it appears as a// folder in the S3 bucket. For more information, see Amazon S3 Object Name// Format (http://docs.aws.amazon.com/firehose/latest/dev/basic-deliver.html#s3-object-name)// in the Amazon Kinesis Data Firehose Developer Guide.
Prefix *string `type:"string"`
// The Amazon Resource Name (ARN) of the AWS credentials. For more information,// see Amazon Resource Names (ARNs) and AWS Service Namespaces (https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).
RoleARN *string `min:"1" type:"string"`
// contains filtered or unexported fields
}

type SchemaConfiguration struct {
// The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account// ID is used by default.
CatalogId *string `type:"string"`
// Specifies the name of the AWS Glue database that contains the schema for// the output data.
DatabaseName *string `type:"string"`
// If you don't specify an AWS Region, the default is the current Region.
Region *string `type:"string"`
// The role that Kinesis Data Firehose can use to access AWS Glue. This role// must be in the same account you use for Kinesis Data Firehose. Cross-account// roles aren't allowed.
RoleARN *string `type:"string"`
// Specifies the AWS Glue table that contains the column information that constitutes// your data schema.
TableName *string `type:"string"`
// Specifies the table version for the output data schema. If you don't specify// this version ID, or if you set it to LATEST, Kinesis Data Firehose uses the// most recent version. This means that any updates to the table are automatically// picked up.
VersionId *string `type:"string"`
// contains filtered or unexported fields
}

Specifies the schema to which you want Kinesis Data Firehose to configure
your data before it writes it to Amazon S3.
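
A short sketch of a schema reference (the database, table, and role are
hypothetical AWS Glue resources):

schema := &firehose.SchemaConfiguration{
	DatabaseName: aws.String("example_db"),
	TableName:    aws.String("example_table"),
	RoleARN:      aws.String("arn:aws:iam::123456789012:role/example"), // hypothetical role
	// LATEST tracks the most recent table version automatically.
	VersionId: aws.String("LATEST"),
}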

type Serializer struct {
// A serializer to use for converting data to the ORC format before storing// it in Amazon S3. For more information, see Apache ORC (https://orc.apache.org/docs/).
OrcSerDe *OrcSerDe `type:"structure"`
// A serializer to use for converting data to the Parquet format before storing// it in Amazon S3. For more information, see Apache Parquet (https://parquet.apache.org/documentation/latest/).
ParquetSerDe *ParquetSerDe `type:"structure"`
// contains filtered or unexported fields
}

type SplunkDestinationConfiguration struct {
// The Amazon CloudWatch logging options for your delivery stream.
CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"`
// The amount of time that Kinesis Data Firehose waits to receive an acknowledgment// from Splunk after it sends it data. At the end of the timeout period, Kinesis// Data Firehose either tries to send the data again or considers it an error,// based on your retry settings.
HECAcknowledgmentTimeoutInSeconds *int64 `min:"180" type:"integer"`
// The HTTP Event Collector (HEC) endpoint to which Kinesis Data Firehose sends// your data.//// HECEndpoint is a required field
HECEndpoint *string `type:"string" required:"true"`
// This type can be either "Raw" or "Event."//// HECEndpointType is a required field
HECEndpointType *string `type:"string" required:"true" enum:"HECEndpointType"`
// This is a GUID that you obtain from your Splunk cluster when you create a// new HEC endpoint.//// HECToken is a required field
HECToken *string `type:"string" required:"true"`
// The data processing configuration.
ProcessingConfiguration *ProcessingConfiguration `type:"structure"`
// The retry behavior in case Kinesis Data Firehose is unable to deliver data// to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.
RetryOptions *SplunkRetryOptions `type:"structure"`
// Defines how documents should be delivered to Amazon S3. When set to FailedDocumentsOnly,// Kinesis Data Firehose writes any data that could not be indexed to the configured// Amazon S3 destination. When set to AllDocuments, Kinesis Data Firehose delivers// all incoming records to Amazon S3, and also writes failed documents to Amazon// S3. Default value is FailedDocumentsOnly.
S3BackupMode *string `type:"string" enum:"SplunkS3BackupMode"`
// The configuration for the backup Amazon S3 location.//// S3Configuration is a required field
S3Configuration *S3DestinationConfiguration `type:"structure" required:"true"`
// contains filtered or unexported fields
}
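
A hedged configuration sketch (every value below is a placeholder; the
endpoint type and backup mode strings come from the field documentation
above):

splunkCfg := &firehose.SplunkDestinationConfiguration{
	HECEndpoint:     aws.String("https://hec.example.com:8088"),
	HECEndpointType: aws.String("Raw"),
	HECToken:        aws.String("00000000-0000-0000-0000-000000000000"),
	S3BackupMode:    aws.String("FailedDocumentsOnly"),
	S3Configuration: &firehose.S3DestinationConfiguration{
		BucketARN: aws.String("arn:aws:s3:::example-bucket"),
		RoleARN:   aws.String("arn:aws:iam::123456789012:role/example"),
	},
}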

type SplunkDestinationDescription struct {
// The Amazon CloudWatch logging options for your delivery stream.
CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"`
// The amount of time that Kinesis Data Firehose waits to receive an acknowledgment// from Splunk after it sends it data. At the end of the timeout period, Kinesis// Data Firehose either tries to send the data again or considers it an error,// based on your retry settings.
HECAcknowledgmentTimeoutInSeconds *int64 `min:"180" type:"integer"`
// The HTTP Event Collector (HEC) endpoint to which Kinesis Data Firehose sends// your data.
HECEndpoint *string `type:"string"`
// This type can be either "Raw" or "Event."
HECEndpointType *string `type:"string" enum:"HECEndpointType"`
// A GUID you obtain from your Splunk cluster when you create a new HEC endpoint.
HECToken *string `type:"string"`
// The data processing configuration.
ProcessingConfiguration *ProcessingConfiguration `type:"structure"`
// The retry behavior in case Kinesis Data Firehose is unable to deliver data// to Splunk or if it doesn't receive an acknowledgment of receipt from Splunk.
RetryOptions *SplunkRetryOptions `type:"structure"`
// Defines how documents should be delivered to Amazon S3. When set to FailedDocumentsOnly,// Kinesis Data Firehose writes any data that could not be indexed to the configured// Amazon S3 destination. When set to AllDocuments, Kinesis Data Firehose delivers// all incoming records to Amazon S3, and also writes failed documents to Amazon// S3. Default value is FailedDocumentsOnly.
S3BackupMode *string `type:"string" enum:"SplunkS3BackupMode"`
// The Amazon S3 destination.
S3DestinationDescription *S3DestinationDescription `type:"structure"`
// contains filtered or unexported fields
}

type SplunkDestinationUpdate struct {
// The Amazon CloudWatch logging options for your delivery stream.
CloudWatchLoggingOptions *CloudWatchLoggingOptions `type:"structure"`
// The amount of time that Kinesis Data Firehose waits to receive an acknowledgment// from Splunk after it sends data. At the end of the timeout period, Kinesis// Data Firehose either tries to send the data again or considers it an error,// based on your retry settings.
HECAcknowledgmentTimeoutInSeconds *int64 `min:"180" type:"integer"`
// The HTTP Event Collector (HEC) endpoint to which Kinesis Data Firehose sends// your data.
HECEndpoint *string `type:"string"`
// This type can be either "Raw" or "Event."
HECEndpointType *string `type:"string" enum:"HECEndpointType"`
// A GUID that you obtain from your Splunk cluster when you create a new HEC// endpoint.
HECToken *string `type:"string"`
// The data processing configuration.
ProcessingConfiguration *ProcessingConfiguration `type:"structure"`
// The retry behavior in case Kinesis Data Firehose is unable to deliver data// to Splunk or if it doesn't receive an acknowledgment of receipt from Splunk.
RetryOptions *SplunkRetryOptions `type:"structure"`
// Defines how documents should be delivered to Amazon S3. When set to FailedDocumentsOnly,// Kinesis Data Firehose writes any data that could not be indexed to the configured// Amazon S3 destination. When set to AllDocuments, Kinesis Data Firehose delivers// all incoming records to Amazon S3, and also writes failed documents to Amazon// S3. Default value is FailedDocumentsOnly.
S3BackupMode *string `type:"string" enum:"SplunkS3BackupMode"`
// Your update to the configuration of the backup Amazon S3 location.
S3Update *S3DestinationUpdate `type:"structure"`
// contains filtered or unexported fields
}

type StartDeliveryStreamEncryptionInput struct {
// The name of the delivery stream for which you want to enable server-side// encryption (SSE).//// DeliveryStreamName is a required field
DeliveryStreamName *string `min:"1" type:"string" required:"true"`
// contains filtered or unexported fields
}

type StopDeliveryStreamEncryptionInput struct {
// The name of the delivery stream for which you want to disable server-side// encryption (SSE).//// DeliveryStreamName is a required field
DeliveryStreamName *string `min:"1" type:"string" required:"true"`
// contains filtered or unexported fields
}

type TagDeliveryStreamInput struct {
// The name of the delivery stream to which you want to add the tags.//// DeliveryStreamName is a required field
DeliveryStreamName *string `min:"1" type:"string" required:"true"`
// A set of key-value pairs to use to create the tags.//// Tags is a required field
Tags []*Tag `min:"1" type:"list" required:"true"`
// contains filtered or unexported fields
}

type UntagDeliveryStreamInput struct {
// The name of the delivery stream.//// DeliveryStreamName is a required field
DeliveryStreamName *string `min:"1" type:"string" required:"true"`
// A list of tag keys. Each corresponding tag is removed from the delivery stream.//// TagKeys is a required field
TagKeys []*string `min:"1" type:"list" required:"true"`
// contains filtered or unexported fields
}

type UpdateDestinationInput struct {
// Obtain this value from the VersionId result of DeliveryStreamDescription.// This value is required, and helps the service perform conditional operations.// For example, if there is an interleaving update and this value is null, then// the update destination fails. After the update is successful, the VersionId// value is updated. The service then performs a merge of the old configuration// with the new configuration.//// CurrentDeliveryStreamVersionId is a required field
CurrentDeliveryStreamVersionId *string `min:"1" type:"string" required:"true"`
// The name of the delivery stream.//// DeliveryStreamName is a required field
DeliveryStreamName *string `min:"1" type:"string" required:"true"`
// The ID of the destination.//// DestinationId is a required field
DestinationId *string `min:"1" type:"string" required:"true"`
// Describes an update for a destination in Amazon ES.
ElasticsearchDestinationUpdate *ElasticsearchDestinationUpdate `type:"structure"`
// Describes an update for a destination in Amazon S3.
ExtendedS3DestinationUpdate *ExtendedS3DestinationUpdate `type:"structure"`
// Describes an update for a destination in Amazon Redshift.
RedshiftDestinationUpdate *RedshiftDestinationUpdate `type:"structure"`
// [Deprecated] Describes an update for a destination in Amazon S3.//// Deprecated: S3DestinationUpdate has been deprecated
S3DestinationUpdate *S3DestinationUpdate `deprecated:"true" type:"structure"`
// Describes an update for a destination in Splunk.
SplunkDestinationUpdate *SplunkDestinationUpdate `type:"structure"`
// contains filtered or unexported fields
}