
Amazon Rekognition estimates an age range for faces detected in the input
image. Estimated age ranges can overlap. A face of a 5-year-old might have
an estimated range of 4-6, while the face of a 6-year-old might have an estimated
range of 4-8.

type BoundingBox struct {
// Height of the bounding box as a ratio of the overall image height.
Height *float64 `type:"float"`
// Left coordinate of the bounding box as a ratio of overall image width.
Left *float64 `type:"float"`
// Top coordinate of the bounding box as a ratio of overall image height.
Top *float64 `type:"float"`
// Width of the bounding box as a ratio of the overall image width.
Width *float64 `type:"float"`
// contains filtered or unexported fields
}

Identifies the bounding box around the face or text. The left (x-coordinate)
and top (y-coordinate) are coordinates representing the top and left sides
of the bounding box. Note that the upper-left corner of the image is the
origin (0,0).

The top and left values returned are ratios of the overall image size. For
example, if the input image is 700x200 pixels, and the top-left coordinate
of the bounding box is 350x50 pixels, the API returns a left value of 0.5
(350/700) and a top value of 0.25 (50/200).

The width and height values represent the dimensions of the bounding box
as a ratio of the overall image dimension. For example, if the input image
is 700x200 pixels, and the bounding box width is 70 pixels, the width returned
is 0.1.

The bounding box coordinates can have negative values. For example, if Amazon
Rekognition is able to detect a face that is at the image edge and is only
partially visible, the service can return coordinates that are outside the
image bounds and, depending on the image edge, you might get negative values
or values greater than 1 for the left or top values.
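
As an illustration of this ratio convention, the following minimal sketch converts a returned BoundingBox back to pixel coordinates. It is not part of the SDK: the image dimensions are assumed to be known to the caller, and the result is deliberately not clamped, since values can fall outside the image bounds as described above.

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/rekognition"
)

// boxToPixels scales a BoundingBox, expressed as ratios of the overall
// image size, to pixel coordinates for an image of width w and height h.
func boxToPixels(b *rekognition.BoundingBox, w, h float64) (left, top, width, height float64) {
	left = aws.Float64Value(b.Left) * w
	top = aws.Float64Value(b.Top) * h
	width = aws.Float64Value(b.Width) * w
	height = aws.Float64Value(b.Height) * h
	return
}

func main() {
	// The 700x200 image from the example above, with a box at (0.5, 0.25).
	b := &rekognition.BoundingBox{
		Left:   aws.Float64(0.5),
		Top:    aws.Float64(0.25),
		Width:  aws.Float64(0.1),
		Height: aws.Float64(0.3),
	}
	l, t, w, h := boxToPixels(b, 700, 200)
	fmt.Printf("left=%.0f top=%.0f width=%.0f height=%.0f\n", l, t, w, h)
	// Output: left=350 top=50 width=70 height=60
}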

type Celebrity struct {
// Provides information about the celebrity's face, such as its location on
// the image.
Face *ComparedFace `type:"structure"`
// A unique identifier for the celebrity.
Id *string `type:"string"`
// The confidence, in percentage, that Amazon Rekognition has that the recognized
// face is the celebrity.
MatchConfidence *float64 `type:"float"`
// The name of the celebrity.
Name *string `type:"string"`
// An array of URLs pointing to additional information about the celebrity.
// If there is no additional information about the celebrity, this list is empty.
Urls []*string `type:"list"`
// contains filtered or unexported fields
}

type CelebrityRecognition struct {
// Information about a recognized celebrity.
Celebrity *CelebrityDetail `type:"structure"`
// The time, in milliseconds from the start of the video, that the celebrity
// was recognized.
Timestamp *int64 `type:"long"`
// contains filtered or unexported fields
}

Information about a detected celebrity and the time the celebrity was detected
in a stored video. For more information, see GetCelebrityRecognition in the
Amazon Rekognition Developer Guide.

type CompareFacesInput struct {
// The minimum level of confidence in the face matches that a match must meet
// to be included in the FaceMatches array.
SimilarityThreshold *float64 `type:"float"`
// The input image as base64-encoded bytes or an S3 object. If you use the AWS
// CLI to call Amazon Rekognition operations, passing base64-encoded image bytes
// is not supported.
//
// SourceImage is a required field
SourceImage *Image `type:"structure" required:"true"`
// The target image as base64-encoded bytes or an S3 object. If you use the
// AWS CLI to call Amazon Rekognition operations, passing base64-encoded image
// bytes is not supported.
//
// TargetImage is a required field
TargetImage *Image `type:"structure" required:"true"`
// contains filtered or unexported fields
}

Provides information about a face in a target image that matches the source
image face analyzed by CompareFaces. The Face property contains the bounding
box of the face in the target image. The Similarity property is the confidence
that the source image face matches the face in the bounding box.

type CompareFacesOutput struct {
// An array of faces in the target image that match the source image face. Each
// CompareFacesMatch object provides the bounding box, the confidence level
// that the bounding box contains a face, and the similarity score for the face
// in the bounding box and the face in the source image.
FaceMatches []*CompareFacesMatch `type:"list"`
// The face in the source image that was used for comparison.
SourceImageFace *ComparedSourceImageFace `type:"structure"`
// The orientation of the source image (counterclockwise direction). If your
// application displays the source image, you can use this value to correct
// image orientation. The bounding box coordinates returned in SourceImageFace
// represent the location of the face before the image orientation is corrected.
//
// If the source image is in .jpeg format, it might contain exchangeable image
// (Exif) metadata that includes the image's orientation. If the Exif metadata
// for the source image populates the orientation field, the value of OrientationCorrection
// is null. The SourceImageFace bounding box coordinates represent the location
// of the face after Exif metadata is used to correct the orientation. Images
// in .png format don't contain Exif metadata.
SourceImageOrientationCorrection *string `type:"string" enum:"OrientationCorrection"`
// The orientation of the target image (in counterclockwise direction). If your
// application displays the target image, you can use this value to correct
// the orientation of the image. The bounding box coordinates returned in FaceMatches
// and UnmatchedFaces represent face locations before the image orientation
// is corrected.
//
// If the target image is in .jpg format, it might contain Exif metadata that
// includes the orientation of the image. If the Exif metadata for the target
// image populates the orientation field, the value of OrientationCorrection
// is null. The bounding box coordinates in FaceMatches and UnmatchedFaces represent
// the location of the face after Exif metadata is used to correct the orientation.
// Images in .png format don't contain Exif metadata.
TargetImageOrientationCorrection *string `type:"string" enum:"OrientationCorrection"`
// An array of faces in the target image that did not match the source image
// face.
UnmatchedFaces []*ComparedFace `type:"list"`
// contains filtered or unexported fields
}

Type that describes the face Amazon Rekognition chose to compare with the
faces in the target. This contains a bounding box for the selected face and
confidence level that the bounding box contains a face. Note that Amazon
Rekognition selects the largest face in the source image for this comparison.

type ContentModerationDetection struct {
// The moderation label detected in the stored video.
ModerationLabel *ModerationLabel `type:"structure"`
// Time, in milliseconds from the beginning of the video, that the moderation
// label was detected.
Timestamp *int64 `type:"long"`
// contains filtered or unexported fields
}

type CreateCollectionOutput struct {
// Amazon Resource Name (ARN) of the collection. You can use this to manage
// permissions on your resources.
CollectionArn *string `type:"string"`
// Version number of the face detection model associated with the collection
// you are creating.
FaceModelVersion *string `type:"string"`
// HTTP status code indicating the result of the operation.
StatusCode *int64 `type:"integer"`
// contains filtered or unexported fields
}

type CreateStreamProcessorInput struct {
// Kinesis video stream that provides the source streaming video. If you are
// using the AWS CLI, the parameter name is StreamProcessorInput.
//
// Input is a required field
Input *StreamProcessorInput `type:"structure" required:"true"`
// An identifier you assign to the stream processor. You can use Name to manage
// the stream processor. For example, you can get the current status of the
// stream processor by calling DescribeStreamProcessor. Name is idempotent.
//
// Name is a required field
Name *string `min:"1" type:"string" required:"true"`
// Kinesis data stream to which Amazon Rekognition Video puts the analysis
// results. If you are using the AWS CLI, the parameter name is StreamProcessorOutput.
//
// Output is a required field
Output *StreamProcessorOutput `type:"structure" required:"true"`
// ARN of the IAM role that allows access to the stream processor.
//
// RoleArn is a required field
RoleArn *string `type:"string" required:"true"`
// Face recognition input parameters to be used by the stream processor. Includes
// the collection to use for face recognition and the face attributes to detect.
//
// Settings is a required field
Settings *StreamProcessorSettings `type:"structure" required:"true"`
// contains filtered or unexported fields
}

type DeleteStreamProcessorInput struct {
// The name of the stream processor you want to delete.
//
// Name is a required field
Name *string `min:"1" type:"string" required:"true"`
// contains filtered or unexported fields
}

type DescribeCollectionOutput struct {
// The Amazon Resource Name (ARN) of the collection.
CollectionARN *string `type:"string"`
// The number of milliseconds since the Unix epoch time until the creation of
// the collection. The Unix epoch time is 00:00:00 Coordinated Universal Time
// (UTC), Thursday, 1 January 1970.
CreationTimestamp *time.Time `type:"timestamp"`
// The number of faces that are indexed into the collection. To index faces
// into a collection, use IndexFaces.
FaceCount *int64 `type:"long"`
// The version of the face model that's used by the collection for face detection.
//
// For more information, see Model Versioning in the Amazon Rekognition Developer
// Guide.
FaceModelVersion *string `type:"string"`
// contains filtered or unexported fields
}

type DescribeStreamProcessorInput struct {
// Name of the stream processor for which you want information.
//
// Name is a required field
Name *string `min:"1" type:"string" required:"true"`
// contains filtered or unexported fields
}

type DescribeStreamProcessorOutput struct {
// Date and time the stream processor was created.
CreationTimestamp *time.Time `type:"timestamp"`
// Kinesis video stream that provides the source streaming video.
Input *StreamProcessorInput `type:"structure"`
// The time, in Unix format, the stream processor was last updated. For example,
// when the stream processor moves from a running state to a failed state, or
// when the user starts or stops the stream processor.
LastUpdateTimestamp *time.Time `type:"timestamp"`
// Name of the stream processor.
Name *string `min:"1" type:"string"`
// Kinesis data stream to which Amazon Rekognition Video puts the analysis results.
Output *StreamProcessorOutput `type:"structure"`
// ARN of the IAM role that allows access to the stream processor.
RoleArn *string `type:"string"`
// Face recognition input parameters that are being used by the stream processor.
// Includes the collection to use for face recognition and the face attributes
// to detect.
Settings *StreamProcessorSettings `type:"structure"`
// Current status of the stream processor.
Status *string `type:"string" enum:"StreamProcessorStatus"`
// Detailed status message about the stream processor.
StatusMessage *string `type:"string"`
// ARN of the stream processor.
StreamProcessorArn *string `type:"string"`
// contains filtered or unexported fields
}

type DetectFacesInput struct {
// An array of facial attributes you want to be returned. This can be the default
// list of attributes or all attributes. If you don't specify a value for Attributes
// or if you specify ["DEFAULT"], the API returns the following subset of facial
// attributes: BoundingBox, Confidence, Pose, Quality, and Landmarks. If you
// provide ["ALL"], all facial attributes are returned, but the operation takes
// longer to complete.
//
// If you provide both, ["ALL", "DEFAULT"], the service uses a logical AND operator
// to determine which attributes to return (in this case, all attributes).
Attributes []*string `type:"list"`
// The input image as base64-encoded bytes or an S3 object. If you use the AWS
// CLI to call Amazon Rekognition operations, passing base64-encoded image bytes
// is not supported.
//
// Image is a required field
Image *Image `type:"structure" required:"true"`
// contains filtered or unexported fields
}

type DetectFacesOutput struct {
// Details of each face found in the image.
FaceDetails []*FaceDetail `type:"list"`
// The orientation of the input image (counter-clockwise direction). If your
// application displays the image, you can use this value to correct image orientation.
// The bounding box coordinates returned in FaceDetails represent face locations
// before the image orientation is corrected.
//
// If the input image is in .jpeg format, it might contain exchangeable image
// (Exif) metadata that includes the image's orientation. If so, and the Exif
// metadata for the input image populates the orientation field, the value of
// OrientationCorrection is null. The FaceDetails bounding box coordinates represent
// face locations after Exif metadata is used to correct the image orientation.
// Images in .png format don't contain Exif metadata.
OrientationCorrection *string `type:"string" enum:"OrientationCorrection"`
// contains filtered or unexported fields
}

type DetectLabelsInput struct {
// The input image as base64-encoded bytes or an S3 object. If you use the AWS
// CLI to call Amazon Rekognition operations, passing base64-encoded image bytes
// is not supported.
//
// Image is a required field
Image *Image `type:"structure" required:"true"`
// Maximum number of labels you want the service to return in the response.
// The service returns the specified number of highest confidence labels.
MaxLabels *int64 `type:"integer"`
// Specifies the minimum confidence level for the labels to return. Amazon Rekognition
// doesn't return any labels with confidence lower than this specified value.
//
// If MinConfidence is not specified, the operation returns labels with confidence
// values greater than or equal to 50 percent.
MinConfidence *float64 `type:"float"`
// contains filtered or unexported fields
}

type DetectLabelsOutput struct {
// An array of labels for the real-world objects detected.
Labels []*Label `type:"list"`
// The orientation of the input image (counter-clockwise direction). If your
// application displays the image, you can use this value to correct the orientation.
// If Amazon Rekognition detects that the input image was rotated (for example,
// by 90 degrees), it first corrects the orientation before detecting the labels.
//
// If the input image Exif metadata populates the orientation field, Amazon
// Rekognition does not perform orientation correction and the value of OrientationCorrection
// will be null.
OrientationCorrection *string `type:"string" enum:"OrientationCorrection"`
// contains filtered or unexported fields
}

type DetectModerationLabelsOutput struct {
// Array of detected Moderation labels and the time, in milliseconds from the
// start of the video, they were detected.
ModerationLabels []*ModerationLabel `type:"list"`
// contains filtered or unexported fields
}

type FaceDetail struct {
// The estimated age range, in years, for the face. Low represents the lowest
// estimated age and High represents the highest estimated age.
AgeRange *AgeRange `type:"structure"`
// Indicates whether or not the face has a beard, and the confidence level in
// the determination.
Beard *Beard `type:"structure"`
// Bounding box of the face. Default attribute.
BoundingBox *BoundingBox `type:"structure"`
// Confidence level that the bounding box contains a face (and not a different
// object such as a tree). Default attribute.
Confidence *float64 `type:"float"`
// The emotions detected on the face, and the confidence level in the determination.
// For example, HAPPY, SAD, and ANGRY.
Emotions []*Emotion `type:"list"`
// Indicates whether or not the face is wearing eye glasses, and the confidence
// level in the determination.
Eyeglasses *Eyeglasses `type:"structure"`
// Indicates whether or not the eyes on the face are open, and the confidence
// level in the determination.
EyesOpen *EyeOpen `type:"structure"`
// Gender of the face and the confidence level in the determination.
Gender *Gender `type:"structure"`
// Indicates the location of landmarks on the face. Default attribute.
Landmarks []*Landmark `type:"list"`
// Indicates whether or not the mouth on the face is open, and the confidence
// level in the determination.
MouthOpen *MouthOpen `type:"structure"`
// Indicates whether or not the face has a mustache, and the confidence level
// in the determination.
Mustache *Mustache `type:"structure"`
// Indicates the pose of the face as determined by its pitch, roll, and yaw.
// Default attribute.
Pose *Pose `type:"structure"`
// Identifies image brightness and sharpness. Default attribute.
Quality *ImageQuality `type:"structure"`
// Indicates whether or not the face is smiling, and the confidence level in
// the determination.
Smile *Smile `type:"structure"`
// Indicates whether or not the face is wearing sunglasses, and the confidence
// level in the determination.
Sunglasses *Sunglasses `type:"structure"`
// contains filtered or unexported fields
}

Structure containing attributes of the face that the algorithm detected.

A FaceDetail object contains either the default facial attributes or all
facial attributes. The default attributes are BoundingBox, Confidence, Landmarks,
Pose, and Quality.

GetFaceDetection is the only Amazon Rekognition Video stored video operation
that can return a FaceDetail object with all attributes. To specify which
attributes to return, use the FaceAttributes input parameter for StartFaceDetection.
The following Amazon Rekognition Video operations return only the default
attributes. The corresponding Start operations don't have a FaceAttributes
input parameter.

* GetCelebrityRecognition
* GetPersonTracking
* GetFaceSearch

The Amazon Rekognition Image DetectFaces and IndexFaces operations can return
all facial attributes. To specify which attributes to return, use the Attributes
input parameter for DetectFaces. For IndexFaces, use the DetectionAttributes
input parameter.
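
For illustration, here is a minimal sketch of requesting all facial attributes
with DetectFaces. The bucket and object key are hypothetical placeholders,
and default credential and region configuration is assumed.

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rekognition"
)

func main() {
	svc := rekognition.New(session.Must(session.NewSession()))

	// Request all facial attributes; omit Attributes (or pass "DEFAULT")
	// for the default subset. ["ALL"] makes the call take longer.
	out, err := svc.DetectFaces(&rekognition.DetectFacesInput{
		Attributes: []*string{aws.String("ALL")},
		Image: &rekognition.Image{
			S3Object: &rekognition.S3Object{
				Bucket: aws.String("my-bucket"), // hypothetical bucket
				Name:   aws.String("photo.jpg"), // hypothetical key
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, fd := range out.FaceDetails {
		// AgeRange is only populated when all attributes are requested.
		if fd.AgeRange != nil {
			fmt.Printf("estimated age %d-%d\n", *fd.AgeRange.Low, *fd.AgeRange.High)
		}
	}
}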

type FaceDetection struct {
// The face properties for the detected face.
Face *FaceDetail `type:"structure"`
// Time, in milliseconds from the start of the video, that the face was detected.
Timestamp *int64 `type:"long"`
// contains filtered or unexported fields
}

Information about a face detected in a video analysis request and the time
the face was detected in the video.

type GetCelebrityInfoInput struct {
// The ID for the celebrity. You get the celebrity ID from a call to the RecognizeCelebrities
// operation, which recognizes celebrities in an image.
//
// Id is a required field
Id *string `type:"string" required:"true"`
// contains filtered or unexported fields
}

type GetCelebrityRecognitionInput struct {
// Job identifier for the required celebrity recognition analysis. You can get
// the job identifier from a call to StartCelebrityRecognition.
//
// JobId is a required field
JobId *string `min:"1" type:"string" required:"true"`
// Maximum number of results to return per paginated call. The largest value
// you can specify is 1000. If you specify a value greater than 1000, a maximum
// of 1000 results is returned. The default value is 1000.
MaxResults *int64 `min:"1" type:"integer"`
// If the previous response was incomplete (because there are more recognized
// celebrities to retrieve), Amazon Rekognition Video returns a pagination token
// in the response. You can use this pagination token to retrieve the next set
// of celebrities.
NextToken *string `type:"string"`
// Sort to use for celebrities returned in the Celebrities field. Specify ID to
// sort by the celebrity identifier, specify TIMESTAMP to sort by the time the
// celebrity was recognized.
SortBy *string `type:"string" enum:"CelebrityRecognitionSortBy"`
// contains filtered or unexported fields
}

type GetCelebrityRecognitionOutput struct {
// Array of celebrities recognized in the video.
Celebrities []*CelebrityRecognition `type:"list"`
// The current status of the celebrity recognition job.
JobStatus *string `type:"string" enum:"VideoJobStatus"`
// If the response is truncated, Amazon Rekognition Video returns this token
// that you can use in the subsequent request to retrieve the next set of celebrities.
NextToken *string `type:"string"`
// If the job fails, StatusMessage provides a descriptive error message.
StatusMessage *string `type:"string"`
// Information about a video that Amazon Rekognition Video analyzed. Videometadata
// is returned in every page of paginated responses from an Amazon Rekognition
// Video operation.
VideoMetadata *VideoMetadata `type:"structure"`
// contains filtered or unexported fields
}

type GetContentModerationInput struct {
// The identifier for the content moderation job. Use JobId to identify the
// job in a subsequent call to GetContentModeration.
//
// JobId is a required field
JobId *string `min:"1" type:"string" required:"true"`
// Maximum number of results to return per paginated call. The largest value
// you can specify is 1000. If you specify a value greater than 1000, a maximum
// of 1000 results is returned. The default value is 1000.
MaxResults *int64 `min:"1" type:"integer"`
// If the previous response was incomplete (because there is more data to retrieve),
// Amazon Rekognition returns a pagination token in the response. You can use
// this pagination token to retrieve the next set of content moderation labels.
NextToken *string `type:"string"`
// Sort to use for elements in the ModerationLabelDetections array. Use TIMESTAMP
// to sort array elements by the time labels are detected. Use NAME to alphabetically
// group elements for a label together. Within each label group, the array elements
// are sorted by detection confidence. The default sort is by TIMESTAMP.
SortBy *string `type:"string" enum:"ContentModerationSortBy"`
// contains filtered or unexported fields
}

type GetContentModerationOutput struct {
// The current status of the content moderation job.
JobStatus *string `type:"string" enum:"VideoJobStatus"`
// The detected moderation labels and the time(s) they were detected.
ModerationLabels []*ContentModerationDetection `type:"list"`
// If the response is truncated, Amazon Rekognition Video returns this token
// that you can use in the subsequent request to retrieve the next set of moderation
// labels.
NextToken *string `type:"string"`
// If the job fails, StatusMessage provides a descriptive error message.
StatusMessage *string `type:"string"`
// Information about a video that Amazon Rekognition analyzed. Videometadata
// is returned in every page of paginated responses from GetContentModeration.
VideoMetadata *VideoMetadata `type:"structure"`
// contains filtered or unexported fields
}
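
The Get* operations above share the same pagination contract: pass NextToken
back until the response no longer includes one. A minimal sketch for
GetContentModeration follows; the job ID is a hypothetical placeholder, and
polling until JobStatus is SUCCEEDED is omitted for brevity.

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rekognition"
)

func main() {
	svc := rekognition.New(session.Must(session.NewSession()))
	jobID := "job-id-from-StartContentModeration" // hypothetical

	var next *string
	for {
		out, err := svc.GetContentModeration(&rekognition.GetContentModerationInput{
			JobId:      aws.String(jobID),
			MaxResults: aws.Int64(1000),
			NextToken:  next, // nil on the first call
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, d := range out.ModerationLabels {
			fmt.Printf("%dms: %s\n", *d.Timestamp, *d.ModerationLabel.Name)
		}
		if out.NextToken == nil {
			break // last page
		}
		next = out.NextToken
	}
}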

type GetFaceDetectionInput struct {
// Unique identifier for the face detection job. The JobId is returned from
// StartFaceDetection.
//
// JobId is a required field
JobId *string `min:"1" type:"string" required:"true"`
// Maximum number of results to return per paginated call. The largest value
// you can specify is 1000. If you specify a value greater than 1000, a maximum
// of 1000 results is returned. The default value is 1000.
MaxResults *int64 `min:"1" type:"integer"`
// If the previous response was incomplete (because there are more faces to
// retrieve), Amazon Rekognition Video returns a pagination token in the response.
// You can use this pagination token to retrieve the next set of faces.
NextToken *string `type:"string"`
// contains filtered or unexported fields
}

type GetFaceDetectionOutput struct {
// An array of faces detected in the video. Each element contains a detected
// face's details and the time, in milliseconds from the start of the video,
// the face was detected.
Faces []*FaceDetection `type:"list"`
// The current status of the face detection job.
JobStatus *string `type:"string" enum:"VideoJobStatus"`
// If the response is truncated, Amazon Rekognition returns this token that
// you can use in the subsequent request to retrieve the next set of faces.
NextToken *string `type:"string"`
// If the job fails, StatusMessage provides a descriptive error message.
StatusMessage *string `type:"string"`
// Information about a video that Amazon Rekognition Video analyzed. Videometadata
// is returned in every page of paginated responses from an Amazon Rekognition
// Video operation.
VideoMetadata *VideoMetadata `type:"structure"`
// contains filtered or unexported fields
}

type GetFaceSearchInput struct {
// The job identifier for the search request. You get the job identifier from
// an initial call to StartFaceSearch.
//
// JobId is a required field
JobId *string `min:"1" type:"string" required:"true"`
// Maximum number of results to return per paginated call. The largest value
// you can specify is 1000. If you specify a value greater than 1000, a maximum
// of 1000 results is returned. The default value is 1000.
MaxResults *int64 `min:"1" type:"integer"`
// If the previous response was incomplete (because there are more search results
// to retrieve), Amazon Rekognition Video returns a pagination token in the
// response. You can use this pagination token to retrieve the next set of search
// results.
NextToken *string `type:"string"`
// Sort to use for grouping faces in the response. Use TIMESTAMP to group faces
// by the time that they are recognized. Use INDEX to sort by recognized faces.
SortBy *string `type:"string" enum:"FaceSearchSortBy"`
// contains filtered or unexported fields
}

type GetFaceSearchOutput struct {
// The current status of the face search job.
JobStatus *string `type:"string" enum:"VideoJobStatus"`
// If the response is truncated, Amazon Rekognition Video returns this token
// that you can use in the subsequent request to retrieve the next set of search
// results.
NextToken *string `type:"string"`
// An array of persons, PersonMatch, in the video whose face(s) match the face(s)
// in an Amazon Rekognition collection. It also includes time information for
// when persons are matched in the video. You specify the input collection in
// an initial call to StartFaceSearch. Each Persons element includes a time the
// person was matched, face match details (FaceMatches) for matching faces in
// the collection, and person information (Person) for the matched person.
Persons []*PersonMatch `type:"list"`
// If the job fails, StatusMessage provides a descriptive error message.
StatusMessage *string `type:"string"`
// Information about a video that Amazon Rekognition analyzed. Videometadata
// is returned in every page of paginated responses from an Amazon Rekognition
// Video operation.
VideoMetadata *VideoMetadata `type:"structure"`
// contains filtered or unexported fields
}

type GetLabelDetectionInput struct {
// Job identifier for the label detection operation for which you want results
// returned. You get the job identifier from an initial call to StartLabelDetection.
//
// JobId is a required field
JobId *string `min:"1" type:"string" required:"true"`
// Maximum number of results to return per paginated call. The largest value
// you can specify is 1000. If you specify a value greater than 1000, a maximum
// of 1000 results is returned. The default value is 1000.
MaxResults *int64 `min:"1" type:"integer"`
// If the previous response was incomplete (because there are more labels to
// retrieve), Amazon Rekognition Video returns a pagination token in the response.
// You can use this pagination token to retrieve the next set of labels.
NextToken *string `type:"string"`
// Sort to use for elements in the Labels array. Use TIMESTAMP to sort array
// elements by the time labels are detected. Use NAME to alphabetically group
// elements for a label together. Within each label group, the array elements
// are sorted by detection confidence. The default sort is by TIMESTAMP.
SortBy *string `type:"string" enum:"LabelDetectionSortBy"`
// contains filtered or unexported fields
}

type GetLabelDetectionOutput struct {
// The current status of the label detection job.
JobStatus *string `type:"string" enum:"VideoJobStatus"`
// An array of labels detected in the video. Each element contains the detected
// label and the time, in milliseconds from the start of the video, that the
// label was detected.
Labels []*LabelDetection `type:"list"`
// If the response is truncated, Amazon Rekognition Video returns this token
// that you can use in the subsequent request to retrieve the next set of labels.
NextToken *string `type:"string"`
// If the job fails, StatusMessage provides a descriptive error message.
StatusMessage *string `type:"string"`
// Information about a video that Amazon Rekognition Video analyzed. Videometadata
// is returned in every page of paginated responses from an Amazon Rekognition
// Video operation.
VideoMetadata *VideoMetadata `type:"structure"`
// contains filtered or unexported fields
}

type GetPersonTrackingInput struct {
// The identifier for a job that tracks persons in a video. You get the JobId
// from a call to StartPersonTracking.
//
// JobId is a required field
JobId *string `min:"1" type:"string" required:"true"`
// Maximum number of results to return per paginated call. The largest value
// you can specify is 1000. If you specify a value greater than 1000, a maximum
// of 1000 results is returned. The default value is 1000.
MaxResults *int64 `min:"1" type:"integer"`
// If the previous response was incomplete (because there are more persons to
// retrieve), Amazon Rekognition Video returns a pagination token in the response.
// You can use this pagination token to retrieve the next set of persons.
NextToken *string `type:"string"`
// Sort to use for elements in the Persons array. Use TIMESTAMP to sort array
// elements by the time persons are detected. Use INDEX to sort by the tracked
// persons. If you sort by INDEX, the array elements for each person are sorted
// by detection confidence. The default sort is by TIMESTAMP.
SortBy *string `type:"string" enum:"PersonTrackingSortBy"`
// contains filtered or unexported fields
}

type GetPersonTrackingOutput struct {
// The current status of the person tracking job.
JobStatus *string `type:"string" enum:"VideoJobStatus"`
// If the response is truncated, Amazon Rekognition Video returns this token
// that you can use in the subsequent request to retrieve the next set of persons.
NextToken *string `type:"string"`
// An array of the persons detected in the video and the times they are tracked
// throughout the video. An array element will exist for each time the person
// is tracked.
Persons []*PersonDetection `type:"list"`
// If the job fails, StatusMessage provides a descriptive error message.
StatusMessage *string `type:"string"`
// Information about a video that Amazon Rekognition Video analyzed. Videometadata
// is returned in every page of paginated responses from an Amazon Rekognition
// Video operation.
VideoMetadata *VideoMetadata `type:"structure"`
// contains filtered or unexported fields
}

You pass image bytes to an Amazon Rekognition API operation by using the
Bytes property. For example, you would use the Bytes property to pass an
image loaded from a local file system. Image bytes passed by using the Bytes
property must be base64-encoded. Your code may not need to encode image bytes
if you are using an AWS SDK to call Amazon Rekognition API operations.

For more information, see Analyzing an Image Loaded from a Local File System
in the Amazon Rekognition Developer Guide.

You pass images stored in an S3 bucket to an Amazon Rekognition API operation
by using the S3Object property. Images stored in an S3 bucket do not need
to be base64-encoded.

The region for the S3 bucket containing the S3 object must match the region
you use for Amazon Rekognition operations.

If you use the AWS CLI to call Amazon Rekognition operations, passing image
bytes using the Bytes property is not supported. You must first upload the
image to an Amazon S3 bucket and then call the operation using the S3Object
property.

For Amazon Rekognition to process an S3 object, the user must have permission
to access the S3 object. For more information, see Resource Based Policies
in the Amazon Rekognition Developer Guide.
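
A minimal sketch of both ways to populate an Image follows. The file name,
bucket, and key are hypothetical placeholders; with the Go SDK, Bytes is
assigned the raw image bytes and the SDK handles the base64 encoding on the
wire.

package main

import (
	"io/ioutil"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/rekognition"
)

func main() {
	// Option 1: raw image bytes loaded from the local file system.
	data, err := ioutil.ReadFile("face.jpg") // hypothetical local file
	if err != nil {
		log.Fatal(err)
	}
	fromBytes := &rekognition.Image{Bytes: data}

	// Option 2: a reference to an object already stored in S3. The bucket
	// must be in the same region as the Rekognition endpoint you call.
	fromS3 := &rekognition.Image{
		S3Object: &rekognition.S3Object{
			Bucket: aws.String("my-bucket"), // hypothetical bucket
			Name:   aws.String("face.jpg"),  // hypothetical key
		},
	}
	_, _ = fromBytes, fromS3 // pass either to an operation's Image field
}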

type IndexFacesInput struct {
// The ID of an existing collection to which you want to add the faces that
// are detected in the input images.
//
// CollectionId is a required field
CollectionId *string `min:"1" type:"string" required:"true"`
// An array of facial attributes that you want to be returned. This can be the
// default list of attributes or all attributes. If you don't specify a value
// for Attributes or if you specify ["DEFAULT"], the API returns the following
// subset of facial attributes: BoundingBox, Confidence, Pose, Quality, and
// Landmarks. If you provide ["ALL"], all facial attributes are returned, but
// the operation takes longer to complete.
//
// If you provide both, ["ALL", "DEFAULT"], the service uses a logical AND operator
// to determine which attributes to return (in this case, all attributes).
DetectionAttributes []*string `type:"list"`
// The ID you want to assign to all the faces detected in the image.
ExternalImageId *string `min:"1" type:"string"`
// The input image as base64-encoded bytes or an S3 object. If you use the AWS
// CLI to call Amazon Rekognition operations, passing base64-encoded image bytes
// isn't supported.
//
// Image is a required field
Image *Image `type:"structure" required:"true"`
// The maximum number of faces to index. The value of MaxFaces must be greater
// than or equal to 1. IndexFaces returns no more than 100 detected faces in
// an image, even if you specify a larger value for MaxFaces.
//
// If IndexFaces detects more faces than the value of MaxFaces, the faces with
// the lowest quality are filtered out first. If there are still more faces
// than the value of MaxFaces, the faces with the smallest bounding boxes are
// filtered out (up to the number that's needed to satisfy the value of MaxFaces).
// Information about the unindexed faces is available in the UnindexedFaces
// array.
//
// The faces that are returned by IndexFaces are sorted by the largest face
// bounding box size to the smallest size, in descending order.
//
// MaxFaces can be used with a collection associated with any version of the
// face model.
MaxFaces *int64 `min:"1" type:"integer"`
// A filter that specifies how much filtering is done to identify faces that
// are detected with low quality. Filtered faces aren't indexed. If you specify
// AUTO, filtering prioritizes the identification of faces that don't meet the
// required quality bar chosen by Amazon Rekognition. The quality bar is based
// on a variety of common use cases. Low-quality detections can occur for a
// number of reasons. Some examples are an object that's misidentified as a
// face, a face that's too blurry, or a face with a pose that's too extreme
// to use. If you specify NONE, no filtering is performed. The default value
// is AUTO.
//
// To use quality filtering, the collection you are using must be associated
// with version 3 of the face model.
QualityFilter *string `type:"string" enum:"QualityFilter"`
// contains filtered or unexported fields
}
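
A minimal sketch of an IndexFaces call using MaxFaces and QualityFilter
follows; the collection ID, external image ID, bucket, and key are
hypothetical placeholders.

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rekognition"
)

func main() {
	svc := rekognition.New(session.Must(session.NewSession()))

	out, err := svc.IndexFaces(&rekognition.IndexFacesInput{
		CollectionId:    aws.String("my-collection"), // hypothetical collection
		ExternalImageId: aws.String("photo-001"),     // hypothetical ID
		Image: &rekognition.Image{
			S3Object: &rekognition.S3Object{
				Bucket: aws.String("my-bucket"),
				Name:   aws.String("group.jpg"),
			},
		},
		MaxFaces:      aws.Int64(5),       // keep at most the 5 best faces
		QualityFilter: aws.String("AUTO"), // the default; NONE disables filtering
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("indexed %d faces\n", len(out.FaceRecords))
	// Faces rejected by the quality filter or MaxFaces appear here.
	for _, u := range out.UnindexedFaces {
		fmt.Println("unindexed:", aws.StringValueSlice(u.Reasons))
	}
}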

type IndexFacesOutput struct {
// The version number of the face detection model that's associated with the
// input collection (CollectionId).
FaceModelVersion *string `type:"string"`
// An array of faces detected and added to the collection. For more information,
// see Searching Faces in a Collection in the Amazon Rekognition Developer Guide.
FaceRecords []*FaceRecord `type:"list"`
// The orientation of the input image (counterclockwise direction). If your
// application displays the image, you can use this value to correct image orientation.
// The bounding box coordinates returned in FaceRecords represent face locations
// before the image orientation is corrected.
//
// If the input image is in jpeg format, it might contain exchangeable image
// (Exif) metadata. If so, and the Exif metadata populates the orientation field,
// the value of OrientationCorrection is null. The bounding box coordinates
// in FaceRecords represent face locations after Exif metadata is used to correct
// the image orientation. Images in .png format don't contain Exif metadata.
OrientationCorrection *string `type:"string" enum:"OrientationCorrection"`
// An array of faces that were detected in the image but weren't indexed. They
// weren't indexed because the quality filter identified them as low quality,
// or the MaxFaces request parameter filtered them out. To use the quality filter,
// you specify the QualityFilter request parameter.
UnindexedFaces []*UnindexedFace `type:"list"`
// contains filtered or unexported fields
}

The Kinesis data stream to which the analysis results of an Amazon Rekognition
stream processor are streamed. For more information, see CreateStreamProcessor
in the Amazon Rekognition Developer Guide.

Kinesis video stream that provides the source streaming video for an Amazon
Rekognition Video stream processor. For more information, see CreateStreamProcessor
in the Amazon Rekognition Developer Guide.

type Landmark struct {
// Type of landmark.
Type *string `type:"string" enum:"LandmarkType"`
// The x-coordinate from the top left of the landmark expressed as the ratio
// of the width of the image. For example, if the image is 700 x 200 and the
// x-coordinate of the landmark is at 350 pixels, this value is 0.5.
X *float64 `type:"float"`
// The y-coordinate from the top left of the landmark expressed as the ratio
// of the height of the image. For example, if the image is 700 x 200 and the
// y-coordinate of the landmark is at 100 pixels, this value is 0.5.
Y *float64 `type:"float"`
// contains filtered or unexported fields
}

type ListCollectionsOutput struct {
// An array of collection IDs.
CollectionIds []*string `type:"list"`
// Version numbers of the face detection models associated with the collections
// in the array CollectionIds. For example, the value of FaceModelVersions[2]
// is the version number for the face detection model used by the collection
// in CollectionIds[2].
FaceModelVersions []*string `type:"list"`
// If the result is truncated, the response provides a NextToken that you can
// use in the subsequent request to fetch the next set of collection IDs.
NextToken *string `type:"string"`
// contains filtered or unexported fields
}

type ListFacesInput struct {
// ID of the collection from which to list the faces.
//
// CollectionId is a required field
CollectionId *string `min:"1" type:"string" required:"true"`
// Maximum number of faces to return.
MaxResults *int64 `type:"integer"`
// If the previous response was incomplete (because there is more data to retrieve),
// Amazon Rekognition returns a pagination token in the response. You can use
// this pagination token to retrieve the next set of faces.
NextToken *string `type:"string"`
// contains filtered or unexported fields
}

type ListFacesOutput struct {
// Version number of the face detection model associated with the input collection
// (CollectionId).
FaceModelVersion *string `type:"string"`
// An array of Face objects.
Faces []*Face `type:"list"`
// If the response is truncated, Amazon Rekognition returns this token that
// you can use in the subsequent request to retrieve the next set of faces.
NextToken *string `type:"string"`
// contains filtered or unexported fields
}

type ListStreamProcessorsInput struct {
// Maximum number of stream processors you want Amazon Rekognition Video to
// return in the response. The default is 1000.
MaxResults *int64 `min:"1" type:"integer"`
// If the previous response was incomplete (because there are more stream processors
// to retrieve), Amazon Rekognition Video returns a pagination token in the
// response. You can use this pagination token to retrieve the next set of stream
// processors.
NextToken *string `type:"string"`
// contains filtered or unexported fields
}

type ListStreamProcessorsOutput struct {
// If the response is truncated, Amazon Rekognition Video returns this token
// that you can use in the subsequent request to retrieve the next set of stream
// processors.
NextToken *string `type:"string"`
// List of stream processors that you have created.
StreamProcessors []*StreamProcessor `type:"list"`
// contains filtered or unexported fields
}

type ModerationLabel struct {
// Specifies the confidence that Amazon Rekognition has that the label has been
// correctly identified.
//
// If you don't specify the MinConfidence parameter in the call to DetectModerationLabels,
// the operation returns labels with a confidence value greater than or equal
// to 50 percent.
Confidence *float64 `type:"float"`
// The label name for the type of content detected in the image.
Name *string `type:"string"`
// The name for the parent label. Labels at the top level of the hierarchy have
// the parent label "".
ParentName *string `type:"string"`
// contains filtered or unexported fields
}

Provides information about a single type of moderated content found in an
image or video. Each type of moderated content has a label within a hierarchical
taxonomy. For more information, see Detecting Unsafe Content in the Amazon
Rekognition Developer Guide.

type PersonDetection struct {
// Details about a person tracked in a video.
Person *PersonDetail `type:"structure"`
// The time, in milliseconds from the start of the video, that the person was
// tracked.
Timestamp *int64 `type:"long"`
// contains filtered or unexported fields
}

Details and tracking information for a single time a person is tracked in
a video. Amazon Rekognition operations that track persons return an array
of PersonDetection objects with elements for each time a person is tracked
in a video.

For more information, see GetPersonTracking in the Amazon Rekognition
Developer Guide.

type PersonMatch struct {
// Information about the faces in the input collection that match the face of
// a person in the video.
FaceMatches []*FaceMatch `type:"list"`
// Information about the matched person.
Person *PersonDetail `type:"structure"`
// The time, in milliseconds from the beginning of the video, that the person
// was matched in the video.
Timestamp *int64 `type:"long"`
// contains filtered or unexported fields
}

Information about a person whose face matches a face(s) in an Amazon Rekognition
collection. Includes information about the faces in the Amazon Rekognition
collection (FaceMatch), information about the person (PersonDetail), and the
time stamp for when the person was detected in a video. An array of PersonMatch
objects is returned by GetFaceSearch.

type Point struct {
// The value of the X coordinate for a point on a Polygon.
X *float64 `type:"float"`
// The value of the Y coordinate for a point on a Polygon.
Y *float64 `type:"float"`
// contains filtered or unexported fields
}

The X and Y coordinates of a point on an image. The X and Y values returned
are ratios of the overall image size. For example, if the input image is
700x200 and the operation returns X=0.5 and Y=0.25, then the point is at
the (350,50) pixel coordinate on the image.

An array of Point objects, Polygon, is returned by DetectText. Polygon represents
a fine-grained polygon around detected text. For more information, see Geometry
in the Amazon Rekognition Developer Guide.

type RecognizeCelebritiesOutput struct {
// Details about each celebrity found in the image. Amazon Rekognition can detect
// a maximum of 15 celebrities in an image.
CelebrityFaces []*Celebrity `type:"list"`
// The orientation of the input image (counterclockwise direction). If your
// application displays the image, you can use this value to correct the orientation.
// The bounding box coordinates returned in CelebrityFaces and UnrecognizedFaces
// represent face locations before the image orientation is corrected.
//
// If the input image is in .jpeg format, it might contain exchangeable image
// (Exif) metadata that includes the image's orientation. If so, and the Exif
// metadata for the input image populates the orientation field, the value of
// OrientationCorrection is null. The CelebrityFaces and UnrecognizedFaces bounding
// box coordinates represent face locations after Exif metadata is used to correct
// the image orientation. Images in .png format don't contain Exif metadata.
OrientationCorrection *string `type:"string" enum:"OrientationCorrection"`
// Details about each unrecognized face in the image.
UnrecognizedFaces []*ComparedFace `type:"list"`
// contains filtered or unexported fields
}

Compares a face in the source input image with each of the 100 largest faces
detected in the target input image.

If the source image contains multiple faces, the service detects the largest
face and compares it with each face detected in the target image.

You pass the input and target images either as base64-encoded image bytes
or as references to images in an Amazon S3 bucket. If you use the AWS CLI
to call Amazon Rekognition operations, passing image bytes isn't supported.
The image must be formatted as a PNG or JPEG file.

In response, the operation returns an array of face matches ordered by similarity
score in descending order. For each face match, the response provides a bounding
box of the face, facial landmarks, pose details (pitch, roll, and yaw), quality
(brightness and sharpness), and confidence value (indicating the level of
confidence that the bounding box contains a face). The response also provides
a similarity score, which indicates how closely the faces match.

By default, only faces with a similarity score of greater than or equal to
80% are returned in the response. You can change this value by specifying
the SimilarityThreshold parameter.

CompareFaces also returns an array of faces that don't match the source image.
For each face, it returns a bounding box, confidence value, landmarks, pose
details, and quality. The response also returns information about the face
in the source image, including the bounding box of the face and confidence
value.

If the image doesn't contain Exif metadata, CompareFaces returns orientation
information for the source and target images. Use these values to display
the images with the correct image orientation.

If no faces are detected in the source or target images, CompareFaces returns
an InvalidParameterException error.

This is a stateless API operation. That is, data returned by this operation
doesn't persist.

For an example, see Comparing Faces in Images in the Amazon Rekognition Developer
Guide.

This operation requires permissions to perform the rekognition:CompareFaces
action.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.
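
A minimal sketch of a CompareFaces call, including the awserr type assertion
described above, follows; the bucket and object keys are hypothetical
placeholders.

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rekognition"
)

func main() {
	svc := rekognition.New(session.Must(session.NewSession()))

	out, err := svc.CompareFaces(&rekognition.CompareFacesInput{
		SimilarityThreshold: aws.Float64(90), // override the default of 80
		SourceImage: &rekognition.Image{S3Object: &rekognition.S3Object{
			Bucket: aws.String("my-bucket"), Name: aws.String("source.jpg"),
		}},
		TargetImage: &rekognition.Image{S3Object: &rekognition.S3Object{
			Bucket: aws.String("my-bucket"), Name: aws.String("target.jpg"),
		}},
	})
	if err != nil {
		// Service API and SDK errors satisfy awserr.Error.
		if aerr, ok := err.(awserr.Error); ok {
			log.Fatalf("%s: %s", aerr.Code(), aerr.Message())
		}
		log.Fatal(err)
	}
	for _, m := range out.FaceMatches {
		fmt.Printf("match with similarity %.1f%%\n", *m.Similarity)
	}
}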

CompareFacesRequest generates a "aws/request.Request" representing the
client's request for the CompareFaces operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See CompareFaces for more information on using the CompareFaces
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle, such as custom headers or retry logic.
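
A minimal sketch of the Request/Send pattern follows. The custom header is a
hypothetical example of pre-Send customization, and the input is abbreviated
(see the CompareFaces sketch above for a populated one).

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rekognition"
)

func main() {
	svc := rekognition.New(session.Must(session.NewSession()))
	input := &rekognition.CompareFacesInput{ /* see the sketch above */ }

	// The request has not been sent yet; custom configuration can be
	// attached to req before calling Send.
	req, out := svc.CompareFacesRequest(input)
	req.HTTPRequest.Header.Set("X-Example", "demo") // hypothetical header

	if err := req.Send(); err != nil {
		fmt.Println("request failed:", err)
		return
	}
	fmt.Println(out) // out is only valid after Send returns without error
}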

CompareFacesWithContext is the same as CompareFaces with the addition of
the ability to pass a context and additional request options.

See CompareFaces for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.
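
A minimal sketch of the WithContext variant follows, using context.WithTimeout
for request cancellation; the input is abbreviated as above.

package main

import (
	"context"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rekognition"
)

func main() {
	svc := rekognition.New(session.Must(session.NewSession()))

	// Cancel the call if it takes longer than 10 seconds.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	input := &rekognition.CompareFacesInput{ /* as in the sketch above */ }
	if _, err := svc.CompareFacesWithContext(ctx, input); err != nil {
		log.Fatal(err)
	}
}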

Creates a collection in an AWS Region. You can add faces to the collection
using the IndexFaces operation.

For example, you might create collections, one for each of your application
users. A user can then index faces using the IndexFaces operation and persist
results in a specific collection. Then, a user can search the collection
for faces in the user-specific container.

Collection names are case-sensitive.

This operation requires permissions to perform the rekognition:CreateCollection
action.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

CreateCollectionRequest generates a "aws/request.Request" representing the
client's request for the CreateCollection operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See CreateCollection for more information on using the CreateCollection
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle, such as custom headers or retry logic.

CreateCollectionWithContext is the same as CreateCollection with the addition of
the ability to pass a context and additional request options.

See CreateCollection for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

You provide as input a Kinesis video stream (Input) and a Kinesis data stream
(Output). You also specify the face recognition criteria in Settings, for
example the collection containing the faces that you want to recognize.
Use Name to assign an identifier for the stream processor. You use Name to
manage the stream processor. For example, you can start processing the source
video by calling StartStreamProcessor with the Name field.

After you have finished analyzing a streaming video, use StopStreamProcessor
to stop processing. You can delete the stream processor by calling
DeleteStreamProcessor.
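
A minimal sketch of a CreateStreamProcessor call wiring together Input, Output,
Name, RoleArn, and Settings follows; all ARNs, names, and the collection ID
are hypothetical placeholders.

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rekognition"
)

func main() {
	svc := rekognition.New(session.Must(session.NewSession()))

	_, err := svc.CreateStreamProcessor(&rekognition.CreateStreamProcessorInput{
		// Identifier later used to start, stop, and delete the processor.
		Name:    aws.String("my-processor"),
		RoleArn: aws.String("arn:aws:iam::123456789012:role/RekognitionRole"),
		// Source Kinesis video stream.
		Input: &rekognition.StreamProcessorInput{
			KinesisVideoStream: &rekognition.KinesisVideoStream{
				Arn: aws.String("arn:aws:kinesisvideo:us-east-1:123456789012:stream/source/1"),
			},
		},
		// Kinesis data stream that receives the analysis results.
		Output: &rekognition.StreamProcessorOutput{
			KinesisDataStream: &rekognition.KinesisDataStream{
				Arn: aws.String("arn:aws:kinesis:us-east-1:123456789012:stream/results"),
			},
		},
		// Face recognition criteria: which collection to search against.
		Settings: &rekognition.StreamProcessorSettings{
			FaceSearch: &rekognition.FaceSearchSettings{
				CollectionId:       aws.String("my-collection"),
				FaceMatchThreshold: aws.Float64(80),
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}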

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

CreateStreamProcessorRequest generates a "aws/request.Request" representing the
client's request for the CreateStreamProcessor operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See CreateStreamProcessor for more information on using the CreateStreamProcessor
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle, such as custom headers or retry logic.

CreateStreamProcessorWithContext is the same as CreateStreamProcessor with the addition of
the ability to pass a context and additional request options.

See CreateStreamProcessor for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

DeleteCollectionRequest generates a "aws/request.Request" representing the
client's request for the DeleteCollection operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See DeleteCollection for more information on using the DeleteCollection
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle, such as custom headers or retry logic.

DeleteCollectionWithContext is the same as DeleteCollection with the addition of
the ability to pass a context and additional request options.

See DeleteCollection for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

DeleteFacesRequest generates a "aws/request.Request" representing the
client's request for the DeleteFaces operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See DeleteFaces for more information on using the DeleteFaces
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle, such as custom headers or retry logic.

DeleteFacesWithContext is the same as DeleteFaces with the addition of
the ability to pass a context and additional request options.

See DeleteFaces for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Deletes the stream processor identified by Name. You assign the value for
Name when you create the stream processor with CreateStreamProcessor. You
might not be able to use the same name for a stream processor for a few
seconds after calling DeleteStreamProcessor.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

DeleteStreamProcessorRequest generates a "aws/request.Request" representing the
client's request for the DeleteStreamProcessor operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See DeleteStreamProcessor for more information on using the DeleteStreamProcessor
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle, such as custom headers or retry logic.

DeleteStreamProcessorWithContext is the same as DeleteStreamProcessor with the addition of
the ability to pass a context and additional request options.

See DeleteStreamProcessor for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Describes the specified collection. You can use DescribeCollection to get
information, such as the number of faces indexed into a collection and the
version of the model used by the collection for face detection.

For more information, see Describing a Collection in the Amazon Rekognition
Developer Guide.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

DescribeCollectionRequest generates a "aws/request.Request" representing the
client's request for the DescribeCollection operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See DescribeCollection for more information on using the DescribeCollection
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

DescribeCollectionWithContext is the same as DescribeCollection with the addition of
the ability to pass a context and additional request options.

See DescribeCollection for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Provides information about a stream processor created by CreateStreamProcessor.
You can get information about the input and output streams, the input parameters
for the face recognition being performed, and the current status of the stream
processor.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

DescribeStreamProcessorRequest generates a "aws/request.Request" representing the
client's request for the DescribeStreamProcessor operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See DescribeStreamProcessor for more information on using the DescribeStreamProcessor
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

DescribeStreamProcessorWithContext is the same as DescribeStreamProcessor with the addition of
the ability to pass a context and additional request options.

See DescribeStreamProcessor for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

DetectFaces detects the 100 largest faces in the image. For each face detected,
the operation returns face details. These details include a bounding box
of the face, a confidence value (that the bounding box contains a face),
and a fixed set of attributes such as facial landmarks (for example, coordinates
of eye and mouth), gender, presence of beard, sunglasses, and so on.

The face-detection algorithm is most effective on frontal faces. For non-frontal
or obscured faces, the algorithm might not detect the faces or might detect
faces with lower confidence.

You pass the input image either as base64-encoded image bytes or as a reference
to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon
Rekognition operations, passing image bytes is not supported. The image must
be either a PNG or JPEG formatted file.

This is a stateless API operation. That is, the operation does not persist
any data.

This operation requires permissions to perform the rekognition:DetectFaces
action.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.
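
A minimal DetectFaces sketch, assuming a client svc as in the earlier examples;
the bucket and object names are hypothetical.

    func detectFaces(svc *rekognition.Rekognition) error {
        out, err := svc.DetectFaces(&rekognition.DetectFacesInput{
            Image: &rekognition.Image{
                S3Object: &rekognition.S3Object{
                    Bucket: aws.String("my-bucket"), // hypothetical
                    Name:   aws.String("photo.jpg"), // hypothetical
                },
            },
            // Request all facial attributes instead of the default subset.
            Attributes: []*string{aws.String(rekognition.AttributeAll)},
        })
        if err != nil {
            return err
        }
        for _, face := range out.FaceDetails {
            fmt.Printf("confidence %.1f%%, bounding box %v\n",
                aws.Float64Value(face.Confidence), face.BoundingBox)
        }
        return nil
    }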

DetectFacesRequest generates a "aws/request.Request" representing the
client's request for the DetectFaces operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See DetectFaces for more information on using the DetectFaces
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

DetectFacesWithContext is the same as DetectFaces with the addition of
the ability to pass a context and additional request options.

See DetectFaces for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Detects instances of real-world entities within an image (JPEG or PNG) provided
as input. This includes objects like flower, tree, and table; events like
wedding, graduation, and birthday party; and concepts like landscape, evening,
and nature.

For an example, see Analyzing Images Stored in an Amazon S3 Bucket in the
Amazon Rekognition Developer Guide.

DetectLabels does not support the detection of activities. However, activity
detection is supported for label detection in videos. For more information,
see StartLabelDetection in the Amazon Rekognition Developer Guide.

You pass the input image as base64-encoded image bytes or as a reference
to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon
Rekognition operations, passing image bytes is not supported. The image must
be either a PNG or JPEG formatted file.

For each object, scene, and concept the API returns one or more labels. Each
label provides the object name, and the level of confidence that the image
contains the object. For example, suppose the input image has a lighthouse,
the sea, and a rock. The response includes all three labels, one for each
object.

{Name: lighthouse, Confidence: 98.4629}

{Name: rock, Confidence: 79.2097}

{Name: sea, Confidence: 75.061}

In the preceding example, the operation returns one label for each of the
three objects. The operation can also return multiple labels for the same
object in the image. For example, if the input image shows a flower (for
example, a tulip), the operation might return the following three labels.

{Name: flower, Confidence: 99.0562}

{Name: plant, Confidence: 99.0562}

{Name: tulip, Confidence: 99.0562}

In this example, the detection algorithm more precisely identifies the flower
as a tulip.

In response, the API returns an array of labels. In addition, the response
also includes the orientation correction. Optionally, you can specify MinConfidence
to control the confidence threshold for the labels returned. The default
is 50%. You can also add the MaxLabels parameter to limit the number of labels
returned.

If the object detected is a person, the operation doesn't provide the same
facial details that the DetectFaces operation provides.

This is a stateless API operation. That is, the operation does not persist
any data.

This operation requires permissions to perform the rekognition:DetectLabels
action.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.
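
A minimal DetectLabels sketch showing MinConfidence and MaxLabels, assuming
a client svc as before and a hypothetical S3 image.

    func detectLabels(svc *rekognition.Rekognition) error {
        out, err := svc.DetectLabels(&rekognition.DetectLabelsInput{
            Image: &rekognition.Image{
                S3Object: &rekognition.S3Object{
                    Bucket: aws.String("my-bucket"),      // hypothetical
                    Name:   aws.String("lighthouse.jpg"), // hypothetical
                },
            },
            MaxLabels:     aws.Int64(10),   // cap the number of labels returned
            MinConfidence: aws.Float64(75), // raise the default 50% threshold
        })
        if err != nil {
            return err
        }
        for _, label := range out.Labels {
            fmt.Printf("%s: %.4f\n",
                aws.StringValue(label.Name), aws.Float64Value(label.Confidence))
        }
        return nil
    }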

DetectLabelsRequest generates a "aws/request.Request" representing the
client's request for the DetectLabels operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See DetectLabels for more information on using the DetectLabels
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

DetectLabelsWithContext is the same as DetectLabels with the addition of
the ability to pass a context and additional request options.

See DetectLabels for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Detects explicit or suggestive adult content in a specified JPEG or PNG format
image. Use DetectModerationLabels to moderate images depending on your requirements.
For example, you might want to filter images that contain nudity, but not
images containing suggestive content.

To filter images, use the labels returned by DetectModerationLabels to determine
which types of content are appropriate.

For information about moderation labels, see Detecting Unsafe Content in
the Amazon Rekognition Developer Guide.

You pass the input image either as base64-encoded image bytes or as a reference
to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon
Rekognition operations, passing image bytes is not supported. The image must
be either a PNG or JPEG formatted file.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

DetectModerationLabelsRequest generates a "aws/request.Request" representing the
client's request for the DetectModerationLabels operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See DetectModerationLabels for more information on using the DetectModerationLabels
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

DetectModerationLabelsWithContext is the same as DetectModerationLabels with the addition of
the ability to pass a context and additional request options.

See DetectModerationLabels for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Detects text in the input image and converts it into machine-readable text.

Pass the input image as base64-encoded image bytes or as a reference to an
image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition
operations, you must pass it as a reference to an image in an Amazon S3 bucket.
For the AWS CLI, passing image bytes is not supported. The image must be
either a .png or .jpeg formatted file.

The DetectText operation returns text in an array of elements, TextDetections.
Each TextDetection element provides information about a single word or line
of text that was detected in the image.

A word is one or more ISO Basic Latin script characters that are not separated
by spaces. DetectText can detect up to 50 words in an image.

A line is a string of equally spaced words. A line isn't necessarily a complete
sentence. For example, a driver's license number is detected as a line. A
line ends when there is no aligned text after it. Also, a line ends when
there is a large gap between words, relative to the length of the words.
This means, depending on the gap between words, Amazon Rekognition may detect
multiple lines in text aligned in the same direction. Periods don't represent
the end of a line. If a sentence spans multiple lines, the DetectText operation
returns multiple lines.

To determine whether a TextDetection element is a line of text or a word,
use the TextDetection object Type field.

To be detected, text must be within +/- 90 degrees orientation of the horizontal
axis.

For more information, see DetectText in the Amazon Rekognition Developer
Guide.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.
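
A minimal DetectText sketch that uses the Type field to separate lines from
words, assuming a client svc as before and a hypothetical S3 image.

    func detectText(svc *rekognition.Rekognition) error {
        out, err := svc.DetectText(&rekognition.DetectTextInput{
            Image: &rekognition.Image{
                S3Object: &rekognition.S3Object{
                    Bucket: aws.String("my-bucket"), // hypothetical
                    Name:   aws.String("sign.png"),  // hypothetical
                },
            },
        })
        if err != nil {
            return err
        }
        for _, td := range out.TextDetections {
            // Type distinguishes whole lines ("LINE") from single words ("WORD").
            if aws.StringValue(td.Type) == rekognition.TextTypesLine {
                fmt.Println("line:", aws.StringValue(td.DetectedText))
            }
        }
        return nil
    }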

DetectTextRequest generates a "aws/request.Request" representing the
client's request for the DetectText operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See DetectText for more information on using the DetectText
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

DetectTextWithContext is the same as DetectText with the addition of
the ability to pass a context and additional request options.

See DetectText for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Gets the name and additional information about a celebrity based on his or
her Amazon Rekognition ID. The additional information is returned as an array
of URLs. If there is no additional information about the celebrity, this
list is empty.

For more information, see Recognizing Celebrities in an Image in the Amazon
Rekognition Developer Guide.

This operation requires permissions to perform the rekognition:GetCelebrityInfo
action.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

GetCelebrityInfoRequest generates a "aws/request.Request" representing the
client's request for the GetCelebrityInfo operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See GetCelebrityInfo for more information on using the GetCelebrityInfo
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

GetCelebrityInfoWithContext is the same as GetCelebrityInfo with the addition of
the ability to pass a context and additional request options.

See GetCelebrityInfo for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Gets the celebrity recognition results for an Amazon Rekognition Video analysis
started by StartCelebrityRecognition.

Celebrity recognition in a video is an asynchronous operation. Analysis is
started by a call to StartCelebrityRecognition, which returns a job identifier
(JobId). When the celebrity recognition operation finishes, Amazon Rekognition
Video publishes a completion status to the Amazon Simple Notification Service
topic registered in the initial call to StartCelebrityRecognition. To get the
results of the celebrity recognition analysis, first check that the status
value published to the Amazon SNS topic is SUCCEEDED. If so, call
GetCelebrityRecognition and pass the job identifier (JobId) from the initial
call to StartCelebrityRecognition.

For more information, see Working With Stored Videos in the Amazon Rekognition
Developer Guide.

GetCelebrityRecognition returns detected celebrities and the time(s) they
are detected in an array (Celebrities) of CelebrityRecognition objects. Each
CelebrityRecognition contains information about the celebrity in a CelebrityDetail
object and the time, Timestamp, the celebrity was detected.

GetCelebrityRecognition only returns the default facial attributes (BoundingBox,
Confidence, Landmarks, Pose, and Quality). The other facial attributes listed
in the Face object of the following response syntax are not returned. For
more information, see FaceDetail in the Amazon Rekognition Developer Guide.

By default, the Celebrities array is sorted by time (milliseconds from the
start of the video). You can also sort the array by celebrity by specifying
the value ID in the SortBy input parameter.

The CelebrityDetail object includes the celebrity identifier and additional
information URLs. If you don't store the additional information URLs, you
can get them later by calling GetCelebrityInfo with the celebrity identifier.

No information is returned for faces not recognized as celebrities.

Use the MaxResults parameter to limit the number of celebrities returned. If
there are more results than specified in MaxResults, the value of NextToken
in the operation response contains a pagination token for getting the next
set of results. To get the next page of results, call GetCelebrityRecognition
and populate the NextToken request parameter with the token value returned
from the previous call to GetCelebrityRecognition.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

GetCelebrityRecognitionPages iterates over the pages of a GetCelebrityRecognition operation,
calling the "fn" function with the response data for each page. To stop
iterating, return false from the fn function.

See GetCelebrityRecognition method for more information on how to use this operation.
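
A minimal pagination sketch, assuming a client svc as before and a JobId
returned by StartCelebrityRecognition.

    func pageCelebrities(svc *rekognition.Rekognition, jobID string) error {
        input := &rekognition.GetCelebrityRecognitionInput{
            JobId:      aws.String(jobID),
            MaxResults: aws.Int64(100),
        }
        return svc.GetCelebrityRecognitionPages(input,
            func(page *rekognition.GetCelebrityRecognitionOutput, lastPage bool) bool {
                for _, rec := range page.Celebrities {
                    fmt.Printf("%dms: %s\n",
                        aws.Int64Value(rec.Timestamp),
                        aws.StringValue(rec.Celebrity.Name))
                }
                return true // return false to stop iterating early
            })
    }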

GetCelebrityRecognitionPagesWithContext is the same as GetCelebrityRecognitionPages,
except it takes a Context and allows setting request options on the pages.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

GetCelebrityRecognitionRequest generates a "aws/request.Request" representing the
client's request for the GetCelebrityRecognition operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See GetCelebrityRecognition for more information on using the GetCelebrityRecognition
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

GetCelebrityRecognitionWithContext is the same as GetCelebrityRecognition with the addition of
the ability to pass a context and additional request options.

See GetCelebrityRecognition for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Gets the content moderation analysis results for an Amazon Rekognition Video
analysis started by StartContentModeration.

Content moderation analysis of a video is an asynchronous operation. You
start analysis by calling StartContentModeration, which returns a job identifier
(JobId). When analysis finishes, Amazon Rekognition Video publishes a completion
status to the Amazon Simple Notification Service topic registered in the initial
call to StartContentModeration. To get the results of the content moderation
analysis, first check that the status value published to the Amazon SNS topic
is SUCCEEDED. If so, call GetContentModeration and pass the job identifier
(JobId) from the initial call to StartContentModeration.

For more information, see Working with Stored Videos in the Amazon Rekognition
Developer Guide.

GetContentModeration returns detected content moderation labels, and the
time they are detected, in an array, ModerationLabels, of objects.

By default, the moderated labels are returned sorted by time, in milliseconds
from the start of the video. You can also sort them by moderated label by
specifying NAME for the SortBy input parameter.

Since video analysis can return a large number of results, use the MaxResults
parameter to limit the number of labels returned in a single call to GetContentModeration.
If there are more results than specified in MaxResults, the value of NextToken
in the operation response contains a pagination token for getting the next
set of results. To get the next page of results, call GetContentModeration
and populate the NextToken request parameter with the value of NextToken
returned from the previous call to GetContentModeration.

For more information, see Detecting Unsafe Content in the Amazon Rekognition
Developer Guide.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

GetContentModerationPages iterates over the pages of a GetContentModeration operation,
calling the "fn" function with the response data for each page. To stop
iterating, return false from the fn function.

See GetContentModeration method for more information on how to use this operation.

GetContentModerationPagesWithContext is the same as GetContentModerationPages,
except it takes a Context and allows setting request options on the pages.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

GetContentModerationRequest generates a "aws/request.Request" representing the
client's request for the GetContentModeration operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See GetContentModeration for more information on using the GetContentModeration
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

GetContentModerationWithContext is the same as GetContentModeration with the addition of
the ability to pass a context and additional request options.

See GetContentModeration for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Gets face detection results for an Amazon Rekognition Video analysis started
by StartFaceDetection.

Face detection with Amazon Rekognition Video is an asynchronous operation.
You start face detection by calling StartFaceDetection, which returns a job
identifier (JobId). When the face detection operation finishes, Amazon
Rekognition Video publishes a completion status to the Amazon Simple
Notification Service topic registered in the initial call to StartFaceDetection.
To get the results of the face detection operation, first check that the status
value published to the Amazon SNS topic is SUCCEEDED. If so, call
GetFaceDetection and pass the job identifier (JobId) from the initial call
to StartFaceDetection.

GetFaceDetection returns an array of detected faces (Faces) sorted by the
time the faces were detected.

Use the MaxResults parameter to limit the number of faces returned. If there
are more results than specified in MaxResults, the value of NextToken in
the operation response contains a pagination token for getting the next set
of results. To get the next page of results, call GetFaceDetection and populate
the NextToken request parameter with the token value returned from the previous
call to GetFaceDetection.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

GetFaceDetectionPagesWithContext is the same as GetFaceDetectionPages,
except it takes a Context and allows setting request options on the pages.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

GetFaceDetectionRequest generates a "aws/request.Request" representing the
client's request for the GetFaceDetection operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See GetFaceDetection for more information on using the GetFaceDetection
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

GetFaceDetectionWithContext is the same as GetFaceDetection with the addition of
the ability to pass a context and additional request options.

See GetFaceDetection for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Gets the face search results for Amazon Rekognition Video face search started
by StartFaceSearch. The search returns faces in a collection that match the
faces of persons detected in a video. It also includes the time(s) that faces
are matched in the video.

Face search in a video is an asynchronous operation. You start face search
by calling StartFaceSearch, which returns a job identifier (JobId). When the
search operation finishes, Amazon Rekognition Video publishes a completion
status to the Amazon Simple Notification Service topic registered in the
initial call to StartFaceSearch. To get the search results, first check that
the status value published to the Amazon SNS topic is SUCCEEDED. If so, call
GetFaceSearch and pass the job identifier (JobId) from the initial call to
StartFaceSearch.

For more information, see Searching Faces in a Collection in the Amazon Rekognition
Developer Guide.

The search results are returned in an array, Persons, of PersonMatch objects.
Each PersonMatch element contains details about the matching faces in the
input collection, person information (facial attributes, bounding boxes, and
person identifier) for the matched person, and the time the person was matched
in the video.

GetFaceSearch only returns the default facial attributes (BoundingBox, Confidence,
Landmarks, Pose, and Quality). The other facial attributes listed in the
Face object of the following response syntax are not returned. For more information,
see FaceDetail in the Amazon Rekognition Developer Guide.

By default, the Persons array is sorted by the time, in milliseconds from
the start of the video, that persons are matched. You can also sort by person
by specifying INDEX for the SortBy input parameter.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

GetFaceSearchPagesWithContext is the same as GetFaceSearchPages,
except it takes a Context and allows setting request options on the pages.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

GetFaceSearchRequest generates a "aws/request.Request" representing the
client's request for the GetFaceSearch operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See GetFaceSearch for more information on using the GetFaceSearch
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

GetFaceSearchWithContext is the same as GetFaceSearch with the addition of
the ability to pass a context and additional request options.

See GetFaceSearch for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Gets the label detection results of an Amazon Rekognition Video analysis
started by StartLabelDetection.

The label detection operation is started by a call to StartLabelDetection,
which returns a job identifier (JobId). When the label detection operation
finishes, Amazon Rekognition publishes a completion status to the Amazon
Simple Notification Service topic registered in the initial call to
StartLabelDetection. To get the results of the label detection operation,
first check that the status value published to the Amazon SNS topic is
SUCCEEDED. If so, call GetLabelDetection and pass the job identifier (JobId)
from the initial call to StartLabelDetection.

GetLabelDetection returns an array of detected labels (Labels) sorted by
the time the labels were detected. You can also sort by the label name by
specifying NAME for the SortBy input parameter.

The labels returned include the label name, the percentage confidence in
the accuracy of the detected label, and the time the label was detected in
the video.

Use the MaxResults parameter to limit the number of labels returned. If there
are more results than specified in MaxResults, the value of NextToken in
the operation response contains a pagination token for getting the next set
of results. To get the next page of results, call GetLabelDetection and populate
the NextToken request parameter with the token value returned from the previous
call to GetLabelDetection.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

GetLabelDetectionPagesWithContext is the same as GetLabelDetectionPages,
except it takes a Context and allows setting request options on the pages.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

GetLabelDetectionRequest generates a "aws/request.Request" representing the
client's request for the GetLabelDetection operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See GetLabelDetection for more information on using the GetLabelDetection
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

GetLabelDetectionWithContext is the same as GetLabelDetection with the addition of
the ability to pass a context and additional request options.

See GetLabelDetection for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Gets the person tracking results of an Amazon Rekognition Video analysis
started by StartPersonTracking.

The person detection operation is started by a call to StartPersonTracking
which returns a job identifier (JobId). When the person detection operation
finishes, Amazon Rekognition Video publishes a completion status to the Amazon
Simple Notification Service topic registered in the initial call to StartPersonTracking.

To get the results of the person tracking operation, first check that the
status value published to the Amazon SNS topic is SUCCEEDED. If so, call
GetPersonTracking and pass the job identifier (JobId) from the initial call
to StartPersonTracking.

GetPersonTracking returns an array, Persons, of tracked persons and the time(s)
they were tracked in the video.

GetPersonTracking only returns the default facial attributes (BoundingBox,
Confidence, Landmarks, Pose, and Quality). The other facial attributes listed
in the Face object of the following response syntax are not returned.

For more information, see FaceDetail in the Amazon Rekognition Developer
Guide.

By default, the array is sorted by the time(s) a person is tracked in the
video. You can sort by tracked persons by specifying INDEX for the SortBy
input parameter.

Use the MaxResults parameter to limit the number of items returned. If there
are more results than specified in MaxResults, the value of NextToken in
the operation response contains a pagination token for getting the next set
of results. To get the next page of results, call GetPersonTracking and populate
the NextToken request parameter with the token value returned from the previous
call to GetPersonTracking.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

GetPersonTrackingPagesWithContext is the same as GetPersonTrackingPages,
except it takes a Context and allows setting request options on the pages.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

GetPersonTrackingRequest generates a "aws/request.Request" representing the
client's request for the GetPersonTracking operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See GetPersonTracking for more information on using the GetPersonTracking
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

GetPersonTrackingWithContext is the same as GetPersonTracking with the addition of
the ability to pass a context and additional request options.

See GetPersonTracking for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Detects faces in the input image and adds them to the specified collection.

Amazon Rekognition doesn't save the actual faces that are detected. Instead,
the underlying detection algorithm first detects the faces in the input image.
For each face, the algorithm extracts facial features into a feature vector,
and stores it in the backend database. Amazon Rekognition uses feature vectors
when it performs face match and search operations using the SearchFaces and
SearchFacesByImage operations.

For more information, see Adding Faces to a Collection in the Amazon Rekognition
Developer Guide.

To get the number of faces in a collection, call DescribeCollection.

If you're using version 1.0 of the face detection model, IndexFaces indexes
the 15 largest faces in the input image. Later versions of the face detection
model index the 100 largest faces in the input image. To determine which
version of the model you're using, call DescribeCollection and supply the
collection ID. You
can also get the model version from the value of FaceModelVersion in the
response from IndexFaces.

For more information, see Model Versioning in the Amazon Rekognition Developer
Guide.

If you provide the optional ExternalImageId for the input image, Amazon
Rekognition associates this ID with all faces that it detects. When you
call the ListFaces operation, the response returns the external ID. You can use
this external image ID to create a client-side index to associate the faces
with each image. You can then use the index to find all faces in an image.

You can specify the maximum number of faces to index with the MaxFaces input
parameter. This is useful when you want to index the largest faces in an
image and don't want to index smaller faces, such as those belonging to people
standing in the background.

The QualityFilter input parameter allows you to filter out detected faces
that don't meet the required quality bar chosen by Amazon Rekognition. The
quality bar is based on a variety of common use cases. By default, IndexFaces
filters detected faces. You can also explicitly filter detected faces by
specifying AUTO for the value of QualityFilter. If you do not want to filter
detected faces, specify NONE.

To use quality filtering, you need a collection associated with version 3
of the face model. To get the version of the face model associated with a
collection, call DescribeCollection.

Information about faces detected in an image, but not indexed, is returned
in an array of objects, UnindexedFaces. Faces aren't indexed for reasons
such as:

* The number of faces detected exceeds the value of the MaxFaces request
parameter.
* The face is too small compared to the image dimensions.
* The face is too blurry.
* The image is too dark.
* The face has an extreme pose.

In response, the IndexFaces operation returns an array of metadata for all
detected faces, FaceRecords. This includes:

* The bounding box, BoundingBox, of the detected face.
* A confidence value, Confidence, which indicates the confidence that
the bounding box contains a face.
* A face ID, FaceId, assigned by the service for each face that's detected
and stored.
* An image ID, ImageId, assigned by the service for the input image.

If you request all facial attributes (by using the DetectionAttributes parameter),
Amazon Rekognition returns detailed facial attributes, such as facial landmarks
(for example, location of eye and mouth) and other facial attributes like
gender. If you provide the same image, specify the same collection, and use
the same external ID in the IndexFaces operation, Amazon Rekognition doesn't
save duplicate face metadata.

The input image is passed either as base64-encoded image bytes, or as a reference
to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon
Rekognition operations, passing image bytes isn't supported. The image must
be formatted as a PNG or JPEG file.

This operation requires permissions to perform the rekognition:IndexFaces
action.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.
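
A minimal IndexFaces sketch showing ExternalImageId, MaxFaces, and QualityFilter,
assuming a client svc as before; all identifiers are hypothetical.

    func indexFaces(svc *rekognition.Rekognition) error {
        out, err := svc.IndexFaces(&rekognition.IndexFacesInput{
            CollectionId:    aws.String("my-collection"), // hypothetical
            ExternalImageId: aws.String("photo-001"),     // hypothetical client-side key
            Image: &rekognition.Image{
                S3Object: &rekognition.S3Object{
                    Bucket: aws.String("my-bucket"), // hypothetical
                    Name:   aws.String("group.jpg"), // hypothetical
                },
            },
            MaxFaces:      aws.Int64(5), // index only the 5 largest faces
            QualityFilter: aws.String(rekognition.QualityFilterAuto),
        })
        if err != nil {
            return err
        }
        fmt.Printf("indexed %d faces, left %d unindexed\n",
            len(out.FaceRecords), len(out.UnindexedFaces))
        return nil
    }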

IndexFacesRequest generates a "aws/request.Request" representing the
client's request for the IndexFaces operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See IndexFaces for more information on using the IndexFaces
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

IndexFacesWithContext is the same as IndexFaces with the addition of
the ability to pass a context and additional request options.

See IndexFaces for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

ListCollectionsPagesWithContext is the same as ListCollectionsPages,
except it takes a Context and allows setting request options on the pages.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

ListCollectionsRequest generates a "aws/request.Request" representing the
client's request for the ListCollections operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See ListCollections for more information on using the ListCollections
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

ListCollectionsWithContext is the same as ListCollections with the addition of
the ability to pass a context and additional request options.

See ListCollections for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Returns metadata for faces in the specified collection. This metadata includes
information such as the bounding box coordinates, the confidence (that the
bounding box contains a face), and face ID. For an example, see Listing Faces
in a Collection in the Amazon Rekognition Developer Guide.

This operation requires permissions to perform the rekognition:ListFaces
action.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

ListFacesPagesWithContext is the same as ListFacesPages,
except it takes a Context and allows setting request options on the pages.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

ListFacesRequest generates a "aws/request.Request" representing the
client's request for the ListFaces operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See ListFaces for more information on using the ListFaces
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

ListFacesWithContext is the same as ListFaces with the addition of
the ability to pass a context and additional request options.

See ListFaces for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

ListStreamProcessorsPages iterates over the pages of a ListStreamProcessors operation,
calling the "fn" function with the response data for each page. To stop
iterating, return false from the fn function.

See ListStreamProcessors method for more information on how to use this operation.

ListStreamProcessorsPagesWithContext is the same as ListStreamProcessorsPages,
except it takes a Context and allows setting request options on the pages.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

ListStreamProcessorsRequest generates a "aws/request.Request" representing the
client's request for the ListStreamProcessors operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See ListStreamProcessors for more information on using the ListStreamProcessors
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

ListStreamProcessorsWithContext is the same as ListStreamProcessors with the addition of
the ability to pass a context and additional request options.

See ListStreamProcessors for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Returns an array of celebrities recognized in the input image. For more information,
see Recognizing Celebrities in the Amazon Rekognition Developer Guide.

RecognizeCelebrities returns the 100 largest faces in the image. It lists
recognized celebrities in the CelebrityFaces array and unrecognized faces
in the UnrecognizedFaces array. RecognizeCelebrities doesn't return celebrities
whose faces aren't among the largest 100 faces in the image.

For each celebrity recognized, RecognizeCelebrities returns a Celebrity object.
The Celebrity object contains the celebrity name, ID, URL links to additional
information, match confidence, and a ComparedFace object that you can use
to locate the celebrity's face on the image.

Amazon Rekognition doesn't retain information about which images a celebrity
has been recognized in. Your application must store this information and
use the Celebrity ID property as a unique identifier for the celebrity. If
you don't store the celebrity name or additional information URLs returned
by RecognizeCelebrities, you will need the ID to identify the celebrity in
a call to the GetCelebrityInfo operation.

You pass the input image either as base64-encoded image bytes or as a reference
to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon
Rekognition operations, passing image bytes is not supported. The image must
be either a PNG or JPEG formatted file.

For an example, see Recognizing Celebrities in an Image in the Amazon Rekognition
Developer Guide.

This operation requires permissions to perform the rekognition:RecognizeCelebrities
operation.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.
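
A minimal RecognizeCelebrities sketch, assuming a client svc as before and
a hypothetical S3 image.

    func recognizeCelebrities(svc *rekognition.Rekognition) error {
        out, err := svc.RecognizeCelebrities(&rekognition.RecognizeCelebritiesInput{
            Image: &rekognition.Image{
                S3Object: &rekognition.S3Object{
                    Bucket: aws.String("my-bucket"),      // hypothetical
                    Name:   aws.String("red-carpet.jpg"), // hypothetical
                },
            },
        })
        if err != nil {
            return err
        }
        for _, c := range out.CelebrityFaces {
            // Persist Id yourself; Rekognition doesn't record which images a
            // celebrity has been recognized in.
            fmt.Printf("%s (ID %s), confidence %.1f%%\n",
                aws.StringValue(c.Name), aws.StringValue(c.Id),
                aws.Float64Value(c.MatchConfidence))
        }
        fmt.Println("unrecognized faces:", len(out.UnrecognizedFaces))
        return nil
    }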

RecognizeCelebritiesRequest generates a "aws/request.Request" representing the
client's request for the RecognizeCelebrities operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See RecognizeCelebrities for more information on using the RecognizeCelebrities
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

RecognizeCelebritiesWithContext is the same as RecognizeCelebrities with the addition of
the ability to pass a context and additional request options.

See RecognizeCelebrities for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

For a given input face ID, searches for matching faces in the collection
the face belongs to. You get a face ID when you add a face to the collection
using the IndexFaces operation. The operation compares the features of the
input face with faces in the specified collection.

You can also search faces without indexing faces by using the SearchFacesByImage
operation.

The operation response returns an array of faces that match, ordered by similarity
score with the highest similarity first. More specifically, it is an array
of metadata for each face match that is found. Along with the metadata, the
response also includes a confidence value for each face match, indicating
the confidence that the specific face matches the input face.

For an example, see Searching for a Face Using Its Face ID in the Amazon
Rekognition Developer Guide.

This operation requires permissions to perform the rekognition:SearchFaces
action.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

For a given input image, first detects the largest face in the image, and
then searches the specified collection for matching faces. The operation
compares the features of the input face with faces in the specified collection.

To search for all faces in an input image, you might first call the IndexFaces
operation, and then use the face IDs returned in subsequent calls to the
SearchFaces operation.

You can also call the DetectFaces operation and use the bounding boxes in
the response to make face crops, which you can then pass to the SearchFacesByImage
operation.

You pass the input image either as base64-encoded image bytes or as a reference
to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon
Rekognition operations, passing image bytes is not supported. The image must
be either a PNG or JPEG formatted file.

The response returns an array of faces that match, ordered by similarity
score with the highest similarity first. More specifically, it is an array
of metadata for each face match found. Along with the metadata, the response
also includes a similarity indicating how similar the face is to the input
face. In the response, the operation also returns the bounding box (and a
confidence level that the bounding box contains a face) of the face that
Amazon Rekognition used for the input image.

For an example, see Searching for a Face Using an Image in the Amazon Rekognition
Developer Guide.

This operation requires permissions to perform the rekognition:SearchFacesByImage
action.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.
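
A minimal SearchFacesByImage sketch, assuming a client svc as before; the
collection, bucket, and threshold values are hypothetical.

    func searchByImage(svc *rekognition.Rekognition) error {
        out, err := svc.SearchFacesByImage(&rekognition.SearchFacesByImageInput{
            CollectionId: aws.String("my-collection"), // hypothetical
            Image: &rekognition.Image{
                S3Object: &rekognition.S3Object{
                    Bucket: aws.String("my-bucket"), // hypothetical
                    Name:   aws.String("query.jpg"), // hypothetical
                },
            },
            FaceMatchThreshold: aws.Float64(90), // return only confident matches
            MaxFaces:           aws.Int64(5),
        })
        if err != nil {
            return err
        }
        for _, m := range out.FaceMatches {
            fmt.Printf("face %s, similarity %.2f\n",
                aws.StringValue(m.Face.FaceId), aws.Float64Value(m.Similarity))
        }
        return nil
    }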

SearchFacesByImageRequest generates a "aws/request.Request" representing the
client's request for the SearchFacesByImage operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See SearchFacesByImage for more information on using the SearchFacesByImage
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

SearchFacesByImageWithContext is the same as SearchFacesByImage with the addition of
the ability to pass a context and additional request options.

See SearchFacesByImage for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

SearchFacesRequest generates a "aws/request.Request" representing the
client's request for the SearchFaces operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use "Send" method on the returned Request to send the API call to the service.
the "output" return value is not valid until after Send returns without error.

See SearchFaces for more information on using the SearchFaces
API call, and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle. Such as custom headers, or retry logic.

SearchFacesWithContext is the same as SearchFaces with the addition of
the ability to pass a context and additional request options.

See SearchFaces for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Starts asynchronous recognition of celebrities in a stored video. Amazon
Rekognition Video can detect celebrities in a video; the video must be stored
in an Amazon S3 bucket. Use Video to specify the bucket name and the filename
of the video. StartCelebrityRecognition returns a job identifier (JobId)
which you use to get the results of the analysis. When celebrity recognition
analysis is finished, Amazon Rekognition Video publishes a completion status
to the Amazon Simple Notification Service topic that you specify in NotificationChannel.
To get the results of the celebrity recognition analysis, first check that
the status value published to the Amazon SNS topic is SUCCEEDED. If so, call
GetCelebrityRecognition and pass the job identifier (JobId) from the initial
call to StartCelebrityRecognition.

For more information, see Recognizing Celebrities in the Amazon Rekognition
Developer Guide.

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

* ErrCodeAccessDeniedException "AccessDeniedException"
You are not authorized to perform the action.
* ErrCodeIdempotentParameterMismatchException "IdempotentParameterMismatchException"
A ClientRequestToken input parameter was reused with an operation, but at
least one of the other input parameters is different from the previous call
to the operation.
* ErrCodeInvalidParameterException "InvalidParameterException"
Input parameter violated a constraint. Validate your parameter before calling
the API operation again.
* ErrCodeInvalidS3ObjectException "InvalidS3ObjectException"
Amazon Rekognition is unable to access the S3 object specified in the request.
* ErrCodeInternalServerError "InternalServerError"
Amazon Rekognition experienced a service issue. Try your call again.
* ErrCodeVideoTooLargeException "VideoTooLargeException"
The file size or duration of the supplied media is too large. The maximum
file size is 8GB. The maximum duration is 2 hours.
* ErrCodeProvisionedThroughputExceededException "ProvisionedThroughputExceededException"
The number of requests exceeded your throughput limit. If you want to increase
this limit, contact Amazon Rekognition.
* ErrCodeLimitExceededException "LimitExceededException"
An Amazon Rekognition service limit was exceeded. For example, if you start
too many Amazon Rekognition Video jobs concurrently, calls to start operations
(StartLabelDetection, for example) will raise a LimitExceededException exception
(HTTP status code: 400) until the number of concurrently running jobs is
below the Amazon Rekognition service limit.
* ErrCodeThrottlingException "ThrottlingException"
Amazon Rekognition is temporarily unable to process the request. Try your
call again.
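
A minimal StartCelebrityRecognition sketch, assuming a client svc as before;
the bucket, topic ARN, and role ARN are hypothetical placeholders.

    func startCelebrityRecognition(svc *rekognition.Rekognition) (string, error) {
        out, err := svc.StartCelebrityRecognition(&rekognition.StartCelebrityRecognitionInput{
            Video: &rekognition.Video{
                S3Object: &rekognition.S3Object{
                    Bucket: aws.String("my-video-bucket"), // hypothetical
                    Name:   aws.String("awards.mp4"),      // hypothetical
                },
            },
            // Completion status is published to this hypothetical SNS topic.
            NotificationChannel: &rekognition.NotificationChannel{
                SNSTopicArn: aws.String("arn:aws:sns:us-east-1:123456789012:rekognition-status"),
                RoleArn:     aws.String("arn:aws:iam::123456789012:role/rekognition-sns"),
            },
        })
        if err != nil {
            return "", err
        }
        // Keep the JobId; GetCelebrityRecognition needs it once SNS reports SUCCEEDED.
        return aws.StringValue(out.JobId), nil
    }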

StartCelebrityRecognitionRequest generates a "aws/request.Request" representing the
client's request for the StartCelebrityRecognition operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use the "Send" method on the returned Request to send the API call to the service.
The "output" return value is not valid until after Send returns without error.

See StartCelebrityRecognition for more information on using the StartCelebrityRecognition
API call and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle, such as custom headers or retry logic.
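
As a hedged illustration of that pattern (the video location and the tracing
header are hypothetical), the Request form might be used like this:

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rekognition"
)

func main() {
	svc := rekognition.New(session.Must(session.NewSession()))

	req, out := svc.StartCelebrityRecognitionRequest(&rekognition.StartCelebrityRecognitionInput{
		Video: &rekognition.Video{S3Object: &rekognition.S3Object{
			Bucket: aws.String("my-bucket"), // hypothetical bucket
			Name:   aws.String("videos/interview.mp4"),
		}},
	})

	// Inject custom configuration before sending, e.g. a tracing header.
	req.HTTPRequest.Header.Set("X-Trace-Id", "example") // hypothetical header

	if err := req.Send(); err != nil {
		log.Fatal(err)
	}
	// out is only valid after Send returns without error.
	fmt.Println(*out.JobId)
}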

StartCelebrityRecognitionWithContext is the same as StartCelebrityRecognition with the addition of
the ability to pass a context and additional request options.

See StartCelebrityRecognition for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Starts asynchronous detection of explicit or suggestive adult content in
a stored video.

Amazon Rekognition Video can moderate content in a video stored in an Amazon
S3 bucket. Use Video to specify the bucket name and the filename of the video.
StartContentModeration returns a job identifier (JobId) which you use to
get the results of the analysis. When content moderation analysis is finished,
Amazon Rekognition Video publishes a completion status to the Amazon Simple
Notification Service topic that you specify in NotificationChannel.

To get the results of the content moderation analysis, first check that the
status value published to the Amazon SNS topic is SUCCEEDED. If so, call
GetContentModeration and pass the job identifier (JobId) from the initial
call to StartContentModeration.

For more information, see Detecting Unsafe Content in the Amazon Rekognition
Developer Guide.
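
A minimal sketch of starting a moderation job with a confidence floor (the
bucket and key are hypothetical placeholders):

package examples

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/rekognition"
)

// startModeration kicks off content moderation for a stored video and
// returns the job identifier.
func startModeration(svc *rekognition.Rekognition) string {
	out, err := svc.StartContentModeration(&rekognition.StartContentModerationInput{
		// Only report labels detected with at least 80% confidence.
		MinConfidence: aws.Float64(80),
		Video: &rekognition.Video{S3Object: &rekognition.S3Object{
			Bucket: aws.String("my-bucket"),
			Name:   aws.String("videos/upload.mp4"),
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	return *out.JobId
}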

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

* ErrCodeAccessDeniedException "AccessDeniedException"
You are not authorized to perform the action.
* ErrCodeIdempotentParameterMismatchException "IdempotentParameterMismatchException"
A ClientRequestToken input parameter was reused with an operation, but at
least one of the other input parameters is different from the previous call
to the operation.
* ErrCodeInvalidParameterException "InvalidParameterException"
Input parameter violated a constraint. Validate your parameter before calling
the API operation again.
* ErrCodeInvalidS3ObjectException "InvalidS3ObjectException"
Amazon Rekognition is unable to access the S3 object specified in the request.
* ErrCodeInternalServerError "InternalServerError"
Amazon Rekognition experienced a service issue. Try your call again.
* ErrCodeVideoTooLargeException "VideoTooLargeException"
The file size or duration of the supplied media is too large. The maximum
file size is 8GB. The maximum duration is 2 hours.
* ErrCodeProvisionedThroughputExceededException "ProvisionedThroughputExceededException"
The number of requests exceeded your throughput limit. If you want to increase
this limit, contact Amazon Rekognition.
* ErrCodeLimitExceededException "LimitExceededException"
An Amazon Rekognition service limit was exceeded. For example, if you start
too many Amazon Rekognition Video jobs concurrently, calls to start operations
(StartLabelDetection, for example) will raise a LimitExceededException exception
(HTTP status code: 400) until the number of concurrently running jobs is
below the Amazon Rekognition service limit.
* ErrCodeThrottlingException "ThrottlingException"
Amazon Rekognition is temporarily unable to process the request. Try your
call again.

StartContentModerationRequest generates a "aws/request.Request" representing the
client's request for the StartContentModeration operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use the "Send" method on the returned Request to send the API call to the service.
The "output" return value is not valid until after Send returns without error.

See StartContentModeration for more information on using the StartContentModeration
API call and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle, such as custom headers or retry logic.

StartContentModerationWithContext is the same as StartContentModeration with the addition of
the ability to pass a context and additional request options.

See StartContentModeration for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Starts asynchronous detection of faces in a stored video.

Amazon Rekognition Video can detect faces in a video stored in an Amazon
S3 bucket. Use Video to specify the bucket name and the filename of the video.
StartFaceDetection returns a job identifier (JobId) that you use to get the
results of the operation. When face detection is finished, Amazon Rekognition
Video publishes a completion status to the Amazon Simple Notification Service
topic that you specify in NotificationChannel. To get the results of the
face detection operation, first check that the status value published to
the Amazon SNS topic is SUCCEEDED. If so, call GetFaceDetection and pass
the job identifier (JobId) from the initial call to StartFaceDetection.

For more information, see Detecting Faces in a Stored Video in the Amazon
Rekognition Developer Guide.
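
A minimal sketch of starting face detection and requesting all facial
attributes rather than the DEFAULT subset (the video location is a
hypothetical placeholder):

package examples

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/rekognition"
)

// startFaceDetection starts face detection on a stored video and returns
// the job identifier.
func startFaceDetection(svc *rekognition.Rekognition) string {
	out, err := svc.StartFaceDetection(&rekognition.StartFaceDetectionInput{
		FaceAttributes: aws.String(rekognition.FaceAttributesAll),
		Video: &rekognition.Video{S3Object: &rekognition.S3Object{
			Bucket: aws.String("my-bucket"),
			Name:   aws.String("videos/crowd.mp4"),
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	return *out.JobId
}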

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

* ErrCodeAccessDeniedException "AccessDeniedException"
You are not authorized to perform the action.
* ErrCodeIdempotentParameterMismatchException "IdempotentParameterMismatchException"
A ClientRequestToken input parameter was reused with an operation, but at
least one of the other input parameters is different from the previous call
to the operation.
* ErrCodeInvalidParameterException "InvalidParameterException"
Input parameter violated a constraint. Validate your parameter before calling
the API operation again.
* ErrCodeInvalidS3ObjectException "InvalidS3ObjectException"
Amazon Rekognition is unable to access the S3 object specified in the request.
* ErrCodeInternalServerError "InternalServerError"
Amazon Rekognition experienced a service issue. Try your call again.
* ErrCodeVideoTooLargeException "VideoTooLargeException"
The file size or duration of the supplied media is too large. The maximum
file size is 8GB. The maximum duration is 2 hours.
* ErrCodeProvisionedThroughputExceededException "ProvisionedThroughputExceededException"
The number of requests exceeded your throughput limit. If you want to increase
this limit, contact Amazon Rekognition.
* ErrCodeLimitExceededException "LimitExceededException"
An Amazon Rekognition service limit was exceeded. For example, if you start
too many Amazon Rekognition Video jobs concurrently, calls to start operations
(StartLabelDetection, for example) will raise a LimitExceededException exception
(HTTP status code: 400) until the number of concurrently running jobs is
below the Amazon Rekognition service limit.
* ErrCodeThrottlingException "ThrottlingException"
Amazon Rekognition is temporarily unable to process the request. Try your
call again.

StartFaceDetectionRequest generates a "aws/request.Request" representing the
client's request for the StartFaceDetection operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use the "Send" method on the returned Request to send the API call to the service.
The "output" return value is not valid until after Send returns without error.

See StartFaceDetection for more information on using the StartFaceDetection
API call and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle, such as custom headers or retry logic.

StartFaceDetectionWithContext is the same as StartFaceDetection with the addition of
the ability to pass a context and additional request options.

See StartFaceDetection for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Starts the asynchronous search for faces in a collection that match the faces
of persons detected in a stored video.

The video must be stored in an Amazon S3 bucket. Use Video to specify the
bucket name and the filename of the video. StartFaceSearch returns a job
identifier (JobId) which you use to get the search results once the search
has completed. When searching is finished, Amazon Rekognition Video publishes
a completion status to the Amazon Simple Notification Service topic that
you specify in NotificationChannel. To get the search results, first check
that the status value published to the Amazon SNS topic is SUCCEEDED. If
so, call GetFaceSearch and pass the job identifier (JobId) from the initial
call to StartFaceSearch. For more information, see procedure-person-search-videos
in the Amazon Rekognition Developer Guide.
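
A minimal sketch of starting a face search against an existing collection
(the collection ID, bucket, and key are hypothetical placeholders):

package examples

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/rekognition"
)

// startFaceSearch searches a stored video for faces that match an
// existing collection and returns the job identifier.
func startFaceSearch(svc *rekognition.Rekognition) string {
	out, err := svc.StartFaceSearch(&rekognition.StartFaceSearchInput{
		CollectionId:       aws.String("my-collection"),
		FaceMatchThreshold: aws.Float64(70),
		Video: &rekognition.Video{S3Object: &rekognition.S3Object{
			Bucket: aws.String("my-bucket"),
			Name:   aws.String("videos/lobby.mp4"),
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	return *out.JobId
}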

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

* ErrCodeAccessDeniedException "AccessDeniedException"
You are not authorized to perform the action.
* ErrCodeIdempotentParameterMismatchException "IdempotentParameterMismatchException"
A ClientRequestToken input parameter was reused with an operation, but at
least one of the other input parameters is different from the previous call
to the operation.
* ErrCodeInvalidParameterException "InvalidParameterException"
Input parameter violated a constraint. Validate your parameter before calling
the API operation again.
* ErrCodeInvalidS3ObjectException "InvalidS3ObjectException"
Amazon Rekognition is unable to access the S3 object specified in the request.
* ErrCodeInternalServerError "InternalServerError"
Amazon Rekognition experienced a service issue. Try your call again.
* ErrCodeVideoTooLargeException "VideoTooLargeException"
The file size or duration of the supplied media is too large. The maximum
file size is 8GB. The maximum duration is 2 hours.
* ErrCodeProvisionedThroughputExceededException "ProvisionedThroughputExceededException"
The number of requests exceeded your throughput limit. If you want to increase
this limit, contact Amazon Rekognition.
* ErrCodeLimitExceededException "LimitExceededException"
An Amazon Rekognition service limit was exceeded. For example, if you start
too many Amazon Rekognition Video jobs concurrently, calls to start operations
(StartLabelDetection, for example) will raise a LimitExceededException exception
(HTTP status code: 400) until the number of concurrently running jobs is
below the Amazon Rekognition service limit.
* ErrCodeResourceNotFoundException "ResourceNotFoundException"
The collection specified in the request cannot be found.
* ErrCodeThrottlingException "ThrottlingException"
Amazon Rekognition is temporarily unable to process the request. Try your
call again.

StartFaceSearchRequest generates a "aws/request.Request" representing the
client's request for the StartFaceSearch operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use the "Send" method on the returned Request to send the API call to the service.
The "output" return value is not valid until after Send returns without error.

See StartFaceSearch for more information on using the StartFaceSearch
API call and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle, such as custom headers or retry logic.

StartFaceSearchWithContext is the same as StartFaceSearch with the addition of
the ability to pass a context and additional request options.

See StartFaceSearch for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Amazon Rekognition Video can detect labels in a video. Labels are instances
of real-world entities. This includes objects like flower, tree, and table;
events like wedding, graduation, and birthday party; concepts like landscape,
evening, and nature; and activities like a person getting out of a car or
a person skiing.

The video must be stored in an Amazon S3 bucket. Use Video to specify the
bucket name and the filename of the video. StartLabelDetection returns a
job identifier (JobId) which you use to get the results of the operation.
When label detection is finished, Amazon Rekognition Video publishes a completion
status to the Amazon Simple Notification Service topic that you specify in
NotificationChannel.

To get the results of the label detection operation, first check that the
status value published to the Amazon SNS topic is SUCCEEDED. If so, call
GetLabelDetection and pass the job identifier (JobId) from the initial call
to StartLabelDetection.
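
A minimal sketch of starting label detection idempotently: reusing the same
ClientRequestToken returns the same JobId instead of starting a duplicate
job. The token, bucket, and key are hypothetical placeholders.

package examples

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/rekognition"
)

// startLabelDetection starts label detection on a stored video and
// returns the job identifier.
func startLabelDetection(svc *rekognition.Rekognition) string {
	out, err := svc.StartLabelDetection(&rekognition.StartLabelDetectionInput{
		ClientRequestToken: aws.String("label-job-0001"),
		MinConfidence:      aws.Float64(50),
		Video: &rekognition.Video{S3Object: &rekognition.S3Object{
			Bucket: aws.String("my-bucket"),
			Name:   aws.String("videos/park.mp4"),
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	return *out.JobId
}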

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

* ErrCodeAccessDeniedException "AccessDeniedException"
You are not authorized to perform the action.
* ErrCodeIdempotentParameterMismatchException "IdempotentParameterMismatchException"
A ClientRequestToken input parameter was reused with an operation, but at
least one of the other input parameters is different from the previous call
to the operation.
* ErrCodeInvalidParameterException "InvalidParameterException"
Input parameter violated a constraint. Validate your parameter before calling
the API operation again.
* ErrCodeInvalidS3ObjectException "InvalidS3ObjectException"
Amazon Rekognition is unable to access the S3 object specified in the request.
* ErrCodeInternalServerError "InternalServerError"
Amazon Rekognition experienced a service issue. Try your call again.
* ErrCodeVideoTooLargeException "VideoTooLargeException"
The file size or duration of the supplied media is too large. The maximum
file size is 8GB. The maximum duration is 2 hours.
* ErrCodeProvisionedThroughputExceededException "ProvisionedThroughputExceededException"
The number of requests exceeded your throughput limit. If you want to increase
this limit, contact Amazon Rekognition.
* ErrCodeLimitExceededException "LimitExceededException"
An Amazon Rekognition service limit was exceeded. For example, if you start
too many Amazon Rekognition Video jobs concurrently, calls to start operations
(StartLabelDetection, for example) will raise a LimitExceededException exception
(HTTP status code: 400) until the number of concurrently running jobs is
below the Amazon Rekognition service limit.
* ErrCodeThrottlingException "ThrottlingException"
Amazon Rekognition is temporarily unable to process the request. Try your
call again.

StartLabelDetectionRequest generates a "aws/request.Request" representing the
client's request for the StartLabelDetection operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use the "Send" method on the returned Request to send the API call to the service.
The "output" return value is not valid until after Send returns without error.

See StartLabelDetection for more information on using the StartLabelDetection
API call and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle, such as custom headers or retry logic.

StartLabelDetectionWithContext is the same as StartLabelDetection with the addition of
the ability to pass a context and additional request options.

See StartLabelDetection for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Starts the asynchronous tracking of persons in a stored video.

Amazon Rekognition Video can track persons in a video stored in an Amazon
S3 bucket. Use Video to specify the bucket name and the filename of the video.
StartPersonTracking returns a job identifier (JobId) which you use to get
the results of the operation. When person tracking is finished, Amazon Rekognition
Video publishes a completion status to the Amazon Simple Notification Service
topic that you specify in NotificationChannel.

To get the results of the person tracking operation, first check that the
status value published to the Amazon SNS topic is SUCCEEDED. If so, call
GetPersonTracking and pass the job identifier (JobId) from the initial call
to StartPersonTracking.
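
A minimal sketch of the retrieval side, assuming the SNS status is already
SUCCEEDED and jobID came from StartPersonTracking:

package examples

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/rekognition"
)

// printTrackedPersons fetches person-tracking results sorted by the
// time each person was tracked in the video.
func printTrackedPersons(svc *rekognition.Rekognition, jobID string) {
	out, err := svc.GetPersonTracking(&rekognition.GetPersonTrackingInput{
		JobId:  aws.String(jobID),
		SortBy: aws.String(rekognition.PersonTrackingSortByTimestamp),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range out.Persons {
		fmt.Println(*p.Timestamp, *p.Person.Index)
	}
}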

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

* ErrCodeAccessDeniedException "AccessDeniedException"
You are not authorized to perform the action.
* ErrCodeIdempotentParameterMismatchException "IdempotentParameterMismatchException"
A ClientRequestToken input parameter was reused with an operation, but at
least one of the other input parameters is different from the previous call
to the operation.
* ErrCodeInvalidParameterException "InvalidParameterException"
Input parameter violated a constraint. Validate your parameter before calling
the API operation again.
* ErrCodeInvalidS3ObjectException "InvalidS3ObjectException"
Amazon Rekognition is unable to access the S3 object specified in the request.
* ErrCodeInternalServerError "InternalServerError"
Amazon Rekognition experienced a service issue. Try your call again.
* ErrCodeVideoTooLargeException "VideoTooLargeException"
The file size or duration of the supplied media is too large. The maximum
file size is 8GB. The maximum duration is 2 hours.
* ErrCodeProvisionedThroughputExceededException "ProvisionedThroughputExceededException"
The number of requests exceeded your throughput limit. If you want to increase
this limit, contact Amazon Rekognition.
* ErrCodeLimitExceededException "LimitExceededException"
An Amazon Rekognition service limit was exceeded. For example, if you start
too many Amazon Rekognition Video jobs concurrently, calls to start operations
(StartLabelDetection, for example) will raise a LimitExceededException exception
(HTTP status code: 400) until the number of concurrently running jobs is
below the Amazon Rekognition service limit.
* ErrCodeThrottlingException "ThrottlingException"
Amazon Rekognition is temporarily unable to process the request. Try your
call again.

StartPersonTrackingRequest generates a "aws/request.Request" representing the
client's request for the StartPersonTracking operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use the "Send" method on the returned Request to send the API call to the service.
The "output" return value is not valid until after Send returns without error.

See StartPersonTracking for more information on using the StartPersonTracking
API call and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle, such as custom headers or retry logic.

StartPersonTrackingWithContext is the same as StartPersonTracking with the addition of
the ability to pass a context and additional request options.

See StartPersonTracking for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

Starts processing a stream processor. You create a stream processor by calling
CreateStreamProcessor. To tell StartStreamProcessor which stream processor
to start, use the value of the Name field specified in the call to
CreateStreamProcessor.
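
A minimal sketch ("my-stream-processor" is a hypothetical Name from an
earlier CreateStreamProcessor call):

package examples

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/rekognition"
)

// startProcessor starts a previously created stream processor by name.
func startProcessor(svc *rekognition.Rekognition) {
	_, err := svc.StartStreamProcessor(&rekognition.StartStreamProcessorInput{
		Name: aws.String("my-stream-processor"),
	})
	if err != nil {
		log.Fatal(err)
	}
}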

Returns awserr.Error for service API and SDK errors. Use runtime type assertions
with awserr.Error's Code and Message methods to get detailed information about
the error.

StartStreamProcessorRequest generates a "aws/request.Request" representing the
client's request for the StartStreamProcessor operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use the "Send" method on the returned Request to send the API call to the service.
The "output" return value is not valid until after Send returns without error.

See StartStreamProcessor for more information on using the StartStreamProcessor
API call and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle, such as custom headers or retry logic.

StartStreamProcessorWithContext is the same as StartStreamProcessor with the addition of
the ability to pass a context and additional request options.

See StartStreamProcessor for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

StopStreamProcessorRequest generates a "aws/request.Request" representing the
client's request for the StopStreamProcessor operation. The "output" return
value will be populated with the request's response once the request completes
successfully.

Use the "Send" method on the returned Request to send the API call to the service.
The "output" return value is not valid until after Send returns without error.

See StopStreamProcessor for more information on using the StopStreamProcessor
API call and error handling.

This method is useful when you want to inject custom logic or configuration
into the SDK's request lifecycle, such as custom headers or retry logic.

StopStreamProcessorWithContext is the same as StopStreamProcessor with the addition of
the ability to pass a context and additional request options.

See StopStreamProcessor for details on how to use this API operation.

The context must be non-nil and will be used for request cancellation. If
the context is nil a panic will occur. In the future the SDK may create
sub-contexts for http.Requests. See https://golang.org/pkg/context/
for more information on using Contexts.

type SearchFacesByImageInput struct {
// ID of the collection to search.
//
// CollectionId is a required field
CollectionId *string `min:"1" type:"string" required:"true"`
// (Optional) Specifies the minimum confidence in the face match to return.
// For example, don't return any matches where confidence in matches is less
// than 70%.
FaceMatchThreshold *float64 `type:"float"`
// The input image as base64-encoded bytes or an S3 object. If you use the AWS
// CLI to call Amazon Rekognition operations, passing base64-encoded image bytes
// is not supported.
//
// Image is a required field
Image *Image `type:"structure" required:"true"`
// Maximum number of faces to return. The operation returns the maximum number
// of faces with the highest confidence in the match.
MaxFaces *int64 `min:"1" type:"integer"`
// contains filtered or unexported fields
}
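
A hedged sketch of populating this input for a SearchFacesByImage call;
the collection, bucket, and key are hypothetical placeholders:

package examples

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/rekognition"
)

// searchByImage searches a collection using the largest face detected
// in an image stored in S3.
func searchByImage(svc *rekognition.Rekognition) {
	out, err := svc.SearchFacesByImage(&rekognition.SearchFacesByImageInput{
		CollectionId:       aws.String("my-collection"),
		FaceMatchThreshold: aws.Float64(70),
		MaxFaces:           aws.Int64(5),
		Image: &rekognition.Image{S3Object: &rekognition.S3Object{
			Bucket: aws.String("my-bucket"),
			Name:   aws.String("photos/visitor.jpg"),
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, m := range out.FaceMatches {
		fmt.Println(*m.Face.FaceId, *m.Similarity)
	}
}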

type SearchFacesByImageOutput struct {
// An array of faces that match the input face, along with the confidence in
// the match.
FaceMatches []*FaceMatch `type:"list"`
// Version number of the face detection model associated with the input collection
// (CollectionId).
FaceModelVersion *string `type:"string"`
// The bounding box around the face in the input image that Amazon Rekognition
// used for the search.
SearchedFaceBoundingBox *BoundingBox `type:"structure"`
// The level of confidence that the searchedFaceBoundingBox contains a face.
SearchedFaceConfidence *float64 `type:"float"`
// contains filtered or unexported fields
}

type SearchFacesInput struct {
// ID of the collection the face belongs to.
//
// CollectionId is a required field
CollectionId *string `min:"1" type:"string" required:"true"`
// ID of a face to find matches for in the collection.
//
// FaceId is a required field
FaceId *string `type:"string" required:"true"`
// Optional value specifying the minimum confidence in the face match to return.
// For example, don't return any matches where confidence in matches is less
// than 70%.
FaceMatchThreshold *float64 `type:"float"`
// Maximum number of faces to return. The operation returns the maximum number
// of faces with the highest confidence in the match.
MaxFaces *int64 `min:"1" type:"integer"`
// contains filtered or unexported fields
}

type SearchFacesOutput struct {
// An array of faces that matched the input face, along with the confidence
// in the match.
FaceMatches []*FaceMatch `type:"list"`
// Version number of the face detection model associated with the input collection
// (CollectionId).
FaceModelVersion *string `type:"string"`
// ID of the face that was searched for matches in a collection.
SearchedFaceId *string `type:"string"`
// contains filtered or unexported fields
}

type StartCelebrityRecognitionInput struct {
// Idempotent token used to identify the start request. If you use the same
// token with multiple StartCelebrityRecognition requests, the same JobId is
// returned. Use ClientRequestToken to prevent the same job from being accidentally
// started more than once.
ClientRequestToken *string `min:"1" type:"string"`
// Unique identifier you specify to identify the job in the completion status
// published to the Amazon Simple Notification Service topic.
JobTag *string `min:"1" type:"string"`
// The Amazon SNS topic ARN that you want Amazon Rekognition Video to publish
// the completion status of the celebrity recognition analysis to.
NotificationChannel *NotificationChannel `type:"structure"`
// The video in which you want to recognize celebrities. The video must be stored
// in an Amazon S3 bucket.
//
// Video is a required field
Video *Video `type:"structure" required:"true"`
// contains filtered or unexported fields
}

type StartContentModerationInput struct {
// Idempotent token used to identify the start request. If you use the same
// token with multiple StartContentModeration requests, the same JobId is returned.
// Use ClientRequestToken to prevent the same job from being accidentally started
// more than once.
ClientRequestToken *string `min:"1" type:"string"`
// Unique identifier you specify to identify the job in the completion status
// published to the Amazon Simple Notification Service topic.
JobTag *string `min:"1" type:"string"`
// Specifies the minimum confidence that Amazon Rekognition must have in order
// to return a moderated content label. Confidence represents how certain Amazon
// Rekognition is that the moderated content is correctly identified. 0 is the
// lowest confidence. 100 is the highest confidence. Amazon Rekognition doesn't
// return any moderated content labels with a confidence level lower than this
// specified value.
MinConfidence *float64 `type:"float"`
// The Amazon SNS topic ARN that you want Amazon Rekognition Video to publish
// the completion status of the content moderation analysis to.
NotificationChannel *NotificationChannel `type:"structure"`
// The video in which you want to moderate content. The video must be stored
// in an Amazon S3 bucket.
//
// Video is a required field
Video *Video `type:"structure" required:"true"`
// contains filtered or unexported fields
}

type StartFaceDetectionInput struct {
// Idempotent token used to identify the start request. If you use the same
// token with multiple StartFaceDetection requests, the same JobId is returned.
// Use ClientRequestToken to prevent the same job from being accidentally started
// more than once.
ClientRequestToken *string `min:"1" type:"string"`
// The face attributes you want returned.
//
// DEFAULT - The following subset of facial attributes are returned: BoundingBox,
// Confidence, Pose, Quality and Landmarks.
//
// ALL - All facial attributes are returned.
FaceAttributes *string `type:"string" enum:"FaceAttributes"`
// Unique identifier you specify to identify the job in the completion status
// published to the Amazon Simple Notification Service topic.
JobTag *string `min:"1" type:"string"`
// The ARN of the Amazon SNS topic to which you want Amazon Rekognition Video
// to publish the completion status of the face detection operation.
NotificationChannel *NotificationChannel `type:"structure"`
// The video in which you want to detect faces. The video must be stored in
// an Amazon S3 bucket.
//
// Video is a required field
Video *Video `type:"structure" required:"true"`
// contains filtered or unexported fields
}

type StartFaceSearchInput struct {
// Idempotent token used to identify the start request. If you use the same
// token with multiple StartFaceSearch requests, the same JobId is returned.
// Use ClientRequestToken to prevent the same job from being accidentally started
// more than once.
ClientRequestToken *string `min:"1" type:"string"`
// ID of the collection that contains the faces you want to search for.
//
// CollectionId is a required field
CollectionId *string `min:"1" type:"string" required:"true"`
// The minimum confidence in the person match to return. For example, don't
// return any matches where confidence in matches is less than 70%.
FaceMatchThreshold *float64 `type:"float"`
// Unique identifier you specify to identify the job in the completion status
// published to the Amazon Simple Notification Service topic.
JobTag *string `min:"1" type:"string"`
// The ARN of the Amazon SNS topic to which you want Amazon Rekognition Video
// to publish the completion status of the search.
NotificationChannel *NotificationChannel `type:"structure"`
// The video you want to search. The video must be stored in an Amazon S3 bucket.
//
// Video is a required field
Video *Video `type:"structure" required:"true"`
// contains filtered or unexported fields
}

type StartLabelDetectionInput struct {
// Idempotent token used to identify the start request. If you use the same
// token with multiple StartLabelDetection requests, the same JobId is returned.
// Use ClientRequestToken to prevent the same job from being accidentally started
// more than once.
ClientRequestToken *string `min:"1" type:"string"`
// Unique identifier you specify to identify the job in the completion status
// published to the Amazon Simple Notification Service topic.
JobTag *string `min:"1" type:"string"`
// Specifies the minimum confidence that Amazon Rekognition Video must have
// in order to return a detected label. Confidence represents how certain Amazon
// Rekognition is that a label is correctly identified. 0 is the lowest confidence.
// 100 is the highest confidence. Amazon Rekognition Video doesn't return any
// labels with a confidence level lower than this specified value.
//
// If you don't specify MinConfidence, the operation returns labels with confidence
// values greater than or equal to 50 percent.
MinConfidence *float64 `type:"float"`
// The Amazon SNS topic ARN you want Amazon Rekognition Video to publish the
// completion status of the label detection operation to.
NotificationChannel *NotificationChannel `type:"structure"`
// The video in which you want to detect labels. The video must be stored in
// an Amazon S3 bucket.
//
// Video is a required field
Video *Video `type:"structure" required:"true"`
// contains filtered or unexported fields
}

type StartPersonTrackingInput struct {
// Idempotent token used to identify the start request. If you use the same
// token with multiple StartPersonTracking requests, the same JobId is returned.
// Use ClientRequestToken to prevent the same job from being accidentally started
// more than once.
ClientRequestToken *string `min:"1" type:"string"`
// Unique identifier you specify to identify the job in the completion status
// published to the Amazon Simple Notification Service topic.
JobTag *string `min:"1" type:"string"`
// The Amazon SNS topic ARN you want Amazon Rekognition Video to publish the
// completion status of the people detection operation to.
NotificationChannel *NotificationChannel `type:"structure"`
// The video in which you want to detect people. The video must be stored in
// an Amazon S3 bucket.
//
// Video is a required field
Video *Video `type:"structure" required:"true"`
// contains filtered or unexported fields
}

type StartStreamProcessorInput struct {
// The name of the stream processor to start processing.
//
// Name is a required field
Name *string `min:"1" type:"string" required:"true"`
// contains filtered or unexported fields
}

type StopStreamProcessorInput struct {
// The name of a stream processor created by CreateStreamProcessor.
//
// Name is a required field
Name *string `min:"1" type:"string" required:"true"`
// contains filtered or unexported fields
}

An object that recognizes faces in a streaming video. An Amazon Rekognition
stream processor is created by a call to CreateStreamProcessor. The request
parameters for CreateStreamProcessor describe the Kinesis video stream source
for the streaming video, face recognition parameters, and where to stream
the analysis results.

Information about the Amazon Kinesis Data Streams stream to which an Amazon
Rekognition Video stream processor streams the results of a video analysis.
For more information, see CreateStreamProcessor in the Amazon Rekognition
Developer Guide.

type TextDetection struct {
// The confidence that Amazon Rekognition has in the accuracy of the detected
// text and the accuracy of the geometry points around the detected text.
Confidence *float64 `type:"float"`
// The word or line of text recognized by Amazon Rekognition.
DetectedText *string `type:"string"`
// The location of the detected text on the image. Includes an axis-aligned
// coarse bounding box surrounding the text and a finer-grain polygon for more
// accurate spatial information.
Geometry *Geometry `type:"structure"`
// The identifier for the detected text. The identifier is only unique for a
// single call to DetectText.
Id *int64 `type:"integer"`
// The parent identifier for the detected text identified by the value of ID.
// If the type of detected text is LINE, the value of ParentId is Null.
ParentId *int64 `type:"integer"`
// The type of text that was detected.
Type *string `type:"string" enum:"TextTypes"`
// contains filtered or unexported fields
}

Information about a word or line of text detected by DetectText.

The DetectedText field contains the text that Amazon Rekognition detected
in the image.

Every word and line has an identifier (Id). Each word belongs to a line and
has a parent identifier (ParentId) that identifies the line of text in which
the word appears. The word Id is also an index for the word within a line
of words.

For more information, see Detecting Text in the Amazon Rekognition Developer
Guide.
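
A hedged sketch of walking that Id/ParentId structure; the image location
is a hypothetical placeholder:

package examples

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/rekognition"
)

// printWordsByLine calls DetectText and groups detected words under
// their parent lines using Id and ParentId.
func printWordsByLine(svc *rekognition.Rekognition) {
	out, err := svc.DetectText(&rekognition.DetectTextInput{
		Image: &rekognition.Image{S3Object: &rekognition.S3Object{
			Bucket: aws.String("my-bucket"),
			Name:   aws.String("photos/sign.jpg"),
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, t := range out.TextDetections {
		if *t.Type == rekognition.TextTypesLine {
			// For LINE detections, ParentId is null, so don't dereference it.
			fmt.Printf("line %d: %s\n", *t.Id, *t.DetectedText)
			continue
		}
		// WORD detections carry the Id of their parent LINE.
		fmt.Printf("  word %d (line %d): %s\n", *t.Id, *t.ParentId, *t.DetectedText)
	}
}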

type UnindexedFace struct {
// The structure that contains attributes of a face that IndexFaces detected,
// but didn't index.
FaceDetail *FaceDetail `type:"structure"`
// An array of reasons that specify why a face wasn't indexed.
//
// * EXTREME_POSE - The face is at a pose that can't be detected. For example,
// the head is turned too far away from the camera.
//
// * EXCEEDS_MAX_FACES - The number of faces detected is already higher than
// that specified by the MaxFaces input parameter for IndexFaces.
//
// * LOW_BRIGHTNESS - The image is too dark.
//
// * LOW_SHARPNESS - The image is too blurry.
//
// * LOW_CONFIDENCE - The face was detected with a low confidence.
//
// * SMALL_BOUNDING_BOX - The bounding box around the face is too small.
Reasons []*string `type:"list"`
// contains filtered or unexported fields
}

A face that IndexFaces detected, but didn't index. Use the Reasons response
attribute to determine why a face wasn't indexed.
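
A hedged sketch of inspecting those reasons after an IndexFaces call; the
collection, bucket, and key are hypothetical placeholders:

package examples

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/rekognition"
)

// reportUnindexed indexes the faces in an image and reports why any
// detected faces were skipped.
func reportUnindexed(svc *rekognition.Rekognition) {
	out, err := svc.IndexFaces(&rekognition.IndexFacesInput{
		CollectionId: aws.String("my-collection"),
		MaxFaces:     aws.Int64(10),
		Image: &rekognition.Image{S3Object: &rekognition.S3Object{
			Bucket: aws.String("my-bucket"),
			Name:   aws.String("photos/group.jpg"),
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, u := range out.UnindexedFaces {
		for _, r := range u.Reasons {
			fmt.Println("face not indexed:", *r)
		}
	}
}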