Method Detail

setEndpoint

Overrides the default endpoint for this client
("https://rekognition.us-east-1.amazonaws.com"). Callers can use this
method to control which AWS region they want to work with.

Callers can pass in just the endpoint (ex:
"rekognition.us-east-1.amazonaws.com") or a full URL, including the
protocol (ex: "https://rekognition.us-east-1.amazonaws.com"). If the
protocol is not specified here, the default protocol from this client's
ClientConfiguration will be used, which by default is HTTPS.

This method is not threadsafe. An endpoint should be configured when
the client is created and before any service requests are made. Changing
it afterwards creates inevitable race conditions for any service requests
in transit or retrying.

Parameters:

endpoint - The endpoint (ex: "rekognition.us-east-1.amazonaws.com")
or a full URL, including the protocol (ex:
"https://rekognition.us-east-1.amazonaws.com") of the region
specific AWS endpoint this client will communicate with.

Throws:

java.lang.IllegalArgumentException - If any problems are detected with the
specified endpoint.
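A minimal sketch of overriding the endpoint, assuming the AWS SDK for Java 1.x. Note that clients created through AmazonRekognitionClientBuilder are immutable and reject endpoint changes, so this applies only to clients constructed directly; the endpoint shown is illustrative:

    // Construct the client directly; builder-created clients do not
    // allow setEndpoint/setRegion after creation.
    AmazonRekognitionClient client = new AmazonRekognitionClient();
    // No protocol given, so the ClientConfiguration default (HTTPS) is used.
    client.setEndpoint("rekognition.us-west-2.amazonaws.com");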

setRegion

An alternative to setEndpoint(String), sets the
regional endpoint for this client's service calls. Callers can use this
method to control which AWS region they want to work with.

By default, all service endpoints in all regions use the https protocol.
To use http instead, specify it in the ClientConfiguration
supplied at construction.

This method is not threadsafe. A region should be configured when the
client is created and before any service requests are made. Changing it
afterwards creates inevitable race conditions for any service requests in
transit or retrying.
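A comparable sketch for setRegion, under the same assumption that the client was constructed directly rather than through the builder:

    import com.amazonaws.regions.Region;
    import com.amazonaws.regions.Regions;

    AmazonRekognitionClient client = new AmazonRekognitionClient();
    // Configure the region once, before any service requests are made.
    client.setRegion(Region.getRegion(Regions.US_WEST_2));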

compareFaces

Compares a face in the source input image with each of the 100
largest faces detected in the target input image.

If the source image contains multiple faces, the service detects the
largest face and compares it with each face detected in the target image.

You pass the source and target images either as base64-encoded image bytes
or as references to images in an Amazon S3 bucket. If you use the
AWS CLI to call Amazon Rekognition operations, passing image bytes is
not supported. The image must be either a PNG or JPEG formatted file.

In response, the operation returns an array of face matches ordered by
similarity score in descending order. For each face match, the response
provides a bounding box of the face, facial landmarks, pose details
(pitch, roll, and yaw), quality (brightness and sharpness), and
confidence value (indicating the level of confidence that the bounding
box contains a face). The response also provides a similarity score,
which indicates how closely the faces match.

By default, only faces with a similarity score of greater than or equal
to 80% are returned in the response. You can change this value by
specifying the SimilarityThreshold parameter.

CompareFaces also returns an array of faces that don't match
the source image. For each face, it returns a bounding box, confidence
value, landmarks, pose details, and quality. The response also returns
information about the face in the source image, including the bounding
box of the face and confidence value.

If the image doesn't contain Exif metadata, CompareFaces
returns orientation information for the source and target images. Use
these values to display the images with the correct image orientation.

If no faces are detected in the source or target images,
CompareFaces returns an
InvalidParameterException error.

This is a stateless API operation. That is, data returned by this
operation doesn't persist.

For an example, see Comparing Faces in Images in the Amazon Rekognition
Developer Guide.

This operation requires permissions to perform the
rekognition:CompareFaces action.

Parameters:

compareFacesRequest -

Returns:

compareFacesResult The response from the CompareFaces service
method, as returned by Amazon Rekognition.
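A minimal sketch of a CompareFaces call. This and the following examples assume an AmazonRekognition client named client, imports from com.amazonaws.services.rekognition.model, and hypothetical bucket and object names:

    CompareFacesRequest request = new CompareFacesRequest()
        .withSourceImage(new Image().withS3Object(
            new S3Object().withBucket("my-bucket").withName("source.jpg")))
        .withTargetImage(new Image().withS3Object(
            new S3Object().withBucket("my-bucket").withName("target.jpg")))
        .withSimilarityThreshold(80F); // matches below 80% are omitted

    CompareFacesResult result = client.compareFaces(request);
    for (CompareFacesMatch match : result.getFaceMatches()) {
        System.out.printf("Similarity %.2f%%, box %s%n",
            match.getSimilarity(), match.getFace().getBoundingBox());
    }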

createCollection

Creates a collection in an AWS Region. You can add faces to the
collection using the IndexFaces operation.

For example, you might create one collection for each of your
application users. A user can then index faces using the
IndexFaces operation and persist results in a specific
collection. The user can later search that user-specific collection
for matching faces.

Collection names are case-sensitive.

This operation requires permissions to perform the
rekognition:CreateCollection action.

Parameters:

createCollectionRequest -

Returns:

createCollectionResult The response from the CreateCollection
service method, as returned by Amazon Rekognition.
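A sketch of creating a collection; the collection ID is a hypothetical, case-sensitive name:

    CreateCollectionRequest request = new CreateCollectionRequest()
        .withCollectionId("user-12345-faces");
    CreateCollectionResult result = client.createCollection(request);
    System.out.println("Collection ARN: " + result.getCollectionArn()
        + ", status code: " + result.getStatusCode());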

createStreamProcessor

Creates an Amazon Rekognition stream processor that you can use to detect
and recognize faces in a streaming video.

You provide as input a Kinesis video stream (Input) and a
Kinesis data stream (Output). You also specify the
face recognition criteria in Settings; for example, the
collection containing the faces that you want to recognize. Use
Name to assign an identifier for the stream processor; you
use Name to manage the stream processor. For example, you
can start processing the source video by calling
StartStreamProcessor with the Name field.

After you have finished analyzing a streaming video, use
StopStreamProcessor to stop processing. You can delete the stream
processor by calling DeleteStreamProcessor.

Parameters:

createStreamProcessorRequest -

Returns:

createStreamProcessorResult The response from the
CreateStreamProcessor service method, as returned by Amazon
Rekognition.
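A sketch of wiring the Name, Input, Output, and Settings fields together; the ARNs, names, and IAM role below are hypothetical placeholders:

    CreateStreamProcessorRequest request = new CreateStreamProcessorRequest()
        .withName("my-stream-processor") // identifier used to start/stop/delete later
        .withInput(new StreamProcessorInput().withKinesisVideoStream(
            new KinesisVideoStream().withArn(
                "arn:aws:kinesisvideo:us-east-1:123456789012:stream/input-video/1")))
        .withOutput(new StreamProcessorOutput().withKinesisDataStream(
            new KinesisDataStream().withArn(
                "arn:aws:kinesis:us-east-1:123456789012:stream/face-matches")))
        .withSettings(new StreamProcessorSettings().withFaceSearch(
            new FaceSearchSettings()
                .withCollectionId("user-12345-faces") // faces to recognize
                .withFaceMatchThreshold(85F)))
        .withRoleArn("arn:aws:iam::123456789012:role/RekognitionStreamRole");
    String arn = client.createStreamProcessor(request).getStreamProcessorArn();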

deleteStreamProcessor

Deletes the stream processor identified by Name. You assign
the value for Name when you create the stream processor with
CreateStreamProcessor. You might not be able to use the same name for a
stream processor for a few seconds after calling
DeleteStreamProcessor.

Parameters:

deleteStreamProcessorRequest -

Returns:

deleteStreamProcessorResult The response from the
DeleteStreamProcessor service method, as returned by Amazon
Rekognition.

describeStreamProcessor

Provides information about a stream processor created by
CreateStreamProcessor. You can get
information about the input and output streams, the input parameters for
the face recognition being performed, and the current status of the
stream processor.

Parameters:

describeStreamProcessorRequest -

Returns:

describeStreamProcessorResult The response from the
DescribeStreamProcessor service method, as returned by Amazon
Rekognition.

detectFaces

DetectFaces detects the 100 largest faces in the image. For
each face detected, the operation returns face details including a
bounding box of the face, a confidence value (that the bounding box
contains a face), and a fixed set of attributes such as facial landmarks
(for example, the coordinates of the eyes and mouth), gender, the presence
of a beard, sunglasses, and so on.

The face-detection algorithm is most effective on frontal faces. For
non-frontal or obscured faces, the algorithm may not detect the faces or
might detect faces with lower confidence.

You pass the input image either as base64-encoded image bytes or as a
reference to an image in an Amazon S3 bucket. If you use the AWS CLI
to call Amazon Rekognition operations, passing image bytes is not
supported. The image must be either a PNG or JPEG formatted file.

This is a stateless API operation. That is, the operation does not
persist any data.

This operation requires permissions to perform the
rekognition:DetectFaces action.

Parameters:

detectFacesRequest -

Returns:

detectFacesResult The response from the DetectFaces service
method, as returned by Amazon Rekognition.
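A sketch of requesting the full attribute set rather than the default subset; the image location is hypothetical:

    DetectFacesRequest request = new DetectFacesRequest()
        .withImage(new Image().withS3Object(
            new S3Object().withBucket("my-bucket").withName("group-photo.jpg")))
        .withAttributes(Attribute.ALL); // default returns only a minimal subset
    for (FaceDetail face : client.detectFaces(request).getFaceDetails()) {
        System.out.println("Face at " + face.getBoundingBox()
            + " with confidence " + face.getConfidence());
    }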

detectLabels

Detects instances of real-world entities within an image (JPEG or PNG)
provided as input. This includes objects like flower, tree, and table;
events like wedding, graduation, and birthday party; and concepts like
landscape, evening, and nature.

For an example, see Analyzing Images Stored in an Amazon S3 Bucket in the
Amazon Rekognition Developer Guide.

DetectLabels does not support the detection of activities.
However, activity detection is supported for label detection in videos.
For more information, see StartLabelDetection in the Amazon Rekognition
Developer Guide.

You pass the input image as base64-encoded image bytes or as a reference
to an image in an Amazon S3 bucket. If you use the AWS CLI to call
Amazon Rekognition operations, passing image bytes is not supported. The
image must be either a PNG or JPEG formatted file.

For each object, scene, and concept, the API returns one or more labels.
Each label provides the object name and the level of confidence that the
image contains the object. For example, suppose the input image has a
lighthouse, the sea, and a rock. The response includes all three
labels, one for each object.

{Name: lighthouse, Confidence: 98.4629}

{Name: rock, Confidence: 79.2097}

{Name: sea, Confidence: 75.061}

In the preceding example, the operation returns one label for each of the
three objects. The operation can also return multiple labels for the same
object in the image. For example, if the input image shows a flower (for
example, a tulip), the operation might return the following three labels.

{Name: flower, Confidence: 99.0562}

{Name: plant, Confidence: 99.0562}

{Name: tulip, Confidence: 99.0562}

In this example, the detection algorithm more precisely identifies the
flower as a tulip.

In response, the API returns an array of labels. In addition, the
response also includes the orientation correction. Optionally, you can
specify MinConfidence to control the confidence threshold
for the labels returned. The default is 50%. You can also add the
MaxLabels parameter to limit the number of labels returned.

If the object detected is a person, the operation doesn't provide the
same facial details that the DetectFaces operation provides.

This is a stateless API operation. That is, the operation does not
persist any data.

This operation requires permissions to perform the
rekognition:DetectLabels action.

Parameters:

detectLabelsRequest -

Returns:

detectLabelsResult The response from the DetectLabels service
method, as returned by Amazon Rekognition.
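A sketch showing MinConfidence and MaxLabels in use; the image location is hypothetical:

    DetectLabelsRequest request = new DetectLabelsRequest()
        .withImage(new Image().withS3Object(
            new S3Object().withBucket("my-bucket").withName("beach.jpg")))
        .withMaxLabels(10)        // return at most 10 labels
        .withMinConfidence(75F);  // raise the default 50% threshold
    for (Label label : client.detectLabels(request).getLabels()) {
        System.out.printf("{Name: %s, Confidence: %.4f}%n",
            label.getName(), label.getConfidence());
    }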

detectModerationLabels

Detects explicit or suggestive adult content in a specified JPEG or PNG
format image. Use DetectModerationLabels to moderate images
depending on your requirements. For example, you might want to filter
images that contain nudity, but not images containing suggestive content.

To filter images, use the labels returned by
DetectModerationLabels to determine which types of content
are appropriate.

For information about moderation labels, see Detecting Unsafe Content in
the Amazon Rekognition Developer Guide.

You pass the input image either as base64-encoded image bytes or as a
reference to an image in an Amazon S3 bucket. If you use the AWS CLI
to call Amazon Rekognition operations, passing image bytes is not
supported. The image must be either a PNG or JPEG formatted file.

Parameters:

detectModerationLabelsRequest -

Returns:

detectModerationLabelsResult The response from the
DetectModerationLabels service method, as returned by Amazon
Rekognition.
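A sketch of inspecting the returned moderation labels; which categories to filter on is application policy, and the image location is hypothetical:

    DetectModerationLabelsRequest request = new DetectModerationLabelsRequest()
        .withImage(new Image().withS3Object(
            new S3Object().withBucket("my-bucket").withName("upload.jpg")))
        .withMinConfidence(60F);
    for (ModerationLabel label :
            client.detectModerationLabels(request).getModerationLabels()) {
        // getParentName() is empty for top-level categories
        System.out.println(label.getParentName() + " / " + label.getName()
            + " @ " + label.getConfidence());
    }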

detectText

Detects text in the input image and converts it into machine-readable
text.

Pass the input image as base64-encoded image bytes or as a reference to
an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon
Rekognition operations, passing image bytes is not supported; you must
pass a reference to an image in an Amazon S3 bucket. The image must be
either a .png or .jpeg formatted file.

The DetectText operation returns text in an array of
elements, TextDetections. Each TextDetection
element provides information about a single word or line of text that was
detected in the image.

A word is one or more ISO basic Latin script characters that are not
separated by spaces. DetectText can detect up to 50 words in
an image.

A line is a string of equally spaced words. A line isn't necessarily a
complete sentence. For example, a driver's license number is detected as
a line. A line ends when there is no aligned text after it. Also, a line
ends when there is a large gap between words, relative to the length of
the words. This means, depending on the gap between words, Amazon
Rekognition may detect multiple lines in text aligned in the same
direction. Periods don't represent the end of a line. If a sentence spans
multiple lines, the DetectText operation returns multiple
lines.

To determine whether a TextDetection element is a line of
text or a word, use the TextDetection object
Type field.

To be detected, text must be within +/- 30 degrees orientation of the
horizontal axis.

For more information, see DetectText in the Amazon Rekognition Developer
Guide.

Parameters:

detectTextRequest -

Returns:

detectTextResult The response from the DetectText service method,
as returned by Amazon Rekognition.
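A sketch that uses the Type field to separate lines from words; the image location is hypothetical:

    DetectTextRequest request = new DetectTextRequest()
        .withImage(new Image().withS3Object(
            new S3Object().withBucket("my-bucket").withName("street-sign.jpg")));
    for (TextDetection text : client.detectText(request).getTextDetections()) {
        // Skip the WORD elements that compose each LINE element
        if ("LINE".equals(text.getType())) {
            System.out.println(text.getDetectedText()
                + " (" + text.getConfidence() + ")");
        }
    }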

getCelebrityInfo

Gets the name and additional information about a celebrity based on his
or her Rekognition ID. The additional information is returned as an array
of URLs. If there is no additional information about the celebrity, this
list is empty.

For more information, see Recognizing Celebrities in an Image in the
Amazon Rekognition Developer Guide.

This operation requires permissions to perform the
rekognition:GetCelebrityInfo action.

Parameters:

getCelebrityInfoRequest -

Returns:

getCelebrityInfoResult The response from the GetCelebrityInfo
service method, as returned by Amazon Rekognition.

getCelebrityRecognition

Gets the celebrity recognition results for an Amazon Rekognition Video
analysis started by StartCelebrityRecognition.

Celebrity recognition in a video is an asynchronous operation. Analysis
is started by a call to StartCelebrityRecognition, which returns a job
identifier (JobId). When the celebrity recognition operation finishes,
Amazon Rekognition Video publishes a completion status to the Amazon
Simple Notification Service topic registered in the initial call to
StartCelebrityRecognition. To get the results of the
celebrity recognition analysis, first check that the status value
published to the Amazon SNS topic is SUCCEEDED. If so, call
GetCelebrityRecognition and pass the job identifier (
JobId) from the initial call to
StartCelebrityRecognition.

For more information, see Working With Stored Videos in the Amazon
Rekognition Developer Guide.

GetCelebrityRecognition returns detected celebrities and the
time(s) they are detected in an array (Celebrities) of
CelebrityRecognition objects. Each CelebrityRecognition contains
information about the celebrity in a CelebrityDetail object and the
time, Timestamp, the celebrity was detected.

GetCelebrityRecognition only returns the default facial
attributes (BoundingBox, Confidence,
Landmarks, Pose, and Quality). The
other facial attributes listed in the Face object of the
following response syntax are not returned. For more information, see
FaceDetail in the Amazon Rekognition Developer Guide.

By default, the Celebrities array is sorted by time
(milliseconds from the start of the video). You can also sort the array
by celebrity by specifying the value ID in the
SortBy input parameter.

The CelebrityDetail object includes the celebrity identifier
and additional information URLs. If you don't store the additional
information URLs, you can get them later by calling GetCelebrityInfo
with the celebrity identifier.

No information is returned for faces not recognized as celebrities.

Use the MaxResults parameter to limit the number of items returned. If
there are more results than specified in MaxResults, the value of
NextToken in the operation response contains a pagination
token for getting the next set of results. To get the next page of
results, call GetCelebrityRecognition and populate the
NextToken request parameter with the token value returned
from the previous call to GetCelebrityRecognition.

Parameters:

getCelebrityRecognitionRequest -

Returns:

getCelebrityRecognitionResult The response from the
GetCelebrityRecognition service method, as returned by Amazon
Rekognition.
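A sketch of the pagination loop, assuming the job identified by jobId has already published a SUCCEEDED status:

    GetCelebrityRecognitionRequest request = new GetCelebrityRecognitionRequest()
        .withJobId(jobId) // JobId from the StartCelebrityRecognition call
        .withMaxResults(100)
        .withSortBy(CelebrityRecognitionSortBy.TIMESTAMP);
    GetCelebrityRecognitionResult result;
    do {
        result = client.getCelebrityRecognition(request);
        for (CelebrityRecognition rec : result.getCelebrities()) {
            System.out.println(rec.getTimestamp() + " ms: "
                + rec.getCelebrity().getName());
        }
        request.setNextToken(result.getNextToken()); // page until the token is null
    } while (result.getNextToken() != null);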

getContentModeration

Gets the content moderation analysis results for an Amazon Rekognition
Video analysis started by StartContentModeration.

Content moderation analysis of a video is an asynchronous operation. You
start analysis by calling StartContentModeration, which returns a job
identifier (JobId). When analysis finishes, Amazon Rekognition Video
publishes a completion status to the Amazon Simple Notification Service
topic registered in the initial call to
StartContentModeration. To get the results of the content
moderation analysis, first check that the status value published to the
Amazon SNS topic is SUCCEEDED. If so, call
GetContentModeration and pass the job identifier (
JobId) from the initial call to
StartContentModeration.

For more information, see Working with Stored Videos in the Amazon
Rekognition Developer Guide.

GetContentModeration returns detected content moderation
labels, and the time they are detected, in an array,
ModerationLabels, of ContentModerationDetection objects.

By default, the moderated labels are returned sorted by time, in
milliseconds from the start of the video. You can also sort them by
moderated label by specifying NAME for the
SortBy input parameter.

Since video analysis can return a large number of results, use the
MaxResults parameter to limit the number of labels returned
in a single call to GetContentModeration. If there are more
results than specified in MaxResults, the value of
NextToken in the operation response contains a pagination
token for getting the next set of results. To get the next page of
results, call GetContentModeration and populate the
NextToken request parameter with the value of
NextToken returned from the previous call to
GetContentModeration.

For more information, see Detecting Unsafe Content in the Amazon
Rekognition Developer Guide.

Parameters:

getContentModerationRequest -

Returns:

getContentModerationResult The response from the
GetContentModeration service method, as returned by Amazon
Rekognition.

getFaceDetection

Gets face detection results for an Amazon Rekognition Video analysis
started by StartFaceDetection.

Face detection with Amazon Rekognition Video is an asynchronous
operation. You start face detection by calling StartFaceDetection, which
returns a job identifier (JobId). When the face detection operation
finishes, Amazon Rekognition Video publishes a completion status to the
Amazon Simple Notification Service topic registered in the initial call
to StartFaceDetection. To get the results of the face
detection operation, first check that the status value published to the
Amazon SNS topic is SUCCEEDED. If so, call GetFaceDetection
and pass the job identifier (JobId) from the initial call to
StartFaceDetection.

GetFaceDetection returns an array of detected faces (
Faces) sorted by the time the faces were detected.

Use the MaxResults parameter to limit the number of items returned. If
there are more results than specified in MaxResults, the value of
NextToken in the operation response contains a pagination
token for getting the next set of results. To get the next page of
results, call GetFaceDetection and populate the
NextToken request parameter with the token value returned
from the previous call to GetFaceDetection.

Parameters:

getFaceDetectionRequest -

Returns:

getFaceDetectionResult The response from the GetFaceDetection
service method, as returned by Amazon Rekognition.

getFaceSearch

Gets the face search results for an Amazon Rekognition Video face search
started by StartFaceSearch. The search returns faces in a collection
that match the faces of persons detected in a video. It also includes
the time(s) that faces are matched in the video.

Face search in a video is an asynchronous operation. You start face
search by calling StartFaceSearch, which returns a job identifier
(JobId). When the search operation finishes, Amazon Rekognition Video
publishes a completion status to the Amazon Simple Notification Service
topic registered in the initial call to StartFaceSearch. To get
the search results, first check that the status value published to the
Amazon SNS topic is SUCCEEDED. If so, call
GetFaceSearch and pass the job identifier (
JobId) from the initial call to StartFaceSearch.

For more information, see Searching Faces in a Collection in the Amazon
Rekognition Developer Guide.

The search results are returned in an array, Persons, of
PersonMatch objects. Each PersonMatch element contains details about the
matching faces in the input collection, person information (facial
attributes, bounding boxes, and person identifier) for the matched
person, and the time the person was matched in the video.

GetFaceSearch only returns the default facial attributes (
BoundingBox, Confidence, Landmarks, Pose, and Quality). The other facial
attributes listed in the Face object of the following
response syntax are not returned. For more information, see FaceDetail in
the Amazon Rekognition Developer Guide.

By default, the Persons array is sorted by the time, in
milliseconds from the start of the video, that persons are matched. You
can also sort by matched persons by specifying INDEX for the
SortBy input parameter.

Parameters:

getFaceSearchRequest -

Returns:

getFaceSearchResult The response from the GetFaceSearch service
method, as returned by Amazon Rekognition.
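A sketch of reading the Persons array once the search job has completed; jobId is the identifier returned by StartFaceSearch:

    GetFaceSearchResult result = client.getFaceSearch(
        new GetFaceSearchRequest().withJobId(jobId));
    if ("SUCCEEDED".equals(result.getJobStatus())) {
        for (PersonMatch person : result.getPersons()) {
            System.out.println(person.getTimestamp() + " ms: person "
                + person.getPerson().getIndex() + ", "
                + person.getFaceMatches().size() + " matching face(s)");
        }
    }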

getLabelDetection

Gets the label detection results of an Amazon Rekognition Video analysis
started by StartLabelDetection.

The label detection operation is started by a call to
StartLabelDetection, which returns a job identifier (JobId). When the
label detection operation finishes, Amazon Rekognition Video publishes a
completion status to the Amazon Simple Notification Service topic
registered in the initial call to StartLabelDetection. To get
the results of the label detection operation, first check that the
status value published to the Amazon SNS topic is SUCCEEDED. If so,
call GetLabelDetection and pass the job identifier (JobId) from the
initial call to StartLabelDetection.

GetLabelDetection returns an array of detected labels (
Labels) sorted by the time the labels were detected. You can
also sort by the label name by specifying NAME for the
SortBy input parameter.

The labels returned include the label name, the percentage confidence in
the accuracy of the detected label, and the time the label was detected
in the video.

Use the MaxResults parameter to limit the number of labels returned. If
there are more results than specified in MaxResults, the value of
NextToken in the operation response contains a pagination
token for getting the next set of results. To get the next page of
results, call GetLabelDetection and populate the
NextToken request parameter with the token value returned
from the previous call to GetLabelDetection.

Parameters:

getLabelDetectionRequest -

Returns:

getLabelDetectionResult The response from the GetLabelDetection
service method, as returned by Amazon Rekognition.

getPersonTracking

Gets the person tracking results of an Amazon Rekognition Video analysis
started by StartPersonTracking.

The person detection operation is started by a call to
StartPersonTracking which returns a job identifier (
JobId). When the person detection operation finishes, Amazon
Rekognition Video publishes a completion status to the Amazon Simple
Notification Service topic registered in the initial call to
StartPersonTracking.

To get the results of the person tracking operation, first check that the
status value published to the Amazon SNS topic is SUCCEEDED.
If so, call GetPersonTracking and pass the job identifier
(JobId) from the initial call to StartPersonTracking.

GetPersonTracking returns an array, Persons, of
tracked persons and the time(s) they were tracked in the video.

GetPersonTracking only returns the default facial attributes
(BoundingBox, Confidence,
Landmarks, Pose, and Quality). The
other facial attributes listed in the Face object of the
following response syntax are not returned.

For more information, see FaceDetail in the Amazon Rekognition Developer
Guide.

By default, the array is sorted by the time(s) a person is tracked in the
video. You can sort by tracked persons by specifying INDEX
for the SortBy input parameter.

Use the MaxResults parameter to limit the number of items
returned. If there are more results than specified in
MaxResults, the value of NextToken in the
operation response contains a pagination token for getting the next set
of results. To get the next page of results, call
GetPersonTracking and populate the NextToken
request parameter with the token value returned from the previous call to
GetPersonTracking.

Parameters:

getPersonTrackingRequest -

Returns:

getPersonTrackingResult The response from the GetPersonTracking
service method, as returned by Amazon Rekognition.

indexFaces

Detects faces in the input image and adds them to the specified
collection.

Amazon Rekognition does not save the actual faces detected. Instead, the
underlying detection algorithm first detects the faces in the input
image. For each face, the algorithm extracts facial features into a
feature vector and stores it in the back-end database. Amazon Rekognition
uses feature vectors when performing face match and search operations
using the SearchFaces and SearchFacesByImage operations.

If you are using version 1.0 of the face detection model,
IndexFaces indexes the 15 largest faces in the input image.
Later versions of the face detection model index the 100 largest faces in
the input image. To determine which version of the model you are using,
check the value of FaceModelVersion in the response from
IndexFaces.

For more information, see Model Versioning in the Amazon Rekognition
Developer Guide.

If you provide the optional ExternalImageId for the input
image, Amazon Rekognition associates this ID with all faces
that it detects. When you call the ListFaces operation, the response
returns the external ID. You can use this external image ID to create a
client-side index to associate the faces with each image. You can then
use the index to find all faces in an image.

In response, the operation returns an array of metadata for all detected
faces. This includes the bounding box of the detected face, a confidence
value (indicating the bounding box contains a face), a face ID assigned
by the service for each face that is detected and stored, and an image ID
assigned by the service for the input image. If you request all facial
attributes (using the detectionAttributes parameter), Amazon
Rekognition returns detailed facial attributes such as facial landmarks
(for example, the location of the eyes and mouth) and other facial
attributes such as gender. If you provide the same image, specify the
same collection, and use the same external ID in the IndexFaces
operation, Amazon Rekognition doesn't save duplicate face metadata.

For more information, see Adding Faces to a Collection in the Amazon
Rekognition Developer Guide.

The input image is passed either as base64-encoded image bytes or as a
reference to an image in an Amazon S3 bucket. If you use the AWS CLI
to call Amazon Rekognition operations, passing image bytes is not
supported. The image must be either a PNG or JPEG formatted file.

This operation requires permissions to perform the
rekognition:IndexFaces action.

Parameters:

indexFacesRequest -

Returns:

indexFacesResult The response from the IndexFaces service method,
as returned by Amazon Rekognition.
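A sketch of indexing with an external image ID and the full attribute set; the collection, bucket, and IDs are hypothetical:

    IndexFacesRequest request = new IndexFacesRequest()
        .withCollectionId("user-12345-faces")
        .withImage(new Image().withS3Object(
            new S3Object().withBucket("my-bucket").withName("portrait.jpg")))
        .withExternalImageId("portrait.jpg") // client-side key for this image
        .withDetectionAttributes("ALL");
    IndexFacesResult result = client.indexFaces(request);
    System.out.println("Face model version: " + result.getFaceModelVersion());
    for (FaceRecord record : result.getFaceRecords()) {
        System.out.println("Indexed face: " + record.getFace().getFaceId());
    }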

listFaces

Returns metadata for faces in the specified collection. This metadata
includes information such as the bounding box coordinates, the confidence
(that the bounding box contains a face), and face ID. For an example, see
Listing Faces in a Collection in the Amazon Rekognition Developer Guide.

This operation requires permissions to perform the
rekognition:ListFaces action.

Parameters:

listFacesRequest -

Returns:

listFacesResult The response from the ListFaces service method,
as returned by Amazon Rekognition.

recognizeCelebrities

Returns an array of celebrities recognized in the input image. For more
information, see Recognizing Celebrities in the Amazon Rekognition
Developer Guide.

RecognizeCelebrities returns the 100 largest faces in the
image. It lists recognized celebrities in the CelebrityFaces
array and unrecognized faces in the UnrecognizedFaces array.
RecognizeCelebrities doesn't return celebrities whose faces
are not amongst the largest 100 faces in the image.

For each celebrity recognized, RecognizeCelebrities returns a
Celebrity object. The Celebrity
object contains the celebrity name, ID, URL links to additional
information, match confidence, and a ComparedFace object
that you can use to locate the celebrity's face on the image.

Amazon Rekognition does not retain information about which images a
celebrity has been recognized in. Your application must store this
information and use the Celebrity ID property as a unique
identifier for the celebrity. If you don't store the celebrity name or
additional information URLs returned by RecognizeCelebrities,
you will need the ID to identify the celebrity in a call to the
GetCelebrityInfo operation.

You pass the input image either as base64-encoded image bytes or as a
reference to an image in an Amazon S3 bucket. If you use the AWS CLI
to call Amazon Rekognition operations, passing image bytes is not
supported. The image must be either a PNG or JPEG formatted file.

For an example, see Recognizing Celebrities in an Image in the Amazon
Rekognition Developer Guide.

This operation requires permissions to perform the
rekognition:RecognizeCelebrities operation.

Parameters:

recognizeCelebritiesRequest -

Returns:

recognizeCelebritiesResult The response from the
RecognizeCelebrities service method, as returned by Amazon
Rekognition.
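A sketch that prints each recognized celebrity and retains the ID for later GetCelebrityInfo calls; the image location is hypothetical:

    RecognizeCelebritiesResult result = client.recognizeCelebrities(
        new RecognizeCelebritiesRequest().withImage(new Image().withS3Object(
            new S3Object().withBucket("my-bucket").withName("red-carpet.jpg"))));
    for (Celebrity celebrity : result.getCelebrityFaces()) {
        // Persist getId(); it is the stable key for GetCelebrityInfo
        System.out.println(celebrity.getName() + " (ID " + celebrity.getId()
            + "), match confidence " + celebrity.getMatchConfidence());
    }
    System.out.println(result.getUnrecognizedFaces().size()
        + " unrecognized face(s)");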

searchFaces

For a given input face ID, searches for matching faces in the collection
the face belongs to. You get a face ID when you add a face to the
collection using the IndexFaces operation. The operation compares
the features of the input face with faces in the specified collection.

You can also search faces without indexing faces by using the
SearchFacesByImage operation.

The operation response returns an array of faces that match, ordered by
similarity score with the highest similarity first. More specifically, it
is an array of metadata for each face match that is found. Along with the
metadata, the response also includes a confidence value for
each face match, indicating the confidence that the specific face matches
the input face.

For an example, see Searching for a Face Using Its Face ID in the Amazon
Rekognition Developer Guide.

This operation requires permissions to perform the
rekognition:SearchFaces action.

Parameters:

searchFacesRequest -

Returns:

searchFacesResult The response from the SearchFaces service
method, as returned by Amazon Rekognition.
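A sketch of searching by face ID; faceId is assumed to come from an earlier IndexFaces call, and the collection is hypothetical:

    SearchFacesResult result = client.searchFaces(new SearchFacesRequest()
        .withCollectionId("user-12345-faces")
        .withFaceId(faceId)
        .withFaceMatchThreshold(90F) // only return strong matches
        .withMaxFaces(5));
    for (FaceMatch match : result.getFaceMatches()) {
        System.out.println(match.getFace().getFaceId()
            + " similarity " + match.getSimilarity());
    }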

searchFacesByImage

For a given input image, first detects the largest face in the image, and
then searches the specified collection for matching faces. The operation
compares the features of the input face with faces in the specified
collection.

To search for all faces in an input image, you might first call the
IndexFaces operation, and then use the face IDs returned in subsequent
calls to the SearchFaces operation.

You can also call the DetectFaces operation and use the
bounding boxes in the response to make face crops, which you can then
pass to the SearchFacesByImage operation.

You pass the input image either as base64-encoded image bytes or as a
reference to an image in an Amazon S3 bucket. If you use the AWS CLI
to call Amazon Rekognition operations, passing image bytes is not
supported. The image must be either a PNG or JPEG formatted file.

The response returns an array of faces that match, ordered by similarity
score with the highest similarity first. More specifically, it is an
array of metadata for each face match found. Along with the metadata, the
response also includes a similarity score indicating how similar
the face is to the input face. In the response, the operation also
returns the bounding box (and a confidence level that the bounding box
contains a face) of the face that Amazon Rekognition used for the input
image.

For an example, see Searching for a Face Using an Image in the Amazon
Rekognition Developer Guide.

This operation requires permissions to perform the
rekognition:SearchFacesByImage action.

Parameters:

searchFacesByImageRequest -

Returns:

searchFacesByImageResult The response from the SearchFacesByImage
service method, as returned by Amazon Rekognition.
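A sketch of the image-based variant; the collection and image location are hypothetical:

    SearchFacesByImageResult result = client.searchFacesByImage(
        new SearchFacesByImageRequest()
            .withCollectionId("user-12345-faces")
            .withImage(new Image().withS3Object(
                new S3Object().withBucket("my-bucket").withName("query.jpg")))
            .withFaceMatchThreshold(90F));
    // Bounding box of the face Rekognition actually searched with
    System.out.println("Searched face: " + result.getSearchedFaceBoundingBox());
    for (FaceMatch match : result.getFaceMatches()) {
        System.out.println(match.getFace().getFaceId()
            + " similarity " + match.getSimilarity());
    }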

startCelebrityRecognition

Starts asynchronous recognition of celebrities in a stored video.

Amazon Rekognition Video can detect celebrities in a video; the video
must be stored in an Amazon S3 bucket. Use Video to specify the bucket
name and the filename of the video. StartCelebrityRecognition returns
a job identifier (JobId) which you use to get the results of
the analysis. When celebrity recognition analysis is finished, Amazon
Rekognition Video publishes a completion status to the Amazon Simple
Notification Service topic that you specify in
NotificationChannel. To get the results of the celebrity
recognition analysis, first check that the status value published to the
Amazon SNS topic is SUCCEEDED. If so, call
GetCelebrityRecognition and pass the job
identifier (JobId) from the initial call to
StartCelebrityRecognition.

For more information, see Recognizing Celebrities in the Amazon
Rekognition Developer Guide.

Parameters:

startCelebrityRecognitionRequest -

Returns:

startCelebrityRecognitionResult The response from the
StartCelebrityRecognition service method, as returned by Amazon
Rekognition.

startContentModeration

Starts asynchronous detection of explicit or suggestive adult content in
a stored video.

Amazon Rekognition Video can moderate content in a video stored in an
Amazon S3 bucket. Use Video to specify the bucket name and the
filename of the video. StartContentModeration returns a job
identifier (JobId) which you use to get the results of the
analysis. When content moderation analysis is finished, Amazon
Rekognition Video publishes a completion status to the Amazon Simple
Notification Service topic that you specify in
NotificationChannel.

To get the results of the content moderation analysis, first check that
the status value published to the Amazon SNS topic is
SUCCEEDED. If so, call GetContentModeration and pass the job
identifier (JobId) from the initial call to
StartContentModeration.

For more information, see Detecting Unsafe Content in the Amazon
Rekognition Developer Guide.

Parameters:

startContentModerationRequest -

Returns:

startContentModerationResult The response from the
StartContentModeration service method, as returned by Amazon
Rekognition.

startFaceDetection

Starts asynchronous detection of faces in a stored video.

Amazon Rekognition Video can detect faces in a video stored in an Amazon
S3 bucket. Use Video to specify the bucket name and the filename
of the video. StartFaceDetection returns a job identifier (
JobId) that you use to get the results of the operation.
When face detection is finished, Amazon Rekognition Video publishes a
completion status to the Amazon Simple Notification Service topic that
you specify in NotificationChannel. To get the results of
the face detection operation, first check that the status value
published to the Amazon SNS topic is SUCCEEDED. If so, call
GetFaceDetection and pass the job identifier (JobId) from the initial
call to StartFaceDetection.

For more information, see Detecting Faces in a Stored Video in the Amazon
Rekognition Developer Guide.

Parameters:

startFaceDetectionRequest -

Returns:

startFaceDetectionResult The response from the StartFaceDetection
service method, as returned by Amazon Rekognition.

startFaceSearch

Starts the asynchronous search for faces in a collection that match the
faces of persons detected in a stored video.

The video must be stored in an Amazon S3 bucket. Use Video to
specify the bucket name and the filename of the video.
StartFaceSearch returns a job identifier (JobId
) which you use to get the search results once the search has completed.
When searching is finished, Amazon Rekognition Video publishes a
completion status to the Amazon Simple Notification Service topic that
you specify in NotificationChannel. To get the search
results, first check that the status value published to the Amazon SNS
topic is SUCCEEDED. If so, call GetFaceSearch and pass the job
identifier (JobId) from the initial call to
StartFaceSearch. For more information, see
procedure-person-search-videos.

Parameters:

startFaceSearchRequest -

Returns:

startFaceSearchResult The response from the StartFaceSearch
service method, as returned by Amazon Rekognition.
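A sketch of starting the search; the bucket, collection, SNS topic, and IAM role are hypothetical placeholders, and the returned JobId must be retained for the later GetFaceSearch call:

    StartFaceSearchResult start = client.startFaceSearch(
        new StartFaceSearchRequest()
            .withVideo(new Video().withS3Object(
                new S3Object().withBucket("my-bucket").withName("lobby.mp4")))
            .withCollectionId("user-12345-faces")
            .withNotificationChannel(new NotificationChannel()
                // Topic that receives the completion status
                .withSNSTopicArn("arn:aws:sns:us-east-1:123456789012:rekognition-jobs")
                // Role allowed to publish to the topic
                .withRoleArn("arn:aws:iam::123456789012:role/RekognitionSNSRole")));
    String jobId = start.getJobId();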

startLabelDetection

Starts asynchronous detection of labels in a stored video.

Amazon Rekognition Video can detect labels in a video. Labels are
instances of real-world entities. This includes objects like flower,
tree, and table; events like wedding, graduation, and birthday party;
concepts like landscape, evening, and nature; and activities like a
person getting out of a car or a person skiing.

The video must be stored in an Amazon S3 bucket. Use Video to
specify the bucket name and the filename of the video.
StartLabelDetection returns a job identifier (
JobId) which you use to get the results of the operation.
When label detection is finished, Amazon Rekognition Video publishes a
completion status to the Amazon Simple Notification Service topic that
you specify in NotificationChannel.

To get the results of the label detection operation, first check that the
status value published to the Amazon SNS topic is SUCCEEDED.
If so, call GetLabelDetection and pass the job identifier
(JobId) from the initial call to StartLabelDetection.

Parameters:

startLabelDetectionRequest -

Returns:

startLabelDetectionResult The response from the
StartLabelDetection service method, as returned by Amazon
Rekognition.

startPersonTracking

Starts the asynchronous tracking of persons in a stored video.

Amazon Rekognition Video can track persons in a video stored in an Amazon
S3 bucket. Use Video to specify the bucket name and the filename
of the video. StartPersonTracking returns a job identifier (
JobId) which you use to get the results of the operation.
When person tracking is finished, Amazon Rekognition Video publishes a
completion status to the Amazon Simple Notification Service topic that
you specify in NotificationChannel.

To get the results of the person tracking operation, first check that
the status value published to the Amazon SNS topic is
SUCCEEDED. If so, call GetPersonTracking and pass the job identifier (
JobId) from the initial call to
StartPersonTracking.

Parameters:

startPersonTrackingRequest -

Returns:

startPersonTrackingResult The response from the
StartPersonTracking service method, as returned by Amazon
Rekognition.

startStreamProcessor

Starts processing a stream processor. You create a stream processor by
calling CreateStreamProcessor. To tell StartStreamProcessor which stream
processor to start, use the value of the Name field
specified in the call to CreateStreamProcessor.

Parameters:

startStreamProcessorRequest -

Returns:

startStreamProcessorResult The response from the
StartStreamProcessor service method, as returned by Amazon
Rekognition.

Throws:

AmazonClientException - If any internal errors are encountered
inside the client while attempting to make the request or
handle the response. For example, if a network connection is
not available.

AmazonServiceException - If an error response is returned by Amazon
Rekognition indicating either a problem with the data in the
request, or a server side issue.
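A sketch of the stream processor lifecycle, reusing the hypothetical Name from the CreateStreamProcessor example above:

    client.startStreamProcessor(
        new StartStreamProcessorRequest().withName("my-stream-processor"));
    // ... analyze the streaming video ...
    client.stopStreamProcessor(
        new StopStreamProcessorRequest().withName("my-stream-processor"));
    client.deleteStreamProcessor(
        new DeleteStreamProcessorRequest().withName("my-stream-processor"));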

shutdown

void shutdown()

Shuts down this client object, releasing any resources that might be held
open. This is an optional method, and callers are not expected to call
it, but can if they want to explicitly release any open resources. Once a
client has been shut down, it should not be used to make any more
requests.

getCachedResponseMetadata

Returns additional metadata for a previously executed successful request,
typically used for debugging issues where a service isn't acting as
expected. This data isn't considered part of the result data returned by
an operation, so it's available through this separate, diagnostic
interface.

Response metadata is only cached for a limited period of time, so if you
need to access this extra diagnostic information for an executed request,
you should use this method to retrieve it as soon as possible after
executing a request.

Parameters:

request - The originally executed request.

Returns:

The response metadata for the specified request, or null if none
is available.