Current Status - Aug 2, 2015 PDT

Amazon Web Services publishes our most up-to-the-minute information on service availability
in the table below. Check back here any time to get current status information, or subscribe to an RSS feed to
be notified of interruptions to each individual service. If you are experiencing a real-time, operational issue
with one of our services that is not described below, please inform us by clicking on the "Contact Us"
link to submit a service issue report. All dates and times are Pacific Time (PST/PDT).
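The per-service RSS feeds mentioned above can also be consumed programmatically. A minimal sketch using only Python's standard library, assuming an RSS 2.0 feed; the feed URL shown in the comment is illustrative of the dashboard's per-service pattern and may change:

```python
import urllib.request
import xml.etree.ElementTree as ET

def parse_status_feed(xml_text):
    """Extract (title, pubDate) pairs from an RSS 2.0 status feed."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        pub = item.findtext("pubDate", default="")
        items.append((title, pub))
    return items

def fetch_status(feed_url):
    """Fetch a service's status feed and return its entries."""
    with urllib.request.urlopen(feed_url, timeout=10) as resp:
        return parse_status_feed(resp.read().decode("utf-8"))

# Example (URL pattern is illustrative):
# fetch_status("http://status.aws.amazon.com/rss/ec2-us-east-1.rss")
```

Polling such a feed on a schedule and diffing the newest item title against the last seen one is enough for a simple interruption notifier.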

Status History

Amazon Web Services keeps a running log of all service interruptions for the past year, published in the table below. Mouse over any of the status icons to see a detailed incident report (click the icon to persist the popup). Use the arrow buttons at the top of the table to move forward and backward through the calendar. All dates and times are Pacific Time (PST/PDT).

10:51 PM PDT We are investigating increased faults and latencies for CloudWatch APIs and metrics in the US-EAST-1 Region. CloudWatch alarms may transition into "INSUFFICIENT_DATA" state if set on delayed metrics.

11:20 PM PDT We can confirm an elevated rate of API faults and delays in processing some alarms in US-EAST-1 Region. We are actively working to resolve the issue.

Jul 31, 1:00 AM PDT We have resolved the elevated alarms API faults in the US-EAST-1 Region and continue to work on resolving delays in creating new metrics.

Jul 31, 1:05 AM PDT Between 10:12 PM PDT on 7/30 and 12:50 AM PDT on 7/31, customers experienced elevated alarms API faults, delayed alarms, and delays in creating new metrics in the US-EAST-1 Region. We have restored service and CloudWatch is operating normally.
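As the updates above note, alarms set on delayed metrics may sit in "INSUFFICIENT_DATA" without anything actually being wrong. A sketch of client-side triage during such an event; the dict shape mirrors the fields returned by CloudWatch's DescribeAlarms call ("AlarmName", "StateValue"), but the helper itself is illustrative, not part of any AWS SDK:

```python
def triage_alarms(alarms):
    """Split alarms into those actually firing (ALARM) and those that may
    merely be starved of data points (INSUFFICIENT_DATA), as happens when
    metric delivery is delayed."""
    firing = [a["AlarmName"] for a in alarms if a["StateValue"] == "ALARM"]
    starved = [a["AlarmName"] for a in alarms
               if a["StateValue"] == "INSUFFICIENT_DATA"]
    return firing, starved
```

During a metric-delivery delay, the "starved" bucket is the set of alarms worth re-checking once the service reports normal operation, rather than treating them as incidents in their own right.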

[RESOLVED] Increased API Error Rates

10:30 PM PDT We are investigating increased error rates impacting Pay, Reserve and Settle APIs in the North America region.

11:12 PM PDT Between 09:15 PM and 10:46 PM PDT, the Flexible Payment Service experienced increased error rates in transaction-processing APIs in the North America Region. The issue has been resolved and the service is now operating normally.

[RESOLVED] Increased API Error Rates

1:58 AM PDT Between 9:50 PM on 7/30 and 12:15 AM PDT on 7/31 we experienced increased error rates and latencies for API calls in the US-EAST-1 Region. The issue has been resolved and the service is operating normally.

[RESOLVED] Elevated email-sending API error rates

10:37 PM PDT We are currently investigating elevated error rates for our email-sending APIs in the US-EAST-1 Region. This includes the SendEmail/SendRawEmail APIs as well as calls made to the SMTP endpoint.

10:57 PM PDT We are continuing to investigate elevated error rates for our email-sending APIs in the US-EAST-1 Region.

Jul 31, 12:28 AM PDT Between 10:11 PM on 7/30 and 12:14 AM PDT on 7/31 we experienced elevated error rates in our email-sending APIs in the US-EAST-1 Region. This included the SendEmail/SendRawEmail APIs as well as calls made to the SMTP endpoint. Messages successfully submitted to SES during that time are being delivered with a delay. The issue has been resolved and the service is operating normally.
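Several of the incidents above are transient spikes in API error rates. The standard client-side mitigation is retrying failed calls with exponential backoff and jitter. A minimal, generic sketch (the helper name and defaults are ours, not an AWS SDK API):

```python
import random
import time

def with_backoff(call, max_attempts=5, base_delay=0.5, sleep=time.sleep):
    """Retry `call` on exception, waiting an exponentially growing,
    jittered delay between attempts; re-raise after the final failure."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Full jitter: sleep a random amount up to base * 2^attempt.
            sleep(random.uniform(0, base_delay * (2 ** attempt)))
```

The jitter spreads retries from many clients over time, which matters precisely during events like these, when a service recovering from an error spike would otherwise face synchronized retry waves.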

[RESOLVED] Increased error rates

10:42 PM PDT We are investigating increased error rates for Publish calls in the US-EAST-1 Region.

Jul 31, 12:19 AM PDT We have identified the root cause of the increased error rates for Publish API calls in the US-EAST-1 Region and continue to work to resolve the issue.

Jul 31, 12:38 AM PDT We have identified the root cause of the increased error rates for Publish API calls in the US-EAST-1 Region and are starting to see recovery.

Jul 31, 12:49 AM PDT Between 10:08 PM on 7/30 and 12:43 AM PDT on 7/31 we experienced significantly elevated Publish API error rates in the US-EAST-1 Region. The issue has been resolved and the service is operating normally.

[RESOLVED] Increased error rates

10:27 PM PDT We are investigating increased error rates for Send and Receive API calls in the US-EAST-1 Region.

10:55 PM PDT We continue to investigate increased error rates for Send and Receive API calls in the US-EAST-1 Region.

Jul 31, 12:09 AM PDT We have identified the root cause of the increased error rates for Send and Receive API calls in the US-EAST-1 Region and continue to work to resolve the issue.

Jul 31, 12:17 AM PDT Between 10:11 PM on 7/30 and 12:14 AM PDT on 7/31 we experienced significantly elevated error rates for Send and Receive API calls in the US-EAST-1 Region. The issue has been resolved and the service is operating normally.

[RESOLVED] Increased API error rates

10:53 PM PDT We are investigating increased error rates and latencies for the AWS CloudFormation APIs in the US-EAST-1 Region.

11:58 PM PDT We are continuing to investigate increased error rates and latencies for the AWS CloudFormation APIs in the US-EAST-1 Region.

Jul 31, 12:54 AM PDT Between 10:05 PM on 7/30 and 12:43 AM PDT on 7/31, CloudFormation experienced increased error rates and latencies for API calls in the US-EAST-1 Region. The issue has been resolved and the service is operating normally.

Jul 31, 12:57 AM PDT Between 10:12 PM on 7/30 and 12:35 AM PDT on 7/31, AWS CloudTrail experienced delays in delivering events in the US-EAST-1 Region. The issue has been resolved and the service is operating normally.

[RESOLVED] Elevated latencies

11:32 PM PDT We are investigating elevated latencies in processing of configuration changes in the US-EAST-1 Region.

Jul 31, 12:57 AM PDT Between 10:12 PM on 07/30 and 12:35 AM PDT on 07/31, AWS Config experienced elevated latencies in processing of configuration changes in the US-EAST-1 Region. The issue has been resolved and the service is operating normally.

[RESOLVED] Increased API error rates

11:01 PM PDT We are investigating increased error rates and latencies for the AWS Elastic Beanstalk APIs in the US-EAST-1 Region.

11:45 PM PDT We are continuing to investigate increased error rates and latencies for the AWS Elastic Beanstalk APIs in the US-EAST-1 Region.

Jul 31, 12:59 AM PDT Between 10:12 PM on 07/30 and 12:40 AM PDT on 07/31, AWS Elastic Beanstalk experienced increased error rates and latencies in the US-EAST-1 Region. The issue has been resolved and the service is operating normally.

[RESOLVED] Elevated error rates

10:44 PM PDT We are currently experiencing elevated error rates for the AWS Management Console.

[RESOLVED] Increased API error rates

10:59 PM PDT We are currently investigating elevated error rates and increased latencies for APIs in the US-EAST-1 Region.

11:46 PM PDT We are continuing to investigate increased error rates and latencies for AWS Lambda in the US-EAST-1 Region.

Jul 31, 12:22 AM PDT Between 10:11 PM on 7/30 and 12:14 AM PDT on 7/31 we experienced increased error rates and latencies for API calls in the US-EAST-1 Region. The issue has been resolved and the service is operating normally.

[RESOLVED] Elevated error rates

10:49 PM PDT We are currently experiencing elevated error rates for the AWS Management Console.

Jul 31, 12:23 AM PDT We have identified the root cause of elevated error rates for the AWS Management Console and are starting to see recovery.

Jul 31, 1:04 AM PDT Between 10:12 PM on 07/30 and 12:43 AM PDT on 07/31, we experienced elevated error rates for the AWS Management Console. The issue has been resolved and the service is operating normally.

[RESOLVED] Elevated login error rates

10:04 PM PDT We are currently experiencing elevated error rates affecting customers logging in to the Console. Customers who have already logged in to the Console are not affected.

10:47 PM PDT We continue to investigate elevated error rates affecting customers logging in to the Console using root accounts. Customers who have already logged in to the Console are not impacted.

11:08 PM PDT Between 09:15 PM and 10:46 PM PDT, the AWS Management Console experienced elevated error rates affecting customers logging in to the Console using root accounts. Customers who had already logged in to the Console were not impacted. The issue has been resolved and the AWS Management Console is now operating normally.

[RESOLVED] Delays in command execution

11:28 PM PDT We are currently investigating increases in API fault rates and delays in command execution for AWS OpsWorks.

Jul 31, 12:27 AM PDT We have identified the root cause of elevated error rates and command execution delays for AWS OpsWorks and are starting to see recovery.

Jul 31, 1:27 AM PDT Between 10:11 PM on 7/30 and 12:45 AM PDT on 7/31 we experienced increased error rates and delays in command execution. We are working on recovering a small number of stuck instances. Customers using the auto-healing feature might have seen instances being restarted. The issue has been resolved and the service is operating normally.

[Status table: services in the South America (Sao Paulo) region, plus global services; the per-service status icons are not preserved in this text capture.]



[Status table: services in the EU (Frankfurt) and EU (Ireland) regions, plus global services; the per-service status icons are not preserved in this text capture.]



[Status table: services in the Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) regions, plus global services; the per-service status icons are not preserved in this text capture.]

