Route 53 launched DNS Failover on February 11, 2013. With DNS Failover, Route 53 can detect an outage of your website and redirect your end users to alternate or backup locations that you specify. Route 53 DNS Failover relies on health checks—regularly making Internet requests to your application’s endpoints from multiple locations around the world—to determine whether each endpoint of your application is up or down.

Until today, it was difficult to use DNS Failover if your application was running behind ELB to balance your incoming traffic across EC2 instances, because there was no way to configure Route 53 health checks against an ELB endpoint—to create a health check, you need to specify an IP address to check, and ELBs don’t have fixed IP addresses.

What’s different about DNS Failover for ELB?
Determining the health of an ELB endpoint is more complex than health checking a single IP address. For example, what if your application is running fine on EC2, but the load balancer itself isn’t reachable? Or if your load balancer and your EC2 instances are working correctly, but a bug in your code causes your application to crash? Or how about if the EC2 instances in one Availability Zone of a multi-AZ ELB are experiencing problems?

Route 53 DNS Failover handles all of these failure scenarios by integrating with ELB behind the scenes. Once enabled, Route 53 automatically configures and manages health checks for individual ELB nodes. Route 53 also takes advantage of the EC2 instance health checking that ELB performs (information on configuring your ELB health checks is available here). By combining the results of health checks of your EC2 instances and your ELBs, Route 53 DNS Failover is able to evaluate the health of the load balancer and the health of the application running on the EC2 instances behind it. In other words, if any part of the stack goes down, Route 53 detects the failure and routes traffic away from the failed endpoint.

A nice bonus is that, because you don’t create any health checks of your own, DNS Failover for ELB endpoints is available at no additional charge—you aren’t charged for any health checks.

When setting up DNS Failover for an ELB endpoint, you simply set “Evaluate Target Health” to true—you don’t create a health check of your own for this endpoint.
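If you'd rather create the record programmatically than in the console, here's a minimal sketch using boto3 (the current AWS SDK for Python; this post predates it). The zone IDs, record name, and ELB DNS name are placeholders:

import boto3

route53 = boto3.client("route53")

# Placeholder identifiers -- substitute your own hosted zone and ELB values.
HOSTED_ZONE_ID = "Z1EXAMPLE"          # your Route 53 hosted zone
ELB_HOSTED_ZONE_ID = "Z2ELBEXAMPLE"   # the ELB's canonical hosted zone ID
ELB_DNS_NAME = "my-elb-1234567890.us-east-1.elb.amazonaws.com"

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": ELB_HOSTED_ZONE_ID,
                    "DNSName": ELB_DNS_NAME,
                    # The key setting: let Route 53 evaluate the health
                    # of the ELB and the instances behind it.
                    "EvaluateTargetHealth": True,
                },
            },
        }]
    },
)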

Scenarios Possible with DNS Failover
Using Route 53 DNS Failover, you can run your primary application simultaneously in multiple AWS regions around the world and fail over across regions. Your end users will be routed to the closest (by latency) healthy region for your application. Route 53 automatically removes from service any region where your application is unavailable—it will pull an endpoint out of service if there’s a region-wide connectivity or operational issue, if your application goes down in that region, or if your ELB or EC2 instances go down in that region.

You can also leverage a simple backup site hosted on Amazon S3, with Route 53 directing users to this backup site in the event that your application becomes unavailable. In February we published a tutorial on how to create a simple backup website. Now you can take advantage of this simple backup scenario if your primary website is running behind an ELB—just skip the part of the tutorial about creating a health check for your primary site, and instead create an Alias record pointing to your ELB and check the “Evaluate Target Health” option on the Alias record. (Full documentation on using DNS Failover with ELB is available in the Route 53 Developer Guide.)
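For the backup-site scenario, the failover pairing boils down to two alias record sets, passed to the same change_resource_record_sets call sketched earlier. Here's a hedged outline in which every identifier is a placeholder:

# PRIMARY: the ELB, with target health evaluation turned on.
primary = {
    "Name": "www.example.com.",
    "Type": "A",
    "SetIdentifier": "primary",
    "Failover": "PRIMARY",
    "AliasTarget": {
        "HostedZoneId": "Z2ELBEXAMPLE",  # the ELB's canonical hosted zone ID
        "DNSName": "my-elb-1234567890.us-east-1.elb.amazonaws.com",
        "EvaluateTargetHealth": True,    # replaces the tutorial's health check
    },
}

# SECONDARY: the static backup site hosted on S3.
secondary = {
    "Name": "www.example.com.",
    "Type": "A",
    "SetIdentifier": "secondary",
    "Failover": "SECONDARY",
    "AliasTarget": {
        "HostedZoneId": "Z3S3EXAMPLE",   # the S3 website endpoint's zone ID
        "DNSName": "s3-website-us-east-1.amazonaws.com.",
        "EvaluateTargetHealth": False,
    },
}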

Jeff Wierer, Principal Product Manager on the AWS Identity and Access Management (IAM) team, sent along a guest post to introduce a powerful new federation feature.

-- Jeff;

In a previous blog post we discussed how AWS Identity and Access Management (IAM) supports identity federation by allowing developers to grant temporary security credentials to users managed outside of AWS. Today we’re expanding this capability with support for web identity federation. Web identity federation simplifies the development of cloud-backed applications that use public identity providers such as Facebook, Google, or the newly launched Login with Amazon service for authentication. For those of you not yet familiar with Login with Amazon, it's a new service you can use to securely connect your websites and apps with millions of Amazon.com customers. If you’re interested in learning more about Login with Amazon, please visit their launch page.

Web identity federation enables your users to sign in to your app using their Amazon.com, Facebook, or Google identity, and lets you authorize them to seamlessly access AWS resources that are managed under your AWS account. If you are building a mobile or a client-based application, you can now integrate these three popular identity providers and authorize users without any server-side code and without distributing long-term credentials with the app. To support this scenario, this release introduces a new AWS Security Token Service (STS) API, AssumeRoleWithWebIdentity. This API lets you request temporary security credentials for your customers who have been authenticated by Amazon.com, Facebook, or Google. Your app can then use the temporary security credentials to access AWS resources such as Amazon Simple Storage Service (S3) objects, DynamoDB tables, or Amazon Simple Queue Service queues.

Let's walk through an example use case.

Imagine you’re developing a mobile app that uses the new Login with Amazon service for authentication, and part of the app’s functionality allows end users to upload an image file as their personal avatar. Behind the scenes, you want to store those images as objects in one of your S3 buckets. To enable this, you need to configure a role that is used to delegate access to users of your app. Roles are configured in two parts:

A trust policy that specifies a trusted entity (principal)—that is, who can assume the role. In this case, the trusted entity is any authenticated Amazon.com user.

An access policy with permissions that specify what the user can do.

Setting up a trust policy for Login with Amazon
First we’ll create the trust policy. For this example, let’s assume you’ve registered your app with Login with Amazon and you’ve been assigned an application identifier (app_id) of amzn1.app.123456SAMPLE. This application ID uniquely identifies your app with Login with Amazon. (If you register the app with Facebook or Google, you'll get a different app ID from each of these providers.) To delegate access to Amazon.com users, the trust policy for your role needs to include the new federated principal type www.amazon.com and the app_id value. The following trust policy allows any Amazon.com user who has authenticated using your app to call the sts:AssumeRoleWithWebIdentity API and assume the role.
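Reconstructed from the description above, the trust policy looks roughly like this (consult the STS guide for the authoritative syntax):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Federated": "www.amazon.com"},
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {"www.amazon.com:app_id": "amzn1.app.123456SAMPLE"}
    }
  }]
}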

Notice that we have introduced a new type of key that can be used in policy conditions—we now have support for identity-provider–specific keys for Amazon.com, Facebook, and Google. In this case, the www.amazon.com:app_id key ensures that the request is coming from your app by doing a string comparison against your app's registered ID. (To read more about the new policy keys that support web identity federation, see Creating Temporary Security Credentials for Mobile Apps Using Public Identity Providers in the AWS STS guide.)

Next, let’s create an access policy for the role.

Sample Policy Allowing S3 Access

The following access policy grants end users of your app limited access to an S3 bucket named myBucket in your AWS account. Specifically, it grants every user read-only access (Get*) to a shared folder whose prefix matches the app_id value, and it grants read, write, and delete access (GetObject, PutObject, and DeleteObject) to a folder whose prefix matches the user ID provided by Amazon. Notice that the new identity provider keys described in the previous section, such as ${www.amazon.com:app_id}, can also be used as AWS access control policy variables. In this case, we’re also using ${www.amazon.com:user_id}, which contains the user’s unique Amazon.com ID. (Remember, when using a variable you must always set the Version element to 2012-10-17.)
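Based on that description, the access policy looks roughly like the following reconstruction (again, see the STS guide for the authoritative version):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:Get*",
      "Resource": "arn:aws:s3:::myBucket/${www.amazon.com:app_id}/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::myBucket/${www.amazon.com:app_id}/${www.amazon.com:user_id}/*"
    }
  ]
}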

Putting it All Together
Now that the role has been created, let’s take a look at how the role will be used to provide federated access to your end users. The following diagram illustrates the steps that occur to authenticate an Amazon.com user and authorize the user to access objects in an S3 bucket.

First, the user needs to be authenticated. Using the Login with Amazon SDK, the app authenticates the user and receives a token from Amazon.

Next, the user needs to be authorized to access resources in your AWS account. The app makes an unsigned AssumeRoleWithWebIdentity request to STS, passing the token from the previous step. STS verifies the authenticity of the token; if the token is valid, STS returns a set of temporary security credentials to the app. By default, the credentials can be used for one hour.

Finally, the app uses the temporary security credentials to make signed requests for resources in your S3 bucket. Because the role's access policy used variables that reference the app ID and the user ID, the temporary security credentials are scoped to that end user and will prevent him or her from accessing objects owned by other users.
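Here's a minimal sketch of steps 2 and 3 in Python, using boto3 (the current AWS SDK for Python; this post predates it). The role ARN, token, and object key are placeholders:

import boto3

ROLE_ARN = "arn:aws:iam::123456789012:role/WebIdentityS3Role"  # placeholder
APP_ID = "amzn1.app.123456SAMPLE"
LWA_TOKEN = "Atza|..."  # access token obtained in step 1 (placeholder)

# Step 2: exchange the Login with Amazon token for temporary AWS
# credentials. This call is unsigned -- the token is the proof of identity.
sts = boto3.client("sts")
resp = sts.assume_role_with_web_identity(
    RoleArn=ROLE_ARN,
    RoleSessionName="avatar-upload",
    WebIdentityToken=LWA_TOKEN,
    ProviderId="www.amazon.com",
    DurationSeconds=3600,  # one hour, the default
)
creds = resp["Credentials"]
user_id = resp["SubjectFromWebIdentityToken"]  # the user's Amazon.com ID

# Step 3: use the temporary credentials to upload the avatar into the
# folder that the role's access policy scopes to this user.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.put_object(
    Bucket="myBucket",
    Key=APP_ID + "/" + user_id + "/avatar.png",
    Body=open("avatar.png", "rb"),
)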

This is just one example of how web identity federation can be used. A similar flow can also be enabled for Facebook or Google identities by integrating their SDKs with your app and simply creating an additional role. For more information about web identity federation and how to create roles that support other identity providers, please see Creating Temporary Security Credentials for Mobile Apps Using Public Identity Providers in the AWS STS guide.

When you travel as much as I do, you tend to think about getting sick or injured while away from home. I don't worry too much about the quality of the care that's available in the developed world, but I do worry about being able to explain my symptoms and to understand the diagnosis and the preferred treatment.

The Problem
Ryan Frankel of VerbalizeIt found himself in this very predicament while traveling in China a few years ago. He was violently ill and was not able to communicate with the pharmacist to understand what he had been prescribed. After he regained his health, he realized that there was an opportunity to use technology to connect anyone in need of translation services with a worldwide network of multilingual people. He investigated what was available and saw that there was a market gap. On the one side, machine translations were quick and often free, but not especially accurate. On the other side, dedicated call centers provided accurate results at a high cost.

The Solution
Ryan decided to fill the gap and created VerbalizeIt, a human-powered translation service, with the goal of providing call center quality at prices more often associated with machine translation. He wanted to create a Software as a Service (SaaS) platform and a market that would allow people to monetize their translation skills.

He expanded his team and they took part in TechStars. Things started taking off: they rebuilt their platform on top of AWS to increase scalability, then applied for and were selected to appear on the season finale of ABC's hit show Shark Tank. The episode was shown on May 17th, during the all-important "sweeps week," to an estimated 7 to 10 million viewers.

Ready to Scale
The team at VerbalizeIt knew that they had a one-time opportunity to make their service known to millions of TV viewers. The web site had to scale, not just once, but in each time zone and then again for re-runs. A team of AWS Solutions Architects reviewed the design and the implementation to make sure that it made efficient use of EC2, S3, and CloudFront.

The VerbalizeIt team also worked to scale the human side, making sure that they had access to enough qualified interpreters to efficiently handle a large increase in demand.

I asked Ryan and his team at VerbalizeIt to capture some metrics and some graphs to give a behind-the-scenes look at what happens when your web site is featured on prime time television. Here's what they saw:

VerbalizeIt experienced nearly a 7x increase in unique visits, a 7x increase in page visits and a more than 6x increase in page views.

VerbalizeIt for Developers
During my conversation with Ryan I learned that VerbalizeIt is also a platform and that it has APIs for developers! The API is simple and effective. After you get an API key, you can call APIs for voice and text translation. The voice translation API facilitates a phone call to a suitable translator. You start by calling the languages function, which returns a list of language names and the associated codes. Then you call the live_translation_requests function with the desired pair of languages and a phone number. The function will return another phone number, which will be expecting a call from the one passed to the function.
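To make the flow concrete, here's a rough sketch in Python. Only the two function names come from the description above; the base URL, parameter names, and response fields are illustrative guesses, so check the VerbalizeIt API documentation for the real ones:

import requests  # third-party HTTP library

API_KEY = "your-api-key"
BASE = "https://api.verbalizeit.com"  # hypothetical base URL

# 1. Fetch the list of supported languages and their codes.
languages = requests.get(BASE + "/languages", params={"key": API_KEY}).json()

# 2. Request a live voice translation between two languages.
resp = requests.post(
    BASE + "/live_translation_requests",
    data={
        "key": API_KEY,
        "source_language": "eng",   # codes come from the languages call
        "target_language": "cmn",
        "phone_number": "+15555550100",
    },
).json()

# The response includes a phone number that will be expecting a call
# from the number passed in.
print(resp["phone_number"])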

My counterpart in Germany, AWS Evangelist Steffen Krause, has put together the blog post below to show you how to launch and exercise SAP HANA One on AWS. Steffen is also responsible for the German-language AWSAktuell blog.

-- Jeff;

An interesting software product that has created a lot of buzz lately is SAP HANA, a database built on in-memory technology. The data to be analyzed is loaded into memory in compressed form, which allows for very fast analytics.

However, SAP HANA is sold by SAP as an appliance, a combination of hardware and software, so using it requires a big initial investment. SAP HANA One, on the other hand, is a version of SAP HANA that runs on Amazon Web Services, so anyone can test and use this technology and develop software for it.

SAP HANA One is available on AWS Marketplace with hourly billing. The software currently costs $0.99 per hour, plus the cost of a cc2.8xlarge EC2 instance and the required EBS storage. Until May 30th, there is a promotion where you can get $120 in credits if you use at least 10 hours of SAP HANA One. This video shows the deployment of SAP HANA One on AWS:

After deployment, your HANA instance needs to be configured. To do this, you access the instance at https://ip-address, using the Elastic IP address (EIP) that you assigned during deployment. First, you need to enter the AWS credentials (access key and secret access key) of an AWS account. HANA uses this information to configure itself on AWS and for operations like backup. I recommend against using your AWS root account credentials. Instead, you should create an IAM user and assign this user to a group with Power Users permissions, then enter the credentials of this newly created IAM user in the HANA console. You can easily delete this IAM user after testing SAP HANA One. The following video shows the configuration of SAP HANA One on AWS:

After configuring the HANA instance, you can use the product. Either you use SAP HANA Studio for software development (a sample Android app is here) or you use the graphical analytics tool SAP Visual Intelligence. You can find a trial of this tool in the download tab of the console. In SAP Visual Intelligence, you connect to your HANA instance using the assigned Elastic IP, the instance ID 00, the username system, and the password that you assigned to system. Then you select one of the sample datasets to analyze it interactively. The following demo shows how this works:

After testing SAP HANA One, you should clean up so that you stop paying for unused resources. You should:

Terminate the cc2.8xlarge instance and delete its EBS volumes.

Release the Elastic IP address that you assigned during deployment.

Delete the IAM user that you created for the HANA console.

Amazon RDS for MySQL has long had the ability to create Read Replicas. You can do this with a couple of clicks in the AWS Management Console. Each read replica runs as a slave to the master database instance. Under certain circumstances, replication can stop. This can happen if you cause a replication error by running DML queries on the replica that conflict with updates made on the master, as well as under other circumstances.

To help you detect and respond to this situation, Amazon RDS now monitors the replication status of your Read Replicas and sets the Replication State field to Error if replication stops for any reason. You can learn more about the error by inspecting the Replication Error field.

The status is now visible in the AWS Management Console:

If you deem that you can safely skip the error, you can run the CALL mysql.rds_skip_repl_error command. Otherwise, you can delete the Read Replica and create a new one with the same DB Instance Identifier (so that the endpoint remains the same as that of your old Read Replica). If the replication error is fixed, the Replication State changes to Replicating.

You can also use Amazon RDS Event Notifications to be notified automatically when you encounter a replication error. In addition, you can monitor the Replication Lag metric and set up a CloudWatch alarm to receive a notification when the lag crosses a threshold that is tolerable by your application.
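As a sketch of that alarm setup, here's what it might look like with boto3 (the current AWS SDK for Python), assuming the AWS/RDS ReplicaLag metric, a placeholder replica identifier, and a placeholder SNS topic:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when replication lag on the read replica exceeds a threshold
# (300 seconds here -- pick whatever your application can tolerate).
cloudwatch.put_metric_alarm(
    AlarmName="replica-lag-high",
    Namespace="AWS/RDS",
    MetricName="ReplicaLag",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-read-replica"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=5,
    Threshold=300,
    ComparisonOperator="GreaterThanThreshold",
    # An SNS topic (placeholder ARN) that receives the notification.
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:replica-alerts"],
)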

AWS has achieved FedRAMP compliance – now federal agencies can save significant time, costs, and resources in their evaluation of AWS! After demonstrating adherence to hundreds of controls by providing thousands of artifacts as part of a security assessment, AWS has been certified by a FedRAMP-accredited third-party assessor (3PAO) and has achieved agency ATOs (Authorities to Operate), demonstrating that AWS complies with the stringent FedRAMP requirements.

Numerous U.S. government agencies, systems integrators, and other companies that provide products and services to the U.S. government are using AWS services today. Now all U.S. government agencies can save significant time, costs, and resources by leveraging the AWS Department of Health and Human Services (HHS) ATO packages in the FedRAMP repository to evaluate AWS for their applications and workloads, provide their own authorizations to use AWS, and transition workloads into the AWS environment. Agencies and federal contractors can immediately request access to the AWS FedRAMP package by submitting a FedRAMP Package Access Request Form and begin moving through the authorization process to achieve an ATO with AWS.

What is FedRAMP? Check out the answer to this and other frequently asked questions on the AWS FedRAMP FAQ site.

We released the Amazon Elastic Transcoder with an initial set of features and a promise to iterate quickly based on customer feedback. You've supplied us with plenty of feedback (primarily via the Elastic Transcoder Forum) and we have a set of powerful enhancements ready as a result.

Here's what's new:

Apple HTTP Live Streaming (HLS) Support. Amazon Elastic Transcoder can create HLS-compliant pre-segmented files and playlists for delivery to compatible players on iOS and Android devices, set-top boxes, and web browsers. You can use our new system-defined HLS presets to transcode an input file into adaptive-bitrate filesets for targeting multiple devices, resolutions, and bitrates. You can also create your own presets.

WebM Output Support. Amazon Elastic Transcoder can now transcode content into VP8 video and Vorbis audio, for playback in browsers, like Firefox, that do not natively support H.264 and AAC.

Multiple Outputs Per Job. Amazon Elastic Transcoder can now produce multiple renditions of the same input from a single transcoding job. For example, with a single job you can create H.264, HLS, and WebM versions of the same video for delivery to multiple platforms, which is easier than creating multiple jobs and saves you time. (There's a sketch of such a job at the end of this list.)

Automatic Video Bit Rate Optimization. With this feature, Amazon Elastic Transcoder will automatically adjust the bit rate in order to optimize the visual quality of your transcoded output. This takes the guesswork out of choosing the right bit rate for your video content.

Enhanced Aspect Ratio and Sizing Policies. You can use these new settings in transcoding presets to precisely control scaling, cropping, matting, and stretching options to get the output that you expect, regardless of how the input is formatted.

Enhanced S3 Options for Output Videos. Amazon Elastic Transcoder now enables you to set S3 Access Control Lists (ACLs) and storage type options without needing to use the Amazon S3 API or console. With this feature, your files are created with the right permissions in place, ready for delivery to end users.
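As promised above, here's a minimal sketch of a single job that produces H.264, WebM, and HLS renditions plus an HLS playlist. It uses boto3 (the current AWS SDK for Python), and the pipeline ID and preset IDs are placeholders to replace with your own values:

import boto3

et = boto3.client("elastictranscoder")

# One job, three renditions: an H.264 MP4, a WebM file, and a
# segmented HLS fileset, plus a playlist tying the HLS output together.
et.create_job(
    PipelineId="1111111111111-abcde1",          # placeholder pipeline ID
    Input={"Key": "inputs/source.mov"},
    OutputKeyPrefix="outputs/",
    Outputs=[
        {"Key": "video.mp4", "PresetId": "1351620000001-000010"},   # H.264 preset (placeholder)
        {"Key": "video.webm", "PresetId": "1351620000001-100240"},  # WebM preset (placeholder)
        {"Key": "hls/video", "PresetId": "1351620000001-200010",    # HLS preset (placeholder)
         "SegmentDuration": "10"},
    ],
    Playlists=[{
        "Name": "hls/master",
        "Format": "HLSv3",
        "OutputKeys": ["hls/video"],
    }],
)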

We continue to make improvements, large and small, to Amazon DynamoDB. In addition to a new parallel scan feature, you can now change your provisioned throughput more quickly. We are also changing the way that we measure read capacity in a way that will reduce your costs by up to 4x for certain types of queries and scans.

Parallel Scans
As you may know, DynamoDB stores your data across multiple physical storage partitions for rapid access. The throughput of a DynamoDB Scan operation is constrained by the maximum throughput of a single partition. In some cases, this means that a Scan cannot take advantage of the table's full provisioned read capacity.

In order to give you the ability to retrieve data from your DynamoDB tables more rapidly, we are introducing a new parallel scan model today. To make use of this feature, you will need to run multiple worker threads or processes in parallel. Each worker will be able to scan a separate segment of a table concurrently with the other workers. DynamoDB's Scan function now accepts two additional parameters:

TotalSegments denotes the number of workers that will access the table concurrently.

Segment denotes the segment of the table to be accessed by the calling worker.

Let's say you have 4 workers. You would issue the following calls simultaneously to initiate a parallel scan:

Scan(TotalSegments=4, Segment=0, ...)

Scan(TotalSegments=4, Segment=1, ...)

Scan(TotalSegments=4, Segment=2, ...)

Scan(TotalSegments=4, Segment=3, ...)

The two parameters, when used together, limit the scan to a particular block of items in the table. You can also use the existing Limit parameter to control how much data is returned by an individual Scan request.

The AWS SDK for Java comes with high-level support for parallel scan. DynamoDBMapper implements a new method, parallelScan, which handles the threading and pagination of individual segments, making it even easier to try out this new feature.
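If you're working in Python, here's a minimal sketch of the same four-segment pattern using boto3 (the current AWS SDK for Python) and standard threads; the table name and per-item logic are placeholders:

import threading
import boto3

TOTAL_SEGMENTS = 4
table = boto3.resource("dynamodb").Table("my-table")  # placeholder name

def process(items):
    # Placeholder for your per-item logic.
    print("got", len(items), "items")

def scan_segment(segment):
    # Each worker scans one segment, following pagination until the
    # segment is exhausted.
    kwargs = {"TotalSegments": TOTAL_SEGMENTS, "Segment": segment}
    while True:
        page = table.scan(**kwargs)
        process(page["Items"])
        if "LastEvaluatedKey" not in page:
            break
        kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]

workers = [threading.Thread(target=scan_segment, args=(s,))
           for s in range(TOTAL_SEGMENTS)]
for w in workers:
    w.start()
for w in workers:
    w.join()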

Provisioned Throughput Changes
You can now change the provisioned throughput of a particular DynamoDB table up to four times per day (the previous limit was twice per day). This will allow you to react more quickly to changes in load.

Read Capacity Metering
We are changing the way that we measure read capacity. With this change, a single read capacity unit will allow you to do 1 read per second for an item up to 4 KB in size (formerly 1 KB). In other words, larger reads cost one-fourth as much as they did before; for example, reading a 4 KB item now consumes 1 read capacity unit instead of 4.

This change is being rolled out across all AWS Regions over the next week. Don't be alarmed if you see that your consumed capacity graph shows a lot less capacity than before.

With this change, scanning your DynamoDB table, running queries against your tables, copying data to Redshift using the DynamoDB/Redshift integration, and using Elastic MapReduce to query or export your tables are all more cost-effective than ever before.

I hope that you can make good use of the new parallel scan model, and that the other two changes are of value to you as well.