MAY BADA BHANTE BE WELL AND SECURE. MAY HE LIVE LONG, AS HE IS FOR THE WELFARE OF ALL SENTIENT AND NON-SENTIENT BEINGS, TO BE EVER HAPPY. HE ALWAYS HAS A CALM, QUIET, ALERT, ATTENTIVE AND EQUANIMOUS MIND WITH A CLEAR UNDERSTANDING THAT EVERYTHING IS CHANGING.

All of us seriously practice meditation, puja, and dedicate merits and metta to Bada Bhanteji for his good health.

Namo Buddhaya!!!

Dear Upasakas, Upasikas and Friends,

We are sorry to inform you that Bada Bhanteji had an epileptic attack on 13th
June and he is in hospital under treatment. His sodium level also often
drops low. His condition is stable and recovery is very slow. Doctors
and we are all optimistic about his recovery. This is for your kind
information.

Kindly practice meditation, puja, and dedicate merits and metta to Bada Bhanteji for his good health.


The Amazon RDS
team has been rolling out features at a very rapid pace! Today we are
giving you the ability to upgrade existing RDS database instances from
MySQL 5.1 to MySQL 5.5 using our new Major Version Upgrade feature.

MySQL 5.5 includes several features and performance benefits over MySQL 5.1
that may be of interest to you, including enhanced multicore scaling, better use of I/O capacity, and enhanced monitoring by means of the performance schema.
MySQL 5.5 defaults to version 1.1 of the InnoDB Plugin, which improves
on version 1.0 (the default for MySQL 5.1) by adding faster recovery,
multiple buffer pool instances, and asynchronous I/O.

Today’s RDS release will make it easy for you to upgrade. You simply
select the instance in the AWS Management Console, choose the Modify
option, and select the latest version of MySQL 5.5 (you cannot upgrade
to earlier versions). You can choose to apply the upgrade immediately or
during the next maintenance window for the instance. In either case,
your instance will be unavailable for a few minutes while the upgrade
completes and the instance is rebooted.

Launch a test database instance and verify that your application
runs as expected. Amazon RDS makes this easy: you can create a snapshot
of your running instance, create a new instance from the snapshot,
upgrade it to MySQL 5.5 using the Modify command, and do your testing.
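
If you prefer to script that test workflow rather than use the console, here is a minimal sketch using boto3 (an assumption; the post itself works through the AWS Management Console). The instance and snapshot identifiers, and the exact 5.5.x version string, are hypothetical:

    import boto3

    rds = boto3.client('rds', region_name='us-east-1')

    # 1. Snapshot the running MySQL 5.1 instance.
    rds.create_db_snapshot(DBInstanceIdentifier='mydb',
                           DBSnapshotIdentifier='mydb-pre-upgrade')
    rds.get_waiter('db_snapshot_available').wait(
        DBSnapshotIdentifier='mydb-pre-upgrade')

    # 2. Create a test instance from the snapshot.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier='mydb-test',
        DBSnapshotIdentifier='mydb-pre-upgrade')
    rds.get_waiter('db_instance_available').wait(
        DBInstanceIdentifier='mydb-test')

    # 3. Perform the major version upgrade on the test instance,
    #    applying it immediately instead of waiting for the
    #    maintenance window.
    rds.modify_db_instance(DBInstanceIdentifier='mydb-test',
                           EngineVersion='5.5.40',   # illustrative 5.5.x version
                           AllowMajorVersionUpgrade=True,
                           ApplyImmediately=True)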

Late last year I wrote about the new NASDAQ OMX FinQloud platform. FinQloud provides customers with cost-effective and efficient management, storage and processing of financial data.

Today I’d like to tell you about an even higher-level platform that has been built on top of FinQloud and AWS.

Tradier’s Brokerage in a Box
Tradier
has built the first “brokerage cloud” platform for tool providers,
online banks, and wealth management firms. These organizations will no
longer have to lease data center space or procure their own technology
infrastructure, giving them the ability to run their business within a
secure, scalable, and cost-effective framework.

Tradier was able to take advantage of FinQloud to build this
platform. For example, Tradier Brokerage (a subsidiary of Tradier) will
use FinQloud’s Regulatory Records Retention
(R3) to meet data storage requirements for books and records. Notably,
R3 is the first cloud-based Write Once Read Many (WORM) compliant data
storage system, as required by SEC Rule 17a-4 and CFTC Regulation 1.31.
They’ll also use FinQloud’s search and retrieval application for that
data.

Mobile app providers, developers of tools to implement trading and
options strategies, along with algorithmic and robotic traders will all
find the Tradier platform
to be of interest. As is often the case with the announcements that I
have made in this blog, this platform frees up resources that would
otherwise be devoted to the development of lower-level facilities,
enabling innovation at the higher levels.

Tradier’s platform includes a number of foundation components
including a Brokerage Cloud API, access to real-time market data,
streaming and request/response APIs, support for browsers and mobile
devices, and a marketplace / ecosystem for investment tools.

We are also adding a new option that lets you opt for delivery of
SNS messages in raw format, in addition to the existing JSON format.
This is useful if you are using SNS in conjunction with SQS to transmit
identical copies of a message to multiple queues.

Here’s some more information for you:

256KB Payloads (SQS and SNS) allows you to send and
receive more data with each API call. Previously, payloads were capped at
64KB. Now, large payloads are billed as one request per 64KB ‘chunk’ of
payload. For example, a single API call for a 256KB payload will be
billed as four requests. Our customers tell us larger payloads will enable new
use cases that were previously difficult to accomplish.

SNS Raw Message Delivery allows you to pack even more
information content into your messaging payloads. When delivering
notifications to SQS and HTTP endpoints, SNS today adds JSON encoding with
metadata about the current message and topic. Now, developers can set the
optional RawMessageDelivery property to disable this added JSON encoding.
Raw message delivery is off by default, to ensure existing applications
continue to behave as expected.
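
As a hedged illustration of that last point, here is how the RawMessageDelivery property can be set on an existing SNS-to-SQS subscription using boto3 (an assumption; the post does not show code, and the subscription ARN below is hypothetical):

    import boto3

    sns = boto3.client('sns', region_name='us-east-1')

    sns.set_subscription_attributes(
        SubscriptionArn='arn:aws:sns:us-east-1:123456789012:my-topic:subscription-id',
        AttributeName='RawMessageDelivery',
        AttributeValue='true')   # default is 'false', i.e. JSON-wrapped delivery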

AWS Identity and Access Management is very powerful and very flexible. My colleague Elliot Yamaguchi has written a blog post
that shows you how to use IAM to create a policy which implements
folder-level permissions within an Amazon S3 bucket. By using this
policy, you can allow hundreds of users to safely share a single bucket,
restricting each one to a particular folder within the bucket.

The post contains a complete explanation of the policy. You can use it as-is or you can customize it as needed.
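
The authoritative policy is in Elliot's post; the sketch below is only an illustration of the general approach, assuming a policy attached to an IAM group that uses the ${aws:username} variable to confine each user to a folder named after them in a hypothetical bucket:

    import json
    import boto3

    # Illustrative only; see the linked post for the complete policy.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # Let each user list only their own folder.
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": "arn:aws:s3:::my-shared-bucket",
                "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}}
            },
            {   # Let each user read, write, and delete objects only under that folder.
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": "arn:aws:s3:::my-shared-bucket/${aws:username}/*"
            }
        ]
    }

    iam = boto3.client('iam')
    iam.put_group_policy(GroupName='shared-bucket-users',
                         PolicyName='folder-level-s3-access',
                         PolicyDocument=json.dumps(policy))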

I have a triple dose of videos for you today! The AWS team is
growing at a rapid pace and we’re looking for great people to fill many
different positions. In order to give you a better sense for the jobs
and the kinds of people that you’d be working with, I spent some time
interviewing some of my colleagues. I’ll be publishing the videos over
the course of the next couple of weeks.

I interviewed members of our professional services and development
teams. I also interviewed the leader of our support organization. The AWS Careers Page
contains additional information about each of the job families. We have
open positions in North and South America, Europe, Africa, and the
Asia-Pacific parts of the world.

If you would like to apply for any of the jobs, please use the email
address associated with the job family. I’d also like to ask you to take
a moment to fill out the survey.

Professional Services
I spoke with Matt Tavis to learn about his responsibilities as a member of the AWS Professional Services team:

Compliance with
FedRAMP℠ is a complex process with a high bar for a provider’s security
practices. Because few providers have secured an Authority To Operate (ATO)
under FedRAMP, and FedRAMP in general is very new, the topic often leaves many confused. So, we wanted to build upon our press release, security blog post, and AWS blog post to briefly clarify a few points.

FedRAMP is a U.S.
government-wide program that provides a standardized approach to security
assessment, authorization, and continuous monitoring for cloud products and
services. With the award of this ATO, AWS has demonstrated it can meet the
extensive FedRAMP security requirements and as a result, an even wider range of
federal, state and local government customers can leverage AWS’s secure
environment to store, process, and protect a diverse array of sensitive
government data. Leveraging the HHS authorization, all U.S. government agencies
now have the information they need to evaluate AWS for their applications and
workloads, provide their own authorizations to use AWS, and transition
workloads into the AWS environment.

On May 21, 2013,
AWS announced that AWS GovCloud (US) and all U.S. AWS Regions received an
Agency Authority to Operate (ATO) from the U.S. Department of Health and Human
Services (HHS) under the Federal Risk and Authorization Management Program
(FedRAMP) requirements at the Moderate impact level. Two separate FedRAMP
Agency ATOs have been issued; one encompassing the AWS GovCloud (US) Region, and the other covering the AWS US
East/West Regions. These ATOs cover Amazon EC2, Amazon S3, Amazon VPC, and
Amazon EBS. Beyond the services covered in the ATO, customers can evaluate
their workloads for suitability with other AWS services. AWS plans to onboard
other AWS services in the future. Interested customers can contact AWS Sales and Business Development for a detailed discussion of
security controls and risk acceptance considerations.

The FedRAMP audit
was one of the most in-depth and rigorous security audits in the history of
AWS, and that includes the many previous rigorous audits that are outlined on
the AWS Compliance page. The FedRAMP audit was a comprehensive,
six-month assessment of 298 controls including:

The architecture and operating processes of all services in scope.

The security of human processes and administrative access to systems.

The security and physical environmental controls of our AWS GovCloud
(US), AWS US East (Northern Virginia), AWS US West (Northern
California), and AWS US West (Oregon) Regions

The underlying IAM and other security services.

The security of networking infrastructure.

The security posture of the hypervisor, kernel and base operating systems.

Third-Party penetration testing.

Extensive onsite auditor interviews with service teams.

Nearly 1,500 individual evidence files.

The
award of this FedRAMP Agency ATO enables agencies and federal contractors to
immediately request access to the AWS Agency ATO packages by submitting a FedRAMP Package Access Request Form and begin to move
through the authorization process to achieve an ATO using AWS. Additional
information on FedRAMP, including the FedRAMP Concept of Operations (CONOPS)
and Guide to Understanding FedRAMP, can be found at http://www.fedramp.gov.

It is important
to note that while FedRAMP applies formally only to U.S. government agencies,
the rigorous audit process and the resulting detailed
documentation benefit all AWS customers. Many of our
commercial and enterprise customers, as well as public sector customers outside
the U.S., have expressed their excitement about this important new
certification. All AWS customers will benefit from the FedRAMP process without
any change to AWS prices or the way that they receive and utilize our services.

You can
visit http://aws.amazon.com/compliance/ to learn more about AWS and FedRAMP or
the multitude of other compliance evaluations of the AWS platform such as SOC
1, SOC 2, SOC 3, ISO 27001, FISMA, DIACAP, ITAR, FIPS 140-2, CSA, MPAA, PCI
DSS Level 1, HIPAA
and others.

The EBS Snapshot Copy feature gives you the power to copy EBS
snapshots across AWS Regions. Effective today, we have made the snapshot
copy even faster than before with support for incremental copies
between Regions. It is now practical to copy snapshots to other regions
more frequently, making it easier for you to develop applications that
are highly available.

The first time you copy an EBS snapshot of a volume to a particular
Region, all of the data will be copied. The second and subsequent
copies of snapshots from the same volume to the same destination region
will be incremental: only the data that has changed since the last copy
will be transferred. As a result, the snapshot will transfer less data
and complete more quickly than before.
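
For reference, here is a minimal sketch of a cross-Region snapshot copy using boto3 (an assumption; the snapshot ID and Regions are hypothetical). The copy request is issued against the destination Region:

    import boto3

    ec2_west = boto3.client('ec2', region_name='us-west-2')

    response = ec2_west.copy_snapshot(
        SourceRegion='us-east-1',
        SourceSnapshotId='snap-1234abcd',
        Description='Cross-region copy for DR')

    print(response['SnapshotId'])   # ID of the new snapshot in us-west-2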

The magnitude of the improvement will depend on the amount of data
that has been changed since the last snapshot copy. To give you a sense
for how much of a benefit you can expect, we measured the amount of
change between snapshots across a wide variety of EBS volumes running a
number of applications. Based on our findings, we expect to see a 50x
speedup for the second and subsequent incremental copies of an EBS
volume snapshot.

Aptean has been using EBS Snapshot Copy
since its launch in providing innovative disaster recovery solutions for
our worldwide customers. We are thrilled with the incremental support
availability as it will allow us to further reduce our recovery
objectives in providing a worldwide product solution on AWS.

I look forward to hearing more about how you leverage the faster cross-Region EBS Snapshot Copy in your own applications.

Earlier this year, we launched the cross-region EC2 AMI Copy feature, which builds on the EBS Snapshot Copy. Today’s enhancement also makes the AMI Copy faster when you copy EBS-backed AMIs.

The first time you copy an EBS snapshot of a given volume to a particular
Region, all of the data will be copied. The second and subsequent copies of
snapshots of the same volume to the same destination Region will be
incremental: only the data that has changed since the last copy will be
transferred, so each copy moves less data and completes more quickly than
before. Of course, the magnitude of the improvement will depend on the
amount of data that has changed since the last snapshot copy.
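
Here is a similarly hedged sketch of a cross-Region AMI copy with boto3 (the AMI ID, Regions, and name are hypothetical); as with snapshots, the call is made in the destination Region and benefits from incremental copies of the underlying snapshots:

    import boto3

    ec2_west = boto3.client('ec2', region_name='us-west-2')

    response = ec2_west.copy_image(
        SourceRegion='us-east-1',
        SourceImageId='ami-1234abcd',
        Name='my-app-server-copy')

    print(response['ImageId'])   # ID of the new AMI in us-west-2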


I am happy to announce that Amazon CloudFront
now supports a pair of frequently requested features: support for
custom SSL certificates and the ability to point the root of your
website to a CloudFront distribution. With support for both of these
features in place, it is now even easier for you to deliver your entire
website via CloudFront’s global network of edge locations. This includes the dynamic content, static objects, and the secured portions of your website or application.

Custom SSL Certificates
When a user requests content from a web site using the HTTPS protocol, the
web server presents a digital certificate and encrypts the data before
sending it along. The information in the certificate identifies the source
of the content and supplies the public key used to establish the encrypted
connection. Protecting content in this way increases user confidence and
trust in the site.

When
you create a CloudFront distribution, you receive a unique domain name for your
distribution – e.g. d123.cloudfront.net.
You can use this domain name directly in your URLs, or create a CNAME to
something your viewers are familiar with or that represents your brand – e.g. www.mysite.com. However,
until today you couldn’t use this CNAME to deliver your content over HTTPS as
CloudFront edge servers didn’t have the SSL certificate for your domain to hand
out to the browsers. That’s changing today!

You can now upload an SSL certificate and instruct CloudFront to use it
when handling HTTPS requests for a particular CloudFront distribution.

To get started, you need to request an invitation
on our web site. As soon as your request is approved, you can upload
your SSL certificate and use the AWS Management Console to associate it
with your distribution. Here’s what you need to do:

Purchase a Certificate from a Recognized Certificate Authority.
Your certificate must be in X.509 PEM format, and must include a
certificate chain. CloudFront supports many types of certificates
including domain validated certificates, extended validation (EV)
certificates, high assurance certificates, wildcard certificates
(*.example.com), and subject alternative name (SAN) certificates
(example.com and example.net).

Upload the Certificate to Your AWS Account. Use the IAM CLI to upload the certificate to your AWS account as follows:

Note that you must include the -p (path) option to indicate that the certificate will be used with CloudFront. Read the iam-servercertupload documentation for more information.
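
The original post shows the actual iam-servercertupload command, which isn't reproduced here. As a hedged substitute, here is a sketch of the same upload using boto3's upload_server_certificate call, where the Path parameter plays the role of the -p option and begins with /cloudfront/ to mark the certificate for CloudFront use (the file names and certificate name are hypothetical):

    import boto3

    iam = boto3.client('iam')

    iam.upload_server_certificate(
        ServerCertificateName='www-mysite-com',
        CertificateBody=open('certificate.pem').read(),
        PrivateKey=open('private-key.pem').read(),
        CertificateChain=open('certificate-chain.pem').read(),
        Path='/cloudfront/www-mysite-com/')   # equivalent of the -p option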

Map Your Domain Name to Your Distribution. Create a
CNAME record in your site’s DNS record set to map the domain or
sub-domain to the distribution’s domain name. You must also inform
CloudFront that the distribution is associated with the domain:
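
If your DNS is hosted in Route 53, the CNAME half of this step might look like the sketch below (an assumption; any DNS provider will do, and the hosted zone ID, domain, and distribution domain name are hypothetical). The other half, telling CloudFront about the domain, is done through the distribution's Alternate Domain Names (CNAMEs) setting:

    import boto3

    route53 = boto3.client('route53')

    route53.change_resource_record_sets(
        HostedZoneId='Z1EXAMPLE',
        ChangeBatch={
            'Changes': [{
                'Action': 'UPSERT',
                'ResourceRecordSet': {
                    'Name': 'www.mysite.com.',
                    'Type': 'CNAME',
                    'TTL': 300,
                    'ResourceRecords': [{'Value': 'd123.cloudfront.net'}]
                }
            }]
        })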

Associate the Certificate with the Distribution:

And that’s all it takes!

When your viewers download your content from CloudFront over an SSL
connection, their SSL connection will terminate at a CloudFront edge
location. This will remove some of the burden of SSL encryption from
your origin server. You can also configure CloudFront to use an HTTPS
connection for origin
fetches, resulting in end-to-end encryption all the way from your origin
to your users.

We charge a fixed monthly fee for each custom SSL certificate, with
pricing pro-rated to each hour of usage. More information on pricing for
the use of SSL certificates is available on the CloudFront pricing page.

Root Domain Hosting
You can now use Route 53
(the AWS Domain Name Service) to configure an Alias (A) record that
maps the apex or root (e.g. “example.com”) of your domain to a
CloudFront distribution.

Once configured, Route 53 will respond to each request with the IP
address(es) of the CloudFront distribution. This will allow visitors to
easily and reliably access your web site even if they don’t specify the
customary “www” prefix.

Here’s all you need to do:
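
As a hedged illustration (the post itself walks through the console), creating the Alias record with boto3 might look like this; your hosted zone ID and distribution domain name are hypothetical, while Z2FDTNDATAQYW2 is the fixed hosted zone ID Route 53 uses for CloudFront alias targets:

    import boto3

    route53 = boto3.client('route53')

    route53.change_resource_record_sets(
        HostedZoneId='Z1EXAMPLE',           # your domain's hosted zone
        ChangeBatch={
            'Changes': [{
                'Action': 'UPSERT',
                'ResourceRecordSet': {
                    'Name': 'mysite.com.',  # the root / apex of the domain
                    'Type': 'A',
                    'AliasTarget': {
                        'HostedZoneId': 'Z2FDTNDATAQYW2',
                        'DNSName': 'd123.cloudfront.net',
                        'EvaluateTargetHealth': False
                    }
                }
            }]
        })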

Route 53 does not charge for queries to Alias records that are mapped
to a CloudFront distribution. You can now use Route 53’s Alias records
instead of CNAME records for all domain entries that point to CloudFront
distributions. Read the Route 53 Developer Guide to learn more about this new feature.

Update:
Some of you have expressed surprise at the price tag for the use of SSL
certificates with CloudFront. With this custom SSL certificate feature,
each certificate requires one or more dedicated IP addresses at each of
our 40 CloudFront locations. This lets us give customers great
performance (using all of our edge locations), great security (using a
dedicated cert that isn’t shared with anyone else) and great
availability (no limitations as to which clients or browsers are
supported). As with any other CloudFront feature, there are no up-front
fees or professional services needed for setup. Plus, we aren’t charging
anything extra for dynamic content, which makes it a great choice for
customers who want to deliver an entire website (both static and dynamic
content) in a secure manner.

I’m happy to announce that we are lowering the price of Amazon RDS (Relational Database Service) database instances, both On-Demand and Reserved.

On-Demand prices have been reduced as much as 18% for MySQL
and Oracle BYOL (Bring Your Own License) and 28% for SQL Server BYOL.
All of your On-Demand usage will automatically be charged at the new and
lower rates effective June 1, 2013.

Reserved Instance prices have been reduced as much as 27% for MySQL
and Oracle BYOL. The new prices apply to Reserved Instance purchases
made on or after June 11, 2013.

Here is a table to illustrate the total cost of ownership for an m2.xlarge DB Instance for MySQL or Oracle BYOL using a 3-year Reserved Instance:

Region                          Old Price    New Price    Savings
US East (Northern Virginia)     $4,441       $3,507       21%
US West (Northern California)   $6,044       $4,410       27%
US West (Oregon)                $4,441       $3,507       21%
AWS GovCloud (US)               $4,835       $4,217       13%

Although Reserved Instance purchases are non-refundable, we are
making a special exception for 1-year RIs purchased in the last 30 days
and 3-year RIs purchased in the last 90 days. For a limited time, you
can exchange recently purchased RIs for new ones. You’ll receive a
pro-rata refund of the upfront fees that you paid at purchase time. If
you would like to exchange an RDS Reserved Instance for a new one,
simply contact us.

As you may know from my recent blog post,
we have made a lot of progress since releasing Amazon RDS just 3.5
years ago. In addition to the recently announced Service Level Agreement
(SLA) for Multi-AZ database instances, you have the ability to
provision up to 30,000 IOPS for demanding production workloads,
encryption at rest using Oracle’s Transparent Data Encryption, and
simple disaster recovery using Multi-AZ and read replicas.

The Amazon Relational Database Service
(RDS) was designed to simplify one of the most complex of all common IT
activities: managing and scaling a relational database while providing
fast, predictable performance and high availability.

RDS in Action
In
the 3.5 years since we launched Amazon RDS, a lot has happened. Amazon
RDS is now being used in mission-critical deployments by tens of
thousands of businesses of all sizes. We now process trillions of I/O
requests each month for these customers. We’re seeing strong adoption in

RDS Innovation
We
have added support for three major database engines (MySQL, Oracle, and
SQL Server), expanded support to all nine of the AWS Regions, and added
more than 50 highly requested features. Here is a timeline to give you a
better idea of how many additions we have made to RDS since we launched
it:

Let’s take a closer look at a few of the more important features on this timeline:

Multiple Database Engines - Support for Oracle Database and Microsoft SQL Server in addition to MySQL.

Multi-AZ Deployments
- This feature enables you to create highly available database
deployments with synchronous replication across Availability Zones and
automatic failure detection and failover, using just a few clicks on the
AWS Management Console.

Read Replicas
- This feature makes it easy to elastically scale out beyond the
capacity constraints of a single database instance for read-heavy
database workloads. You can promote a Read Replica to a master as needed and monitor the replication status directly through the AWS Management Console.

Provisioned IOPS
- Amazon RDS Provisioned IOPS is a storage option designed to deliver
fast, predictable, and consistent I/O performance, and is optimized for
I/O-intensive, transactional (OLTP) database workloads. You can
provision up to 3TB of storage and 30,000 IOPS per database instance.

DB Notifications via Email and SMS
- You can subscribe to receive e-mail or SMS notifications when
database events such as failover, low storage, replication state change,
and so forth occur.

As I noted above, these innovations are powering some of the world’s
most popular applications that are used by millions of users.

General Availability & The New RDS SLA
We’re marking Amazon RDS as “generally available” after adding the highly requested features described above.

With strong customer adoption across multiple market segments,
numerous new features, and plenty of operational experience behind us,
we now have a Service Level Agreement (SLA) for Amazon RDS, with 99.95%
availability for Multi-AZ database instances on a monthly basis. This
SLA is available for Amazon RDS for MySQL and Oracle database engines
because both of those engines support Multi-AZ deployment. If
availability falls below 99.95% for a Multi-AZ database instance (which
is a maximum of 22 minutes of unavailability for each database instance
per month), you will be eligible to receive service credits. The new
Amazon RDS SLA is designed to give you additional confidence to run the
most demanding and mission critical workloads dependably in the AWS
cloud. You can learn more about the SLA for Amazon RDS at http://aws.amazon.com/rds-sla.

Amazon Redshift
is now available in the Asia Pacific (Tokyo) Region. AWS customers
running in this Region can now create a fast, fully managed,
petabyte-scale data warehouse today at a price point that is a tenth
that of most traditional data warehousing solutions.

Customers Love Amazon Redshift
We launched Redshift in February of 2013. Since that time, we’ve signed up
well over 1000 customers, and are currently adding 100 or so more per
week. In aggregate, we’ve enabled these customers to save millions of
dollars in capital expenditures (CAPEX).

The blog post AWS Redshift, How Amazon Changed the Game,
tells the story. Its author, Timon, tested 4 data sets (2 billion to 57 billion
rows) on six Redshift cluster configurations (1 to 32 of the dw.hs1.xlarge nodes and either 2 or 4 of the dw.hs1.8xlarge
nodes). His data sets were stored in Amazon S3; he found that the
overall load time scaled linearly with cost and data size. He also
measured a variety of queries that were representative of those he runs
to generate reports, and found the performance to be impressive as
well, with near-linear scaling even as the data set grew beyond 50
billion rows, with very consistent query times. Timon also tested
Redshift’s snapshotting and resizing features, and wrapped up by noting
that:

I caused a small riot among the analysts when I mentioned off-hand how
quickly I could run these queries on substantial data sets. On a service
that I launched and loaded overnight with about three days of prior
fiddling/self-training.

We’re excited to see how customers
are benefiting from Amazon Redshift’s price, performance, and ease of use, so
please keep the blog posts coming.

Get Started Today
Amazon
Redshift is now available in the US East (Northern Virginia), US West
(Oregon), EU West (Ireland), and Asia Pacific (Tokyo) Regions, with
additional Regions coming soon.

I’m happy to announce that the following EC2 instance types are
now available in the Asia Pacific (Tokyo) Region and that you can start
using them today:

Cluster Compute Eight Extra Large (cc2.8xlarge) - With
60.5 GiB of RAM, a pair of Intel Xeon E5-2670 processors, and 3.3 TB of
instance storage, the very high CPU performance and cluster networking
features of this instance type make it a great fit for applications such
as analytics, encoding, rendering, and High Performance Computing
(HPC).

High Memory Cluster Eight Extra Large Instance (cr1.8xlarge)
- Featuring 244 GiB of RAM, dual Xeon E5-2670’s, and 240 GB of SSD
instance storage, you can run memory-intensive analytics, databases, HPC
workloads, and other memory-bound applications on these instances.

High I/O Quadruple Extra Large (hi1.4xlarge) - 60.5
GiB of RAM and 2 TB of SSD storage, along with 16 virtual cores make
this instance a perfect host for transactional systems and NoSQL
databases like Cassandra and MongoDB that can benefit from very high
random I/O performance.

High Storage Eight Extra Large (hs1.8xlarge) - 117
GiB of RAM, 48 TB of instance storage (24 drives, each with 2 TB), and
16 virtual cores provide high sequential I/O performance across very
large data sets. You can build a data warehouse, run Hadoop jobs, and
host cluster file systems on these instances.

All of the instances listed above also include 10 Gigabit Ethernet
networking and feature very high network I/O performance. You can learn
more about them on the EC2 Instance Types page. You may also find the EC2 Instance Types table handy.

Route 53 launched DNS Failover on February 11, 2013.
With DNS Failover, Route 53 can detect an outage of your website and
redirect your end users to alternate or backup locations that you
specify. Route 53 DNS Failover relies on health checks—regularly making
Internet requests to your application’s endpoints from multiple
locations around the world—to determine whether each endpoint of your
application is up or down.

Until today, it was difficult to use DNS Failover if your application
was running behind ELB to balance your incoming traffic across EC2
instances, because there was no way to configure Route 53 health checks
against an ELB endpoint—to create a health check, you need to specify an
IP address to check, and ELBs don’t have fixed IP addresses.

What’s different about DNS Failover for ELB?
Determining the health of an ELB endpoint is more complex than health
checking a single IP address. For example, what if your application is
running fine on EC2, but the load balancer itself isn’t reachable? Or if
your load balancer and your EC2 instances are working correctly, but a
bug in your code causes your application to crash? Or how about if the
EC2 instances in one Availability Zone of a multi-AZ ELB are
experiencing problems?

Route 53 DNS Failover handles all of these failure scenarios by
integrating with ELB behind the scenes. Once enabled, Route 53
automatically configures and manages health checks for individual ELB
nodes. Route 53 also takes advantage of the EC2 instance health checking
that ELB performs (information on configuring your ELB health checks is
available here).
By combining the results of health checks of your EC2 instances and
your ELBs, Route 53 DNS Failover is able to evaluate the health of the
load balancer and the health of the application running on the EC2
instances behind it. In other words, if any part of the stack goes down,
Route 53 detects the failure and routes traffic away from the failed
endpoint.

A nice bonus is that, because you don’t create any health checks of your
own, DNS Failover for ELB endpoints is available at no additional
charge—you aren’t charged for any health checks.

When setting up DNS Failover for an ELB Endpoint, you simply set
“Evaluate Target Health” to true—you don’t create a health check of your
own for this endpoint:
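
Expressed through the API, the same setting might look like the boto3 sketch below (an assumption; the zone IDs and names are hypothetical). The ELB's DNS name and its canonical hosted zone ID come from the load balancer's description, and a matching SECONDARY record pointing at your backup endpoint would complete the failover pair:

    import boto3

    route53 = boto3.client('route53')

    route53.change_resource_record_sets(
        HostedZoneId='Z1EXAMPLE',                    # your domain's hosted zone
        ChangeBatch={
            'Changes': [{
                'Action': 'UPSERT',
                'ResourceRecordSet': {
                    'Name': 'www.example.com.',
                    'Type': 'A',
                    'Failover': 'PRIMARY',
                    'SetIdentifier': 'primary-elb',
                    'AliasTarget': {
                        'HostedZoneId': 'Z3EXAMPLEELB',   # the ELB's own zone ID
                        'DNSName': 'my-elb-123456.us-east-1.elb.amazonaws.com',
                        'EvaluateTargetHealth': True      # the key setting
                    }
                }
            }]
        })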

Scenarios Possible with DNS Failover
Using Route 53 DNS Failover, you can run your primary application
simultaneously in multiple AWS regions around the world and fail over
across regions. Your end users will be routed to the closest (by
latency), healthy region for your application. Route 53 automatically
removes from service any region where your application is unavailable—it
will pull an endpoint out of service if there’s a region-wide connectivity
or operational issue, if your application goes down in that region, or
if your ELB or EC2 instances go down in that region.

You can also leverage a simple backup site hosted on Amazon S3, with
Route 53 directing users to this backup site in the event that your
application becomes unavailable. In February we published a tutorial
on how to create a simple backup website. Now you can take advantage of
this simple backup scenario if your primary website is running behind
an ELB—just skip the part of the tutorial about creating a health check
for your primary site, and instead create an Alias record pointing to
your ELB and check the “evaluate target health” option on the Alias
record (full documentation on using DNS Failover with ELB is available
in the Route 53 Developer Guide).

Jeff Wierer, Principal Product Manager on the AWS Identity and Access
Management (IAM) team, sent along a guest post to introduce a powerful new federation feature.

– Jeff;

In a previous blog post we discussed how AWS Identity and Access Management (IAM) supports identity
federation by allowing developers to grant temporary security credentials
to users managed outside of AWS. Today we’re expanding this capability with
support for web identity federation. Web identity federation simplifies the
development of cloud-backed applications that use public identity providers
such as Facebook, Google, or the newly launched Login with Amazon service for authentication.
For those of you not yet familiar with Login with Amazon, it’s a new service you can
use to securely connect your websites and apps with millions of Amazon.com customers.
If you’re interested in learning more about Login with Amazon, please visit their
launch page.

Web identity federation enables
your users to sign in to your app using their Amazon.com, Facebook, or Google
identity and authorize them to seamlessly access AWS
resources that are managed under your AWS account. If you are building a
mobile or a client-based application, you can now integrate these three popular identity providers and
authorize users without any server-side code and without distributing long-term
credentials with the app. To support this scenario, this release introduces a
new AWS Security Token Service (STS) API, AssumeRoleWithWebIdentity. This API lets you request
temporary security credentials for your customers who have been authenticated
by Amazon.com, Facebook, or Google. Your app can then use the temporary
security credentials to access AWS resources such as Amazon Simple Storage
Service (S3) objects, DynamoDB tables, or Amazon Simple Queue Service queues.

Let’s walk through an example use case.

Imagine you’re developing a mobile app that uses the new Login with Amazon
service for authentication, and part of the app’s functionality allows end
users to upload an image file as their personal avatar. Behind the scenes, you
want to store those images as objects in one of your S3 buckets. To enable this,
you need to configure a role that is used to delegate access to users of your app. Roles are configured in two parts:

A trust policy that specifies a trusted entity (principal)—that is,
who can assume the role. In this case, the trusted entity is any
authenticated Amazon.com user.

An access policy with permissions that specify what the user can do.

Setting up a trust policy for Login with Amazon
First we’ll create the trust policy. For this example, let’s assume you’ve
registered your app with Login with Amazon and you’ve been assigned an
application identifier (app_id) of amzn1.app.123456SAMPLE. This application ID uniquely identifies your
app with Login with Amazon. (If you register the app with Facebook or Google, you’ll
get a different app ID from each of these providers.) To delegate access to Amazon.com
users, the trust policy for your role needs to include the new federated
principal type www.amazon.com and
the app_id
value. The following trust policy allows any Amazon.com user who has
authenticated using your app to call the sts:AssumeRoleWithWebIdentity
API and assume the role.
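
The exact policy document appears in the original post; the sketch below is a reconstruction based on the description above, wrapped in a boto3 create_role call (the role name is hypothetical):

    import json
    import boto3

    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Federated": "www.amazon.com"},
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {"www.amazon.com:app_id": "amzn1.app.123456SAMPLE"}
            }
        }]
    }

    iam = boto3.client('iam')
    iam.create_role(RoleName='WebIdentityAvatarUploads',
                    AssumeRolePolicyDocument=json.dumps(trust_policy))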

Notice that we have introduced a new type of key that can be used in policy conditions—we
now have support for identity-provider–specific keys for Amazon.com, Facebook,
and Google. In this case, the www.amazon.com:app_id key ensures that
the request is coming from your app by doing a string comparison against your app’s
registered ID. (To read more about the new policy keys that support web
identity federation, see Creating
Temporary Security Credentials for Mobile Apps Using Public Identity Providers
in the AWS STS guide.)

Next, let’s create an access policy for the role.

Sample Policy Allowing S3 Access

The following access policy grants end users of your app limited access to
an S3 bucket named myBucket in your AWS account. Specifically, it grants
every user read-only access (Get*) to a shared folder whose prefix matches
the app_id
value, and it grants read, write, and delete access (GetObject, PutObject,
and DeleteObject)
to a folder whose prefix matches the user ID provided by Amazon. Notice that
the new identity provider keys described in the previous section, such as ${www.amazon.com:app_id},
can also be used as AWS
access control policy variables. In this case, we’re also using ${www.amazon.com:user_id},
which contains the user’s unique Amazon.com ID. (Remember, when using a
variable you must always set the Version element to 2012-10-17.)
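
Again, the authoritative policy is in the original post; the following is a reconstruction based on the description above, attached to the role with boto3 (the bucket, role, and policy names are hypothetical):

    import json
    import boto3

    access_policy = {
        "Version": "2012-10-17",   # required when policy variables are used
        "Statement": [
            {   # Read-only access to the shared, per-app folder.
                "Effect": "Allow",
                "Action": "s3:Get*",
                "Resource": "arn:aws:s3:::myBucket/${www.amazon.com:app_id}/*"
            },
            {   # Read, write, and delete access to the user's own folder.
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": "arn:aws:s3:::myBucket/${www.amazon.com:user_id}/*"
            }
        ]
    }

    iam = boto3.client('iam')
    iam.put_role_policy(RoleName='WebIdentityAvatarUploads',
                        PolicyName='avatar-s3-access',
                        PolicyDocument=json.dumps(access_policy))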

Putting it All Together
Now that the role has been created, let’s take a look at how the role will
be used to provide federated access to your end users. The following diagram
illustrates the steps that occur to authenticate an Amazon.com user and
authorize the user to access objects in an S3 bucket.

First, the user needs to be authenticated. Using the Login with
Amazon SDK, the app authenticates the user and receives a token from
Amazon.

Next, the user needs to be authorized to access resources in your AWS account. The app makes an unsigned AssumeRoleWithWebIdentity
request to STS, passing the token from the previous step. STS verifies
the authenticity of the token; if the token is valid, STS returns a set
of temporary security credentials to the app. By default, the
credentials can be used for one hour.

Finally, the app uses the temporary security credentials to make
signed requests for resources in your S3 bucket. Because the role’s
access policy used variables that reference the app ID and the user ID,
the temporary security credentials are scoped to that end user and will
prevent him or her from accessing objects owned by other users.
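
Here is a sketch of steps 2 and 3 using boto3 (an assumption; the post is SDK-agnostic, and the role ARN, bucket, token, and user ID below are hypothetical placeholders). Note that the call to STS is unsigned; no AWS credentials are needed to make it:

    import boto3

    token_from_login_with_amazon = '...token returned by the Login with Amazon SDK...'
    amazon_user_id = 'amzn1.account.EXAMPLEUSERID'

    sts = boto3.client('sts')

    resp = sts.assume_role_with_web_identity(
        RoleArn='arn:aws:iam::123456789012:role/WebIdentityAvatarUploads',
        RoleSessionName='avatar-upload',
        WebIdentityToken=token_from_login_with_amazon,   # from step 1
        ProviderId='www.amazon.com',
        DurationSeconds=3600)

    creds = resp['Credentials']

    # Step 3: use the temporary credentials to upload the avatar to S3.
    s3 = boto3.client('s3',
                      aws_access_key_id=creds['AccessKeyId'],
                      aws_secret_access_key=creds['SecretAccessKey'],
                      aws_session_token=creds['SessionToken'])

    s3.upload_file('avatar.png', 'myBucket', f'{amazon_user_id}/avatar.png')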

This is just one example of how web identity federation can be used. A
similar flow can also be enabled for Facebook or Google identities by integrating
their SDKs with your app and simply creating an additional role. For more information
about web identity federation and how to create roles that support other identity
providers, please see Creating
Temporary Security Credentials for Mobile Apps Using Public Identity Providers
in the AWS STS guide.

When you travel as much as I do, you tend to think about getting
sick or injured while away from home. I don’t worry too much about the
quality of the care that’s available in the developed world, but I do
worry about being able to explain my symptoms and to understand the
diagnosis and the preferred treatment.

The Problem
Ryan Frankel
of VerbalizeIt found himself in this very predicament while traveling
in China a few years ago. He was violently ill and was not able to
communicate with the pharmacist to understand what he had been
prescribed. After he regained his health, he realized that there was an
opportunity to use technology to connect anyone in need of translation
services with a worldwide network of multilingual people. He
investigated what was available and saw that there was a market gap. On
the one side, machine translations were quick and often free, but not
especially accurate. On the other side, dedicated call centers provided
accurate results at a high cost.

The Solution
Ryan decided to fill in the gap and created VerbalizeIt,
a human-powered translation service, with the goal of providing call
center quality at prices more often associated with machine translation.
He wanted to create a Software as a Service (SaaS) platform and a
market that would allow people to monetize their translation skills.

He expanded his team and they took part in TechStars.
Things started taking off; they rebuilt their platform on top of AWS to
increase scalability, and then applied for and were selected to appear on
the season finale of ABC’s hit show Shark Tank. The episode was shown on May 17th, during the all-important “sweeps week,” to an estimated 7 to 10 million viewers.

Ready to Scale
The
team at VerbalizeIt knew that they had a one-time opportunity to make
their service known to millions of TV viewers. The web site had to
scale, not just once, but in each time zone and then again for re-runs. A
team of AWS Solutions Architects reviewed the design and the
implementation to make sure that it made efficient use of EC2, S3, and CloudFront.

The VerbalizeIt team also worked to scale the human side, making sure that
they had access to enough qualified interpreters to efficiently handle a
large increase in demand.

I asked Ryan and his team at VerbalizeIt to capture some metrics and
some graphs to give a behind the scenes look at what happens when your
web site is featured on prime time television. Here’s what they saw:

VerbalizeIt experienced nearly a 7x increase in unique visits, a 7x
increase in page visits and a more than 6x increase in page views.

VerbalizeIt for Developers
During my conversation with Ryan I learned that VerbalizeIt is also a platform and that it has APIs for developers! The API is simple and effective. After you get an API key,
you can call APIs for voice and text translation. The voice translation
API facilitates a phone call to a suitable translator. You start by
calling the languages function. It returns a list of language names and the associated codes. Then you call the live_translation_requests
function with the desired pair of languages and a phone number. The
function will return another phone number, which will be expecting a
call from the one passed to the function.
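
To make the flow concrete, here is a rough Python sketch of those two calls. The base URL, parameter names, and response fields below are assumptions made for illustration only, not VerbalizeIt's documented API; consult their developer documentation for the real details, and note that the API key and phone number are hypothetical:

    import requests

    API_KEY = 'your-api-key'
    BASE = 'https://api.verbalizeit.example/v1'   # hypothetical endpoint

    # Step 1: fetch the list of supported languages and their codes.
    languages = requests.get(f'{BASE}/languages',
                             params={'api_key': API_KEY}).json()

    # Step 2: request a live translation between a pair of languages,
    # supplying the phone number that will place the call.
    resp = requests.post(f'{BASE}/live_translation_requests',
                         data={'api_key': API_KEY,
                               'source_language': 'en',
                               'target_language': 'zh',
                               'phone_number': '+15555550123'}).json()

    # The response is expected to contain the number that will be
    # waiting for a call from the phone number given above.
    print(resp)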