As of April 21, 2014, you will no longer be able to retrieve the existing secret access key(s) for your AWS (root) account.

If you have become dependent on this feature, you should download your key from the legacy security credentials page now and save it in a safe and secure location. Better yet, follow our best practices and create an IAM user with access keys. The legacy security credentials page will warn you about the upcoming change:

Shon Shah, Senior Product Manager on the AWS Identity and Access Management (IAM) team, sent along a guest post announcing new IAM functionality that enables you to enforce multi-factor authentication (MFA) when providing programmatic access across AWS accounts.

-- Jeff;

IAM roles enable you to grant an IAM user in one AWS account access to resources in a different account (i.e., cross-account access). Roles provide a secure and controllable mechanism as you don’t have to share AWS security credentials (secret access keys) and you can revoke the access at any time.

MFA is a security best practice that adds an extra layer of protection to your AWS account. It requires users to present two independent credentials: what the user knows (password or secret access key) and what the user has (MFA device). IAM already supports adding MFA protection when you grant access to users within a single AWS account.

Today, we are announcing the ability to add MFA protection for access across AWS accounts.

Let’s take a closer look at how you might use this. In our earlier blog post, we looked at a scenario where your company had two AWS accounts: a main account where you created most of your users, and a research account that stored data from several research projects. We showed how you can create a role in the research account that can be assumed by a user Joe in the main account. This enabled Joe to access Amazon DynamoDB tables in the research account even though he was not a user in the research account. But what if the data is particularly sensitive and the admin of the research account wants to add an extra layer of protection? The admin can accomplish this by using the new MFA protection, which will require Joe to use MFA before assuming the role. For the admin, it is as simple as selecting the Require MFA checkbox when creating a role in the AWS Management Console, as shown in the picture below. This ensures that only MFA-authenticated users can assume the role.

This example shows how to add MFA protection for access between AWS accounts owned by the same company. But the feature works the same for access between accounts owned by different companies.

Another benefit of the new feature is that it enables you to add MFA protection for IAM actions like creating users, changing passwords, and modifying password policies. Previously, it was possible to add MFA protection for AWS actions other than IAM actions; now you can add it for IAM actions too. Imagine you hired a security consultant for a month to perform penetration testing of your website. You could create an IAM user for him in your AWS account, which enables him to perform actions like launching Amazon EC2 instances and uploading logs to an Amazon S3 bucket. Before he leaves, imagine you would like him to check the IAM configuration of your AWS account. However, you would like to ensure an additional level of protection due to the privileged nature of the IAM actions. You could accomplish this by creating a role that permits IAM actions but requires using MFA before assuming the role. This way you get an extra layer of protection for privileged actions in your AWS account. You can use such roles to MFA-protect IAM actions between accounts, not just within a single account.
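Under the covers, selecting the Require MFA checkbox adds a condition on the well-known aws:MultiFactorAuthPresent key to the role's trust policy. A sketch of what such a trust policy might look like (the account ID 123456789012 is a placeholder for the trusted account):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:root" },
      "Action": "sts:AssumeRole",
      "Condition": { "Bool": { "aws:MultiFactorAuthPresent": "true" } }
    }
  ]
}
```

With this condition in place, an AssumeRole request that is not backed by an MFA-authenticated session is denied, regardless of the caller's other permissions.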

My colleague Chris Barclay reports on an important new feature for AWS OpsWorks!

-- Jeff;

I am pleased to announce that AWS OpsWorks now supports resource-level permissions. AWS OpsWorks is an application management service that lets you provision resources, deploy and update software, automate common operational tasks, and monitor the state of your environment. You can optionally use the popular Chef automation platform to extend OpsWorks using your own custom recipes.

With resource-level permissions you can now:

Grant users access to specific stacks, making management of multi-user environments easier. For example, you can give a user access to the staging and production stacks but not the secret stack.

Set user-specific permissions for actions on each stack, allowing you to decide who can deploy new application versions or create new resources on a per-stack basis for example.

Delegate management of each OpsWorks stack to a specific user or set of users.

Control user-level SSH access to Amazon EC2 instances, allowing you to instantly grant or remove access to instances for individual users.

A simple user interface in OpsWorks lets you select a policy for each user on each stack depending on the level of control needed:

Deny blocks the user’s access to this stack.

IAM Policies Only bases a user’s permissions exclusively on policies attached to the user in IAM.

Show combines the user’s IAM policies with permissions that provide read-only access to the stack’s resources.

Deploy combines the user’s IAM policies with Show permissions and permissions that let the user deploy new application versions.

Manage combines the user’s IAM policies with permissions that provide full control of this stack.

These policies make it easy to quickly configure a user with the right permissions for the tasks they need to accomplish. You can also create a custom IAM policy to fine-tune their permissions.

Let’s see this in action. In this example we have two users defined in the IAM console: Chris and Mary.

Now go to the OpsWorks console, open the Users view, and import Chris and Mary.

You can then go to a stack and open the Permissions view to designate OpsWorks permissions for each user. Give Mary Manage permissions for the MyWebService stack. Chris should already be able to access this stack because you attached the AWS OpsWorks Full Access policy to his user in IAM.

To remove Chris’ access to this stack, simply select the Deny radio button next to Chris. Your Permissions view will now look like this:

Chris can no longer view or access the stack because the explicit Deny overrides his user’s AWS OpsWorks Full Access policy. Mary can still access the stack and she can create, manage and delete resources.

But assume that this is a production stack and you don’t want Mary to be able to stop instances. To make sure she can’t do that, you can create a custom policy in the IAM console.

Go to the IAM console, select the user Mary and then click Attach User Policy. Add this custom policy:
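Assembled from the pieces described below, the policy reads as follows (your stack's ARN will differ):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "opsworks:Stop*",
      "Resource": "arn:aws:opsworks:*:*:stack/2860c9c8-9d12-4cc1-aed1-xxxxxxxx"
    }
  ]
}
```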

What does this policy mean? Here are the pieces:

"Action": "opsworks:Stop*" means this applies to any OpsWorks API action that begins with Stop.

"Effect": "Deny" tells OpsWorks to deny that action request.

"Resource": "arn:aws:opsworks:*:*:stack/2860c9c8-9d12-4cc1-aed1-xxxxxxxx" specifies that this statement applies only to resources in the specified stack. If Mary were using other stacks, this policy would not apply, and she could perform stop actions if she had Manage access.

After applying the policy, you can return to the OpsWorks console and view a running instance as user Mary.

You can see that Mary can no longer stop this instance.

Behind the scenes, a user’s OpsWorks and IAM permissions are merged and then evaluated to determine whether a request is allowed or denied. In Mary’s case, the Manage user policy that you applied in OpsWorks allows her to stop instances in that stack. However, the explicit Deny on Stop* actions in her IAM user policy overrides the Allow in her OpsWorks policy. To learn more about policy evaluation, see the IAM documentation.

Once the user’s action has been evaluated, OpsWorks carries out the request. The user doesn’t actually need permissions to use underlying services such as Amazon EC2 – you give those permissions to the OpsWorks service role. This gives you control over how resources are administered without requiring you to manage user permissions to each of the underlying services. For example, a policy in IAM might deny Mary the ability to create instances within EC2 (either explicitly, or by simply not giving her explicit permission to do so). But Mary’s OpsWorks Manage policy allows her to create instances within an OpsWorks stack. Since you can define the region and VPC that each stack uses, this can help you comply with organizational rules on where instances can be launched.

Resource-level permissions give you control and flexibility for how to manage your applications. Try it and let us know what you think! For more information, please see the OpsWorks documentation.

Derek Lyon sent me a really nice guest post to introduce an important new EC2 feature!

-- Jeff;

I am happy to announce that Amazon EC2 now supports resource-level permissions for the RunInstances API. This release enables you to set fine-grained controls over the AMIs, Snapshots, Subnets, and other resources that can be used when creating instances and the types of instances and volumes that users can create when using the RunInstances API.

This release is part of a larger series of releases enabling resource-level permissions for Amazon EC2, so let’s start by taking a step back and looking at some of the features that we already support.

EC2 Resource-Level Permissions So Far

In July, we announced the availability of Resource-level Permissions for Amazon EC2. Using the initial set of APIs along with resource-level permissions, you could control which users were allowed to do things like start, stop, reboot, and terminate specific instances, or attach, detach, or delete specific volumes.

Since then, we have continued to add support for additional APIs, bringing the total up to 19 EC2 APIs that currently support resource-level permissions, prior to today's release. The additional functionality that we have added allows you to control things like which users can modify or delete specific Security Groups, Route Tables, Network ACLs, Internet Gateways, Customer Gateways, or DHCP Options Sets.

We also provided the ability to set permissions based on the tags associated with resources. This in turn enabled you to construct policies that would, for example, allow a user the ability to modify resources with the tag “environment=development” on them, but not resources with the tag “environment=production” on them.
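As an illustrative sketch of that pattern (the tag name, actions, region, and account ID here are all examples, not prescriptions), a policy that allows stopping and starting only development-tagged instances might look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:StartInstances", "ec2:StopInstances"],
      "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/*",
      "Condition": {
        "StringEquals": { "ec2:ResourceTag/environment": "development" }
      }
    }
  ]
}
```

Because no statement allows these actions on instances tagged "environment=production", those requests fall through to the default deny.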

We have also provided a series of debugging tools, which enable you to test policies by making “DryRun” API calls and to view additional information about authorization errors using a new STS API, DecodeAuthorizationMessage.

Resource-level Permissions for RunInstances

Using EC2 Resource-level Permissions for RunInstances, you now have the ability to control both which resources can be referenced and used by a call to RunInstances and which resources can be created as part of a call to RunInstances. This enables you to control the use of the following types of items:

The AMI used to run the instance

The Subnet and VPC where the instance will be located

The Availability Zone and Region where the instance and other resources will be created

Any Snapshots used to create additional volumes

The types of instances that can be created

The types and sizes of any EBS volumes created

You can now use resource-level permissions to limit which AMIs a user is permitted to use when running instances. In most cases, you will want to start by tagging the AMIs that you want to whitelist for your users with an appropriate tag, such as "whitelist=true". (As part of the whitelisting process, you will also want to limit which users have permission to use the tagging APIs; otherwise, a user could add or remove this tag.) Next, you can construct an IAM policy for the user that only allows them to use an AMI for running instances if it has your whitelist tag on it. This policy might look like this:
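One possible shape for such a policy is sketched below; the region, account ID, and the exact list of resource types your launches touch are assumptions you would adjust. Note that image ARNs carry no account ID, and that RunInstances needs to be allowed on every resource type the launch uses, not just the AMI:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:us-east-1::image/ami-*",
      "Condition": {
        "StringEquals": { "ec2:ResourceTag/whitelist": "true" }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:us-east-1:123456789012:instance/*",
        "arn:aws:ec2:us-east-1:123456789012:volume/*",
        "arn:aws:ec2:us-east-1:123456789012:security-group/*",
        "arn:aws:ec2:us-east-1:123456789012:key-pair/*",
        "arn:aws:ec2:us-east-1:123456789012:network-interface/*",
        "arn:aws:ec2:us-east-1:123456789012:subnet/*"
      ]
    }
  ]
}
```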

If you want to set truly fine-grained permissions, you can construct policies that combine these elements. This enables you to set fine-grained policies that do things like allow a user to run only m3.xlarge instances in a certain Subnet (e.g., subnet-1a2b3c4d), using a particular Image (e.g., ami-5a6b7c8d) and a certain Security Group (e.g., sg-11a22b33). The applications for these types of policies are far-reaching and we are excited to see what you do with them.
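A sketch of such a combined policy, using the example IDs above (the region, account ID, and the extra resource types a launch needs are assumptions):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:us-east-1::image/ami-5a6b7c8d",
        "arn:aws:ec2:us-east-1:123456789012:subnet/subnet-1a2b3c4d",
        "arn:aws:ec2:us-east-1:123456789012:security-group/sg-11a22b33",
        "arn:aws:ec2:us-east-1:123456789012:network-interface/*",
        "arn:aws:ec2:us-east-1:123456789012:volume/*",
        "arn:aws:ec2:us-east-1:123456789012:key-pair/*",
        "arn:aws:ec2:us-east-1:123456789012:instance/*"
      ],
      "Condition": {
        "StringEquals": { "ec2:InstanceType": "m3.xlarge" }
      }
    }
  ]
}
```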

Because permissions are applied at the API level, any users that the IAM policy is applied to will be restricted by the policy you set, including users who run instances using the AWS Management Console, the AWS CLI, or AWS SDKs.

You can find a complete list of the resource types that you can write policies for in the Permissions section of the EC2 API Reference. You can also find a series of sample policies and use cases in the IAM Policies section of the EC2 User Guide.

Today, we’re excited to announce we’ve expanded our identity federation to include support for SAML 2.0, an open industry standard used by many identity providers. This new feature enables federated SSO, empowering users to sign into the AWS Management Console or make programmatic calls to AWS APIs, by using assertions from a SAML-compliant identity provider (IdP).

Identity federation makes it easier for you to manage your users by enabling you to maintain your identities within your existing directory. SAML-based federation makes it simple for you to configure federation with AWS because you can use any IdP software that supports SAML (e.g., Windows Active Directory Federation Services or Shibboleth). Using federation, if a user leaves your company, you can simply delete the user's corporate identity in one place, which also revokes access to AWS. Your users also benefit because they only need to remember one username and password. Have I got your attention yet?

The Federated SSO Experience

Let’s walk through a use case that demonstrates how federated SSO works. Imagine you have users that are members of a Windows Active Directory (AD) domain. You want these users to be able to sign into the AWS Management Console, but you don’t want to create IAM users for each of them. Instead, you want them to browse to an internal portal, be authenticated by AD, and then be redirected via SSO to the AWS Management Console. (If you’re familiar with SAML terminology, you’ll recognize this as the IdP-initiated WebSSO use case.) In this use case, your company is the IdP which authenticates users and passes assertions to AWS, the service provider, that accepts those assertions and provides SSO to the console.

To support this use case we’ve introduced a few new things: SAML providers, the AssumeRoleWithSAML API, and an additional sign-in endpoint (https://signin.aws.amazon.com/SAML). A SAML provider is a new IAM entity that defines a principal for one or more organizations that you would like to establish trust with. You create a SAML provider by uploading a standard SAML metadata document using the AWS Management Console, AWS CLI, or the IAM API. The new API, AssumeRoleWithSAML, allows you to request temporary security credentials from the Security Token Service (STS) by assuming an IAM role. Unlike the existing AssumeRole API, you don’t need to sign the request using AWS credentials (i.e., Access Key ID and Secret Access Key). Instead, you simply pass the SAML assertion along with the AssumeRoleWithSAML request. Finally, the new sign-in endpoint enables users to log into the AWS Management Console using a SAML assertion.

Let’s explore how users go from signing into their Windows laptops to logging into the AWS Management Console.

A user in your organization browses to an internal portal in your network. The portal also functions as the IdP which handles the SAML trust between your organization and AWS.

The IdP authenticates the user’s identity against AD.

The client receives a SAML assertion (in the form of an authentication response) from the IdP.

The client posts the SAML assertion to the new AWS sign-in endpoint. Behind the scenes, sign-in uses the AssumeRoleWithSAML API to request temporary security credentials and construct a sign-in URL.

The user's browser receives the sign-in URL and is redirected to the AWS Management Console.

From the user's perspective, the process happens transparently—the user starts at your organization's internal portal and ends up at the AWS Management Console, without ever having to supply any AWS credentials.

So, what do you need to set this up? Here are basic steps:

Generate a metadata document using your IdP software.

Create a SAML provider using that metadata document.

Create one or more IAM roles that establish trust with the SAML provider and define permissions for the federated user.
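The trust policy for a role created in the last step might look like the following sketch. The provider name MyProvider and the account ID 123456789012 are placeholders, and the SAML:aud value shown here follows the lowercase form used in the IAM documentation:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:saml-provider/MyProvider"
      },
      "Action": "sts:AssumeRoleWithSAML",
      "Condition": {
        "StringEquals": { "SAML:aud": "https://signin.aws.amazon.com/saml" }
      }
    }
  ]
}
```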

AWS Identity and Access Management (IAM for short) lets you control access to AWS services and resources using access control policies. IAM includes a large collection of prebuilt policies, and you can also create your own.

IAM policies are comprised of policy statements. Each statement either allows or denies access to some AWS services (at the level of individual API functions) or resources. Policies can be attached to users, groups, or roles.

The following sample policy allows access to all EC2 APIs and resources:
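In the standard EC2 full-access form, such a policy reads:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*"
    }
  ]
}
```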

New Policy Simulator

The policy language is rich and expressive and we want to make it even easier for you to use. Until now you had to apply policies in production in order to make sure that they behave as expected.

Today we are introducing the IAM Policy Simulator tool. Using this tool you can now test the effects of your IAM policies before you commit them to production. You simply choose the policy that you want to evaluate, select from a list of AWS options, and click the Run Simulation button.

Let's say that you are the AWS account owner and you want to make sure that I (represented by IAM user jeff) have access to all of the EC2 APIs. You select my name, the service, and the functions that I need to be able to access (you can also use the Select All button):

The policy will be evaluated when you push the Run Simulation button and the simulator will display the results. It looks like I don't have access to the EC2 APIs (this is because IAM users have no permissions unless they are explicitly granted):

I need to have access, so you visit the IAM tab of the AWS Management Console and attach the Amazon EC2 Full Access Policy to user jeff:

Then you return to the simulator and run the simulation again. This time I have access:

This is just a taste of what you can do with the IAM Policy Simulator. You can choose to exclude policies from the simulation so that you can see what happens if it is removed. You can simulate access to specific resources, and you can create and test newly generated policies within the simulator.

My colleague Chetan Dandekar brings word of a powerful enhancement to AWS CloudFormation that will make it an even better fit for large-scale corporate deployments.

-- Jeff;

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of AWS resources. Today, we added support for the CloudFormation APIs to be called using temporary security credentials provided by the AWS Security Token Service.

This enables a number of scenarios such as federated users using CloudFormation and authorizing Amazon EC2 instances with IAM roles to call CloudFormation. Before this launch, calling CloudFormation required use of an IAM user or AWS account credentials.

AWS supports federated user access to AWS service APIs and resources. Federated users are managed in an external directory and are granted temporary access to AWS services. You now have the option of authorizing federated users to call AWS CloudFormation APIs, as an alternative to creating IAM users to use CloudFormation. A federated user can also sign in and manage CloudFormation stacks from the AWS Management Console (if interested, here is a sample proxy server that demonstrates setting this up).

Consider an example where you have a 100-person IT department. The department would likely have specialists such as network architects, database admins, and application developers. Since CloudFormation enables you to model and provision infrastructure as well as applications, many of those specialists would need access to CloudFormation. Now, the IT department does not have to create an IAM user for each of those employees in order to access CloudFormation. You can choose to authorize existing federated users to use CloudFormation. Also, the IT department can fine-tune access, for instance, by authorizing the database admins to call CloudFormation and Amazon RDS, while authorizing the application developers to call CloudFormation and EC2 or AWS Elastic Beanstalk. Furthermore, when members join or leave the IT department, you do not need to add or remove corresponding IAM users. The following diagram shows the flow of a federated user calling CloudFormation:

IAM roles enable easy and secure access to AWS APIs from EC2 instances, without sharing long-term security credentials (i.e., access keys). CloudFormation’s Describe* APIs can already be called using temporary security credentials generated by assuming an IAM role. We now support the complete set of CloudFormation APIs.

Consider an example where you have a regulatory requirement to perform all AWS resource provisioning and management operations from within an Amazon VPC. One approach to comply would be to provision an EC2 instance inside a VPC, which in turn calls the CloudFormation service to provision and manage infrastructure and applications using your CloudFormation templates. You can now use an IAM role to define permissions that allow calling CloudFormation, and delegate those permissions to the EC2 instance. IAM roles use temporary security credentials and take care of rotating credentials on the instance for you. Here is a visualization of this scenario:

To learn more about AWS CloudFormation, visit the CloudFormation detail page, read the documentation, or watch this introductory video. We also have a large collection of sample templates that make it easy to get started with CloudFormation within minutes.

One of the most powerful features of AWS Identity and Access Management (IAM) is its ability to issue temporary security credentials and grant controlled access to people in a network without having to define individual identities for each user (i.e., identity federation). This enables customers to extend their existing authentication systems and allow users to Single Sign-On (SSO) to the AWS Management Console.

Last November, we released sample code that allows customers to create a federation proxy server that uses IAM roles to create temporary security credentials, which can be used by Windows Active Directory users to single sign-on (SSO) to the AWS Management Console. Thousands of universities and government institutions currently use Shibboleth as their SSO authentication system across many disparate systems. We’ve received feedback from these customers who want a sample demonstrating how to leverage existing Shibboleth systems to easily enable SSO to the AWS Management Console.

The sample code empowers system architects and admins to configure Shibboleth and IAM so users can leverage AWS services while still managing the user’s credentials in their local directory. The sample allows federated users to log into the AWS Management Console without having to create individual IAM users. This approach of federating the use of AWS is a great way to expand and extend your organization’s ability to securely access AWS resources.

Consider the following example. If a professor of a university would like to have her students use AWS for a class assignment, instead of creating an IAM user for every student, she can leverage the sample proxy to grant students access via their Shibboleth credentials. Implementing SSO with Shibboleth means that students continue to use the same set of credentials they commonly access other university systems with, while ensuring the username and password is never shared with untrusted systems.

Here’s how it works:

User A browses to the proxy URL and is prompted to login with Shibboleth credentials

Once the user’s credentials are validated, all IAM roles that match assertions are listed in a drop-down box.

The user selects the IAM role that he would like to use and then clicks “Sign in to the AWS Console”.

The proxy then retrieves the necessary information from the SAML token and then calls the AssumeRoleRequest API. Using the temporary security credentials received in the AssumeRoleResponse, the proxy server is able to construct a temporary sign-in URL which is used to redirect the user to the AWS Management Console.

Getting Started

The step-by-step instructions in the article will help you get started quickly; they walk through the process of installing the sample code, creating the federation partnership, configuring roles in AWS IAM, and deploying the sample proxy.

We would love to get your feedback on whether this sample code is useful to you or not and how we can improve the federation proxy functionality even further. We can’t wait to hear from you. Please provide your comments below.

In order to help you learn more about how this feature works and to make it easier for you to test and debug your applications and websites that make use of it, we have launched the Web Identity Federation Playground:

You can use the playground to authenticate yourself to any of the identity providers listed above, see the requests and responses, obtain a set of temporary security credentials, and make calls to the Amazon S3 API using the credentials:

The AWS Security Blog just published a step-by-step walkthrough to show you how the playground can help you to learn more about IAM and identity federation.

With AWS being put to use in an ever-widening set of use cases across organizations of all shapes and sizes, the need for additional control over the permissions granted to users and to applications has come through loud and clear. This need for control becomes especially pronounced at the enterprise level. You don't want the developers who are building cloud applications to have the right to make any changes to the cloud resources used by production systems, and you don't want the operators of one production system to have access to the cloud resources used by another.

The Story So Far

With the launch of IAM in the middle of 2010, we gave you the ability to create and apply policies that control which users within your organization were able to access AWS APIs.

Later, we gave you the ability to use policies to control access to individual DynamoDB, Elastic Beanstalk, Glacier, Route 53, SNS, SQS, S3, SimpleDB, and Storage Gateway resources.

Today we are making IAM even more powerful with the introduction of resource-level permissions for Amazon EC2 and Amazon RDS. This feature is available for the RDS MySQL, RDS Oracle, and RDS SQL Server engines.

On the EC2 side, you can now construct and use IAM policies to control access to EC2 instances, EBS volumes, images, and Elastic IP addresses. On the RDS side, you can use similar policies to control access to DB instances, Read replicas, DB Event subscriptions, DB option groups, DB parameter groups, DB security groups, DB snapshots, and subnet groups.

Let's take a closer look!

Resource-Level Permissions for EC2

You can now use IAM policies to support a number of important EC2 use cases. Here are just a few of the things that you can do:

Allow users to act on a limited set of resources within a larger, multi-user EC2 environment.

Set different permissions for "development" and "test" resources.

Control which users can terminate which instances.

Require additional security measures, such as MFA authentication, when acting on certain resources.

This is a complex and far-reaching feature and we'll be rolling it out in stages. In the first stage, the following actions on the indicated resources now support resource-level permissions:

Instances - Reboot, Start, Stop, Terminate.

EBS Volumes - Attach, Delete, Detach.

EC2 actions not listed above will not be governed by resource-level permissions at this time. We plan to add support for additional APIs throughout the rest of 2013.

We are also launching specific and wildcard ARNs (Amazon Resource Names) for all EC2 resources. You can refer to individual resources using ARNs such as arn:aws:ec2:us-east-1:1234567890:instance/i-96d811fe and to groups of resources using ARNs of the form arn:aws:ec2:us-east-1:1234567890:instance/*.
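For example (a sketch; the instance ID, region, and account number are placeholders), a policy that lets a user terminate only one specific instance might look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:TerminateInstances",
      "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-96d811fe"
    }
  ]
}
```

A terminate request against any other instance would be denied by default, since no other statement allows it.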

EC2 policy statements can include references to tags on EC2 resources. This gives you the power to use the same tagging model and schema for permissions and for billing reports.

In order to make it easier for you to test and verify the efficacy of your policies, we are extending the EC2 API with a new flag and a couple of new functions. The new flag is the “DryRun” flag, available as a new general option for the EC2 APIs. If you specify this flag, the API request will perform an authorization determination but will not actually be processed (for example, you can determine whether a user has permission to terminate an instance without actually terminating the instance).

In addition, when using API version 2013-06-15 and above, you will now receive encoded authorization messages along with authorization denied errors that can be used in combination with a new STS API, DecodeAuthorizationMessage, to learn more about the IAM policies and evaluation context that led to a particular authorization determination (permissions to the new STS API can be controlled using an IAM policy for the sts:DecodeAuthorizationMessage action).
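A sketch of a policy granting a user permission to decode these messages:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:DecodeAuthorizationMessage",
      "Resource": "*"
    }
  ]
}
```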

The final piece of the EC2 side of this new release is an expanded set of condition tags that you can use in your policies. You can reference a number of aspects of each request including ec2:Region, ec2:Owner, and ec2:InstanceType (consult the EC2 documentation for a complete list of condition tags).

Resource Permissions for RDS

You can also use policies to support a set of important RDS use cases. Here's a sampling:

Implement DB engine and instance usage policies for specific groups of users. For example, you may limit the usage of the m2.4xl instance type and Provisioned IOPS to users in the “Staging users” or “Production users” groups.

Permit a user to create a DB instance that uses specific DB parameter groups and security groups. For example, you may restrict “Web application developers” to use DB instances with “Web application” parameter groups and “Web DB Security” groups. These groups may contain specific DB options and security group settings you have configured.

Restrict a user or user group from using a specific parameter group to create a DB instance. For example, you may prevent members of the “Test users” group from using “Production” parameter groups while creating test DB instances.

Allow only specific users to operate DB instances that have a specific tag (e.g., DB instances that are tagged as "production"). For example, you may enable only “Production DBAs” to have access to DB instances tagged with the “Production” label.

As you might have guessed from my final example, you can now make references to tags on any of the RDS resources (see my other post for information on the newly expanded RDS tagging facility), and you can use the same tags and tagging schema for billing purposes.

As we did for EC2, we have added a set of RDS condition tags for additional flexibility. You can reference values such as rds:DatabaseClass, rds:DatabaseEngine, and rds:Piops in your policies.
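As a hedged sketch of how these condition tags compose (the instance class value here is illustrative), a policy that blocks creating any DB instance other than a small class might look like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "rds:CreateDBInstance",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": { "rds:DatabaseClass": "db.t1.micro" }
      }
    }
  ]
}
```

Because an explicit Deny always overrides an Allow, attaching this statement alongside a broader RDS policy caps the instance classes a user can provision.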