In a previous AWS Security Blog post, Jeff Levine showed how you can monitor changes to your Amazon EC2 security groups. The methods he describes in that post are examples of detective controls, which can help you determine when changes are made to security controls on your AWS resources.

In this post, I take that approach a step further by introducing an example of a responsive control, which you can use to automatically respond to a detected security event by applying a chosen security mitigation. I demonstrate a solution that continuously monitors changes made to an Amazon VPC security group, and if a new ingress rule (the same as an inbound rule) is added to that security group, the solution removes the rule and then sends you a notification after the changes have been automatically reverted.

The scenario

Let’s say you want to reduce your infrastructure complexity by replacing your Secure Shell (SSH) bastion hosts with Amazon EC2 Systems Manager (SSM). SSM allows you to run commands on your hosts remotely, removing the need to manage bastion hosts or rely on SSH to execute commands. To support this objective, you must prevent your staff members from opening SSH ports to your web server’s Amazon VPC security group. If one of your staff members does modify the VPC security group to allow SSH access, you want the change to be automatically reverted and to receive a notification that the change was reverted. If you are not yet familiar with security groups, see Security Groups for Your VPC before reading the rest of this post.

Solution overview

This solution begins with a directive control to mandate that no web server should be accessible using SSH. The directive control is enforced using a preventive control, which is implemented using a security group rule that prevents ingress on port 22 (typically used for SSH). The detective control is a “listener” that identifies any changes made to your security group. Finally, the responsive control reverts changes made to the security group and then sends a notification of this security mitigation.

The detective control, in this case, is an Amazon CloudWatch event that detects changes to your security group and triggers the responsive control, which in this case is an AWS Lambda function. I use AWS CloudFormation to simplify the deployment.

The following diagram shows the architecture of this solution.

Here is how the process works:

Someone on your staff adds a new ingress rule to your security group.

A CloudWatch event that continually monitors changes to your security groups detects the new ingress rule and invokes a designated Lambda function (with Lambda, you can run code without provisioning or managing servers).

The Lambda function evaluates the event to determine whether you are monitoring this security group and, if so, reverts the new security group ingress rule (a minimal sketch of such a function appears after this list).

Finally, the Lambda function sends you an email to let you know what the change was, who made it, and that the change was reverted.
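
To make the responsive control concrete, the following is a minimal sketch of what such a Lambda handler could look like. It is not the function that the CloudFormation template deploys; the environment variable names are assumptions, and the sketch handles only the simple case of TCP rules with IPv4 CIDR ranges.

import os
import boto3

ec2 = boto3.client("ec2")
sns = boto3.client("sns")

# Hypothetical configuration; the CloudFormation template wires the real values up for you.
MONITORED_GROUP_ID = os.environ.get("MONITORED_GROUP_ID", "sg-0123456789abcdef0")
TOPIC_ARN = os.environ.get("TOPIC_ARN", "arn:aws:sns:us-east-1:111122223333:RevertNotifications")

def handler(event, context):
    # CloudWatch Events delivers the CloudTrail record in the "detail" field.
    detail = event["detail"]
    if detail.get("eventName") != "AuthorizeSecurityGroupIngress":
        return
    request = detail["requestParameters"]
    group_id = request.get("groupId")
    if group_id != MONITORED_GROUP_ID:
        return  # Not a security group that this solution monitors.
    # Rebuild the permissions that were just added so they can be revoked exactly.
    ip_permissions = []
    for item in request["ipPermissions"]["items"]:
        ip_permissions.append({
            "IpProtocol": item["ipProtocol"],
            "FromPort": item["fromPort"],
            "ToPort": item["toPort"],
            "IpRanges": [{"CidrIp": r["cidrIp"]} for r in item.get("ipRanges", {}).get("items", [])],
        })
    ec2.revoke_security_group_ingress(GroupId=group_id, IpPermissions=ip_permissions)
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="Security group ingress rule reverted",
        Message="Reverted an ingress change to %s made by %s" % (group_id, detail["userIdentity"]["arn"]),
    )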

Deploy the solution by using CloudFormation

In this section, you will click the Launch Stack button shown below to launch the CloudFormation stack and deploy the solution.

Prerequisites

You must have AWS CloudTrail already enabled in the AWS Region where you will be deploying the solution. CloudTrail lets you log, continuously monitor, and retain events related to API calls across your AWS infrastructure. See Getting Started with CloudTrail for more information.

You must have a default VPC in the region in which you will be deploying the solution. AWS accounts have one default VPC per AWS Region. If you’ve deleted your VPC, see Creating a Default VPC to recreate it.
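
If you prefer to confirm these prerequisites from code rather than the console, here is a small sketch (not part of the solution itself) that checks for at least one CloudTrail trail and a default VPC in the Region you plan to use:

import boto3

region = "us-east-1"  # the Region where you plan to deploy the stack
cloudtrail = boto3.client("cloudtrail", region_name=region)
ec2 = boto3.client("ec2", region_name=region)

trails = cloudtrail.describe_trails()["trailList"]
print("CloudTrail trails:", [t["Name"] for t in trails] or "none - enable CloudTrail first")

default_vpcs = ec2.describe_vpcs(Filters=[{"Name": "isDefault", "Values": ["true"]}])["Vpcs"]
print("Default VPC:", default_vpcs[0]["VpcId"] if default_vpcs else "none - create a default VPC first")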

Resources that this solution creates

When you launch the CloudFormation stack, it creates the following resources:

A sample VPC security group in your default VPC, which is used as the target for reverting ingress rule changes.

An Amazon SNS topic to which the Lambda function publishes notifications.

Launch the CloudFormation stack

The link in this section uses the us-east-1 Region (the US East [N. Virginia] Region). Change the region if you want to use this solution in a different region. See Selecting a Region for more information about changing the region.

To deploy the solution, click the following Launch Stack button to launch the stack. After you click the button, you must sign in to the AWS Management Console if you have not already done so.

Then:

Choose Next to proceed to the Specify Details page.

On the Specify Details page, type your email address in the Send notifications to box. This is the email address to which change notifications will be sent. (After the stack is launched, you will receive a confirmation email that you must accept before you can receive notifications.)

Choose Next until you get to the Review page, and then choose the I acknowledge that AWS CloudFormation might create IAM resources check box. This confirms that you are aware that the CloudFormation template includes an IAM resource.

Choose Create. CloudFormation displays the stack status, CREATE_COMPLETE, when the stack has launched completely, which should take less than two minutes.

Testing the solution

Check your email for the SNS confirmation email. You must confirm this subscription to receive future notification emails. If you don’t confirm the subscription, your security group ingress rules still will be automatically reverted, but you will not receive notification emails.

Navigate to the EC2 console and choose Security Groups in the navigation pane.

Choose the security group created by CloudFormation. Its name is Web Server Security Group.

Choose the Inbound tab in the bottom pane of the page. Note that only one rule allows HTTPS ingress on port 443 from 0.0.0.0/0 (from anywhere).

Choose Edit to display the Edit inbound rules dialog box (again, an inbound rule and an ingress rule are the same thing).

Choose Add Rule.

Choose SSH from the Type drop-down list.

Choose My IP from the Source drop-down list. Your IP address is populated for you. By adding this rule, you are simulating one of your staff members violating your organization’s policy (in this blog post’s hypothetical example) against allowing SSH access to your EC2 servers. You are testing the solution created when you launched the CloudFormation stack in the previous section. The solution should remove this newly created SSH rule automatically.

Choose Save.

Adding this rule creates an EC2 AuthorizeSecurityGroupIngress service event, which triggers the Lambda function created in the CloudFormation stack. After a few moments, choose the refresh button to see that the new SSH ingress rule that you just created has been removed by the solution you deployed earlier with the CloudFormation stack. If the rule is still there, wait a few more moments and choose the refresh button again.

You should also receive an email to notify you that the ingress rule was added and subsequently reverted.
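
If you prefer to script this test instead of clicking through the console, the following sketch adds an equivalent SSH rule with the AWS SDK for Python; the security group ID and CIDR range are placeholders. The call generates the same AuthorizeSecurityGroupIngress CloudTrail event, so the rule should be reverted and a notification sent within a few moments.

import boto3

ec2 = boto3.client("ec2")

# Placeholder values: use the ID of the Web Server Security Group and your own IP address.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.10/32", "Description": "Temporary SSH test rule"}],
    }],
)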

Cleaning up

If you want to remove the resources created by this solution, you can delete the CloudFormation stack.

CloudFormation will display a status of DELETE_IN_PROGRESS while it deletes the resources created with the stack. After a few moments, the stack should no longer appear in the list of completed stacks.

Other applications of this solution

I have shown one way to use multiple AWS services to help continuously ensure that your security controls haven’t deviated from your security baseline. However, you also could use the CIS Amazon Web Services Foundations Benchmarks, for example, to establish a governance baseline across your AWS accounts and then use the principles in this blog post to automatically mitigate changes to that baseline.

To scale this solution, you can create a framework that uses resource tags to identify particular resources for monitoring. You also can use a consolidated monitoring approach by using cross-account event delivery. See Sending and Receiving Events Between AWS Accounts for more information. You also can extend the principle of automatic mitigation to detect and revert changes to other resources such as IAM policies and Amazon S3 bucket policies.
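
As one hedged illustration of the tag-based approach, the Lambda function could look up a tag on the affected security group and act only when that tag is present. The tag key and value below are hypothetical conventions, not part of the deployed solution.

import boto3

ec2 = boto3.client("ec2")

def is_monitored(group_id):
    # Hypothetical convention: only revert changes on groups tagged AutoRevertIngress=true.
    response = ec2.describe_security_groups(GroupIds=[group_id])
    tags = {t["Key"]: t["Value"] for t in response["SecurityGroups"][0].get("Tags", [])}
    return tags.get("AutoRevertIngress") == "true"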

Summary

In this blog post, I demonstrated how you can automatically revert changes to a VPC security group and have a notification sent about the changes. You can use this solution in your own AWS accounts to enforce your security requirements continuously.

If you have comments about this blog post or other ideas for ways to use this solution, submit a comment in the “Comments” section below. If you have implementation questions, start a new thread in the EC2 forum or contact AWS Support.

Starting today, you can encrypt the Lightweight Directory Access Protocol (LDAP) communications between your applications and AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD. Many Windows and Linux applications use Active Directory’s (AD) LDAP service to read and write sensitive information about users and devices, including personally identifiable information (PII). Now, you can encrypt your AWS Microsoft AD LDAP communications end to end to protect this information by using LDAP Over Secure Sockets Layer (SSL)/Transport Layer Security (TLS), also called LDAPS. This helps you protect PII and other sensitive information exchanged with AWS Microsoft AD over untrusted networks.

Solution overview

Before going into specific deployment steps, I will provide a high-level overview of deploying LDAPS. I cover how you enable LDAPS on AWS Microsoft AD. In addition, I provide some general background about CA deployment models and explain how to apply these models when deploying Microsoft CA to enable LDAPS on AWS Microsoft AD.

How you enable LDAPS on AWS Microsoft AD

LDAP-aware applications (LDAP clients) typically access LDAP servers using Transmission Control Protocol (TCP) on port 389. By default, LDAP communications on port 389 are unencrypted. However, many LDAP clients use one of two standards to encrypt LDAP communications: LDAP over SSL on port 636, and LDAP with StartTLS on port 389. If an LDAP client uses port 636, the LDAP server encrypts all traffic unconditionally with SSL. If an LDAP client issues a StartTLS command when setting up the LDAP session on port 389, the LDAP server encrypts all traffic to that client with TLS. AWS Microsoft AD now supports both encryption standards when you enable LDAPS on your AWS Microsoft AD domain controllers.
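
To illustrate the difference between the two standards from a client’s point of view, here is a short sketch that uses the open source Python ldap3 library; the host name and credentials are placeholders, and the client must already trust the certificate chain that the server presents. LDAP over SSL encrypts the session on port 636 from the first byte, whereas StartTLS upgrades an initially unencrypted session on port 389 before the bind.

import ssl
from ldap3 import Server, Connection, Tls

tls = Tls(validate=ssl.CERT_REQUIRED)  # validate the server certificate against the client's trust store

# LDAP over SSL: the whole session on port 636 is encrypted.
ldaps_server = Server("dc1.corp.example.com", port=636, use_ssl=True, tls=tls)  # placeholder domain controller name
conn = Connection(ldaps_server, user="CORP\\Admin", password="example-password", auto_bind=True)

# LDAP with StartTLS: connect on port 389, then upgrade the session before binding.
plain_server = Server("dc1.corp.example.com", port=389, use_ssl=False, tls=tls)
conn2 = Connection(plain_server, user="CORP\\Admin", password="example-password")
conn2.start_tls()
conn2.bind()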

You enable LDAPS on your AWS Microsoft AD domain controllers by installing a digital certificate that a CA issued. Though Windows servers have different methods for installing certificates, LDAPS with AWS Microsoft AD requires you to add a Microsoft CA to your AWS Microsoft AD domain and deploy the certificate through autoenrollment from the Microsoft CA. The installed certificate enables the LDAP service running on domain controllers to listen for and negotiate LDAP encryption on port 636 (LDAP over SSL) and port 389 (LDAP with StartTLS).

Background of CA deployment models

You can deploy CAs as part of a single-level or multi-level CA hierarchy. In a single-level hierarchy, all certificates come from the root of the hierarchy. In a multi-level hierarchy, you organize a collection of CAs in a hierarchy and the certificates sent to computers and users come from subordinate CAs in the hierarchy (not the root).

Certificates issued by a CA identify the hierarchy to which the CA belongs. When a computer sends its certificate to another computer for verification, the receiving computer must have the public certificate from the CAs in the same hierarchy as the sender. If the CA that issued the certificate is part of a single-level hierarchy, the receiver must obtain the public certificate of the CA that issued the certificate. If the CA that issued the certificate is part of a multi-level hierarchy, the receiver can obtain a public certificate for all the CAs that are in the same hierarchy as the CA that issued the certificate. If the receiver can verify that the certificate came from a CA that is in the hierarchy of the receiver’s “trusted” public CA certificates, the receiver trusts the sender. Otherwise, the receiver rejects the sender.

Deploying Microsoft CA to enable LDAPS on AWS Microsoft AD

Microsoft offers a standalone CA and an enterprise CA. Though you can configure either as single-level or multi-level hierarchies, only the enterprise CA integrates with AD and offers autoenrollment for certificate deployment. Because you cannot sign in to run commands on your AWS Microsoft AD domain controllers, an automatic certificate enrollment model is required. Therefore, AWS Microsoft AD requires the certificate to come from a Microsoft enterprise CA that you configure to work in your AD domain. When you install the Microsoft enterprise CA, you can configure it to be part of a single-level hierarchy or a multi-level hierarchy. As a best practice, AWS recommends a multi-level Microsoft CA trust hierarchy consisting of a root CA and a subordinate CA. I cover only a multi-level hierarchy in this post.

In a multi-level hierarchy, you configure your subordinate CA by importing a certificate from the root CA. You must issue a certificate from the root CA such that the certificate gives your subordinate CA the right to issue certificates on behalf of the root. This makes your subordinate CA part of the root CA hierarchy. You also deploy the root CA’s public certificate on all of your computers, which tells all your computers to trust certificates that your root CA issues and to trust certificates from any authorized subordinate CA.

In such a hierarchy, you typically leave your root CA offline (inaccessible to other computers in the network) to protect the root of your hierarchy. You leave the subordinate CA online so that it can issue certificates on behalf of the root CA. This multi-level hierarchy increases security because if someone compromises your subordinate CA, you can revoke all certificates it issued and set up a new subordinate CA from your offline root CA. To learn more about setting up a secure CA hierarchy, see Securing PKI: Planning a CA Hierarchy.

When a Microsoft CA is part of your AD domain, you can configure certificate templates that you publish. These templates become visible to client computers through AD. If a client’s profile matches a template, the client requests a certificate from the Microsoft CA that matches the template. Microsoft calls this process autoenrollment, and it simplifies certificate deployment. To enable LDAPS on your AWS Microsoft AD domain controllers, you create a certificate template in the Microsoft CA that generates SSL and TLS-compatible certificates. The domain controllers see the template and automatically import a certificate of that type from the Microsoft CA. The imported certificate enables LDAP encryption.

Steps to enable LDAPS for your AWS Microsoft AD directory

The rest of this post is composed of the steps for enabling LDAPS for your AWS Microsoft AD directory. First, though, I explain which components you must have running to deploy this solution successfully. I also explain how this solution works and include an architecture diagram.

Prerequisites

The instructions in this post assume that you already have the following components running:

The solution setup

The following diagram illustrates the setup with the steps you need to follow to enable LDAPS for AWS Microsoft AD. You will learn how to set up a subordinate Microsoft enterprise CA (in this case, SubordinateCA) and join it to your AWS Microsoft AD domain (in this case, corp.example.com). You also will learn how to create a certificate template on SubordinateCA and configure AWS security group rules to enable LDAPS for your directory.

As a prerequisite, I already created a standalone Microsoft root CA (in this case RootCA) for creating SubordinateCA. RootCA also has a local user account called RootAdmin that has administrative permissions to issue certificates to SubordinateCA. Note that you may already have a root CA or a multi-level CA hierarchy in your on-premises network that you can use for creating SubordinateCA instead of creating a new root CA. If you choose to use your existing on-premises CA hierarchy, you must have administrative permissions on your on-premises CA to issue a certificate to SubordinateCA.

Lastly, I also already created an Amazon EC2 instance (in this case, Management) that I use to manage users, configure AWS security groups, and test the LDAPS connection. I join this instance to the AWS Microsoft AD directory domain.

Add a Microsoft enterprise CA to your AWS Microsoft AD domain (in this case, SubordinateCA) so that it can issue certificates to your directory domain controllers to enable LDAPS. This step includes joining SubordinateCA to your directory domain, installing the Microsoft enterprise CA, and obtaining a certificate from RootCA that grants SubordinateCA permissions to issue certificates.

I now will show you these steps in detail. I use the names of components—such as RootCA, SubordinateCA, and Management—and refer to users—such as Admin, RootAdmin, and CAAdmin—to illustrate who performs these steps. All component names and user names in this post are used for illustrative purposes only.

Deploy the solution

Step 1: Delegate permissions to CA administrators

In this step, you delegate permissions to your users who manage your CAs. Your users then can join a subordinate CA to your AWS Microsoft AD domain and create the certificate template in your CA.

Step 2: Add a Microsoft enterprise CA to your AWS Microsoft AD directory

In this step, you set up a subordinate Microsoft enterprise CA and join it to your AWS Microsoft AD directory domain. I will summarize the process first and then walk through the steps.

First, you create an Amazon EC2 for Windows Server instance called SubordinateCA and join it to the domain, corp.example.com. You then publish RootCA’s public certificate and certificate revocation list (CRL) to SubordinateCA’s local trusted store. You also publish RootCA’s public certificate to your directory domain. Doing so enables SubordinateCA and your directory domain controllers to trust RootCA. You then install the Microsoft enterprise CA service on SubordinateCA and request a certificate from RootCA to make SubordinateCA a subordinate Microsoft CA. After RootCA issues the certificate, SubordinateCA is ready to issue certificates to your directory domain controllers.

Note that you can use an Amazon S3 bucket to pass the certificates between RootCA and SubordinateCA.

In detail, here is how the process works, as illustrated in the preceding diagram:

Set up an Amazon EC2 instance joined to your AWS Microsoft AD directory domain – Create an Amazon EC2 for Windows Server instance to use as a subordinate CA, and join it to your AWS Microsoft AD directory domain. For this example, the machine name is SubordinateCA and the domain is corp.example.com.

Share RootCA’s public certificate with SubordinateCA – Log in to RootCA as RootAdmin and start Windows PowerShell with administrative privileges. Run the following commands to copy RootCA’s public certificate and CRL to the folder c:\rootcerts on RootCA.

The following screenshot shows RootCA’s public certificate and CRL uploaded to an S3 bucket.

Publish RootCA’s public certificate to your directory domain – Log in to SubordinateCA as the CAAdmin. Download RootCA’s public certificate and CRL from the S3 bucket by following the instructions in How Do I Download an Object from an S3 Bucket? Save the certificate and CRL to the C:\rootcerts folder on SubordinateCA. Add RootCA’s public certificate and the CRL to the local store of SubordinateCA and publish RootCA’s public certificate to your directory domain by running the following commands using Windows PowerShell with administrative privileges.

Install the subordinate Microsoft enterprise CA – Install the subordinate Microsoft enterprise CA on SubordinateCA by following the instructions in Install a Subordinate Certification Authority. Ensure that you choose Enterprise CA for Setup Type to install an enterprise CA.

For the CA Type, choose Subordinate CA.

Request a certificate from RootCA – Next, copy the certificate request on SubordinateCA to a folder called c:\CARequest by running the following commands using Windows PowerShell with administrative privileges.

New-Item c:\CARequest -type directory
Copy c:\*.req C:\CARequest

Upload the certificate request to the S3 bucket.

Approve SubordinateCA’s certificate request – Log in to RootCA as RootAdmin and download the certificate request from the S3 bucket to a folder called CARequest. Submit the request by running the following command using Windows PowerShell with administrative privileges.

In the Certification Authority window, expand the ROOTCA tree in the left pane and choose Pending Requests. In the right pane, note the value in the Request ID column. Right-click the request and choose All Tasks > Issue.

Retrieve the SubordinateCA certificate – Retrieve the SubordinateCA certificate by running the following command using Windows PowerShell with administrative privileges. The command includes the <RequestId> that you noted in the previous step.

certreq -retrieve <RequestId> <drive>:\subordinateCA.crt

Upload SubordinateCA.crt to the S3 bucket.

Install the SubordinateCA certificate – Log in to SubordinateCA as the CAAdmin and download SubordinateCA.crt from the S3 bucket. Install the certificate by running the following commands using Windows PowerShell with administrative privileges.

certutil -installCert c:\subordinateCA.crt
Start-Service certsvc

Delete the content that you uploaded to S3 – As a security best practice, delete all the certificates and CRLs that you uploaded to the S3 bucket in the previous steps because you already have installed them on SubordinateCA.

You have finished setting up the subordinate Microsoft enterprise CA that is joined to your AWS Microsoft AD directory domain. Now you can use your subordinate Microsoft enterprise CA to create a certificate template so that your directory domain controllers can request a certificate to enable LDAPS for your directory.

Step 3: Create a certificate template

In this step, you create a certificate template with server authentication and autoenrollment enabled on SubordinateCA. You create this new template (in this case, ServerAuthentication) by duplicating an existing certificate template (in this case, Domain Controller template) and adding server authentication and autoenrollment to the template.

You have finished creating a certificate template with server authentication and autoenrollment enabled on SubordinateCA. Your AWS Microsoft AD directory domain controllers can now obtain a certificate through autoenrollment to enable LDAPS.

Step 4: Configure AWS security group rules

In this step, you configure AWS security group rules so that your directory domain controllers can connect to the subordinate CA to request a certificate. To do this, you must add outbound rules to your directory’s AWS security group (in this case, sg-4ba7682d) to allow all outbound traffic to SubordinateCA’s AWS security group (in this case, sg-6fbe7109) so that your directory domain controllers can connect to SubordinateCA for requesting a certificate. You also must add inbound rules to SubordinateCA’s AWS security group to allow all incoming traffic from your directory’s AWS security group so that the subordinate CA can accept incoming traffic from your directory domain controllers.
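
If you manage your security groups with the AWS SDK or CLI instead of the console, a hedged boto3 equivalent of the rules just described might look like the following; the group IDs are the examples used in this post, so substitute your own.

import boto3

ec2 = boto3.client("ec2")

directory_sg = "sg-4ba7682d"       # your directory's AWS security group
subordinate_ca_sg = "sg-6fbe7109"  # SubordinateCA's AWS security group

# Outbound rule on the directory security group: allow all traffic to SubordinateCA.
ec2.authorize_security_group_egress(
    GroupId=directory_sg,
    IpPermissions=[{"IpProtocol": "-1", "UserIdGroupPairs": [{"GroupId": subordinate_ca_sg}]}],
)

# Inbound rule on SubordinateCA's security group: allow all traffic from the directory.
ec2.authorize_security_group_ingress(
    GroupId=subordinate_ca_sg,
    IpPermissions=[{"IpProtocol": "-1", "UserIdGroupPairs": [{"GroupId": directory_sg}]}],
)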

You have completed the configuration of AWS security group rules to allow traffic between your directory domain controllers and SubordinateCA.

Step 5: AWS Microsoft AD enables LDAPS

The AWS Microsoft AD domain controllers perform this step automatically by recognizing the published template and requesting a certificate from the subordinate Microsoft enterprise CA. The subordinate CA can take up to 180 minutes to issue certificates to the directory domain controllers. The directory imports these certificates into the directory domain controllers and enables LDAPS for your directory automatically. This completes the setup of LDAPS for the AWS Microsoft AD directory. The LDAP service on the directory is now ready to accept LDAPS connections!

Step 6: Test LDAPS access by using the LDP tool

In this step, you test the LDAPS connection to the AWS Microsoft AD directory by using the LDP tool. The LDP tool is available on the Management machine where you installed Active Directory Administration Tools. Before you test the LDAPS connection, you must wait up to 180 minutes for the subordinate CA to issue a certificate to your directory domain controllers.

To test LDAPS, you connect to one of the domain controllers using port 636. Here are the steps to test the LDAPS connection:

Switch to the tree view and navigate to corp.example.com > CORP > Domain Controllers. In the right pane, right-click one of the domain controllers and choose Properties. Copy the DNS name of the domain controller.

Launch the LDP.exe tool by opening Windows PowerShell and running the LDP.exe command.

In the LDP tool, choose Connection > Connect.

In the Server box, paste the DNS name you copied in the previous step. Type 636 in the Port box. Choose OK to test the LDAPS connection to port 636 of your directory.

You should see the following message to confirm that your LDAPS connection is now open.

You have completed the setup of LDAPS for your AWS Microsoft AD directory! You can now encrypt LDAP communications between your Windows and Linux applications and your AWS Microsoft AD directory using LDAPS.

Summary

In this blog post, I walked through the process of enabling LDAPS for your AWS Microsoft AD directory. Enabling LDAPS helps you protect PII and other sensitive information exchanged over untrusted networks between your Windows and Linux applications and your AWS Microsoft AD. To learn more about how to use AWS Microsoft AD, see the Directory Service documentation. For general information and pricing, see the Directory Service home page.

If you have comments about this blog post, submit a comment in the “Comments” section below. If you have implementation or troubleshooting questions, start a new thread on the Directory Service forum.

Earlier this year, AWS Identity and Access Management (IAM) introduced service-linked roles, which provide an easy and secure way to delegate permissions to AWS services. Each service-linked role delegates permissions to an AWS service, which is called its linked service. Service-linked roles help with monitoring and auditing requirements by providing a transparent way to understand all actions performed on your behalf, because AWS CloudTrail logs all actions that the linked service performs using service-linked roles. For information about which services support service-linked roles, see AWS Services That Work with IAM. Over time, more AWS services will support service-linked roles.

Today, IAM added support for the deletion of service-linked roles through the IAM console and the IAM API/CLI. This means you now can revoke permissions from the linked service to create and manage AWS resources in your account. When you delete a service-linked role, the linked service no longer has the permissions to perform actions on your behalf. To ensure your AWS services continue to function as expected when you delete a service-linked role, IAM validates that you no longer have resources that require the service-linked role to function properly. This prevents you from inadvertently revoking permissions required by an AWS service to manage your existing AWS resources and helps you maintain your resources in a consistent state. If there are any resources in your account that require the service-linked role, you will receive an error when you attempt to delete the service-linked role, and the service-linked role will remain in your account. If you do not have any resources that require the service-linked role, you can delete the service-linked role and IAM will remove the service-linked role from your account.

In this blog post, I show how to delete a service-linked role by using the IAM console. To learn more about how to delete service-linked roles by using the IAM API/CLI, see the DeleteServiceLinkedRole API documentation.

How to delete a service-linked role by using the IAM console

If you no longer need to use an AWS service that uses a service-linked role, you can remove permissions from that service by deleting the service-linked role through the IAM console. To delete a service-linked role, you must have permissions for the iam:DeleteServiceLinkedRole action. For example, the following IAM policy grants the permission to delete service-linked roles used by Amazon Redshift. To learn more about working with IAM policies, see Working with Policies.
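
The console steps that follow also have API equivalents. As a hedged sketch (the policy wording is illustrative rather than copied from this post, and it continues the Amazon Redshift example), you could create the policy, attach it to the administrator who performs the deletion, and then delete the role and poll the deletion status:

import json
import boto3

iam = boto3.client("iam")

# Illustrative policy granting permission to delete Redshift's service-linked role.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "iam:DeleteServiceLinkedRole",
        "Resource": "arn:aws:iam::*:role/aws-service-role/redshift.amazonaws.com/AWSServiceRoleForRedshift*",
    }],
}
iam.create_policy(
    PolicyName="DeleteRedshiftServiceLinkedRole",
    PolicyDocument=json.dumps(policy_document),
)  # attach the resulting policy to the administrator who will perform the deletion

# Request deletion of the service-linked role and check whether it succeeded.
task_id = iam.delete_service_linked_role(RoleName="AWSServiceRoleForRedshift")["DeletionTaskId"]
status = iam.get_service_linked_role_deletion_status(DeletionTaskId=task_id)
print(status["Status"])  # SUCCEEDED, IN_PROGRESS, FAILED, or NOT_STARTED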

Navigate to the IAM console and choose Roles from the navigation pane.

Choose the service-linked role you want to delete and then choose Delete role. In this example, I choose the AWSServiceRoleForRedshift service-linked role.

A dialog box asks you to confirm that you want to delete the service-linked role you have chosen. In the Last activity column, you can see when the linked service last used the service-linked role to perform an action on your behalf. If you want to continue, choose Yes, delete to delete the service-linked role.

IAM then checks whether you have any resources that require the service-linked role you are trying to delete. While IAM checks, you will see the status message, Deletion in progress, below the role name.

If no resources require the service-linked role, IAM deletes the role from your account and displays a success message on the console.

If there are AWS resources that require the service-linked role you are trying to delete, you will see the status message, Deletion failed, below the role name.

If you choose View details, you will see a message that explains the deletion failed because there are resources that use the service-linked role.

Choose View Resources to view the Amazon Resource Names (ARNs) of the first five resources that require the service-linked role. You can delete the service-linked role only after you delete all resources that require the service-linked role. In this example, only one resource requires the service-linked role.

Conclusion

Service-linked roles make it easier for you to delegate permissions to AWS services to create and manage AWS resources on your behalf and to understand all actions the service will perform on your behalf. If you no longer need to use an AWS service that uses a service-linked role, you can remove permissions from that service by deleting the service-linked role through the IAM console. However, before you delete a service-linked role, you must delete all the resources associated with that role to ensure that your resources remain in a consistent state.

If you have any questions, submit a comment in the “Comments” section below. If you need help working with service-linked roles, start a new thread on the IAM forum or contact AWS Support.

To help secure your AWS resources, AWS recommends that you follow the AWS Identity and Access Management (IAM) best practice of enabling multi-factor authentication (MFA) for the root user of your account. With MFA turned on, the root user of your account is required to submit one form of authentication, which is the account password, and another form of authentication, such as a one-time password (OTP) from an MFA device. If you have MFA enabled on your root account and you lose or misplace your root MFA device, you can now reset it by using the AWS Management Console.

Now, your root user can use the AWS sign-in page to verify your root account’s email address and phone number. Then, the root user can deactivate the lost MFA device and set up a new MFA device in its place. Note that this information verification feature is available only for AWS root users with a phone number associated with their root account. If your root user does not have a valid phone number associated with your root account, the root user must call AWS Support to reset the lost MFA device.

In this blog post, I demonstrate how to reset a lost MFA device faster by using the AWS Management Console to verify your root user’s email address and phone number. I then demonstrate how to set up a virtual MFA device that you can use in place of the lost MFA device.

Note: This feature is available only to AWS accounts created before September 14, 2017. If you created your account after September 14, 2017, contact AWS Support to reset your lost MFA device.

Reset a lost MFA device

In this section, I demonstrate how to reset a lost MFA device. To reset your MFA device, you must know and have access to the email address and phone number associated with your root account.

AWS sends an email with the subject line, AWS Email Verification, to the address associated with the root account. After the email is sent to your address, you will see Email sent under Step 1, as shown in the following screenshot. If you do not see the verification email in the root user’s inbox, check the spam folder or choose Resend the email under Step 1. After you locate the email, you can close the current browser tab. Follow the directions in the email to proceed with the verification process.

When you click the verification link, your email is verified and you are taken to Step 2 of the verification process. In Step 2: Phone number verification, choose Call me now to start the phone number verification process.

Answer the phone call from AWS and use your phone’s keypad to submit the six-digit verification code that appears on your computer screen.

After you have verified your root account’s email address and phone number, proceed to Step 3: Sign In. In Step 3, choose Sign in to the console to sign in to the AWS Management Console.

You automatically are redirected to the Your Security Credentials page. If your MFA device is lost, deactivate the MFA device by choosing Deactivate (see the following screenshot). If you find your MFA device later, you can reactivate it on the same Your Security Credentials page. (A reactivated device is treated like a new device, so choose Activate MFA to reactivate a device.)

You have successfully deactivated your lost MFA device. You will no longer see any details associated with the lost MFA device in the console. You now will see an Activate MFA option (see the following screenshot) that you can use to activate a new MFA device.

We recommend that you enable a new MFA device on your root account as soon as possible to ensure that your root account is protected by MFA. If you find your lost MFA device, you can reactivate it (see Step 9 earlier in this post).

In place of your lost MFA device, you can use a virtual MFA device to ensure that your root account remains protected by MFA. In the next section, I show how to set up a virtual MFA device and associate it with your root account.

Associate a virtual MFA device with your root account

After you deactivate your lost MFA device, you can associate a virtual MFA device with your root account to help secure your AWS resources. You need to download a virtual MFA app such as Google Authenticator or Authy 2-Factor Authentication to use virtual MFA with your AWS account.

To associate a virtual MFA device with your root account:

Choose Activate MFA on the Your Security Credentials page.

Choose a virtual MFA device and then choose Next Step.

If you do not have an AWS MFA-compatible application, install one of the available applications. Choose Next Step.

Open the virtual MFA app on your phone and choose the option to create a new account.

Use the app to scan the QR code on your computer screen. Alternatively, you can choose Show secret key for manual configuration, and then type the secret key in the MFA app.

In the Authentication code 1 box, type the OTP that appears in the virtual MFA app. Wait for up to 30 seconds for the app to generate a second OTP. Type the second OTP in the Authentication code 2 box and then choose Activate virtual MFA.

You have now successfully enabled virtual MFA and associated it with your root account, and your root account is now protected by using MFA. You will use the virtual MFA app to generate an authentication code for subsequent sign-ins.
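
For background, the one-time passwords that the virtual MFA app displays are standard time-based one-time passwords (TOTP) derived from the secret key shown during setup; the activation step asks for two consecutive codes so that AWS can confirm the device is generating the expected sequence. A rough illustration using the open source pyotp library follows; the secret is a made-up example, not a real key.

import time
import pyotp

totp = pyotp.TOTP("JBSWY3DPEHPK3PXP")  # example base32 secret, not a real key
first_code = totp.now()
time.sleep(30)  # TOTP codes rotate roughly every 30 seconds
second_code = totp.now()
print(first_code, second_code)  # two consecutive codes, as requested during activation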

Summary

In this blog post, I demonstrated how you can reset your AWS root account’s lost MFA device by using the AWS Management Console. I also showed how you can associate a virtual MFA device with your root account.

If you have comments about resetting an MFA device for root users, submit them in the “Comments” section below. If you have implementation questions, start a thread on the IAM forum or contact AWS Support.

Today, we updated the AWS Identity and Access Management (IAM) console to make it easier for you to create, manage, and understand IAM roles. We made improvements that include an updated role-creation workflow that better guides you through the process of creating trust relationships (which define who can assume a role) and attaching permissions to roles. Additionally, you can now view and understand the permissions attached to roles more easily by using policy summaries for each role in your account.

In this post, I provide background information about roles, show how to create roles correctly by using the updated IAM console, and review other changes to the role list and details pages that help you better manage your roles.

Background information about IAM roles

What are IAM roles?

IAM roles are a secure way to grant permissions to entities you trust. You can use a role in the following scenarios:

What are the different parts of an IAM role?

As illustrated in the following diagram, roles have two types of policies attached to them:

Trust policy – This policy defines the entities that can assume a role. When you create a role by using the IAM console, the trust policy is created for you. You can customize the trust policy after it has been created. A role can have only one trust policy.

Permissions policies – These policies define which AWS resources a role can access and the actions it can perform on those resources. These policies can be AWS managed policies, customer managed policies, or inline policies attached directly to the role. You can attach multiple permissions policies to an IAM role.

How do I use an IAM role?

Roles can be used to access AWS both programmatically and via the AWS Management Console.

If you are using AWS services such as EC2 or AWS Lambda, you can assign a role to your EC2 instance or Lambda function. Your code running on either the instance or function will then be able to access AWS using the short-term credentials from that role. These services automatically assume the role and use the returned credentials to enable you to access resources or perform actions. Learn more about IAM roles for EC2 and IAM roles for Lambda.

Using a role to access the AWS Management Console

You can also assume a role to access an account in the AWS Management Console by using the Switch Role functionality. You can access this functionality by using the drop-down menu in the upper right corner of the console or through a custom sign-in URL. To switch roles, you must sign in as either an IAM user or a federated user. Switch Role is not supported for root users.

Introducing the updated role-creation workflow

To make it easier to create IAM roles, we’ve updated the role-creation workflow in the IAM console. To introduce the new experience, let’s look at a specific example.

Let’s suppose I am developing a new application running on an EC2 instance that reads and writes image data to S3. To enable that application to access the S3 bucket where the image data is stored, I need to create a new role that the EC2 instance running my application code can assume. This role trusts EC2 (defined in the trust policy) and has permissions to read and write to S3 (defined in the permissions policy).

I first navigate to the Roles page of the IAM console, and choose Create role.

I then select the type of trusted entity this role will allow. I have four options in this step:

AWS service – To grant access to an AWS service to manage resources in my account.

Another AWS account – To grant access to my account from another account.

Web identity – To grant access to identities federated through a web identity provider, such as Amazon Cognito or a public social identity provider.

SAML – To grant access to identities from a SAML-enabled identity provider.

For this example, I choose AWS service. I then see a list of AWS services the new role could trust. Because I need a role that can be assumed by an EC2 instance, I choose EC2. This opens a section below it with specific use cases for this service. I choose the EC2 (Allows EC2 instances to call AWS services on your behalf) use case. This defines the role’s trust policy to include EC2 as an entity that can assume that role.

Then, I set the permissions that the role needs to access S3. I already created a permissions policy that grants access to my S3 bucket and named it MyImageApp-S3Access. I choose that policy and choose Next to move to the final step.

In the final step, I name the role MyImageAppRole and describe it this way: “Allows application code running on EC2 instances to access data in S3.” I choose Create role and I am done! I can now attach this role to EC2 instances on which my application is running to give them permissions to access data in S3.
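
The same role can also be created programmatically. The following is a hedged boto3 sketch of the example above; the ARN of the MyImageApp-S3Access policy is a placeholder for whatever you created in your account.

import json
import boto3

iam = boto3.client("iam")

# Trust policy: allow EC2 instances to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="MyImageAppRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Allows application code running on EC2 instances to access data in S3.",
)

# Attach the permissions policy, then make the role usable by EC2 through an instance profile.
iam.attach_role_policy(
    RoleName="MyImageAppRole",
    PolicyArn="arn:aws:iam::111122223333:policy/MyImageApp-S3Access",  # placeholder ARN
)
iam.create_instance_profile(InstanceProfileName="MyImageAppRole")
iam.add_role_to_instance_profile(InstanceProfileName="MyImageAppRole", RoleName="MyImageAppRole")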

Other updates to help you manage your roles

With this update, you will see a new column called Trusted entities on the Roles list page. This column allows you to review at a glance which entities can assume a role. This makes it easier to identify roles that trust a specific account or AWS service. It also makes it easier to audit the trust relationships across all your roles. You also can add other columns to this table, including Creation time and Role ARN, by clicking the gear icon. Your column selection preference will be remembered when you return to the Roles page.

To help you understand the permissions attached to your role, we’ve also updated the Permissions tab to include IAM policy summaries. Policy summaries make it easier for you to understand the permissions for IAM permissions policies attached to roles without having to view a policy’s JSON.

Conclusion

Now it is easier for you to create and manage your IAM roles using the IAM console. As you manage your roles, be sure to review and follow IAM best practices, which can help to improve the security of your AWS resources and make your account easier to manage.

If you have comments about this post, submit them in the “Comments” section below. If you have questions or suggestions, start a new thread on the IAM forum or contact AWS Support.

Simple AD, which is powered by Samba 4, supports basic Active Directory (AD) authentication features such as users, groups, and the ability to join domains. Simple AD also includes an integrated Lightweight Directory Access Protocol (LDAP) server. LDAP is a standard application protocol for the access and management of directory information. You can use the BIND operation from Simple AD to authenticate LDAP client sessions. This makes LDAP a common choice for centralized authentication and authorization for services such as Secure Shell (SSH), client-based virtual private networks (VPNs), and many other applications. Authentication, the process of confirming the identity of a principal, typically involves the transmission of highly sensitive information such as user names and passwords. To protect this information in transit over untrusted networks, companies often require encryption as part of their information security strategy.

In this blog post, we show you how to configure an LDAPS (LDAP over SSL/TLS) encrypted endpoint for Simple AD so that you can extend Simple AD over untrusted networks. Our solution uses Elastic Load Balancing (ELB) to send decrypted LDAP traffic to HAProxy running on Amazon EC2, which then sends the traffic to Simple AD. ELB offers integrated certificate management, SSL/TLS termination, and the ability to use a scalable EC2 backend to process decrypted traffic. ELB also tightly integrates with Amazon Route 53, enabling you to use a custom domain for the LDAPS endpoint. The solution needs the intermediate HAProxy layer because ELB can direct traffic only to EC2 instances. To simplify testing and deployment, we have provided an AWS CloudFormation template to provision the ELB and HAProxy layers.

This post assumes that you have an understanding of concepts such as Amazon Virtual Private Cloud (VPC) and its components, including subnets, routing, Internet and network address translation (NAT) gateways, DNS, and security groups. You should also be familiar with launching EC2 instances and logging in to them with SSH. If needed, you should familiarize yourself with these concepts and review the solution overview and prerequisites in the next section before proceeding with the deployment.

Note: This solution is intended for use by clients requiring an LDAPS endpoint only. If your requirements extend beyond this, you should consider accessing the Simple AD servers directly or by using AWS Directory Service for Microsoft AD.

Solution overview

The following diagram and description illustrate and explain the Simple AD LDAPS environment. The CloudFormation template creates the items designated by the bracket (an internal ELB load balancer and two HAProxy nodes configured in an Auto Scaling group).

Here is how the solution works, as shown in the preceding numbered diagram:

The LDAP client sends an LDAPS request to ELB on TCP port 636.

ELB terminates the SSL/TLS session and decrypts the traffic using a certificate. ELB sends the decrypted LDAP traffic to the EC2 instances running HAProxy on TCP port 389.

The HAProxy servers, which run in a fixed-size Auto Scaling group, forward the LDAP request to the Simple AD servers listening on TCP port 389.

The Simple AD servers send an LDAP response through the HAProxy layer to ELB. ELB encrypts the response and sends it to the client.

Note: Amazon VPC prevents a third party from intercepting traffic within the VPC. Because of this, the VPC protects the decrypted traffic between ELB and HAProxy and between HAProxy and Simple AD. The ELB encryption provides an additional layer of security for client connections and protects traffic coming from hosts outside the VPC.

Prerequisites

Our approach requires an Amazon VPC with two public and two private subnets. The previous diagram illustrates the environment’s VPC requirements. If you do not yet have these components in place, follow these guidelines for setting up a sample environment:

Identify a region that supports Simple AD, ELB, and NAT gateways. The NAT gateways are used with an Internet gateway to allow the HAProxy instances to access the internet to perform their required configuration. You also need to identify the two Availability Zones in that region for use by Simple AD. You will supply these Availability Zones as parameters to the CloudFormation template later in this process.

Create a route table with a default route to the Internet gateway. Create two NAT gateways, one per Availability Zone in your public subnets to provide additional resiliency across the Availability Zones. Together, the routing table, the NAT gateways, and the Internet gateway enable the HAProxy instances to access the internet.

Create two private routing tables, one per Availability Zone. Create two private subnets, one per Availability Zone. The dual routing tables and subnets allow for a higher level of redundancy. Add each subnet to the routing table in the same Availability Zone. Add a default route in each routing table to the NAT gateway in the same Availability Zone. The Simple AD servers use subnets that you create.

The LDAP service requires a DNS domain that resolves within your VPC and from your LDAP clients. If you do not have an existing DNS domain, follow the steps to create a private hosted zone and associate it with your VPC. To avoid encryption protocol errors, you must ensure that the DNS domain name is consistent across your Route 53 zone and in the SSL/TLS certificate (see Step 2 in the “Solution deployment” section).

We will use a self-signed certificate for ELB to perform SSL/TLS decryption. You can instead use a certificate issued by your preferred certificate authority or a certificate issued by AWS Certificate Manager (ACM).

Note: To prevent unauthorized connections directly to your Simple AD servers, you can modify the Simple AD security group on port 389 to block traffic from locations outside of the Simple AD VPC. You can find the security group in the EC2 console by creating a search filter for your Simple AD directory ID. It is also important to allow the Simple AD servers to communicate with each other, as shown in Simple AD Prerequisites.

Solution deployment

This solution includes five main parts:

Create a Simple AD directory.

Create a certificate.

Create the ELB and HAProxy layers by using the supplied CloudFormation template.

Create a Route 53 record.

Test LDAPS access using an Amazon Linux client.

1. Create a Simple AD directory

With the prerequisites completed, you will create a Simple AD directory in your private VPC subnets:

Directory DNS – The fully qualified domain name (FQDN) of the directory, such as corp.example.com. You will use the FQDN as part of the testing procedure.

NetBIOS name – The short name for the directory, such as CORP.

Administrator password – The password for the directory administrator. The directory creation process creates an administrator account with the user name Administrator and this password. Do not lose this password because it is nonrecoverable. You also need this password for testing LDAPS access in a later step.

Description – An optional description for the directory.

Directory Size – The size of the directory.

Provide the following information in the VPC Details section, and then choose Next Step:

VPC – Specify the VPC in which to install the directory.

Subnets – Choose two private subnets for the directory servers. The two subnets must be in different Availability Zones. Make a note of the VPC and subnet IDs for use as CloudFormation input parameters. In the following example, the Availability Zones are us-east-1a and us-east-1c.

Review the directory information and make any necessary changes. When the information is correct, choose Create Simple AD.

It takes several minutes to create the directory. From the AWS Directory Service console, refresh the screen periodically and wait until the directory Status value changes to Active before continuing. Choose your Simple AD directory and note the two IP addresses in the DNS address section. You will enter them when you run the CloudFormation template later.
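
If you prefer to create the directory with the SDK or CLI rather than the console, the following is a hedged boto3 sketch using the example values from this post; substitute your own VPC ID, subnet IDs, and a strong administrator password.

import boto3

ds = boto3.client("ds")

response = ds.create_directory(
    Name="corp.example.com",       # Directory DNS (FQDN)
    ShortName="CORP",              # NetBIOS name
    Password="Example-Passw0rd!",  # Administrator password; store it safely, it is nonrecoverable
    Description="Simple AD for the LDAPS example",
    Size="Small",                  # Directory size: Small or Large
    VpcSettings={
        "VpcId": "vpc-0123456789abcdef0",                                       # placeholder
        "SubnetIds": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],  # two private subnets
    },
)
print(response["DirectoryId"])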

2. Create a certificate

In the previous step, you created the Simple AD directory. Next, you will generate a self-signed SSL/TLS certificate using OpenSSL. You will use the certificate with ELB to secure the LDAPS endpoint. OpenSSL is a standard, open source library that supports a wide range of cryptographic functions, including the creation and signing of x509 certificates. You then import the certificate into ACM, which is integrated with ELB.

You must have a system with OpenSSL installed to complete this step. If you do not have OpenSSL, you can install it on Amazon Linux by running the command, sudo yum install openssl. If you do not have access to an Amazon Linux instance you can create one with SSH access enabled to proceed with this step. Run the command, openssl version, at the command line to see if you already have OpenSSL installed.

Generate a certificate signing request (CSR) using the openssl req command. Provide the requested information for each field. The Common Name is the FQDN for your LDAPS endpoint (for example, ldap.corp.example.com). The Common Name must use the domain name you will later register in Route 53. You will encounter certificate errors if the names do not match.

$ openssl req -new -key privatekey.pem -out server.csr
You are about to be asked to enter information that will be incorporated into your certificate request.

Use the openssl x509 command to sign the certificate. The following example uses the private key from the previous step (privatekey.pem) and the signing request (server.csr) to create a public certificate named server.crt that is valid for 365 days. This certificate must be updated within 365 days to avoid disruption of LDAPS functionality.

Keep the private key and public certificate for later use. You can discard the signing request because you are using a self-signed certificate and not using a Certificate Authority. Always store the private key in a secure location and avoid adding it to your source code.

Using your favorite Linux text editor, paste the contents of your privatekey.pem file in the Certificate private key box. For a self-signed certificate, you can leave the Certificate chain box blank.

Choose Review and import. Confirm the information and choose Import.

3. Create the ELB and HAProxy layers by using the supplied CloudFormation template

Now that you have created your Simple AD directory and SSL/TLS certificate, you are ready to use the CloudFormation template to create the ELB and HAProxy layers.

Load the supplied CloudFormation template to deploy an internal ELB and two HAProxy EC2 instances into a fixed Auto Scaling group. After you load the template, provide the following input parameters. Note: You can find the parameters relating to your Simple AD from the directory details page by choosing your Simple AD in the Directory Service console.

HAProxyInstanceSize – The EC2 instance size for the HAProxy servers. The default size is t2.micro, and you can scale up for large Simple AD environments.

MyKeyPair – The SSH key pair for the EC2 instances. If you do not have an existing key pair, you must create one.

VPCId – The target VPC for this solution. It must be the VPC where you deployed Simple AD; the VPC ID is available on your Simple AD directory details page.

SubnetId1 – The Simple AD primary subnet. This information is available on your Simple AD directory details page.

SubnetId2 – The Simple AD secondary subnet. This information is available on your Simple AD directory details page.

MyTrustedNetwork – The trusted network Classless Inter-Domain Routing (CIDR) range allowed to connect to the LDAPS endpoint. For example, use the VPC CIDR to allow clients in the VPC to connect.

When the CloudFormation stack is in CREATE_COMPLETE status, locate the value of the LDAPSURL on the Outputs tab of the stack. Copy this value for use in the next step.

4. Create a Route 53 record

On the Route 53 console, choose Hosted Zones and then choose the zone you used for the Common Name box for your self-signed certificate. Choose Create Record Set and enter the following information:

Name – The label of the record (such as ldap).

Type – Leave as A – IPv4 address.

Alias – Choose Yes.

Alias Target – Paste the value of the LDAPSURL on the Outputs tab of the stack.

Leave the defaults for Routing Policy and Evaluate Target Health, and choose Create.
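
The same alias record can be created with the Route 53 API. In this hedged sketch, the hosted zone ID, the load balancer's DNS name (the LDAPSURL output), and the alias target's hosted zone ID (the canonical hosted zone ID of the load balancer, shown in the ELB console) are all placeholders.

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLEZONE",  # your corp.example.com hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "ldap.corp.example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2EXAMPLEELBZONE",  # the load balancer's canonical hosted zone ID
                    "DNSName": "internal-ldaps-example-1234567890.us-east-1.elb.amazonaws.com",  # LDAPSURL value
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)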

5. Test LDAPS access using an Amazon Linux client

At this point, you have configured your LDAPS endpoint and now you can test it from an Amazon Linux client.

Create an Amazon Linux instance with SSH access enabled to test the solution. Launch the instance into one of the public subnets in your VPC. Make sure the IP assigned to the instance is in the trusted IP range you specified in the CloudFormation parameter MyTrustedNetwork in Step 3.b.

SSH into the instance and complete the following steps to verify access.

Add the server.crt file to the /etc/openldap/certs/ directory so that the LDAPS client will trust your SSL/TLS certificate. You can copy the file using Secure Copy (SCP) or create it using a text editor.

Edit the /etc/openldap/ldap.conf file and define the environment variables BASE, URI, and TLS_CACERT.

The value for BASE should match the configuration of the Simple AD directory name.

To test the solution, query the directory through the LDAPS endpoint by using an OpenLDAP client tool such as ldapsearch. Replace corp.example.com with your domain name and use the Administrator password that you configured for the Simple AD directory.

You should see a response similar to the following response, which provides the directory information in LDAP Data Interchange Format (LDIF) for the administrator distinguished name (DN) from your Simple AD LDAP server.

You can now use the LDAPS endpoint for directory operations and authentication within your environment. If you would like to learn more about how to interact with your LDAPS endpoint within a Linux environment, here are a few resources to get started:

Troubleshooting

Verify that the parameters in ldap.conf match your configured LDAPS URI endpoint and that the endpoint name can be resolved by DNS. You can use the following dig command, substituting your configured endpoint DNS name.

$ dig ldap.corp.example.com

Confirm that the client instance from which you are connecting is in the CIDR range of the CloudFormation parameter, MyTrustedNetwork.

Confirm that the path to your public SSL/TLS certificate configured in ldap.conf as TLS_CACERT is correct. You configured this in Step 5.b.3. You can check your SSL/TLS connection with the following command, substituting your configured endpoint DNS name for the string after -connect.

$ echo -n | openssl s_client -connect ldap.corp.example.com:636

Verify that your HAProxy instances have the status InService in the EC2 console: Choose Load Balancers under Load Balancing in the navigation pane, highlight your LDAPS load balancer, and then choose the Instances tab.

Conclusion

You can use ELB and HAProxy to provide an LDAPS endpoint for Simple AD and transport sensitive authentication information over untrusted networks. You can explore using LDAPS to authenticate SSH users or integrate with other software solutions that support LDAP authentication. This solution’s CloudFormation template is available on GitHub.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the Directory Service forum.

Today, AWS made improvements to the way you sign in to your AWS account. Whether you sign in as your account’s root user or an AWS Identity and Access Management (IAM) user, you can now sign in from the AWS Management Console’s homepage. This means that if you sign in as an IAM user, you no longer have to use an account-specific URL. However, the account-specific URL you have used in the past to sign in will continue to work.

In the new sign-in experience, you can sign in from the home page using either your root user’s or IAM user’s credentials. In the first step, root users enter their email address; IAM users enter their account ID (or account alias). In the second step, root users enter their password; IAM users enter their user name and password.

In this blog post, I explain the improvements to the way you sign in to your AWS account as a root user or IAM user. If you use a password manager to help you sign in to your account, you may need to make updates so that it will work with the new sign-in experience.

The new sign-in experience

The new AWS sign-in experience allows both root users and IAM users to sign in using the Sign In to the Console link on the AWS Management Console's home page.

Step 1: For root users and IAM users

As shown in the following screenshot, to sign in as a root user, type the email address associated with the root account. To sign in as an IAM user, type an AWS account ID or account alias. Then choose Next to proceed to Step 2.

If you usually sign in using the same browser and allow the browser to store AWS cookies, you will skip Step 1 on subsequent sign-in attempts. If you regularly switch users or accounts, AWS recommends that you prevent the sign-in page from storing AWS cookies.

Step 2: For root users

If you entered the email address associated with the root account in Step 1, you are taken to the second step to sign in to the root account, as shown in the following screenshot. Type the password of the root account and choose Sign in. If you enabled multi-factor authentication (MFA) for your root account, you will then be prompted to enter the code from your MFA device. After successful authentication, you will be signed in to the AWS Management Console, and your account’s home page will be displayed.

Step 2: For IAM users

If you entered an AWS account ID or account alias in Step 1, you are taken to the second step to sign in as an IAM user, as shown in the following screenshot. Type the user name and password of the IAM user, and choose Sign in. If MFA has been enabled for your IAM user, you will then be prompted to enter the code from your MFA device. After successful authentication, your account’s home page will be displayed.

With these changes, you may need to make updates to password managers so that they will work with the new sign-in experience.

If you have comments about the changes to how you sign in to your AWS account as a root user or IAM user, submit a comment in the “Comments” section below. If you have questions, start a new thread on the IAM forum.

You can help to protect your data in a number of ways while it is in transit and at rest, such as by using Secure Sockets Layer (SSL) or client-side encryption. AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create, control, rotate, and use your encryption keys. AWS KMS allows you to create custom keys, which you can share with AWS Identity and Access Management users and roles in your AWS account or in an AWS account owned by someone else.

In a new AWS DevOps Blog post, BK Chaurasiya describes a solution for building a cross-region/cross-account code deployment solution on AWS. BK explains options for helping to protect your source code as it travels between regions and between AWS accounts.

To govern federated access to your AWS resources, it’s a common practice to use Microsoft Active Directory (AD) groups. When using AD groups, establishing federation requires the number of AD groups to be equal to the number of your AWS accounts multiplied by the number of roles in each of your AWS accounts. As you can imagine, this can result in the creation of a very large number of AD groups to manage federated access.

However, some organizations have limits on how many AD groups they can create. For example, an organization might need to keep its AD group hierarchy reasonably flat and avoid a deep nesting of groups. Such a situation needs a solution that doesn’t require you to create exponentially more AD groups while still allowing you to use access control and automated user integration.

In this blog post, I provide step-by-step instructions for integrating AWS Identity and Access Management (IAM) with Microsoft Active Directory Federation Services (AD FS) by using AD user attributes, allowing you to establish federated access without expanding your total number of AD groups. Your organization’s enterprise administrator probably has existing processes in place for managing AD group memberships and provisioning, and you can extend these processes to the management of AD user attributes and the reduction of your organization’s dependency on AD groups.

Prerequisites

You have created an identity provider (IdP) in your AWS account using your XML file (https://<your-server-name-here>/FederationMetadata/2007-06/FederationMetadata.xml) from your AD FS server. Remember the name of your IdP because you will use it later in this solution.

You have created the appropriate IAM roles in your AWS account, which will be used for federated access.

After you satisfy these prerequisites, you can proceed to the next section of this post to configure your AD users and AD FS server.

Solution overview

To benefit fully from the solution in this post, your AD and AD FS environment should look similar to what is shown in the following diagram. I focus this post on AD users and claim rules in an AD FS server. AD FS claim rules provide the logic to identify who has been correctly set up in AD with the appropriate user attributes to sign in via AD FS to the AWS Management Console.

In the preceding diagram:

An AD user (let’s call him Bob) browses to the AD FS sample site (https://Fully.Qualified.Domain.Name.Here/adfs/ls/IdpInitiatedSignOn.aspx) inside this domain.

The sign-in page authenticates Bob against AD. If Bob is not already authenticated or is not using a domain-joined workstation, he might be prompted for his AD user name and password.

Bob’s browser receives a SAML assertion in the form of an authentication response from AD FS. Bob’s access is authorized based on his AD group membership or on AD user attributes configured on his account.

Bob’s browser automatically posts the SAML assertion to the AWS sign-in endpoint for SAML (https://signin.aws.amazon.com/saml). The endpoint uses the AssumeRoleWithSAML API to request temporary security credentials and then constructs a sign-in URL for the AWS Management Console using those credentials.

Bob’s browser receives the sign-in URL and redirects to the AWS Management Console.

Deploy the solution

A. Configure an AD user’s account

Because the AD user attributes hold all the associated AWS account and role information when using this solution, you will start by configuring an AD user's account.

To edit the user attributes in an AD user’s account:

On your AD server, in the Active Directory Users and Computers console, go to View > Advanced Features so that the Attribute Editor tab is visible.

For AD user Bob, edit one attribute using the built-in AD attribute editor. The attribute I am using is url, which is a multi-valued string. If you use another AD user attribute, consider how you will need to modify your AD FS claim rules later because different attributes may return the values differently back to the AD FS server. The name of the AD user attribute will be used in the AD FS claim rules later in this post.

Bob has two AWS accounts: 111122223333 and 444455556666. Each of Bob’s AWS accounts has two roles: AWS-Dev and AWS-ReadOnly. I have configured Bob’s url attribute with the corresponding values associated with his AWS accounts and roles. As part of the attribute entries, I prefixed each entry with AWS- to have a unique identifier. As shown in the following screenshot, I added the entries one at a time so that each value can be returned back to my AD FS server:

AWS-111122223333-Dev

AWS-111122223333-ReadOnly

AWS-444455556666-Dev

AWS-444455556666-ReadOnly

Bob also requires an email address because that information will be used in the role session name when Bob signs in to the AWS Management Console via his chosen AWS account and associated role. We use Bob's email address only because it's a common user attribute that most users have and is also unique across users. This unique identifier is then forwarded by AD FS to AWS as a unique value for the user. If you have enabled AWS CloudTrail, the role session name is captured in CloudTrail, which makes it easy to identify who assumed the role and the subsequent API calls the user or role might have executed on the platform (for example, ec2:TerminateInstances).

Now that you have configured Bob’s account, you will configure the AD FS server claim rules.

B. Configure the AD FS server claim rules

Because this blog post assumes your environment is already up and running and to ensure that you can follow along, I am providing example Windows PowerShell code that you can run on your AD FS server. This code allows you to choose a conventional approach by using AD groups in AD FS claim rules, or for the purposes of this post, to use AD FS claim rules with AD user attributes. If you use the AD group approach on your AD FS server with the example code, your AD group naming convention must be: AWS-YourAccountNumber-YourRoleName. If you have already created claim rules for AWS on your AD FS server, I encourage you to run this code against a different AD FS server that has no existing AWS rules.

To configure the AD FS claim rules:

Open the AD FS console. You can find it by searching for ad, as shown in the following screenshot.

Expand Trust Relationships and choose Relying Party Trusts.

Run the example Windows PowerShell code from the command prompt in the same directory where you extracted the .zip file. The following screenshot shows a list of the example files from the .zip file.

Run the 01-Configure-ADFS-AD-User-URL-mapping.ps1 Windows PowerShell script to set up the AD FS claim rules. Note: Run this script with Administrative permissions. A log file is generated to which you can refer, as shown in the following screenshot.

After you run the Windows PowerShell script, you will see the new relying party trust that has been created in your AD FS configuration for Amazon Web Services, as shown in the following screenshot.

The following screenshot shows what your AD FS server claim rules should look like now.

About these four claim rules:

Claim rule 1 captures the Windows account name of the current user whose attributes will then be queried further with claim rule 3.

Claim rule 2 captures Bob’s email address for use in the role session name.

Claim rule 3 queries the current user’s URL attributes to identify which account and role the user is authorized to assume access to. These URL attribute values are then stored in a variable (http://temp/variable) for use in claim rule 4.

Claim rule 4 works by matching the first pattern match, ([\d]{12}), to $1 and the second pattern match, (\w*), to $2 for each entry in http://temp/variable. With this final rule, you define the resulting value for the AWS role attribute in a dynamic way, which allows the configuration to scale to support virtually any number of AWS accounts and IAM roles without further configuration within AD FS. By using these claim rules, you query, store, and then convert the values in the URL attributes to the IAM role attributes that AWS expects.

At the beginning of this post, I mentioned that you need to remember the name of the IdP you created in your AWS account, and now is when you will use your IdP’s name. Replace myADFS, highlighted in the following code, with the name of your IdP. (When modifying the rules, be careful not to insert any additional spaces because they can cause claim rules to not work as designed.)

C. Test AD user Bob’s federated access

Go to the AD FS sign-in page (https://Fully.Qualified.Domain.Name.Here/adfs/ls/IdpInitiatedSignOn.aspx) to test Bob’s federated access. Note that you might see a certificate warning if the server uses a locally self-signed certificate from Internet Information Services.

To test Bob’s federated access:

Choose Sign in to one of the following sites, choose Amazon Web Services & AD User URL from the list, and then choose Continue to Sign In.

If prompted, type Bob’s user name and password. You will be redirected to sign in to the Amazon Web Services AD FS page previously defined when you set up the AD FS relying party trusts.

After you authenticate to the server as Bob, your browser is redirected to https://signin.aws.amazon.com/saml, and you can choose which of Bob’s accounts and roles to use. Choose a role and then choose Sign In.

You have signed in as Bob, and his email address now appears as part of the role session name, as shown in the following screenshot.

You can now see Bob’s email address used in the role session name. If you have enabled CloudTrail, the role session name is captured in CloudTrail and allows you to easily identify who assumed the role. If Bob wants to switch to a different account or role, he can return to his AD FS sign-in page (https://Fully.Qualified.Domain.Name.Here/adfs/ls/IdpInitiatedSignOn.aspx) and choose an alternative account or role.

Summary

In this blog post, I demonstrated how to use dynamic resolution of federated access using AD user attributes to scale your configuration and support a large number of AWS accounts and associated IAM roles. This is a powerful technique for managing a large number of AWS accounts and the federated access of associated AD users. Even though I demonstrate the integration of IAM with AD FS and AD, you can replicate this solution across your choice of SAML federated access technology, such as Shibboleth or OpenLDAP.

If you have comments about this blog post, submit them in the “Comments” section below. If you have implementation or troubleshooting questions, start a new thread on the IAM forum.

Today, the AWS Crypto Tools team introduced a new feature in the AWS Encryption SDK: data key caching. Data key caching lets you reuse the data keys that protect your data, instead of generating a new data key for each encryption operation.

Data key caching can reduce cost and improve performance, but these benefits come with some security tradeoffs. Encryption best practices generally discourage extensive reuse of data keys.

In this blog post, I explore those tradeoffs and provide information that can help you decide whether data key caching is a good strategy for your application. I also explain how data key caching is implemented in the AWS Encryption SDK and describe the security thresholds that you can set to limit the reuse of data keys. Finally, I provide some practical examples of using the security thresholds to meet cost, performance, and security goals.

Introducing data key caching

The AWS Encryption SDK is a client-side encryption library that makes it easier for you to implement cryptography best practices in your application. It includes secure default behavior for developers who are not encryption experts, while being flexible enough to work for the most experienced users.

In the AWS Encryption SDK, by default, you generate a new data key for each encryption operation. This is the most secure practice. However, in some applications, the overhead of generating a new data key for each operation is not acceptable.

Data key caching saves the plaintext and ciphertext of the data keys you use in a configurable cache. When you need a key to encrypt or decrypt data, you can reuse a data key from the cache instead of creating a new data key. You can create multiple data key caches and configure each one independently. Most importantly, the AWS Encryption SDK provides security thresholds that you can set to determine how much data key reuse you will allow.

To make data key caching easier to implement, the AWS Encryption SDK provides LocalCryptoMaterialsCache, an in-memory, least-recently-used cache with a configurable size. The SDK manages the cache for you, including adding store, search, and match logic to all encryption and decryption operations.

We recommend that you use LocalCryptoMaterialsCache as it is, but you can customize it, or substitute a compatible cache. However, you should never store plaintext data keys on disk.

The AWS Encryption SDK documentation includes sample code in Java and Python for an application that uses data key caching to encrypt data sent to and from Amazon Kinesis Streams.

Balance cost and security

Your decision to use data key caching should balance cost—in time, money, and resources—against security. In every consideration, though, the balance should favor your security requirements. As a rule, use the minimal caching required to achieve your cost and performance goals.

Before implementing data key caching, consider the details of your applications, your security requirements, and the cost and frequency of your encryption operations. In general, your application can benefit from data key caching if each operation is slow or expensive, or if you encrypt and decrypt data frequently. If the cost and speed of your encryption operations are already acceptable or can be improved by other means, do not use a data key cache.

Data key caching can be the right choice for your application if you have high encryption and decryption traffic. For example, if you are hitting your KMS requests-per-second limit, caching can help because you get some of your data keys from the cache instead of calling KMS for every request.

However, you can also create a case in the AWS Support Center to raise the KMS limit for your account. If raising the limit solves the problem, you do not need data key caching.

Configure caching thresholds for cost and security

In the AWS Encryption SDK, you can configure data key caching to allow just enough data key reuse to meet your cost and performance targets while conforming to the security requirements of your application. The SDK enforces the thresholds so that you can use them with any compatible cache.

The data key caching security thresholds apply to each cache entry. The AWS Encryption SDK will not use the data key from a cache entry that exceeds any of the thresholds that you set.

Maximum age (required): Set the lifetime of each cached key to be long enough to get cache hits, but short enough to limit exposure of a plaintext data key in memory to a specific time period.

You can use the maximum age threshold like a key rotation policy. Use it to limit the reuse of data keys and minimize exposure of cryptographic materials. You can also use it to evict data keys when the type or source of data that your application is processing changes.

Maximum messages encrypted (optional; default is 2^32 messages): Set the number of messages protected by each cached data key to be large enough to get value from reuse, but small enough to limit the number of messages that might potentially be exposed.

The AWS Encryption SDK only caches data keys that use an algorithm suite with a key derivation function. This technique avoids the cryptographic limits on the number of bytes encrypted with a single key. However, the more data that a key encrypts, the more data that is exposed if the data key is compromised.

Limiting the number of messages, rather than the number of bytes, is particularly useful if your application encrypts many messages of a similar size or when potential exposure must be limited to very few messages. This threshold is also useful when you want to reuse a data key for a particular type of message and know in advance how many messages of that type you have. You can also use an encryption context to select particular cached data keys for your encryption requests.

Maximum bytes encrypted (optional; default is 2^63 – 1): Set the bytes protected by each cached data key to be large enough to allow the reuse you need, but small enough to limit the amount of data encrypted under the same key.

Limiting the number of bytes, rather than the number of messages, is preferable when your application encrypts messages of widely varying size or when possibly exposing large amounts of data is much more of a concern than exposing smaller amounts of data.

In addition to these security thresholds, the LocalCryptoMaterialsCache in the AWS Encryption SDK lets you set its capacity, which is the maximum number of entries the cache can hold.

Use the capacity value to tune the performance of your LocalCryptoMaterialsCache. In general, use the smallest value that will achieve the performance improvements that your application requires. You might want to test with a very small cache of 5–10 entries and expand if necessary. You will need a slightly larger cache if you are using the cache for both encryption and decryption requests, or if you are using encryption contexts to select particular cache entries.

Consider these cache configuration examples

After you determine the security and performance requirements of your application, consider the cache security thresholds carefully and adjust them to meet your needs. There are no magic numbers for these thresholds: the ideal settings are specific to each application, its security and performance requirements, and budget. Use the minimal amount of caching necessary to get acceptable performance and cost.

The following examples show ways you can use the LocalCryptoMaterialsCache capacity setting and the security thresholds to help meet your security requirements:

Slow master key operations: If your master key processes only 100 transactions per second (TPS) but your application needs to process 1,000 TPS, you can meet your application requirements by allowing a maximum of 10 messages to be protected under each data key.

High frequency and volume: If your master key costs $0.01 per operation and you need to process a consistent 1,000 TPS while staying within a budget of $100,000 per month, allow a maximum of 275 messages for each cache entry.

Burst traffic: If your application’s processing bursts to 100 TPS for five seconds in each minute but is otherwise zero, and your master key costs $0.01 per operation, setting maximum messages to 3 can achieve significant savings. To prevent data keys from being reused across bursts (55 seconds), set the maximum age of each cached data key to 20 seconds.

Expensive master key operations: If your application uses a low-throughput encryption service that costs as much as $1.00 per operation, you might want to minimize the number of operations. To do so, create a cache that is large enough to contain the data keys you need. Then, set the byte and message limits high enough to allow reuse while conforming to your security requirements. For example, if your security requirements do not permit a data key to encrypt more than 10 GB of data, setting bytes processed to 10 GB still significantly minimizes operations and conforms to your security requirements.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions, file an issue in the GitHub repos for the Encryption SDK in Java or Python, or start a new thread on the KMS forum.

To help you grant access to specific resources and conditions, the Example Policies page in the AWS Identity and Access Management (IAM) documentation now includes more than thirty policies for you to use or customize to meet your permissions requirements. The AWS Support team developed these policies from their experiences working with AWS customers over the years. The example policies cover common permissions use cases you might encounter across services such as Amazon DynamoDB, Amazon EC2, AWS Elastic Beanstalk, Amazon RDS, Amazon S3, and IAM.

In this blog post, I introduce the updated Example Policies page and explain how to use and customize these policies for your needs.

The new Example Policies page

The Example Policies page in the IAM User Guide now provides an overview of the example policies and includes a link to view each policy on a separate page. Note that each of these policies has been reviewed and approved by AWS Support. If you would like to submit a policy that you have found to be particularly useful, post it on the IAM forum.

To give you an idea of the policies we have included on this page, the following are a few of the EC2 policies on the page:

Choose the Select button next to Create Your Own Policy. You will see an empty policy document with boxes for Policy Name, Description, and Policy Document, as shown in the following screenshot.

Type a name for the policy, copy the policy from the Example Policies page, and paste the policy in the Policy Document box. In this example, I use “start-stop-instances-for-owner-tag” as the policy name and “Allows users to start or stop instances if the instance tag Owner has the value of their user name” as the description.

Update the placeholder text in the policy (see the full policy that follows this step). For example, replace <REGION> with a region from AWS Regions and Endpoints and <ACCOUNTNUMBER> with your 12-digit account number. The IAM policy variable, ${aws:username}, is a dynamic property in the policy that automatically applies to the user to which it is attached. For example, when the policy is attached to Bob, the policy replaces ${aws:username} with Bob. If you do not want to use the key-value pair of Owner and ${aws:username}, you can edit the policy to include your desired key-value pair. For example, if you want to use the key-value pair, CostCenter:1234, you can modify "ec2:ResourceTag/Owner": "${aws:username}" to "ec2:ResourceTag/CostCenter": "1234".
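The policy, with its placeholder values still in place, is similar to the following example (the version on the Example Policies page may differ slightly):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:StartInstances", "ec2:StopInstances"],
      "Resource": "arn:aws:ec2:<REGION>:<ACCOUNTNUMBER>:instance/*",
      "Condition": {
        "StringEquals": {"ec2:ResourceTag/Owner": "${aws:username}"}
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:DescribeInstances",
      "Resource": "*"
    }
  ]
}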

You have created a policy that allows an IAM user to stop and start EC2 instances in your account, as long as these instances have the correct resource tag and the policy is attached to your IAM users. You also can attach this policy to an IAM group and apply the policy to users by adding them to that group.

Summary

We updated the Example Policies page in the IAM User Guide so that you have a central location where you can find examples of the most commonly requested and used IAM policies. In addition to these example policies, we recommend that you review the list of AWS managed policies, including the AWS managed policies for job functions. You can choose these predefined policies from the IAM console and associate them with your IAM users, groups, and roles.

We will add more IAM policies to the Example Policies page over time. If you have a useful policy you would like to share with others, post it on the IAM forum. If you have comments about this post, submit them in the “Comments” section below.

As part of the AWS Shared Responsibility Model, you are responsible for monitoring and managing your resources at the operating system and application level. When you monitor your application servers, for example, you can measure, visualize, react to, and improve the security of those servers. You probably already do this on premises or in other environments, and you can adapt your existing processes, tools, and methodologies for use in the AWS Cloud. For more details about best practices for monitoring your AWS resources, see the “Manage Security Monitoring, Alerting, Audit Trail, and Incident Response” section in the AWS Security Best Practices whitepaper.

This blog post focuses on how to log and create alarms on invalid Secure Shell (SSH) access attempts. Implementing live monitoring and session recording facilitates the identification of unauthorized activity and can help confirm that remote users access only those systems they are authorized to use. With SSH log information in hand (such as invalid access type, bad private keys, and remote IP addresses), you can take proactive actions to protect your servers. For example, you can use an AWS Lambda function to adjust your server’s security rules when an alarm is triggered that indicates an invalid SSH access attempt.

In this post, I demonstrate how to use Amazon CloudWatch Logs to monitor SSH access to your application servers (Amazon EC2 Linux instances) so that you can monitor rejected SSH connection requests and take action. I also show how to configure CloudWatch Logs to send SSH access logs from application servers that reside in a public subnet. Last, I demonstrate how to visualize how many attempts are made to SSH into your application servers with bad private keys and invalid user names. Using these techniques and tools can help you improve the security of your application servers.

AWS services and terminology I use in this post

In this post, I use the following AWS services and terminology:

Amazon CloudWatch – A monitoring service for the resources and applications you run on the AWS Cloud. You can use CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources.

CloudWatch namespaces – Containers for metrics. Metrics in different namespaces are isolated from each other so that metrics from different applications are not mistakenly aggregated into the same statistics. You also can create custom metrics for which you must specify namespaces as containers.

CloudWatch Logs – A feature of CloudWatch that allows you to monitor, store, and access your log files from EC2 instances, AWS CloudTrail, and other sources. Additionally, you can use CloudWatch Logs to monitor applications and systems by using log data and create alarms. For example, you can choose to search for a phrase in logs and then create an alarm if the phrase you are looking for is found in the log more than 5 times in the last 10 minutes. You can then take action on these alarms, if necessary.

Log stream – A log stream represents the sequence of events coming from an application instance or resource that you are monitoring. In this post, I use the EC2 instance ID as the log stream identifier so that I can easily map log entries to the instances that produced them.

Log group – In CloudWatch Logs, a group of log streams that share the same retention time, monitoring, and access control settings. Each log stream must belong to one log group.

Metric – A specific term or value that you can monitor and extract from log events.

Metric filter – A metric filter describes how Amazon CloudWatch Logs extracts information from logs and transforms it into CloudWatch metrics. It defines the terms and patterns to look for in log data as the data is sent to CloudWatch Logs. Metric filters are assigned to log groups, and all metric filters assigned to a given log group are applied to their log stream—see the following diagram for more details.

SSH logs – Reside on EC2 instances and capture all SSH activities. The logs include successful attempts as well as unsuccessful attempts. Debian Linux SSH logs reside in /var/log/auth.log, and stock CentOS SSH logs are written to /var/log/secure. This blog post uses an Amazon Linux AMI, which also logs SSH sessions to /var/log/secure.

AWS Identity and Access Management (IAM) – IAM enables you to securely control access to AWS services and resources for your users. In the solution in this post, you create an IAM policy and configure an EC2 instance that assumes a role. The IAM policy allows the EC2 instance to create log events and save them in an Amazon S3 bucket (in other words, CloudWatch Logs log files are saved in the S3 bucket).

CloudWatch dashboards – Amazon CloudWatch dashboards are customizable home pages in the CloudWatch console that you can use to monitor your resources in a single view, even those resources that are spread across different regions. You can use CloudWatch dashboards to create customized views of the metrics and alarms for your AWS resources.

Architectural overview

The following diagram depicts the services and flow of information between the different AWS services used in this post’s solution.

Here is how the process works, as illustrated and numbered in the preceding diagram:

A CloudWatch Logs agent runs on each EC2 instance. The agents are configured to send SSH logs from the EC2 instance to a log stream identified by an instance ID.

Log streams are aggregated into a log group. As a result, one log group contains all the logs you want to analyze from one or more instances.

You apply metric filters to a log group in order to search for specific keywords. When the metric filter finds specific keywords, the filter counts the occurrences of the keywords in a time-based sliding window. If the occurrence of a keyword exceeds the CloudWatch alarm threshold, an alarm is triggered.

An IAM policy defines a role that gives the EC2 servers permission to create logs in a log group and send log events (new log entries) from EC2 to log groups. This role is then assumed by the application servers.

CloudWatch alarms notify users when a specified threshold has been crossed. For example, you can set an alarm to trigger when more than 2 failed SSH connections happen in a 5-minute period.

The CloudWatch dashboard is used to visualize data and alarms from the monitoring process.

Deploy and test the solution

1. Deploy the solution by using CloudFormation

Now that I have explained how the solution works, I will show how to use AWS CloudFormation to create a stack with the desired solution configuration. CloudFormation allows you to create a stack of resources in your AWS account.

Sign in to the AWS Management Console, choose CloudFormation, choose Create Stack, choose Specify an Amazon S3 template URL and paste the following link in the box: https://s3.amazonaws.com/awsiammedia/public/sample/MonitorSSHActivities/CloudWatchLogs_ssh.yaml

Choose Launch to deploy the stack.

On the Specify Details page, enter the Stack name. Then enter the KeyName, which is the SSH key pair for the region you use. I use this key pair later in this post; if you don’t have a key pair for your current region, follow these instructions to create one. The OperatorEmail is the CloudWatch alarm recipient email address (this field is mandatory to launch the stack), which is the email address to which SSH activity alarms will be sent. You can use the SSHLocation box to limit the IP address range that can access your instances; the default is 0.0.0.0/0, which means that any IP address can access the instance. After specifying these variables, click Next.

On the Options page, tag your instance, and click Next. Tags allow you to assign metadata to AWS resources. For example, you can tag a project’s resources and then use the tag to manage, search for, and filter resources. For more information about tagging, see Tagging Your Amazon EC2 Resources.

Wait until the CloudFormation template shows CREATE_COMPLETE, as shown in the following screenshot. This means your stack was created successfully.

After the stack is created successfully, you have two distinct application servers running, each with a CloudWatch agent. These servers represent a fleet of servers in your infrastructure. Choose the Outputs tab to see more details about the resources, such as the public IP addresses of the servers. You will need to use these IP addresses later in this post in order to trigger log events and alarms.

The CloudWatch log agent on each server is installed at startup and configured to stream SSH log entries from /var/log/secure to CloudWatch via a log stream. CloudWatch aggregates the log streams (ssh.log) from the application servers and saves them in a CloudWatch Logs log group. Each log stream is identified by an instance-ID, as shown in the following screenshot.

The application servers assume a role that gives them permissions to create CloudWatch Logs log files and events. CloudFormation also configures two metrics and associated alarms: ssh/InvalidUser and ssh/Disconnect. The ssh/InvalidUser alarm is triggered when there are more than 2 SSH attempts into any server that include an invalid user name. Similarly, the ssh/Disconnect alarm is triggered when users send more than 10 SSH disconnect requests within 5 minutes.

To review the metrics created by CloudFormation, choose Metrics in the CloudWatch console. A new SSH custom namespace has been created, which contains the two metrics described in the previous paragraph.

You should now have two application servers running and two custom CloudWatch metrics and alarms configured. Now, it’s time to generate log events, trigger alarms, and test the configurations.

2. Test SSH metrics and alarms

Now, let’s trigger an alarm by attempting to SSH with an invalid user name into one of the servers. Use the key pair you specified when launching the stack and connect to one of the Linux instances from a terminal window (replace the placeholder values in the following command).
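With placeholder values, the command looks like the following (the key file name, user name, and host name are all placeholders):

ssh -i "<your-key-pair>.pem" <invalid-user-name>@<application-server-public-DNS>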

The following command is the same as the previous command, but with the placeholder values replaced by actual values.

ssh -i "my-keypair.pem" bad-user@ec2-XX-XXX-XXX.compute-1.amazonaws.com

Because the alarm triggers after two or more unsuccessful SSH login attempts with an invalid user name in 1 minute, run the preceding command a few times. The server’s log captures the bad SSH login attempts, and after a minute, you should see InvalidUserAlarm in the CloudWatch console, as shown in the following screenshot. Choose Alarms to see more details. The alarm should disappear after another minute if there are no more SSH login attempts.

You can also view the history of your alarms by choosing the History tab. CloudWatch metrics are saved for 15 months.

When the CloudFormation stack launches, a topic-registration email is sent to the email address you specified in the template. After you accept the topic registration, you will receive an alarm email with details about the alarm. The email looks like what is shown in the following screenshot.

3. Understanding CloudWatch metric filters and their transformation

The CloudFormation template includes two alarms, InvalidUserAlarm and SSHReceiveddisconnectAlarm, and two metric filters. As I mentioned previously, the metric filters define the pattern you want to match in a CloudWatch Logs log group. When a pattern is found, it transforms into an Amazon metric as defined in the MetricTransformations section of the metric filter.

The following is a snippet of the InvalidUser metric filter. Each pattern match—denoted by FilterPattern—is counted as one metric value as specified in the MetricValue parameter in the MetricTranformations section. The CloudWatch alarm associated with this metric filter will be triggered when the metric value crosses a specified threshold.

4. Create an additional metric filter for bad private key attempts

You can create additional metric filters in CloudWatch Logs to provide better visibility into the SSH activity on your servers. Let’s assume you want to know if there are too many attempts to SSH into your servers with bad private keys. If an attempt is made with a bad private key, the closed connection is logged in the SSH log file.

You can produce this log line by modifying the pem file you are using (a pem file holds your private key). In a terminal window, modify your private key by copying and pasting the following lines in the same directory where your key resides.
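For example, the following two commands copy your key file and overwrite two characters in the copy (a sketch; replace <valid-keys>.pem with the name of your own key file):

$ cp <valid-keys>.pem bad-keys.pem
$ sed -i 's/^\(.\{24\}\)../\1AA/' bad-keys.pem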

These lines simply change the characters at positions 25 and 26 from their current value to the character A, keeping the original pem file intact. Alternatively, you can use nano <valid-keys>.pem from the command line or any other editor, change a character, save the file as bad-keys.pem, and exit the file.

Now, try to use bad-keys.pem to access one of the application servers.

Now, let’s look at the server’s ssh.log file from the CloudWatch Logs console and analyze the error log messages. I need to understand the log format in order to configure a new filter. To review the logs, choose Logs in the navigation pane, and select the log group that was created by CloudFormation (it starts with the name you specified when launching the CloudFormation template).

In particular, notice the following line when you try to SSH with a bad private key.
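The line is similar to the following (the date, host name, process ID, and source IP address will differ in your log):

Jul 10 13:48:27 ip-172-31-40-229 sshd[2273]: Connection closed by 203.0.113.12 [preauth]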

Let’s add a metric filter to capture this line so that we can use this metric later when we build an SSH Dashboard. Copy the following line to the Filter events search box at the top of the console screen and press Enter.

[Mon, day, timestamp, ip, id, msg1 = Connection, msg2 = closed, ...]

You can now see only the lines that match the pattern you specified. These are the lines you want to count and transform into metrics. Each string in the message is represented by a word in the filter. In our example, we are looking for a pattern where the sixth word is Connection and the seventh word is closed. Other words in the log line are not important in this context. The following image depicts the mapping between a string in a log file and a metric filter.

To create the metric filter, choose Logs in the navigation pane of the CloudWatch console. Choose the log groups to which you want to apply the new metric filter and then choose Create Metric Filter. Choose Next.

Paste the filter pattern we used previously (the sixth word equals Connection and the seventh word equals closed) in the Filter Pattern box. Under Select Log Data to Test, select the server you tried to sign in to with the bad private key, and then click Test Pattern. You should see the results that are shown in the following screenshot. When completed, click Assign Metric.

Type SSH for the Metric Namespace and sshClosedConnection-InvalidKeysFilter for Filter Name. Choose Create Filter to see your new metric filter listed. You can use the newly created metric filter to graph the metrics and set alarms. The alarms can be used to inform your administrator via email of any event you specify. In addition, metrics can be used to generate an SNS notification to trigger an AWS Lambda function in order to take proactive actions, such as blocking suspicious IP addresses in a security group.
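If you prefer to script this step instead of using the console, you can create a similar metric filter with the AWS CLI; the log group name and metric name below are placeholders that you would adjust to your own setup:

$ aws logs put-metric-filter \
    --log-group-name <your-log-group-name> \
    --filter-name sshClosedConnection-InvalidKeysFilter \
    --filter-pattern '[Mon, day, timestamp, ip, id, msg1 = Connection, msg2 = closed, ...]' \
    --metric-transformations metricName=sshClosedConnection,metricNamespace=SSH,metricValue=1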

Choose Create Alarm next to Filter Name and follow the instructions to create a CloudWatch alarm.

Back at the Metrics view, you should now have three SSH metric filters under Custom Namespaces. Note that it can take a few minutes for the number of SSH metrics to update from two to three.

5. Create a graph by using a CloudWatch dashboard

After you have configured the metrics, you can display SSH metrics in a graph. CloudWatch dashboards allow you to create reusable graphs of AWS resources and custom metrics so that you can quickly monitor your operational status and identify issues at a glance. Metrics data is kept for a period of two weeks.

In the CloudWatch console, choose Dashboards in the navigation pane, and then choose Create dashboard to create a new graph in a dashboard. Name your dashboard SSH-Dashboard and choose Create dashboard. Choose Line Graph from the graph options and choose Configure.

In the Add metric graph window under Custom Namespace, choose SSH > Metrics with no dimensions. Select all three metrics you have configured (the CloudFormation template configured two metrics and you manually added one more metric).

By default, the metrics are displayed on the graph as an average. However, you configured metrics that are based on summary metrics (for example, the total number of alarms in two minutes). To change the default, choose the Graphed metrics tab, and change the statistic from Average to Sum, as shown in the following screenshot. Also, change the time period from 5 minutes to 1 minute.

Your graphed metrics should look like the following screenshot. When you have provided all the necessary information, choose Create Widget.

You can rename the graph and add static text to give the console more context. To add a text widget, choose Widget and select text. Then edit the widget with markdown language. Your dashboard may then look like the following screenshot.

The consolidated metrics graph displays the number of SSH attempts with bad private keys, invalid user names, and too many disconnects.

Conclusion

In this blog post, I demonstrated how to automate the deployment of the CloudWatch Logs agent, create filters and alarms, and write, test, and apply metrics on the fly from the AWS Management Console. You can then visualize the metrics with the AWS Management Console. The solution described in this post gives you monitoring and alarming capabilities that can help you understand the status of and potential risks to your instances and applications. You can easily aggregate logs from many servers and different applications, create alarms, and display logs’ metrics on a graph.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about the solution in this post, start a new thread on the CloudWatch forum.

In this blog post, I demonstrate the step-by-step process for end-to-end account creation in Organizations as well as how to automate the entire process. I also show how to move a new account into an organizational unit (OU).

Process overview

The following process flow diagram illustrates the steps required to create an account, configure the account, and then move it into an OU so that the account can take advantage of the centralized SCP functionality in Organizations. The tasks in the blue nodes occur in the master account in the organization in question, and the task in the orange node occurs in the new member account I create. In this post, I provide a script (in both Bash/CLI and Python) that you can use to automate this account creation process, and I walk through each step shown in the diagram to explain the process in detail. For the purposes of this post, I use the AWS CLI in combination with CloudFormation to create and configure an account.

The account creation process

Follow the steps in this section to create an account, configure it, and move it into an OU. I am also providing a script and CloudFormation templates that you can use to automate the entire process.

1. Sign in to AWS Organizations

In order to create an account, you must sign in to your organization’s master account with a minimum of the following permissions:

organizations:DescribeOrganization

organizations:CreateAccount

2. Create a new member account

After signing in to your organization’s master account, create a new member account. Before you can create the member account, you need three pieces of information:

An account name – The friendly name of the member account, which you can find on the Accounts tab in the master account.

An email address – The email address of the owner of the new member account. This email address is used by AWS when we need to contact the account owner.

An IAM role name – The name of an IAM role that Organizations automatically preconfigures in the new member account. This role trusts the master account, allowing users in the master account to assume the role, as permitted by the master account administrator. The role also has administrator permissions in the new member account. If you do not change the role’s name, the name defaults to OrganizationAccountAccessRole.
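If you are using the AWS CLI, the command for this step looks similar to the following (a sketch; the shell variables correspond to the placeholder values described next):

$ aws organizations create-account --email $newAccEmail --account-name "$newAccName" --role-name $roleName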

To explain the placeholder values in the preceding command that you must update with your own values:

newAccEmail – The email address of the owner of the new member account. This email address must not already be associated with another AWS account.

newAccName – The friendly name of the new member account.

roleName – The name of an IAM role that Organizations automatically preconfigures in the new member account. The default name is OrganizationAccountAccessRole.

This CLI command returns a request_id that uniquely identifies the request, a value that is required in Step 3.

Important: When you create an account using Organizations, you currently cannot remove this account from your organization. This, in turn, can prevent you from later deleting the organization.

3. Verify account creation

Account creation may take a few seconds to complete, so before doing anything with the newly created account, you must first verify the account creation status. To check the status, you must have at least the following permission:

organizations:DescribeCreateAccountStatus

The following CLI command, with the request_id returned in the previous step as an input parameter, verifies that the account was created:
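For example (a representative command; request_id is the value returned by the create-account request):

$ aws organizations describe-create-account-status --create-account-request-id $request_id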

The command returns the state of your account creation request, which can have three different values: IN_PROGRESS, SUCCEEDED, and FAILED.

4. Assume a role

After you have verified that the new account has been created, configure the account. In order to configure the newly created account, you must sign in with a user who has permission to assume the role submitted in the createAccount API call. In the example in Step 2, I named the role OrganizationAccountAccessRole; however, if you revised the name of the role, you must use that revised name when assuming the role. Note that when an account is created from within an organization, cross-account trust between the master account and programmatically created accounts is automatically established.
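For example, from the CLI you can assume the role with a command similar to the following (the account ID and session name shown are placeholders):

$ aws sts assume-role --role-arn "arn:aws:iam::<new-account-id>:role/OrganizationAccountAccessRole" --role-session-name "NewAccountSetup"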

5. Configure the new account

After you assume the role, build the new account’s networking, IAM, and governance resources as explained in this section. Again, to learn more about and download the account creation script and the templates that can automate this process, see “Automating the entire end-to-end process” later in this post.

Create AWS Config rules to help manage and enforce standards for resources deployed on AWS.

Develop a tagging strategy that specifies a minimum set of tags required on every taggable resource. A tagging rule checks that all resources created or edited fulfill this requirement. A noncompliance report is created to document resources that do not meet the AWS Config rule. AWS Lambda scripts can also be launched as a result of AWS Config rules.

6. Move the new account into an OU

Before allowing your development teams to access the new member account that you configured in the previous steps, apply an SCP to the account to limit the API calls that can be made by all users. To do this, you must move the member account into an OU that has an SCP attached to it.

An OU is a container for accounts. It can contain other OUs, allowing you to create a hierarchy that resembles an upside-down tree with a “root” at the top and OU “branches” that reach down, ending with accounts that are the “leaves” of the tree. When you attach a policy to one of the nodes in the hierarchy, it affects all the branches (OUs) and leaves (accounts) under it. An OU can have exactly one parent, and currently, each account can be a member of exactly one OU.
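The move itself is a single CLI command similar to the following (a sketch; the shell variables correspond to the placeholder values described next):

$ aws organizations move-account --account-id $account_id --source-parent-id $source_parent_id --destination-parent-id $destination_parent_id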

To explain the placeholder values in the preceding command that you must update with your own values:

account_id – The unique identifier (ID) of the account you want to move.

source_parent_id – The unique ID of the root or OU from which you want to move the account.

destination_parent_id – The unique ID of the root or OU to which you want to move the account.

7. Reduce the IAM role permissions

The OrganizationAccountAccessRole is created with full administrative permissions to enable the creation and development of the new member account. After you complete the development process and you have moved the member account into an OU, reduce the permissions of OrganizationAccountAccessRole to match your anticipated use of this role going forward.

Automating the entire end-to-end process

To help you fully automate the process of creating new member accounts, setting up those accounts, and moving new member accounts into an OU, I am providing a script in both Bash/CLI and Python. You can modify or call additional CloudFormation templates as needed.

Download the script and CloudFormation templates

Download the script and CloudFormation templates to help you automate this end-to-end process. The global variables in the script are set in the opening lines of code. Update these variables’ values, and they will flow as input parameters to the API commands when the script is executed. I have prepopulated the roleName by using AWS best practices nomenclature, but you can use a custom name.

I am including the following descriptions of the elements of the script to give you a better idea of how the script works.

Bash/CLI:

Organization-new-acc.sh – An example shell script that includes parameters, account creation, and a call to the JSON sample templates for each of three subtasks in Step 5 earlier in this post.

CF-VPC.json – An example CloudFormation template that creates and configures a VPC in the new member account. Each AWS account must have at least one VPC as a networking construct where you can deploy customer resources. Though AWS does create a default VPC when an account is created, you must configure that VPC to meet your needs. This includes creating subnets with specific IP Classless Inter-Domain Routing (CIDR) blocks, creating gateways (including an Internet gateway, a customer gateway, a VPN tunnel, AWS Storage Gateway, Amazon API Gateway, and a NAT gateway), and VPC peering connections. Network ACLs are also part of this process to limit the source IP addresses and ports that can access the VPC. The VPC created by this script includes four subnets across two Availability Zones. Two of the subnets are public and two are private.

CF-IAM.json – An example CloudFormation template that creates IAM roles in the new member account. As part of a security baseline, you should develop a standard set of IAM roles and related policies. Update this template with the IAM role definitions and policies you want to create in the member account to control privileges and access.

CF-ConfigRules.json – An example CloudFormation template that creates an AWS Config rule to enforce tagging standards on resources created in the new account.

Organization_Output.docx – Example output of the results from running Organization-new-acc.sh.

Python:

Create_account_with_iam.py – An example Python script that creates an account, moves it into an OU, applies an SCP, and then calls additional templates to deploy resources. CF-VPC.json can be called by this script if you first customize the .json file.

Summary

In this post, I have demonstrated the step-by-step process for end-to-end account creation in Organizations as well as how to automate the entire process. I also showed how to move a new account into an OU. This solution should save you some time and help you avoid common issues that tend to crop up in the manual account-creation process. To learn more about the features of Organizations, see the AWS Organizations User Guide. For more information about the APIs used in this post, see the Organizations API Reference.

If you have comments about this blog post, submit them in the “Comments” section below. If you have implementation or troubleshooting questions, start a new thread on the Organizations forum.

In Amazon Cloud Directory, it’s often necessary to add new objects or add relationships between new objects and existing objects to reflect changes in a real-world hierarchy. With Cloud Directory, you can make these changes efficiently by using batch references within batch operations.

Let’s say I want to take an existing child object in a hierarchy, detach it from its parent, and reattach it to another part of the hierarchy. A simple way to do this would be to make a call to get the object’s unique identifier, another call to detach the object from its parent using the unique identifier, and a third call to attach it to a new parent. However, if I use batch references within a batch write operation, I can perform all three of these actions in the same request, greatly simplifying my code and reducing the round trips required to make such changes.

In this post, I demonstrate how to use batch references in a single write request to simplify adding and restructuring a Cloud Directory hierarchy. I have used the AWS SDK for Java for all the sample code in this post, but you can use other language SDKs or the AWS CLI in a similar way.

Using batch references

In my previous post, I demonstrated how to add AnyCompany’s North American warehouses to a global network of warehouses. As time passes and demand grows, AnyCompany launches multiple warehouses in North American cities to fulfill customer orders with continued efficiency. This requires the company to restructure the network to group warehouses in the same region so that the company can apply similar standards to them, such as delivery times, delivery areas, and types of products sold.

For instance, in the NorthAmerica object (see the following diagram), AnyCompany has launched two new warehouses in the Phoenix (PHX) area: PHX_2 and PHX_3. AnyCompany wants to add these new warehouses to the network and regroup them with existing warehouse PHX_1 under the new node, PHX.

The state of the hierarchy before this regrouping is shown in the following diagram, where I added the NorthAmerica warehouses (also represented as NA in the diagram) to the larger network of AnyCompany’s warehouses.

Adding and grouping new warehouses in the NorthAmerica network

I want to add and group the new warehouses with a single request, and using batch references in a batch write lets me do that. A batch reference is just another way of using an object reference, with a name that you define arbitrarily. This allows you to chain operations, which means you can use the result of one operation in a subsequent operation within the same batch write request.

Let’s say I have a batch write request with two batch operations: operation A and operation B. Both batch operations operate on the same object X. In operation A, I use the object X found at /NorthAmerica/Phoenix, and I assign it to a batch reference that I call referencePhoenix. In operation B, I want to modify the same object X, so I use referencePhoenix as the object reference that points to the same unique object X used in operation A. I also will use the same helper method implementation from my previous post for getBatchCreateOperation. To learn more about batch references, see the ObjectReference documentation.

To add and group the new warehouses, I will take advantage of batch references to sequentially:

Detach PHX_1 from the NA node and maintain a reference to PHX_1.

Create a new child node, PHX, and attach it to the NA node.

Create PHX_2 and PHX_3 nodes for the new warehouses.

Link all three nodes—PHX_1 (using the batch reference), PHX_2, and PHX_3—to the PHX node.

The following code example achieves these changes in a single batch by using references. First, the code sets up a createObjectPHX operation to create the PHX parent object and attach it to the parent NorthAmerica object. It then sets up createObjectPHX_2 and createObjectPHX_3 and attaches these new objects to the new PHX object. The code then sets up a detachObject to detach the current PHX_1 object from its parent and assign it to a batch reference. The last operation uses that same batch reference to attach the PHX_1 object to the newly created PHX object. The code example orders these steps sequentially in a batch write operation.
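A boto3 sketch of this batch might look like the following; the ARNs, facet name, attribute name, and selectors are placeholders, the post's Java samples follow the same structure, and, for clarity, the sketch also assigns a batch reference to the new PHX node so that the child create operations can refer to it:

```python
import boto3

clouddirectory = boto3.client('clouddirectory')

# Placeholders: adjust the ARNs, facet, and paths to match your own directory and schema.
directory_arn = 'arn:aws:clouddirectory:us-east-1:111111111111:directory/EXAMPLE'
schema_arn = 'arn:aws:clouddirectory:us-east-1:111111111111:directory/EXAMPLE/schema/warehouse/1.0'

def create_op(link_name, parent_selector, batch_reference=None):
    """Build one BatchWrite CreateObject operation for a warehouse node."""
    op = {
        'CreateObject': {
            'SchemaFacet': [{'SchemaArn': schema_arn, 'FacetName': 'warehouse'}],
            'ObjectAttributeList': [{
                'Key': {'SchemaArn': schema_arn, 'FacetName': 'warehouse', 'Name': 'name'},
                'Value': {'StringValue': link_name},
            }],
            'ParentReference': {'Selector': parent_selector},
            'LinkName': link_name,
        }
    }
    if batch_reference:
        op['CreateObject']['BatchReferenceName'] = batch_reference
    return op

operations = [
    # Create PHX under NorthAmerica and keep a batch reference to it.
    create_op('PHX', '/NorthAmerica', batch_reference='referenceToPHX'),
    # Create the two new warehouses under the PHX node created above.
    create_op('PHX_2', '#referenceToPHX'),
    create_op('PHX_3', '#referenceToPHX'),
    # Detach PHX_1 from NorthAmerica and remember it as referenceToPHX_1.
    {'DetachObject': {
        'ParentReference': {'Selector': '/NorthAmerica'},
        'LinkName': 'PHX_1',
        'BatchReferenceName': 'referenceToPHX_1',
    }},
    # Reattach PHX_1 under the new PHX node by using the batch reference.
    {'AttachObject': {
        'ParentReference': {'Selector': '#referenceToPHX'},
        'ChildReference': {'Selector': '#referenceToPHX_1'},
        'LinkName': 'PHX_1',
    }},
]

clouddirectory.batch_write(DirectoryArn=directory_arn, Operations=operations)
```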

In the preceding code example, I use the batch reference, referenceToPHX_1, in the same batch write operation because I do not have to know the object identifier of that object. If I couldn’t use such a batch reference, I would have to use separate requests to get the PHX_1 identifier, detach it from the NA node, and then attach it to the new PHX node.

I now have the network configuration I want, as shown in the following diagram. I have used a combination of batch operations with batch references to bring new warehouses into the network and regroup them within the same local group of warehouses.

Summary

In this post, I have shown how you can use batch references in a single batch write request to simplify adding and restructuring your existing hierarchies in Cloud Directory. You can use batch references in scenarios where you want to get an object identifier, but don’t want the overhead of using a read operation before a write operation. Instead, you can use a batch reference to refer to an object as part of the intermediate batch operation. To learn more about batch operations, see Batches, BatchWrite, and BatchRead.

If you have comments about this post, submit them in the “Comments” section below. If you have implementation questions, start a new thread on the Directory Service forum.

Amazon Cloud Directory is a hierarchical data store that enables you to build flexible, cloud-native directories for organizing hierarchies of data along multiple dimensions. For example, you can create an organizational structure that you can navigate through multiple hierarchies for reporting structure, location, and cost center.

In this blog post, I demonstrate how you can use Cloud Directory APIs to write and read multiple objects by using batch operations. With batch write operations, you can execute a sequence of operations atomically—meaning that all of the write operations must occur, or none of them do. You also can make your application efficient by reducing the number of required round trips to read and write objects to your directory. I have used the AWS SDK for Java for all the sample code in this blog post, but you can use other language SDKs or the AWS CLI in a similar way.

Using batch write operations

To demonstrate batch write operations, let’s say that AnyCompany’s warehouses are organized to determine the fastest methods to ship orders to its customers. In North America, AnyCompany plans to open new warehouses regularly so that the company can keep up with customer demand while continuing to meet the delivery times to which they are committed.

The following diagram shows part of AnyCompany’s global network, including Asian and European warehouse networks.

Let’s take a look at how I can use batch write operations to add NorthAmerica to AnyCompany’s global network of warehouses, with the first three warehouses in New York City (NYC), Las Vegas (LAS), and Phoenix (PHX).

Adding NorthAmerica to the global network

To add NorthAmerica to the global network, I can use a batch write operation to create and link all the objects in the existing network.

First, I set up a helper method, getBatchCreateOperation, which handles the repetitive work of building create operations. I use it to create an NA object for NorthAmerica and then attach the three city-related nodes: NYC, LAS, and PHX. Because AnyCompany is planning to grow its network, I add a suffix of _1 to each city code (such as PHX_1), which will be helpful hierarchically when the company adds more warehouses within a city. The helper method takes the following parameters:

warehouseName – The name of the warehouse to create in the getBatchCreateOperation object.

directorySchemaARN – The Amazon Resource Name (ARN) of the schema applied to the directory.

parentReference – The object reference of the parent object.

linkName – The unique child path from the parent reference where the object should be attached.

I then use this helper method to set up multiple create operations for NorthAmerica, NewYork, Phoenix, and LasVegas. For the sake of simplicity, I use airport codes to stand for the cities (for example, NYC stands for NewYork).
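A boto3 sketch of the helper and the resulting batch write might look like the following; the ARNs, facet name, attribute name, and link names are placeholders, and the post's Java samples follow the same structure:

```python
import boto3

clouddirectory = boto3.client('clouddirectory')

# Placeholders: adjust the ARNs, facet, and link names to match your own directory and schema.
directory_arn = 'arn:aws:clouddirectory:us-east-1:111111111111:directory/EXAMPLE'
directory_schema_arn = 'arn:aws:clouddirectory:us-east-1:111111111111:directory/EXAMPLE/schema/warehouse/1.0'

def get_batch_create_operation(warehouse_name, directory_schema_arn, parent_reference, link_name):
    """Build one BatchWrite CreateObject operation for a warehouse object."""
    return {
        'CreateObject': {
            'SchemaFacet': [{'SchemaArn': directory_schema_arn, 'FacetName': 'warehouse'}],
            'ObjectAttributeList': [{
                'Key': {'SchemaArn': directory_schema_arn, 'FacetName': 'warehouse', 'Name': 'name'},
                'Value': {'StringValue': warehouse_name},
            }],
            'ParentReference': {'Selector': parent_reference},
            'LinkName': link_name,
        }
    }

operations = [
    # Create NA under the directory root, then the three city warehouses under it.
    get_batch_create_operation('NorthAmerica', directory_schema_arn, '/', 'NorthAmerica'),
    get_batch_create_operation('NYC_1', directory_schema_arn, '/NorthAmerica', 'NYC_1'),
    get_batch_create_operation('LAS_1', directory_schema_arn, '/NorthAmerica', 'LAS_1'),
    get_batch_create_operation('PHX_1', directory_schema_arn, '/NorthAmerica', 'PHX_1'),
]

# All four objects are created and linked atomically in one request.
clouddirectory.batch_write(DirectoryArn=directory_arn, Operations=operations)
```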

Running the preceding code results in a hierarchy for the network with NA added to the network, as shown in the following diagram.

Using batch read operations

Now, let’s say that after I add NorthAmerica to AnyCompany’s global network, an analyst wants to see the updated view of the NorthAmerica warehouse network as well as some information about the newly introduced warehouse configurations for the Phoenix warehouses. To do this, I can use batch read operations to get the network of warehouses for NorthAmerica as well as specifically request the attributes and configurations of the Phoenix warehouses.

To list the children of the NorthAmerica warehouses, I use the BatchListObjectChildren API to get all the children at the path, /NorthAmerica. Next, I want to view the attributes of the Phoenix object, so I use the BatchListObjectAttributes API to read all the attributes of the object at /NorthAmerica/Phoenix, as shown in the following code example.
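A minimal boto3 version of this batch read might look like the following; the directory ARN and selectors are placeholders, and the post's Java samples work the same way:

```python
import boto3

clouddirectory = boto3.client('clouddirectory')

# Placeholder directory ARN for illustration.
directory_arn = 'arn:aws:clouddirectory:us-east-1:111111111111:directory/EXAMPLE'

response = clouddirectory.batch_read(
    DirectoryArn=directory_arn,
    ConsistencyLevel='EVENTUAL',
    Operations=[
        # List all children attached under /NorthAmerica.
        {'ListObjectChildren': {
            'ObjectReference': {'Selector': '/NorthAmerica'},
            'MaxResults': 10,
        }},
        # Read the attributes of the Phoenix object.
        {'ListObjectAttributes': {
            'ObjectReference': {'Selector': '/NorthAmerica/Phoenix'},
            'MaxResults': 10,
        }},
    ],
)

# Each operation gets its own success or exception response (see the next section).
for item in response['Responses']:
    print(item.get('SuccessfulResponse') or item.get('ExceptionResponse'))
```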

Exception handling

Batch operations in Cloud Directory might sometimes fail, and it is important to know how to handle such failures, which differ for write operations and read operations.

Batch write operation failures

If a batch write operation fails, Cloud Directory fails the entire batch operation and returns an exception. The exception contains the index of the operation that failed along with the exception type and message. If you see RetryableConflictException, you can try again with exponential backoff. A simple way to do this is to double the amount of time you wait each time you get an exception or failure. For example, if your first batch write operation fails, wait 100 milliseconds and try the request again. If the second request fails, wait 200 milliseconds and try again. If the third request fails, wait 400 milliseconds and try again.
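As an illustration of this backoff pattern, here is a small Python wrapper around batch_write; the retry limit and starting delay are arbitrary choices, and the exception is identified by the error code returned by the service:

```python
import time

from botocore.exceptions import ClientError

def batch_write_with_backoff(client, directory_arn, operations, max_attempts=5):
    """Retry a batch write with exponential backoff on RetryableConflictException."""
    delay = 0.1  # start at 100 milliseconds
    for attempt in range(max_attempts):
        try:
            return client.batch_write(DirectoryArn=directory_arn, Operations=operations)
        except ClientError as error:
            code = error.response['Error']['Code']
            if code != 'RetryableConflictException' or attempt == max_attempts - 1:
                raise
            time.sleep(delay)
            delay *= 2  # double the wait before the next attempt
```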

Batch read operation failures

If a batch read operation fails, the response contains either a successful response or an exception response. Individual batch read operation failures do not cause the entire batch read operation to fail—Cloud Directory returns individual success or failure responses for each operation.

Limits of batch operations

Batch operations are still constrained by the same Cloud Directory limits as other Cloud Directory APIs. A single batch request does not limit the number of operations it can contain, but the total number of objects being written or read in a single batch request is subject to enforced limits. For example, a total of 20 objects can be written in a single batch request to Cloud Directory, regardless of how many individual operations that batch contains. Similarly, a total of 200 objects can be read in a single batch request to Cloud Directory. For more information, see limits on batch operations.

Summary

In this post, I have demonstrated how you can use batch operations to operate on multiple objects and simplify making complicated changes across hierarchies. In my next post, I will demonstrate how to use batch references within batch write operations. To learn more about batch operations, see Batches, BatchWrite, and BatchRead.

If you have comments about this post, submit them in the “Comments” section below. If you have implementation questions, start a new thread on the Directory Service forum.

Coming soon, AWS will improve the way you sign in to your AWS account. Whether you sign in as your account’s root user or an AWS Identity and Access Management (IAM) user, you will be able to sign in from the AWS Management Console’s home page. This means that if you sign in as an IAM user, you will no longer be required to use an account-specific URL. However, the account-specific URL you use to sign in today will continue to work.

In the new sign-in experience, you can sign in from the homepage using either your root user’s or IAM user’s credentials. In the first step, root users will enter their email address; IAM users will enter their account ID (or account alias). In the second step, root users will enter their password; IAM users will enter their user name and password.

In this blog post, I explain the improvements that are coming soon to the way you sign in to your AWS account as a root user or IAM user. If you use a password manager to help you sign in to AWS or if you use saved bookmarks or settings, you may need to make updates so that they will work with the new sign-in experience.

The new sign-in experience

The new AWS sign-in experience will allow both root users and IAM users to sign in using the Sign In to the Console link on the AWS home page.

Step 1: For root users and IAM users

As shown in the following screenshot, to sign in as a root user, you will type the email address associated with the root account. To sign in as an IAM user, you will type an AWS account ID or account alias. You will then choose Next to proceed to Step 2.

If you usually sign in using the same browser and allow the browser to store AWS cookies, you will skip Step 1 on subsequent sign-in attempts. If you regularly switch users or accounts, AWS recommends that you prevent the sign-in page from storing AWS cookies.

Step 2: For root users

If you enter the email address associated with the root account in Step 1, you will be taken to the second step of signing in to the root account, as shown in the following screenshot. Type the password of the root account and choose Sign in. If you enabled multi-factor authentication (MFA) for your root account, you will then be prompted to enter the code from your MFA device. After successful authentication, you will be signed in to the AWS Management Console, and the homepage of your root account will be displayed.

Step 2: For IAM users

If you enter an AWS account ID or account alias in Step 1, you will be taken to the second step for signing in as an IAM user, as shown in the following screenshot. Type the user name and password of the IAM user, and choose Sign in. If MFA has been enabled for your IAM user, you will then be prompted to enter the code from your MFA device. After successful authentication, the IAM user home page will be displayed.

With these changes, you may need to make updates to password managers and bookmarks so that they will work with the new sign-in experience. We will publish another Security Blog post when the updated sign-in experience is available.

If you have comments about the upcoming changes to how your root user and IAM users will sign in to your AWS account, enter a comment in the “Comments” section below. If you have questions, start a new thread on the IAM forum.

With AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as AWS Microsoft AD, you can now create and enforce custom password policies for your Microsoft Windows users. AWS Microsoft AD now includes five empty password policies that you can edit and apply with standard Microsoft password policy tools such as Active Directory Administrative Center (ADAC). With this capability, you are no longer limited to the default Windows password policy. Now, you can configure even stronger password policies and define lockout policies that specify when to lock out an account after login failures.

In this blog post, I demonstrate how to edit these new password policies to help you meet your security standards by using AWS Microsoft AD. I also introduce the password attributes you can modify and demonstrate how to apply password policies to user groups in your domain.

Prerequisites

The instructions in this post assume that you already have the following components running: an AWS Microsoft AD directory, and an Amazon EC2 for Windows Server instance that is joined to the directory's domain and has Active Directory Administrative Center (ADAC) installed.

Scenario overview

Let’s say I am the Active Directory (AD) administrator of Example Corp. At Example Corp., we have a group of technical administrators, several groups of senior managers, and general, nontechnical employees. I need to create password policies for these groups that match our security standards.

Our general employees have access only to low-sensitivity information. However, our senior managers regularly access confidential information and we want to enforce password complexity (a mix of upper and lower case letters, numbers, and special characters) to reduce the risk of data theft. For our administrators, we want to enforce password complexity policies to prevent unauthorized access to our system administration tools.

Our security standards call for the following enforced password and account lockout policies:

General employees – To make it easier for nontechnical general employees to remember their passwords, we do not enforce password complexity. However, we want to enforce a minimum password length of 8 characters and a lockout policy after 6 failed login attempts as a minimum bar to protect against unwanted access to our low-sensitivity information. If a general employee forgets their password and becomes locked out, we let them try again in 5 minutes, rather than require escalated password resets. We also want general employees to rotate their passwords every 60 days with no duplicated passwords in the past 10 password changes.

Senior managers – For senior managers, we enforce a minimum password length of 10 characters and require password complexity. An account lockout is enforced after 6 failed attempts with an account lockout duration of 15 minutes. Senior managers must rotate their passwords every 45 days, and they cannot duplicate passwords from the past 20 changes.

Administrators – For administrators, we enforce password complexity with a minimum password length of 15 characters. We also want to lock out accounts after 6 failed attempts, have password rotation every 30 days, and disallow duplicate passwords in the past 30 changes. When a lockout occurs, we require a special administrator to intervene and unlock the account so that we can be aware of any potential hacking.

Fine-Grained Password Policy administrators – To ensure that only trusted administrators unlock accounts, we have two special administrator accounts (admin and midas) that can unlock accounts. These two accounts have the same policy as the other administrators except they have an account lockout duration of 15 minutes, rather than requiring a password reset. These two accounts are also the accounts used to manage Example Corp.’s password policies.

The following table summarizes how I edit each of the four policies I intend to use.

Policy name | EXAMPLE-PSO-01 | EXAMPLE-PSO-02 | EXAMPLE-PSO-03 | EXAMPLE-PSO-05
Precedence | 10 | 20 | 30 | 50
User group | Fine-Grained Password Policy Administrators | Other Administrators | Senior Managers | General Employees
Minimum password length | 15 | 15 | 10 | 8
Password complexity | Enable | Enable | Enable | Disable
Maximum password age | 30 days | 30 days | 45 days | 60 days
Account complexity | Enable | Enable | Enable | Disable
Number of failed logon attempts allowed | 6 | 6 | 6 | 6
Duration | 15 minutes | Not applicable | 15 minutes | 5 minutes
Password history | 24 | 30 | 20 | 10
Until admin manually unlocks account | Not applicable | Selected | Not applicable | Not applicable

To implement these password policies, I use 4 of the 5 new password policies available in AWS Microsoft AD: EXAMPLE-PSO-01, EXAMPLE-PSO-02, EXAMPLE-PSO-03, and EXAMPLE-PSO-05.

I first explain how to configure the password policies.

I then demonstrate how to apply the four password policies that match Example Corp.’s security standards for these user groups.

1. Configure password policies in AWS Microsoft AD

To help you get started with password policies, AWS has added the Fine-Grained Pwd Policy Admins AD security group to your AWS Microsoft AD directory. Any user or other security group that is part of the Fine-Grained Pwd Policy Admins group has permissions to edit and apply the five new password policies. By default, your directory Admin is part of the new group and can add other users or groups to this group.

Adding users to the Fine-Grained Pwd Policy Admins user group

Follow these steps to add more users or AD security groups to the Fine-Grained Pwd Policy Admins security group so that they can administer fine-grained password policies:

Launch ADAC from your managed instance.

Switch to the Tree View and navigate to CORP > Users.

Find the Fine-Grained Pwd Policy Admins user group. Add any users or groups in your domain to this group.

Edit password policies

To edit fine-grained password policies, open ADAC from any management instance joined to your domain. Switch to the Tree View and navigate to System > Password Settings Container. You will see the five policies containing the string -PSO- that AWS added to your directory, as shown in the following screenshot. Select a policy to edit it.

After editing the password policy, apply the policy by adding users or AD security groups to these policies by choosing Add. The default domain GPO applies if you do not configure any of the five password policies. For additional details about using Password Settings Container, go to Step-by-Step: Enabling and Using Fine-Grained Password Policies in AD on the Microsoft TechNet Blog.

The password attributes you can edit

AWS allows you to edit all of the password attributes except Precedence (I explain more about Precedence in the next section). These attributes include:

Password history

Minimum password length

Minimum password age

Maximum password age

Store password using reversible encryption

Password must meet complexity requirements

You also can enforce the following attributes for account lockout settings: the number of failed logon attempts allowed before lockout, the account lockout duration, and whether an account stays locked until an administrator manually unlocks it.

Understanding password policy precedence

AD password policies have a precedence (a numerical attribute that AD uses to determine the resultant policy) associated with them. Policies with a lower value for Precedence have higher priority than other policies. A user inherits all policies that you apply directly to the user or to any groups to which the user belongs. For example, suppose jsmith is a member of the HR group and also a member of the MANAGERS group. If I apply a policy with a Precedence of 50 to the HR group and a policy with a Precedence of 40 to MANAGERS, the policy with the Precedence value of 40 ranks higher and AD applies that policy to jsmith.

If you apply multiple policies to a user or group, the resultant policy is determined as follows by AD:

If you apply one or more policies directly to a user, AD enforces the directly applied policy with the lowest Precedence value.

If you did not apply a policy directly to the user, AD enforces the policy with the lowest Precedence value of all policies inherited by the user through the user’s group membership.
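To make these resolution rules concrete, the following short Python sketch (an illustration only, not an AD API) applies them to the jsmith example above:

```python
def resultant_policy(direct_policies, inherited_policies):
    """Pick the effective fine-grained password policy for a user.

    direct_policies: policies applied directly to the user.
    inherited_policies: policies applied to groups the user belongs to.
    Each policy is a dict with at least a 'precedence' key; lower wins,
    and directly applied policies take priority over inherited ones.
    """
    candidates = direct_policies or inherited_policies
    if not candidates:
        return None  # the default domain GPO applies
    return min(candidates, key=lambda policy: policy['precedence'])

# Example from the post: jsmith inherits HR (Precedence 50) and MANAGERS (Precedence 40).
hr = {'name': 'HR-policy', 'precedence': 50}
managers = {'name': 'MANAGERS-policy', 'precedence': 40}
assert resultant_policy([], [hr, managers])['name'] == 'MANAGERS-policy'
```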

2. Apply password policies to user groups

In this section, I demonstrate how to apply Example Corp.’s password policies. Except in rare cases, I apply policies only by group membership, which ensures that AD does not enforce a lower priority policy on an individual user if I have added them to a group with a higher priority policy.

Because my directory is new, I use a Remote Desktop Protocol (RDP) connection to sign in to the Windows Server instance I domain joined to my AWS Microsoft AD directory. Signing in with the admin account, I launch ADAC to perform the following tasks:

First, I set up my groups so that I can apply password policies to them. Later, I can create user accounts and add them to my groups and AD applies the right policy by using the policy precedence and resultant policy algorithms I discussed previously. I start by adding the two special administrative accounts (admin and midas) that I described previously to the Fine-Grained Pwd Policy Admins. Because AWS Microsoft AD adds my default admin account to Fine-Grained Pwd Policy Admins, I only need to create midas and then add midas to the Fine-Grained Pwd Policy Admins group.

Next, I create the Other Administrators, Senior Managers, and General Employees groups that I described previously, as shown in the following screenshot.

For this post’s example, I use these four policies:

EXAMPLE-PSO-01 (highest priority policy) – For the administrators who manage Example Corp.’s password policies. Applying this highest priority policy to the Fine-Grained Pwd Policy Admins group prevents these users from being locked out if they also are assigned to a different policy.

EXAMPLE-PSO-02 (the second highest priority policy) – For Example Corp.’s other administrators.

EXAMPLE-PSO-03 (the third highest priority policy) – For Example Corp.’s senior managers.

EXAMPLE-PSO-05 (the lowest priority policy) – For Example Corp.’s general employees.

This leaves me one password policy (EXAMPLE-PSO-04) that I can use in the future if needed.

I start by editing the policy, EXAMPLE-PSO-01. To edit the policy, I follow the Edit password policies section from earlier in this post. When finished, I add the Fine-Grained Pwd Policy Admins group to that policy, as shown in the following screenshot. I then repeat the process for each of the remaining policies, as described in the Scenario overview section earlier in this post.

Though AD enforces new password policies, how quickly a policy takes effect can vary depending on how password policies replicate in the directory, which attributes were changed, and when users change their passwords. In general, after the policies have replicated throughout the directory, attributes that affect account lockout and password age take effect. Attributes that affect the quality of a password, such as password length, take effect when the password is changed. If the password age for a user is in compliance, but their password strength is out of compliance, the user is not forced to change their password. For more information about password policy impact, see this Microsoft TechNet article.

Summary

In this post, I have demonstrated how you can configure strong password policies to meet your security standards by using AWS Microsoft AD. To learn more about AWS Microsoft AD, see the AWS Directory Service home page.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about this blog post, start a new thread on the Directory Service forum.

You can now increase the redundancy and performance of your AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as AWS Microsoft AD, directory by deploying additional domain controllers. Adding domain controllers increases redundancy, resulting in even greater resilience and higher availability. This new capability enables you to have at least two domain controllers operating, even if an Availability Zone were to be temporarily unavailable. The additional domain controllers also improve the performance of your applications by enabling directory clients to load-balance their requests across a larger number of domain controllers. For example, AWS Microsoft AD enables you to use larger fleets of Amazon EC2 instances to run .NET applications that perform frequent user attribute lookups.

AWS Microsoft AD is a highly available, managed Active Directory built on actual Microsoft Windows Server 2012 R2 in the AWS Cloud. When you create your AWS Microsoft AD directory, AWS deploys two domain controllers that are exclusively yours in separate Availability Zones for high availability. Now, you can deploy additional domain controllers easily via the Directory Service console or API, by specifying the total number of domain controllers that you want.

AWS Microsoft AD distributes the additional domain controllers across the Availability Zones and subnets within the Amazon VPC where your directory is running. AWS deploys the domain controllers, configures them to replicate directory changes, monitors for and repairs any issues, performs daily snapshots, and updates the domain controllers with patches. This reduces the effort and complexity of creating and managing your own domain controllers in the AWS Cloud.

In this blog post, I create an AWS Microsoft AD directory with two domain controllers in each Availability Zone. This ensures that I always have at least two domain controllers operating, even if an entire Availability Zone were to be temporarily unavailable. To accomplish this, first I create an AWS Microsoft AD directory with one domain controller per Availability Zone, and then I deploy one additional domain controller per Availability Zone.

Solution architecture

The following diagram shows how AWS Microsoft AD deploys all the domain controllers in this solution after you complete Steps 1 and 2. In Step 1, AWS Microsoft AD deploys the two required domain controllers across multiple Availability Zones and subnets in an Amazon VPC. In Step 2, AWS Microsoft AD deploys one additional domain controller per Availability Zone and subnet.

Step 1: Create an AWS Microsoft AD directory

First, I create an AWS Microsoft AD directory in an Amazon VPC. I can add domain controllers only after AWS Microsoft AD configures my first two required domain controllers. In my example, my domain name is example.com.

When I create my directory, I must choose the VPC in which to deploy my directory (as shown in the following screenshot). Optionally, I can choose the subnets in which to deploy my domain controllers, and AWS Microsoft AD ensures I select subnets from different Availability Zones. In this case, I have no subnet preference, so I choose No Preference from the Subnets drop-down list. In this configuration, AWS Microsoft AD selects subnets from two different Availability Zones to deploy the directory.

I then choose Next Step to review my configuration, and then choose Create Microsoft AD. It takes approximately 40 minutes for my domain controllers to be created. I can check the status from the AWS Directory Service console, and when the status is Active, I can add my two additional domain controllers to the directory.

Step 2: Deploy two more domain controllers in the directory

Now that I have created an AWS Microsoft AD directory and it is active, I can deploy two additional domain controllers in the directory. AWS Microsoft AD enables me to add domain controllers through the Directory Service console or API. In this post, I use the console.

To deploy two more domain controllers in the directory:

I open the AWS Management Console, choose Directory Service, and then choose the Microsoft AD Directory ID. In my example, my recently created directory is example.com, as shown in the following screenshot.

I choose the Domain controllers tab next. Here I can see the two domain controllers that AWS Microsoft AD created for me in Step 1. It also shows the Availability Zones and subnets in which AWS Microsoft AD deployed the domain controllers.

I then choose Modify on the Domain controllers tab. I specify the total number of domain controllers I want by choosing the subtract and add buttons. In my example, I want four domain controllers in total for my directory.

I choose Apply. AWS Microsoft AD deploys the two additional domain controllers and distributes them evenly across the Availability Zones and subnets in my Amazon VPC. Within a few seconds, I can see the Availability Zones and subnets in which AWS Microsoft AD deployed my two additional domain controllers with a status of Creating (see the following screenshot). While AWS Microsoft AD deploys the additional domain controllers, my directory continues to operate by using the active domain controllers—with no disruption of service.

When AWS Microsoft AD completes the deployment steps, all domain controllers are in Active status and available for use by my applications. As a result, I have improved the redundancy and performance of my directory.

Note: After deploying additional domain controllers, I can reduce the number of domain controllers by repeating the modification steps with a lower number of total domain controllers. Unless a directory is deleted, AWS Microsoft AD does not allow fewer than two domain controllers per directory in order to deliver fault tolerance and high availability.
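If you prefer the API to the console, the same change can be made with a single call. The following boto3 sketch uses a placeholder directory ID and then lists the domain controllers so that you can watch them move from Creating to Active:

```python
import boto3

ds = boto3.client('ds')  # AWS Directory Service

# 'd-1234567890' is a placeholder directory ID; replace it with your own.
ds.update_number_of_domain_controllers(
    DirectoryId='d-1234567890',
    DesiredNumber=4,
)

# Show where each domain controller runs and its current status.
controllers = ds.describe_domain_controllers(DirectoryId='d-1234567890')['DomainControllers']
for dc in controllers:
    print(dc['AvailabilityZone'], dc['SubnetId'], dc['Status'])
```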

Summary

In this blog post, I demonstrated how to deploy additional domain controllers in your AWS Microsoft AD directory. By adding domain controllers, you increase the redundancy and performance of your directory, which makes it easier for you to migrate and run mission-critical Active Directory–integrated workloads in the AWS Cloud without having to deploy and maintain your own AD infrastructure.

As an AWS Professional Services consultant, I work directly with AWS customers on a daily basis. One of my customers recently asked me to provide a solution that helps them fulfill their security requirements by sending the flow log data from VPC Flow Logs to a central AWS account. This is a common requirement because it makes the logs available in one place for in-depth analysis. In addition, my customers regularly request a simple, scalable, and serverless solution that doesn’t require them to create and maintain custom code.

In this blog post, I demonstrate how to configure your AWS accounts to send flow log data from VPC Flow Logs to an Amazon S3 bucket located in a central AWS account by using only fully managed AWS services. The benefit of using fully managed services is that you can lower or even completely eliminate operational costs because AWS manages the resources and scales the resources automatically.

Solution overview

The solution in this post uses VPC Flow Logs, which is configured in a source account to send flow logs to an Amazon CloudWatch Logs log group. To receive the logs from multiple accounts, this solution uses a CloudWatch Logs destination in the central account. Finally, the solution utilizes fully managed Amazon Kinesis Firehose, which delivers streaming data to scalable and durable S3 object storage automatically without the need to write custom applications or manage resources. When the logs are processed and stored in an S3 bucket, these can be tiered into a lower cost, long-term storage solution (such as Amazon Glacier) automatically to help meet any company-specific or industry-specific requirements for data retention.

The following diagram illustrates the process and components of the solution described in this post.

As numbered in the preceding diagram, these are the high-level steps for implementing this solution:

Configure VPC Flow Logs in each source account to send flow log data to a CloudWatch Logs log group.

In the central account, create an S3 bucket and a Kinesis Firehose delivery stream that delivers incoming data to that bucket.

In the central account, create a CloudWatch Logs destination that forwards the log data to the Kinesis Firehose delivery stream and permits the source accounts to subscribe to it.

Finally, in the source accounts, set a subscription filter on the CloudWatch Logs log group to send data to the CloudWatch Logs destination.

Configure the solution by using AWS CloudFormation and the AWS CLI

Now that I have explained the solution, its benefits, and the components involved, I will show how to configure a source account by using the AWS CLI and the central account using a CloudFormation template. To implement this, you need two separate AWS accounts. If you need to set up a new account, navigate to the AWS home page, choose Create an AWS Account, and follow the instructions. Alternatively, you can use AWS Organizations to create your accounts. See AWS Organizations – Policy-Based Management for Multiple AWS Accounts for more details and some step-by-step instructions about how to use this service.

Note your source and target account IDs, source account VPC ID number, and target account region where you create your resources with the CloudFormation template. You will need these values as part of implementing this solution.

Gather the required configuration information

To find your AWS account ID number in the AWS Management Console, choose Support in the navigation bar, and then choose Support Center. Your currently signed-in, 12-digit account ID number appears in the upper-right corner under the Support menu. Sign in to both the source and target accounts and take note of their respective account numbers.
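If you prefer not to use the console, you can also retrieve the account ID of the credentials you are currently using with a single call to the STS GetCallerIdentity API; a minimal boto3 sketch:

```python
import boto3

# Returns the 12-digit account ID of the credentials currently in use.
account_id = boto3.client('sts').get_caller_identity()['Account']
print(account_id)
```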

To find your VPC ID number in the AWS Management Console, sign in to your source account and choose Services. In the search box, type VPC, and then choose VPC (Isolated Cloud Resources).

In the VPC console, click Your VPCs, as shown in the following screenshot.

Note your VPC ID.

Finally, take a note of the region in which you are creating your resources in the target account. The region name is shown in the region selector in the navigation bar of the AWS Management Console (see the following screenshot), and the region is shown in your browser’s address bar. Sign in to your target account, choose Services, and type CloudFormation. In the following screenshot, the region name is Ireland and the region is eu-west-1.

Create the resources in the central account

In this example, I use a CloudFormation template to create all related resources in the central account. You can use CloudFormation to create and manage a collection of AWS resources called a stack. CloudFormation also takes care of the resource provisioning for you.

The provided CloudFormation template creates an S3 bucket that will store all the VPC Flow Logs, IAM roles with associated IAM policies used by Kinesis Firehose and the CloudWatch Logs destination, a Kinesis Firehose delivery stream, and a CloudWatch Logs destination.

To configure the resources in the central account:

Sign in to the AWS Management Console, navigate to CloudFormation, and then choose Create Stack. On the Select Template page, type the URL of the CloudFormation template (https://s3.amazonaws.com/awsiammedia/public/sample/VPCFlowLogsCentralAccount/targetaccount.template) for the target account and choose Next.

The CloudFormation template defines a parameter, paramDestinationPolicy, which sets the IAM policy on the CloudWatch Logs destination. This policy governs which AWS accounts can create subscription filters against this destination.

Change the Principal to your SourceAccountID, and in the Resource section, change TargetAccountRegion and TargetAccountID to the values you noted in the previous section.
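For illustration, a destination policy generally has the following shape; the account IDs, region, and destination name in this Python sketch are placeholders that you replace with the values you noted earlier:

```python
import json

# Illustrative CloudWatch Logs destination policy. SOURCE_ACCOUNT_ID,
# TARGET_ACCOUNT_REGION, TARGET_ACCOUNT_ID, and the destination name are
# placeholders for your own values.
destination_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "SOURCE_ACCOUNT_ID"},
            "Action": "logs:PutSubscriptionFilter",
            "Resource": "arn:aws:logs:TARGET_ACCOUNT_REGION:TARGET_ACCOUNT_ID:destination:VPCFlowLogsDestination"
        }
    ]
}
print(json.dumps(destination_policy, indent=2))
```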

After you have updated the policy, choose Next. On the Options page, choose Next.

On the Review page, scroll down and choose the I acknowledge that AWS CloudFormation might create IAM resources with custom names check box. Finally, choose Create to create the stack.

After you are done creating the stack, verify the status of the resources. Choose the check box next to the Stack Name and choose the Resources tab, where you can see a list of all the resources created with this template and their status.

This completes the configuration of the central account.

Configure a source account

Now that you have created all the necessary resources in the central account, I will show you how to configure a source account by using the AWS CLI, a tool for managing your AWS resources. For more information about installing and configuring the AWS CLI, see Configuring the AWS CLI.

To configure a source account:

Create the IAM role that will grant VPC Flow Logs the permissions to send data to the CloudWatch Logs log group. To start, create a trust policy in a file named TrustPolicyForCWL.json by using the following policy document.

Now, create a permissions policy to define which actions VPC Flow Logs can perform on the source account. Start by creating the permissions policy in a file named PermissionsForVPCFlowLogs.json. The following set of permissions (in the Action element) is the minimum requirement for VPC Flow Logs to be able to send data to the CloudWatch Logs log group.

Finally, change the destination ARN in the subscription filter command to reflect your TargetAccountRegion and TargetAccountID, and run the command to subscribe your CloudWatch Logs log group to the CloudWatch Logs destination in the central account.
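The trust policy, the permissions policy, and the subscription step described above can be sketched together with boto3 as follows; the role name, log group name, VPC ID, destination name, and account values are placeholders, and the sketch also creates the flow log itself, which publishes to the log group:

```python
import json

import boto3

iam = boto3.client('iam')
ec2 = boto3.client('ec2')
logs = boto3.client('logs')

# Placeholders to replace with your own values.
source_vpc_id = 'vpc-xxxxxxxx'
log_group_name = 'vpc-flow-logs'
target_account_id = '111111111111'
target_account_region = 'eu-west-1'
destination_name = 'VPCFlowLogsDestination'  # name of the destination in the central account

# Trust policy that lets the VPC Flow Logs service assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "vpc-flow-logs.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }]
}

# Minimum permissions VPC Flow Logs needs to publish to CloudWatch Logs.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents",
            "logs:DescribeLogGroups",
            "logs:DescribeLogStreams"
        ],
        "Resource": "*"
    }]
}

role = iam.create_role(
    RoleName='flowlogsRole',
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.put_role_policy(
    RoleName='flowlogsRole',
    PolicyName='PermissionsForVPCFlowLogs',
    PolicyDocument=json.dumps(permissions_policy),
)

# Create the log group and the flow log that publishes to it.
logs.create_log_group(logGroupName=log_group_name)
ec2.create_flow_logs(
    ResourceIds=[source_vpc_id],
    ResourceType='VPC',
    TrafficType='ALL',
    LogGroupName=log_group_name,
    DeliverLogsPermissionArn=role['Role']['Arn'],
)

# Subscribe the log group to the CloudWatch Logs destination in the central account.
logs.put_subscription_filter(
    logGroupName=log_group_name,
    filterName='FlowLogsToCentralAccount',
    filterPattern='',
    destinationArn=f'arn:aws:logs:{target_account_region}:{target_account_id}:destination:{destination_name}',
)
```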

This completes the configuration of this central logging solution. Within a few minutes, you should start seeing compressed logs sent from your source account to the S3 bucket in your central account (see the following screenshot). You can process and analyze these logs with the analytics tool of your choice. In addition, you can tier the data into a lower cost, long-term storage solution such as Amazon Glacier and archive the data to meet the data retention requirements of your company.

Summary

In this post, I demonstrated how to configure AWS accounts to send VPC Flow Logs to an S3 bucket located in a central AWS account by using only fully managed AWS services. If you have comments about this post, submit them in the “Comments” section below. If you have questions about implementing this solution, start a new thread on the CloudWatch forum.

After you create your first AWS account, you might be tempted to start immediately addressing the issue that brought you to AWS. For example, you might set up your first website, spin up a virtual server, or create your first storage solution. However, AWS recommends that first, you follow some security best practices to help protect your AWS resources. In this blog post, I explain why you should follow AWS security best practices, and I link to additional resources so that you can learn more about each best practice.

Best practices to help secure your AWS resources

When you created an AWS account, you specified an email address and password you use to sign in to the AWS Management Console. When you sign in using these credentials, you are accessing the console by using your root account. Following security best practices can help prevent your root account from being compromised, which is an important safeguard because your root account has access to all services and resources in your account.

Create a strong password for your AWS resources

To help ensure that you protect your AWS resources, first set a strong password with a combination of letters, numbers, and special characters. For more information about password policies and strong passwords, see Setting an Account Password Policy for IAM Users. This also might be a good opportunity to use a third-party password management tool, which you can use to not only create strong passwords but also share those credentials securely with other members of your organization.

Use a group email alias with your AWS account

If for any reason you are unavailable to respond to an AWS notification or manage your AWS Cloud workloads, using a group email alias with your AWS account means other trusted members of your organization can manage the account in your absence. To update the email address used with your account, see Managing an AWS Account.

Enable multi-factor authentication

Multi-factor authentication (MFA) is a security capability that provides an additional layer of authentication on top of your user name and password. When using MFA, after you sign in with your user name and password (what you know), you must also provide an additional piece of information that only you have physical access to (what you have), which can come from a dedicated MFA hardware device or an app on a phone.

Set up AWS IAM users, groups, and roles for daily account access

To manage and control access and permissions to your AWS resources, use AWS Identity and Access Management (IAM) to create users, groups, and roles. When you create an IAM user, group, or role, it can access only the AWS resources to which you explicitly grant permissions, which is also known as least privilege.

Delete your account’s access keys

You can allow programmatic access to your AWS resources from the command line or for use with AWS APIs. However, AWS recommends that you do not create or use the access keys associated with your root account for programmatic access. In fact, if you still have access keys, delete them. Instead, create an IAM user and grant that user only the permissions needed for the APIs you are planning to call. You can then use that IAM user to issue access keys. To learn more, see Managing Access Keys for Your AWS Account.
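For example, a minimal boto3 sketch of creating such an IAM user and issuing an access key follows; the user name and the attached managed policy are placeholders, and you should scope the permissions to the APIs you actually plan to call:

```python
import boto3

iam = boto3.client('iam')

# 'deploy-user' and the managed policy below are illustrative; grant only the
# permissions that the APIs you plan to call actually require (least privilege).
iam.create_user(UserName='deploy-user')
iam.attach_user_policy(
    UserName='deploy-user',
    PolicyArn='arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess',
)

# Issue access keys for the IAM user instead of using root access keys;
# delete any root access keys from the Security Credentials page of the console.
keys = iam.create_access_key(UserName='deploy-user')['AccessKey']
print(keys['AccessKeyId'])
```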
