You can now configure CloudFront to add custom headers, or override the values of existing request headers, when CloudFront forwards requests to your origin. You can use these headers as a shared secret to help validate that requests made to your origin were sent from CloudFront, and configure your origin to allow only requests that contain the custom header values that you specify. This feature also helps with setting up Cross-Origin Resource Sharing (CORS) for your CloudFront distribution: you can configure CloudFront to always add custom headers to requests to your origin to accommodate viewers that don't automatically include those headers. It also allows you to stop varying the cache on the Origin header, which improves your cache hit ratio, while still forwarding the headers your origin needs to respond with the CORS header. For more information, see Forwarding Custom Headers to Your Origin in the Amazon CloudFront Developer Guide.
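
For illustration, a minimal origin-side check might look like the sketch below. The header name `X-Origin-Verify` and the secret value are placeholders; you would configure the same pair as a custom origin header in your CloudFront distribution.

```python
import hmac

# Placeholder header name and secret; configure the same pair as a custom
# origin header in your CloudFront distribution.
SHARED_SECRET_HEADER = "X-Origin-Verify"
SHARED_SECRET_VALUE = "replace-with-a-long-random-string"

def is_from_cloudfront(headers):
    """Accept the request only if it carries the expected shared secret."""
    supplied = headers.get(SHARED_SECRET_HEADER, "")
    # Constant-time comparison avoids leaking the secret via timing.
    return hmac.compare_digest(supplied, SHARED_SECRET_VALUE)

print(is_from_cloudfront({"X-Origin-Verify": "replace-with-a-long-random-string"}))  # True
print(is_from_cloudfront({}))  # False
```

Your origin would then reject (for example, with a 403) any request for which this check fails.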

You can now launch D2 instances, the latest generation of Amazon EC2 Dense-storage instances. D2 instances are designed for workloads that require high sequential read and write access to very large data sets, such as Hadoop distributed computing, massively parallel processing data warehousing, and log processing applications.

You can now use two additional parameters for running Docker-enabled applications on Amazon EC2 Container Service (ECS). You can set the minimumHealthyPercent parameter to provide a lower limit on the number of running tasks during a deployment, enabling you to deploy without using additional cluster capacity. You can also set the maximumPercent parameter to provide an upper limit on the number of running tasks during a deployment, enabling you to define the deployment batch size. To learn more about these deployment options, see the documentation.
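
As an illustration of how these two parameters bound a deployment, the sketch below computes the task-count limits for a service. The round-up/round-down behavior shown is our reading of the documented semantics; verify against the current ECS documentation.

```python
import math

def deployment_task_bounds(desired_count, minimum_healthy_percent, maximum_percent):
    """Bounds on the number of running tasks during an ECS service deployment.

    The lower bound (minimumHealthyPercent) rounds up and the upper bound
    (maximumPercent) rounds down, per our reading of the ECS documentation.
    """
    lower = math.ceil(desired_count * minimum_healthy_percent / 100)
    upper = math.floor(desired_count * maximum_percent / 100)
    return lower, upper

# With 4 desired tasks, minimumHealthyPercent=50 and maximumPercent=200,
# ECS may run between 2 and 8 tasks while the deployment proceeds.
print(deployment_task_bounds(4, 50, 200))  # (2, 8)
```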

You can now use additional Amazon CloudWatch metrics published by Amazon ECS for monitoring the reserved amount of CPU and memory resources used by running tasks in your clusters. You can create a CloudWatch alarm using these metrics that will add more Amazon EC2 instances to the Auto Scaling group when a cluster’s available capacity drops below a threshold you define. For more information on how to scale using CloudWatch alarms and Auto Scaling groups, see the documentation.
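
As a sketch, the parameters below describe such an alarm on the cluster-level CPUReservation metric. The cluster name, threshold, and scaling-policy ARN are placeholders; you would pass this dict to CloudWatch's PutMetricAlarm API, for example via boto3's `put_metric_alarm`.

```python
# Sketch only: parameter dict for a CloudWatch alarm on the ECS
# CPUReservation metric. "my-cluster" and the AlarmActions ARN are
# placeholders for your own cluster and Auto Scaling scale-out policy.
alarm_params = {
    "AlarmName": "my-cluster-cpu-reservation-high",
    "Namespace": "AWS/ECS",
    "MetricName": "CPUReservation",
    "Dimensions": [{"Name": "ClusterName", "Value": "my-cluster"}],
    "Statistic": "Average",
    "Period": 60,
    "EvaluationPeriods": 3,
    "Threshold": 75.0,
    "ComparisonOperator": "GreaterThanThreshold",
    # Placeholder ARN of a scale-out policy on the cluster's Auto Scaling group
    "AlarmActions": ["arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy/placeholder"],
}
print(alarm_params["Namespace"], alarm_params["MetricName"])
```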

Amazon EC2 Container Registry (ECR) is now available to all customers.

Amazon ECR is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR is integrated with Amazon EC2 Container Service (ECS), simplifying your development to production workflow. Amazon ECR eliminates the need to operate your own container repositories or worry about scaling the underlying infrastructure. Amazon ECR hosts your images in a highly available and scalable architecture, allowing you to reliably deploy containers for your applications. Integration with AWS Identity and Access Management (IAM) provides resource-level control of each repository. With Amazon ECR, there are no upfront fees or commitments. You pay only for the amount of data you store in your repositories and data transferred to the Internet.

Click here to learn how to get started with Amazon EC2 Container Registry. Please visit our product page for more information about Amazon EC2 Container Registry.

You can now export API definitions for Amazon API Gateway. Through a new API you can export your API configurations as a Swagger file from the API Gateway Management Console. You can export Swagger definitions including the API Gateway integration tags as well as POSTMAN annotations. Read our documentation to learn more about this feature.

You can now get deeper visibility into the health of your Amazon RDS instances in real time with Enhanced Monitoring for Amazon RDS. It provides a comprehensive set of over 50 new system metrics and aggregated process information for your instances, at granularity of up to 1 second. You can visualize the metrics on the RDS console, and also integrate them with CloudWatch and third-party applications.

AWS Config Rules allows you to create rules that continuously check the configuration of relevant AWS resources recorded by AWS Config, and notifies you when resources do not comply with these guidelines. Using the rules dashboard, you can track overall compliance status and troubleshoot specific resource configurations that do not comply. The preview of Config Rules was announced on Oct 7, 2015 at AWS re:Invent.

You can now use Network Address Translation (NAT) Gateway, a highly available AWS managed service that makes it easy to connect to the Internet from instances within a private subnet in an AWS Virtual Private Cloud (VPC). Previously, you needed to launch a NAT instance to enable NAT for instances in a private subnet.

Turn on a Trail across all regions: You can now turn on a trail across all regions for your AWS account. CloudTrail will deliver log files from all regions to the Amazon S3 bucket and, optionally, the CloudWatch Logs log group that you specify. Additionally, when AWS launches a new region, CloudTrail will create the same trail in the new region, so you will receive log files containing API activity for the new region without taking any action. Using the CloudTrail console, you can specify that a trail applies to all regions. For more details, refer to the Applying a trail to all regions section of the CloudTrail FAQ.
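
As a sketch, the parameters below create an all-region trail; the trail and bucket names are placeholders, and you would pass the dict to CloudTrail's CreateTrail API, for example via boto3's `create_trail`.

```python
# Sketch only: parameters for a trail that applies to all regions.
# Trail name and bucket name are placeholders.
trail_params = {
    "Name": "all-regions-trail",
    "S3BucketName": "my-cloudtrail-logs-bucket",
    "IsMultiRegionTrail": True,  # deliver log files from every region
    # Optionally also deliver events to a CloudWatch Logs log group:
    # "CloudWatchLogsLogGroupArn": "...",
    # "CloudWatchLogsRoleArn": "...",
}
print(trail_params["IsMultiRegionTrail"])
```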

We are excited to announce that we’ve raised the limit of Security Groups from 100 to 500 in your Virtual Private Cloud (VPC). A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. With 500 security groups you can now increase the granularity of your security profile for your EC2 instances and further customize your virtual network in the AWS cloud.

AWS Marketplace, which lists popular 3rd party software for sale by independent software vendors, has announced support for Clusters and AWS Resources. Using AWS CloudFormation, these features make deploying software from AWS Marketplace faster and easier than ever before, from testing to production.

Amazon CloudFront can now compress your objects at the edge. You can configure CloudFront to automatically apply GZIP compression to text and other compressible file formats when browsers and other clients request a compressed object. This means that if you are already using Amazon S3, CloudFront can transparently compress this type of content. For origins outside S3, compressing at the edge means you don't need to use resources at your origin to do compression. The resulting smaller size of compressed objects makes downloads faster and reduces your CloudFront data transfer charges.
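
As a rough illustration of the payoff, text content typically compresses to a small fraction of its original size under GZIP (the sample HTML here is deliberately repetitive; real pages vary):

```python
import gzip

# Repetitive HTML compresses very well; real pages vary, but text formats
# routinely shrink severalfold under GZIP.
body = ("<html><body>" + "hello CloudFront " * 500 + "</body></html>").encode()
compressed = gzip.compress(body)
print(f"{len(body)} bytes -> {len(compressed)} bytes")
```

Note that CloudFront compresses an object only when the viewer signals support with an `Accept-Encoding: gzip` request header.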

Four new checks have been added to Trusted Advisor to provide guidance related to EBS, CloudFront, and IAM access keys, with two updates released for existing S3 and service limit checks. These checks provide additional guidance to help provision your resources to improve system performance and reliability, increase security, and optimize cost.

AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog allows you to centrally manage commonly deployed IT services, and helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need.

Amazon WorkSpaces Application Manager (WAM) is now available in the EU (Ireland) region and has two new management features. These new features allow you to share application packages with other AWS accounts and delete packages within your AWS account using the Amazon WAM Studio. Package sharing allows you to virtualize an application once and share it with many AWS accounts. You can then customize the application configuration for each user to whom the application is assigned using Configurable App Events.

Slack is a messaging app that makes it easy for teams to communicate with each other and with their automated systems. AWS Lambda now makes it easy to get started building chat-based DevOps solutions with new console blueprints. These blueprints help you post Amazon CloudWatch alarms and other Amazon SNS messages to your team’s Slack channel as an incoming webhook, where your team can keep track of IT operations and quickly take action. Lambda is also the easiest way to create and run Slack commands. A new blueprint for building Slack commands using Lambda functions makes it quick to automate tasks and expose business logic as chat-based commands. AWS Lambda makes Slack developers more productive by eliminating the need to manage servers, scaling, and deployments when building commands and automated notifications. Both blueprints are available in JavaScript (Node.js) and Python.

We are excited to announce that starting today the Amazon Machine Images (AMI) Copy Service supports copying AMIs from another AWS account within the same AWS region or across regions. The process of copying AMIs from another AWS account is the same as copying AMIs within your account. You can use the AWS Management Console, the EC2 Command Line Interface, or the EC2 API.

Today we are excited to announce that we have added the ability to use EC2 Run Command with your Linux instances. Back in October, we launched EC2 Run Command, a feature that enables you to securely manage the configuration of your Amazon EC2 instances. With the addition of Linux support, you can now easily and consistently administer your EC2 instances regardless of your choice in operating systems.

Starting today, you can launch Amazon EC2 instances with an encrypted Amazon Elastic Block Store (EBS) boot volume, which together with EBS data volume encryption means you can now encrypt all your EBS storage.

You can now launch the t2.nano, the newest Amazon EC2 burstable-performance instance. Starting at only $4.75 per month ($0.0065 per hour), it is the lowest priced Amazon EC2 instance. The t2.nano features 512 MiB of memory and 1 vCPU, making it well suited for workloads that require a consistent baseline performance with the ability to burst. T2 instances are backed by the latest Intel Xeon processors with clock speeds up to 3.3 GHz.

You can now launch Oracle Database Standard Edition Two (SE2) instances under the Bring Your Own License (“BYOL”) licensing model as well as take advantage of Oracle 12c and 11g October 2015 Oracle Patch Set Updates (PSU).

You can now use DynamoDB as the storage backend for the newest version of the Titan graph engine, 1.0.0. Titan 1.0.0 provides support for TinkerPop 3, as well as several bug fixes and optimizations. TinkerPop 3 is the latest version of the TinkerPop graph computing framework. For a full list of updates in Titan 1.0.0, please see the release notes.

You can now enhance the security and visibility of Amazon Machine Learning API calls with AWS CloudTrail, which delivers log files containing records of your API calls. You can use AWS CloudTrail logs for security analysis, to troubleshoot operational and security incidents, or for auditing and resource tracking of your Amazon ML API usage. You will receive logs for Amazon Machine Learning API calls once you turn on AWS CloudTrail from the AWS CloudTrail console.

AWS Config continuously records changes to the configuration of your AWS resources and notifies you of these changes through Amazon Simple Notification Service (SNS). Config rules monitor these resources for compliance with desired configurations you specify.

Tag-based, resource-level permissions and the ability to apply default access privileges to new database objects make it easier to manage access control in Amazon Redshift. In addition, you can now use the Amazon Redshift COPY command to load data in BZIP2 compression format. More details on these features below:

The IAM console now displays service last accessed data that shows the hour when an IAM entity (a user, group, or role) last accessed an AWS service. Knowing if and when an IAM entity last exercised a permission can help you remove unnecessary rights and tighten your IAM policies with less effort. This helps you write more secure access control policies that better adhere to the principle of least privilege, that is, granting only the permissions required to perform a task.

We are excited to announce new operating system support and new VM Import/Export and AWS Management Portal features that make it easier to migrate virtual machines from your on-premises environments to Amazon EC2.

You can now query for prices of AWS services using the AWS Price List API. You can also subscribe to an SNS topic and receive notifications when AWS prices update; for example, you will get an SNS notification when new instance types are launched, when services are introduced in new regions, or when new services are introduced. The AWS Price List API makes prices available in two formats, JSON and CSV.
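
The JSON offer files are plain documents you can parse with standard tools. The fragment below is hand-made to mirror the published shape; the SKU and attribute values are illustrative only, not real catalog data.

```python
import json

# Hand-made fragment in the shape of a Price List API JSON offer file.
# Real files are much larger; values here are illustrative only.
offer_fragment = json.loads("""
{
  "offerCode": "AmazonEC2",
  "products": {
    "EXAMPLESKU": {
      "sku": "EXAMPLESKU",
      "attributes": {"instanceType": "t2.nano", "location": "US East (N. Virginia)"}
    }
  }
}
""")

for sku, product in offer_fragment["products"].items():
    print(sku, product["attributes"]["instanceType"])
```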

We are pleased to announce that Elastic Load Balancing now supports automatic re-registration of back-end instances when they are stopped and restarted. You can reduce costs by using the ability in Amazon EC2 to stop your EBS-backed instances when they are not needed. With the automatic re-registration functionality, when you restart a stopped instance that is registered with a load balancer, it is automatically brought back into service with the load balancer.

Amazon Aurora now allows you to encrypt your databases using keys you manage through AWS Key Management Service (KMS). On a database instance running with Amazon Aurora encryption, data stored at rest in the underlying storage is encrypted, as are the automated backups, snapshots, and replicas in the same cluster. Encryption and decryption are handled seamlessly so you don’t have to modify your application to access your data. When you create a new Aurora database instance, you can choose to enable encryption via the AWS Management Console or API. You may use the default RDS key automatically created in your account or use a key you created using KMS to encrypt your data. For more information about the use of AWS Key Management Service with Amazon Aurora, see the Amazon RDS User's Guide. To learn more about AWS KMS, visit the AWS KMS overview page.

Starting today, Amazon Relational Database Service (RDS) allows you to run either MySQL 5.5 or MySQL 5.6 on supported instances. For MySQL 5.5, this now includes standard (M4), memory-optimized (R3), and burst-capable (T2) instances. For more information on the instances available through Amazon RDS, please visit the Instance Classes page.

We are pleased to announce that Amazon Web Services has opened an office in Turkey to help support the growth of the Amazon Web Services (AWS) cloud and its rapidly expanding customer base in the country. The office is now open and operational in Istanbul and is supporting businesses of all sizes, from start-ups to Turkey's oldest and most established enterprises, as they make the transition to the AWS cloud.

To learn more about working with AWS in Turkey, customers can visit the AWS Turkey page.

You can now make your Amazon Machine Learning (Amazon ML) model evaluations more accurate with a random splitting strategy, which trains and evaluates ML models on random subsets of the input data records. Random splitting helps ensure that your evaluation data is representative of your training data, so that your model evaluation is accurate. You can choose your splitting strategy through the Amazon ML console or API, and receive alerts when the training and evaluation data are not similar, enabling you to select a different splitting strategy for the next model iteration.
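
The idea behind random splitting can be sketched in a few lines (the 70/30 ratio and the fixed seed here are arbitrary; Amazon ML performs the split for you):

```python
import random

# Toy illustration of a random 70/30 train/evaluation split.
random.seed(0)  # fixed seed for reproducibility of this example
records = list(range(100))
random.shuffle(records)  # randomize order so both subsets are representative
train, evaluation = records[:70], records[70:]
print(len(train), len(evaluation))  # 70 30
```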

AWS Directory Service now lets you run a Microsoft Active Directory (AD) as a managed service. AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also referred to as Microsoft AD, is powered by Windows Server 2012 R2. When you select and launch this directory type, it is created as a highly available pair of domain controllers connected to your virtual private cloud (VPC). The domain controllers run in different Availability Zones in a region of your choice. Host monitoring and recovery, data replication, snapshots, and software updates are automatically configured and managed for you.

Microsoft AD enables you to run directory-aware workloads in the AWS cloud, including Microsoft SharePoint, custom .NET and SQL Server-based applications. You can also configure a trust relationship between Microsoft AD in the AWS cloud, and your existing on-premises Microsoft Active Directory, providing users and groups with access to resources in either domain, using single sign-on (SSO).

We are excited to announce the release of Amazon Route 53 Traffic Flow, an easy-to-use and cost-effective traffic management service that lets you manage how your end-users are routed to your application’s endpoints—whether in a single AWS region or distributed around the globe. End user routing is managed via simple policies you can build like a flowchart in the Amazon Route 53 console, and then import or export via the Amazon Route 53 API in JSON format.

You can now use Chef 12 to configure and manage Linux on your Amazon EC2 and on-premises instances. This is in addition to existing Chef 12 support in AWS OpsWorks for Windows. OpsWorks is a service that helps you automate operational tasks like code deployment, software configurations, package installations, database setups, and server scaling using Chef.

Quick Starts are automated reference deployments for key workloads on the AWS cloud. Each Quick Start launches, configures, and runs the AWS compute, network, storage, and other services required to deploy a specific workload on AWS, using AWS best practices for security and availability.

We are pleased to announce that we’ve added a new edge location in Chicago, Illinois for Amazon CloudFront and Amazon Route 53. The new edge location helps improve performance and availability for end users of your application and supports all Amazon CloudFront and Amazon Route 53 features at no additional cost. With the addition of the Chicago edge location, there are now a total of 21 edge locations in the US and 54 worldwide.

You can now create and model links between different application components (for example: frontend and worker, or frontend and API) when developing microservices-based applications in AWS Elastic Beanstalk. Previously, you had to manually hardcode links between components, making it difficult to manage and update multi-component applications. Now, you can easily model dynamic links between application components and then the components of your application can be updated as a group, or individually, using a configuration template and a single command. Links between application components are modeled using the AWS Management Console, CLI, or SDK.

You can now provision Amazon EC2 Dedicated Hosts, physical servers with EC2 instance capacity fully dedicated for your use. Dedicated Hosts can help you reduce costs by allowing you to use your existing server-bound software licenses, including Windows Server, SQL Server, and SUSE Linux Enterprise Server (subject to your license terms), and can also help you meet compliance requirements.

You can now use AWS Config to assess license compliance on Dedicated Hosts by turning on recording for instances and hosts. AWS Config records when instances are launched, stopped, or terminated on a Dedicated Host, and pairs this information with host and instance level information relevant to software licensing, such as Host ID, Amazon Machine Image (AMI) IDs, number of sockets and physical cores. This enables you to use AWS Config as a data source for license reporting.

AWS CloudTrail integration with Amazon CloudWatch Logs is now available in AWS GovCloud (US). With this feature, you can monitor for specific API activity and receive email notifications when those specific API calls are made.

You can now specify multiple Availability Zones and VPC subnets in a single Amazon EC2 Spot fleet launch specification. Because the Spot price fluctuates independently for each Availability Zone, listing more Availability Zones can enable you to run your instances at even lower prices. By default, Spot instances will be placed into the zones where the Spot price is lowest at the time of launch.
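
As a sketch, a fleet request can list several subnets (one per Availability Zone) in one launch specification. The subnet IDs, AMI ID, and role ARN below are placeholders, and the comma-separated SubnetId list reflects our reading of the RequestSpotFleet API; verify against the current documentation.

```python
# Sketch only: a Spot fleet request configuration spanning multiple
# Availability Zones via multiple subnets in one launch specification.
# All identifiers are placeholders.
spot_fleet_config = {
    "TargetCapacity": 4,
    "SpotPrice": "0.10",
    "IamFleetRole": "arn:aws:iam::123456789012:role/fleet-role",
    "LaunchSpecifications": [{
        "ImageId": "ami-12345678",
        "InstanceType": "m4.large",
        # Multiple subnets, one per Availability Zone, comma-separated
        "SubnetId": "subnet-aaaa1111,subnet-bbbb2222,subnet-cccc3333",
    }],
}
print(spot_fleet_config["LaunchSpecifications"][0]["SubnetId"].count(",") + 1)  # 3
```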

AWS Elastic Beanstalk now lets you view detailed health metrics for your applications from the Elastic Beanstalk Management Console. You can now view health status, metrics, and causes for individual EC2 instances within your environments. Previously, this view was limited to the Elastic Beanstalk CLI.

You can now preview real-time predictions directly within the Amazon Machine Learning (Amazon ML) console, before creating your smart applications. With this new feature, no code is necessary to request individual predictions and immediately see the results. You simply provide the content of a data record within the Amazon ML console, push a button, and quickly review the response. There is no additional cost to try real-time predictions from the Amazon ML console.

Visit the documentation for more information on trying real-time predictions from the Amazon Machine Learning console.

You can now test iOS, Android, and web apps against a large (and growing) collection of real phones and tablets without the complexity and expense of deploying and maintaining your own device labs and automation infrastructure. Simply select the tests that you want to run against your app, choose the devices from a fleet of unique device/OS combinations, and within minutes you receive detailed reports that pinpoint bugs and performance problems.

Starting today, you can upgrade your RDS PostgreSQL database instance from major version 9.3 to 9.4 with just a few clicks in the AWS Management Console. PostgreSQL 9.4 offers several performance and feature benefits over PostgreSQL 9.3, including support for the JSONB datatype, which allows you to manage schemas more flexibly. We encourage you to read about what's new in PostgreSQL 9.4 and test your application for compatibility before you upgrade your databases to version 9.4. To upgrade, select the "Modify" option in the AWS Management Console for the DB instance you want to upgrade, choose the version of PostgreSQL 9.4 you want to upgrade to, and proceed with the wizard. Some earlier versions may not be available as upgrade targets, to avoid upgrading to a version that the community released before your current version. The upgrade is applied immediately (if you select the "Apply Immediately" option) or during your next maintenance window (the default). In either case, your database instance will be unavailable for a few minutes while the upgrade completes and the instance is rebooted. Please review the AWS User Guide to learn more.

Amazon VPC Endpoints for S3 is now available in the AWS GovCloud (US) Region. Amazon VPC endpoints are easy to configure and provide reliable connectivity to Amazon S3 without requiring an internet gateway or a Network Address Translation (NAT) instance. With VPC endpoints, the data between the VPC and S3 is transferred within the Amazon network, helping protect your instances from internet traffic.

Amazon VPC Endpoints for Amazon S3 provides two additional security controls to help limit access to S3 buckets. You can now require that requests to your S3 buckets originate from a VPC using a VPC endpoint. Additionally, you can control what buckets, requests, users, or groups are allowed through a specific VPC endpoint.
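
For example, a bucket policy can deny any access that does not arrive through a specific VPC endpoint, using the `aws:sourceVpce` condition key. The bucket name and endpoint ID below are placeholders.

```python
import json

# Sketch: bucket policy requiring requests to arrive via one VPC endpoint.
# "my-bucket" and "vpce-1a2b3c4d" are placeholders for your own resources.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Access-only-via-my-endpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
        # Deny unless the request came through the named endpoint
        "Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-1a2b3c4d"}},
    }],
}
print(json.dumps(policy["Statement"][0]["Condition"]))
```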

Amazon VPC Endpoints for Amazon S3 is available in the US Standard, US West (Oregon), US West (N. California), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), and US GovCloud regions.

Qualified researchers can now access two of the world’s largest collections of cancer genome data as AWS Public Data Sets.

The Cancer Genome Atlas (TCGA) corpus of raw and processed genomic, transcriptomic, and epigenomic data from thousands of cancer patients is now freely available on Amazon S3 for registered users of the Cancer Genomics Cloud, one of the funded cancer cloud pilots of the National Cancer Institute.

Access to the TCGA and ICGC data sets on AWS will be administered by third-party Trusted Partners, who will also curate a set of cloud-based tools to accelerate use of the data. Making these data and tools available in the cloud to qualified researchers drastically lowers the barrier to entry in working with petabyte-scale cancer genomic data, enhances collaboration across research groups, and could accelerate the development of new treatments for cancer patients.

Today, AWS Identity and Access Management (IAM) launched the preview of the Short Message Service (SMS) multi-factor authentication (MFA) option for verifying IAM users when signing in to the AWS Management Console. MFA is a security best practice that adds an extra layer of protection in addition to users’ passwords. Until now, you could enable MFA for IAM users only with hardware or virtual MFA tokens, but this new feature enables you to use the text messaging functionality of a mobile phone to verify IAM users with MFA. After enabling SMS MFA, users are prompted for their password, as well as for an MFA security code that can only be received on their mobile phone via SMS. To sign up for the preview, register on the AWS MFA page.

Spot instances now support Amazon EC2’s storage-optimized, high I/O i2 instance types. These instances provide very fast SSD-backed instance storage that is optimized for very high random I/O performance at a low cost. With i2 Spot instances, you can add high-memory worker nodes to your Hadoop and Spark clusters, perform data-intensive social media and advertising analytics, and simulate production environments for your continuous integration and deployment applications.

We are pleased to announce that Amazon ElastiCache now supports Redis version 2.8.23 with enhanced capabilities. Customers can launch new clusters with Redis 2.8.23, as well as upgrade existing ones to the new engine version.

Today, we updated Cost Explorer—AWS’s interactive visual cost reporting tool—to allow users to better manage their custom cost reports. As Cost Explorer’s reporting capabilities have expanded to include many options on which to filter and group costs (e.g., by Account, Service, Tag, Availability Zone, Purchase Option, and API Operation) and the ability to display budgeted and forecasted costs, the number and utility of Cost Explorer reports created by users has increased significantly. In order to help users better manage those reports, this new feature allows users to easily save, access, modify, and share those reports in Cost Explorer. To illustrate, a user could create a report displaying monthly costs grouped by service for all costs tagged with the key “Department” and value “Marketing” incurred during the 3rd quarter of this year, apply the title “Q3 2016 AWS Spend – Marketing Dept.,” and click the save button. All approved users within the organization could then easily access that report via Cost Explorer’s report title drop-down. If the user later needs to modify this report, they can do so with as few as two clicks in the “View/Manage All Reports” screen.

The Amazon DynamoDB console has been redesigned to make it easier to create and manage your tables. A simplified interface and clean look give you easy access to the main DynamoDB features. You’ll find tips and help options throughout the console to guide you through the process of creating and configuring your tables. It’s now easier than ever to add new items, scan, filter, and query your tables, and update and delete items right within the new interface.

You can now launch M4 instances when using Amazon RDS for MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server. M4 instances have a good balance of compute, memory, and network resources and are EBS-optimized by default. M4 instances come in five sizes, ranging from 2 to 40 vCPUs and 8 GB to 160 GB of memory, and have superior performance and lower prices than RDS M3 instances. M4 instances feature a custom Intel Xeon E5-2676 v3 Haswell processor optimized specifically for AWS. They run at a base clock rate of 2.4 GHz and can go as high as 3.0 GHz with Intel Turbo Boost. To learn more about the benefits of the M4 instance families on Amazon RDS, visit the DB Instance Classes page.

It's now possible to test your iOS apps with tests you've written in Swift, a new language that can replace or augment your Objective-C code, on AWS Device Farm. To get started, simply choose XCTest when creating a new run and upload your Swift scripts using the steps outlined in the documentation. No configuration or modification to your app or scripts is necessary. If you have any questions, please let us know in the forums.

You can now define stage variables to configure the different deployment stages (e.g., alpha, beta, production) of your API. Stage variables are name-value pairs associated with a specific API deployment stage and act like environment variables for use in your API setup and mapping templates. For example, you can configure an API method in each stage to connect to a different backend endpoint by setting different endpoint values in your stage variables. Learn more about stage variables in our documentation.
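
To illustrate the substitution outside of API Gateway (the variable names and endpoint URLs below are made up), a `${stageVariables.x}` reference resolves differently per stage:

```python
import re

# Toy illustration of per-stage resolution of ${stageVariables.x}
# references. Variable names and endpoint URLs are made up.
stage_variables = {
    "beta": {"backendUrl": "https://beta.example.com"},
    "production": {"backendUrl": "https://api.example.com"},
}

def resolve(template, stage):
    """Substitute ${stageVariables.x} references for a given stage."""
    return re.sub(
        r"\$\{stageVariables\.(\w+)\}",
        lambda m: stage_variables[stage][m.group(1)],
        template,
    )

print(resolve("${stageVariables.backendUrl}/orders", "beta"))
# https://beta.example.com/orders
```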

You can now deploy scalable Oracle Real Application Clusters (RAC) on Amazon EC2 using the recently-published tutorial and Amazon Machine Images (AMI) on AWS Marketplace. Oracle RAC is a shared-everything database cluster technology from Oracle that allows a single database (a set of data files) to be concurrently accessed and served by one or many database server instances.

We are happy to announce that you can now use an Amazon Kinesis Firehose to stream your log data from Amazon CloudWatch Logs. This new capability allows you to stream your log data to any destination that Firehose supports including Amazon S3 and Amazon Redshift. Amazon CloudWatch Logs also allows you to stream your log data to Amazon Kinesis Streams and AWS Lambda functions.

You can now enable CORS (cross-origin resource sharing) with one click directly in the Amazon API Gateway console. CORS allows methods in API Gateway to request restricted resources from a different domain (e.g., a JavaScript client that calls an API deployed on a different domain). Please read the documentation to learn how to enable CORS.
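
For reference, a CORS-enabled preflight (OPTIONS) response carries headers along these lines. The allowed origin, methods, and headers shown are typical values, not necessarily what the console configures for your API.

```python
# Illustrative only: typical response of a CORS preflight (OPTIONS) method.
def preflight_response():
    return 200, {
        "Access-Control-Allow-Origin": "*",
        "Access-Control-Allow-Methods": "GET,POST,OPTIONS",
        "Access-Control-Allow-Headers": "Content-Type,Authorization",
    }

status, headers = preflight_response()
print(status, sorted(headers))
```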

Today, AWS Identity and Access Management (IAM) updated the IAM policy simulator to help you test, verify, and understand resource-level permissions in your account. The policy simulator is a tool that lets you examine and validate the permissions your policies set. Now, the policy simulator will automatically provide a list of resources that must be set in order to simulate an action accurately. For example, when you simulate a call to EC2 RunInstances in the policy simulator, you will now be prompted for the six resources (instance, security group, volume, subnet, image, and network interface) required for users to successfully perform this action. These enhancements to the simulator can help you verify that your policies work as expected. Using the IAM policy simulator console or APIs, you can now simulate the exact scenario in which your users or applications call an AWS action.

We have released four new features for our VPC VPN product. Starting today, the VPN product supports AES 256, SHA-2, additional Diffie-Hellman groups, and NAT Traversal. In addition to those new features, you can also re-use your Customer Gateway (CGW) IP address; you no longer need a unique IP address for each connection you create.

Starting today, you can share your RDS database snapshots with other AWS accounts, or make your snapshots publicly available. You may choose to share your database snapshots with up to 20 AWS accounts by selecting the "Share Snapshot" option in the RDS console and keeping the snapshot "Private". You may also choose to make your data available to all AWS users by selecting the "Public" option. You can revoke access to privately or publicly shared snapshots at any time. If you are the recipient of a shared snapshot, you can copy it to your own account, or restore a database instance directly from a shared snapshot. Sharing is currently limited to unencrypted manual snapshots only. We added this feature so you can easily transfer data between different accounts used for different purposes or with different requirements, as well as to let you publish your data set publicly. Learn more about sharing your database snapshots: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ShareSnapshot.html.

Amazon Simple Queue Service (SQS) now has an Extended Client Library that enables you to send and receive messages with payloads up to 2GB. Previously, message payloads were limited to 256KB. Using the Extended Client Library, message payloads larger than 256KB are stored in an Amazon Simple Storage Service (S3) bucket, using SQS to send and receive a reference to the payload location.
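
The library's approach can be illustrated with a small stand-in sketch: payloads over the 256KB limit are written to storage and the queue message carries only a pointer. This is a conceptual illustration, not the actual Extended Client Library API; a dict stands in for the S3 bucket and all names here are ours.

```python
import json
import uuid

THRESHOLD = 256 * 1024  # SQS message size limit before the library offloads to S3

fake_s3 = {}  # stand-in for an S3 bucket

def prepare_message(payload: str) -> str:
    """Return the message body to send: the payload itself if small,
    otherwise a reference after storing the payload out of band."""
    if len(payload.encode()) <= THRESHOLD:
        return payload
    key = str(uuid.uuid4())
    fake_s3[key] = payload              # the extended client uploads to S3 here
    return json.dumps({"s3_key": key})  # SQS carries only the pointer

def resolve_message(body: str) -> str:
    """Receiver side: fetch the real payload if the body is a reference."""
    try:
        ref = json.loads(body)
        if isinstance(ref, dict) and "s3_key" in ref:
            return fake_s3[ref["s3_key"]]
    except ValueError:
        pass
    return body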
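
The library's approach can be illustrated with a small stand-in sketch: payloads over the 256KB limit are written to storage and the queue message carries only a pointer. This is a conceptual illustration, not the actual Extended Client Library API; a dict stands in for the S3 bucket and all names here are ours.

```python
import json
import uuid

THRESHOLD = 256 * 1024  # SQS message size limit before the library offloads to S3

fake_s3 = {}  # stand-in for an S3 bucket

def prepare_message(payload: str) -> str:
    """Return the message body to send: the payload itself if small,
    otherwise a reference after storing the payload out of band."""
    if len(payload.encode()) <= THRESHOLD:
        return payload
    key = str(uuid.uuid4())
    fake_s3[key] = payload              # the extended client uploads to S3 here
    return json.dumps({"s3_key": key})  # SQS carries only the pointer

def resolve_message(body: str) -> str:
    """Receiver side: fetch the real payload if the body is a reference."""
    try:
        ref = json.loads(body)
        if isinstance(ref, dict) and "s3_key" in ref:
            return fake_s3[ref["s3_key"]]
    except ValueError:
        pass
    return body
```

In the real library, the upload and fetch go through an S3 bucket you configure, and the send/receive calls wrap the standard SQS client.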

You can now access real-time and archival weather radar data as an AWS Public Data Set. The Next Generation Weather Radar (NEXRAD) is a network of 160 high-resolution Doppler radar sites that detects precipitation and atmospheric movement and disseminates data in approximately 5 minute intervals from each site. NEXRAD enables severe storm prediction and is used by researchers and commercial enterprises to study and address the impact of weather across multiple sectors.

You can now launch Amazon RDS for SQL Server 2014 Express, Web and Standard Editions across all commercial AWS regions. You can also easily upgrade your existing Amazon RDS for SQL Server 2008 and 2012 instances to Amazon RDS for SQL Server 2014 with the click of a button in the AWS Management Console.

Microsoft SQL Server 2014 Standard Edition now supports up to 128GB of RAM. This means that you can create Amazon RDS for SQL Server 2014 Standard Edition on larger instances such as the R3.4xlarge to take advantage of the larger memory size to improve your database performance. Coupled with database storage sizes of up to 4TB, and storage performance with up to 20,000 input/output operations per second (IOPS), Amazon RDS for SQL Server helps meet the most demanding database needs. In the US East (N. Virginia), US West (Oregon) and EU (Ireland) AWS regions, Amazon RDS for SQL Server 2014 Standard Edition also offers High Availability using Multi-AZ.

Amazon RDS for SQL Server 2014 Express and Web Editions are available as a License Included offering while the Standard Edition is available in both the Bring Your Own License as well as License Included offerings. To create a new Amazon RDS for SQL Server 2014 instance with just a few clicks, use the "Launch DB Instance" wizard in the AWS Management Console, choose SQL Server and select “12.00.4422.0.v1” in the DB Engine Version option. Learn more by visiting the Amazon RDS for SQL Server page.

We are excited to announce the launch of Amazon EC2 Run Command, a feature that enables you to securely manage the configuration of your Amazon EC2 Windows instances. Run Command provides a simple way of automating common administrative tasks like executing scripts, running PowerShell commands, installing software or patches, and more. Run Command allows you to deploy these commands across multiple instances and provides visibility into the results, making it easy to manage configuration change across fleets of instances. Through integration with AWS Identity and Access Management (IAM), you can control the actions users can perform against a set of instances. AWS CloudTrail records all actions taken with Run Command, so you can seamlessly audit change throughout your environment.

Today, AWS Identity and Access Management (IAM) made it easier to verify permissions using the policy simulator by adding support for resource-based policies, such as Amazon S3 bucket policies. This new feature extends the capabilities of the policy simulator to help you understand, test, and validate how your resource-based policies and IAM policies work together to grant or deny access to your IAM entities (users, groups, and roles). Using the IAM policy simulator or APIs you can include resource-based policies for Amazon S3 buckets, Amazon Glacier vaults, Amazon SNS topics, and Amazon SQS queues in your simulations.

To get started, navigate directly to the IAM Policy Simulator, choose the user, group, or role whose access you wish to verify, and specify an Amazon Resource Name (ARN) in the ‘Simulation Settings’. To get started with the SimulatePrincipalPolicy or SimulateCustomPolicy API, pass in the resource-based policy when invoking the API. You can learn more about resource-based policies in the IAM policy simulator by visiting the AWS security blog.

CodePipeline is a continuous delivery service for fast and reliable application updates. CodePipeline builds, tests, and deploys your code every time there is a code change, based on the release process models you define. This enables you to rapidly and reliably deliver features and updates. You can easily build out an end-to-end solution by using our pre-built plugins for popular third-party services like GitHub or integrating your own custom plugins into any stage of your release process. With CodePipeline, you only pay for what you use. There are no upfront fees or long-term commitments.

When an EBS volume is encrypted, data stored at rest on the volume, disk I/O, snapshots created from the volume, and data in-transit between EBS and Amazon Elastic Compute Cloud (EC2) are all encrypted. Now, you can benefit from this level of data protection with EBS-backed EC2 instances managed through Auto Scaling.

This feature is free to use and available for all EC2 instance types that support EBS encryption. To get started, navigate to the Auto Scaling section of the EC2 console, create a new launch configuration, and add an encrypted EBS volume at step 4 (“Add Storage”). To learn more about this feature, review the Auto Scaling documentation.

Recently, Amazon EMR introduced software releases referenced by their release label rather than the AMI version. AWS Data Pipeline now supports Amazon EMR 4.x software releases. You can specify the required release in the releaseLabel field on the EmrCluster object in your pipeline. Using the EmrConfiguration object, you can specify cluster configurations, such as choosing the applications to be installed at cluster creation, or setting the Apache Hadoop environment variables.

You can now use Amazon Elastic Transcoder to embed CEA-708 captions in the H.264 Supplemental Enhancement Information (SEI) user data in any MP4 or MPEG-TS output format. This allows you to deliver closed captions to televisions or to legacy iOS devices via HLS.

You can try this feature by choosing the “cea-708” option as the Caption Format when creating a transcoding job. For more information, see Captions in the Elastic Transcoder Developer Guide. There are no additional charges for using CEA-708 captions with your output formats.

Today we are excited to announce the launch of the Amazon WorkSpaces Connection Health Check Website. Now, WorkSpaces users who want to know if their local network meets the requirements to use Amazon WorkSpaces can simply visit the website and find out. The website quickly checks if you can get to all of the required services to use WorkSpaces. In addition, it does a performance check to each AWS Region where WorkSpaces run, and lets users know which one will be fastest for them.

AWS CodeDeploy is a service that automates code deployments to any instance, including Amazon EC2 instances and instances running on-premises. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications. You can use AWS CodeDeploy to automate software deployments, eliminating the need for error-prone manual operations, and the service scales with your infrastructure so you can easily deploy to one instance or thousands.

AWS CodeDeploy is also available in the US East (N. Virginia), US West (Oregon), EU (Ireland), Asia Pacific (Sydney), and Asia Pacific (Tokyo) AWS regions.

AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog allows you to centrally manage commonly deployed IT services, and helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need.

AWS Service Catalog is currently available in the US East (N. Virginia), US West (Oregon) and EU (Ireland) regions. For more information, please visit our webpage.

AWS Key Management Service (KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data. You can create, enable, disable, rotate and, starting today, delete your keys in KMS giving you even greater control over the lifecycle of your keys.
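
Key deletion is scheduled rather than immediate: you specify a waiting period, after which the key is deleted. A small sketch of that computation (the 7-to-30-day window matches KMS's ScheduleKeyDeletion behavior; the function name is ours):

```python
from datetime import date, timedelta

def scheduled_deletion_date(today: date, pending_window_days: int) -> date:
    """Compute when a key scheduled for deletion would actually be deleted.
    KMS accepts a waiting period of 7 to 30 days, giving you time to cancel."""
    if not 7 <= pending_window_days <= 30:
        raise ValueError("pending window must be between 7 and 30 days")
    return today + timedelta(days=pending_window_days)
```

During the waiting period the key is unusable but the deletion can still be canceled, which guards against deleting a key that data still depends on.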

The AWS SDK for Unity now supports AWS Lambda, so you can easily execute cloud functions from your games built on Unity, without the need to provision infrastructure. In addition, the AWS SDK for Unity now supports the Amazon Simple Email Service (SES), so you can send transactional emails directly from games built in Unity. The AWS Mobile SDK for Unity is compatible with Unity 4.0 and onwards, and supports both the Free and Unity Pro versions.

AWS Config and Loggly now offer an integrated solution that helps you analyze configuration changes recorded by AWS Config in real time. Using this solution, you can gain a high-level view of configuration changes or analyze potential issues without typing even a single search query. Furthermore, you can get a real-time, visual view of resources in your account, save frequently used searches and set up alerts on particular configuration changes.

AWS IoT is a managed cloud platform that lets connected devices easily and securely interact with cloud applications and other devices. AWS IoT can support billions of devices and trillions of messages, and can process and route those messages to AWS endpoints and to other devices reliably and securely. With AWS IoT, your applications can keep track of and communicate with all your devices, all the time, even when they aren’t connected.

Today we announced AWS IoT (Beta), a managed cloud service that lets connected devices easily and securely interact with cloud applications and other devices. Using the AWS IoT Device SDK, you can connect a variety of standard hardware devices, from basic to industrial, with the AWS IoT platform. Our hardware partners have tested the IoT Device SDK against their boards and developed AWS IoT Starter Kits to get you started with AWS IoT.

Amazon CloudWatch Dashboards enable you to create re-usable graphs of AWS resources and custom metrics so you can quickly monitor operational status and identify issues at a glance. With CloudWatch Dashboards, you can monitor operational metrics with custom graphs of your AWS resources from a single pane of glass. You can create text-based or graphical widgets, add custom text annotations and links to graphs, change the time range, resize and reorganize widgets, and reuse and share dashboards. To get started, visit the Dashboards section in the CloudWatch Management Console.

You can now develop your AWS Lambda function code using Python. AWS Lambda lets you run code without provisioning or managing servers. You simply upload your Python code as a ZIP file through the AWS CLI or the AWS Lambda console, and Lambda takes care of everything required to run and scale your code with high availability. Read our documentation for more details.
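
A minimal Python Lambda function looks like the sketch below. The handler name is whatever you configure for the function; the event shape here is just an example payload, not a required format.

```python
import json

def lambda_handler(event, context):
    """Entry point that Lambda invokes. `event` carries the invocation
    payload; `context` provides runtime metadata (request ID, time left)."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello, %s!" % name}),
    }
```

You would zip this file, upload it, and set the handler to `<filename>.lambda_handler`.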

You can now use the Amazon EC2 Container Service CLI (Amazon ECS CLI) to simplify your local development experience as well as easily set up an Amazon ECS cluster and its associated resources (e.g., EC2 instance). The Amazon ECS CLI supports Docker Compose, an open-source tool for defining and running multi-container applications. You can use the same Compose definition used to define a multi-container application on your development machine as well as in production. The Amazon ECS CLI is open source and available for download here. Read more about the CLI in our documentation.

With the new AWS Mobile Hub, you can add and configure features to your apps, including user authentication, data storage, backend logic, push notifications, content delivery, and analytics – all from a single, integrated console. AWS Mobile Hub then automatically provisions the AWS services required to power these features, and generates quickstart apps for iOS and Android that use your provisioned services. You can use the quickstart app as a foundation for your app, or cut and paste code snippets from it to your own app.

Amazon Kinesis, which we are now calling Amazon Kinesis Streams, stores incoming data for a period of 24 hours by default. Amazon Kinesis Streams can now retain your streaming data for up to 7 days. You can dynamically configure the data retention period through an API call. For more information about extended data retention, visit Changing Data Retention Period. For extended data retention pricing, visit Amazon Kinesis Pricing.

Amazon Kinesis Streams is a managed service that enables you to build custom applications that process or analyze streaming data for specialized needs. Amazon Kinesis Streams can continuously capture and store terabytes of data per hour from hundreds of thousands of sources, and make it available for your stream processing applications. To learn more, see the Amazon Kinesis Streams website.

We’re pleased to announce AWS Import/Export Snowball, a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of AWS. Using Snowball addresses common challenges with large-scale data transfers including high network costs, long transfer times, and security concerns. Transferring data with Snowball is simple, fast, secure, and can be as little as one-fifth the cost of high-speed Internet.

Amazon Aurora is now available to customers in the AWS Asia Pacific (Tokyo) region. Amazon Aurora is a MySQL-compatible relational database management system (RDBMS) that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora provides up to five times the performance of MySQL, and delivers performance and availability similar to a commercial RDBMS at one tenth the price.

Amazon Inspector is an automated security assessment service that helps minimize the likelihood of introducing security or compliance issues when deploying applications on AWS. Amazon Inspector automatically assesses applications for vulnerabilities or deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed report with prioritized steps for remediation. To help you get started quickly, Amazon Inspector includes a knowledge base of hundreds of rules mapped to common security compliance standards (e.g. PCI DSS) and vulnerability definitions. Example rules check for issues such as remote root login being enabled or vulnerable software versions being installed. These rules are regularly updated by AWS security researchers.

We are pleased to announce the immediate availability of Amazon Relational Database Service (Amazon RDS) for MariaDB. Amazon RDS makes it easy to set up, operate, and scale MariaDB deployments in the cloud. Through Amazon RDS, MariaDB is now available as a fully-managed service on AWS with up to 6TB of storage, 30,000 IOPS, and support for high-availability deployments. Amazon RDS automatically handles database tasks such as provisioning, patching, backup, recovery, failure detection, and repair; freeing you up to focus on your application. MariaDB joins Amazon Aurora, MySQL, Oracle, Microsoft SQL Server, and PostgreSQL as the sixth relational database engine available to customers through Amazon RDS.

New customers can get started with the Free Usage Tier, which provides one year of free usage of an Amazon RDS micro instance, with 750 instance hours, 20GB of storage, and 10 million I/O requests per month. Amazon RDS for MariaDB is available in all commercial regions. You can start running production workloads from day one with high availability using Multi-AZ.

AWS Config Rules is a new set of cloud governance capabilities that allow IT Administrators to define guidelines for provisioning and configuring AWS resources and then continuously monitor compliance with those guidelines. AWS Config Rules lets you choose from a set of pre-built rules based on common AWS best practices or custom rules that you define. For example, you can ensure EBS volumes are encrypted, EC2 instances are properly tagged, and Elastic IP addresses (EIPs) are attached to instances. AWS Config Rules can continuously monitor configuration changes to your AWS resources and provides a new dashboard to track compliance status. Using Config Rules, an IT Administrator can quickly determine when and how a resource went out of compliance. Learn more >>
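
At its core, a custom rule is a Lambda function that inspects a configuration item and reports compliance. A simplified sketch of the tag check mentioned above (the evaluation logic and tag names are illustrative; a real rule also reports its result back through Config's PutEvaluations API):

```python
REQUIRED_TAGS = {"Owner", "CostCenter"}  # example guideline: every instance is tagged

def evaluate_compliance(configuration_item: dict) -> str:
    """Return COMPLIANT if an EC2 instance carries all required tags."""
    if configuration_item.get("resourceType") != "AWS::EC2::Instance":
        return "NOT_APPLICABLE"
    tags = configuration_item.get("tags", {})
    if REQUIRED_TAGS.issubset(tags):
        return "COMPLIANT"
    return "NON_COMPLIANT"
```

Config invokes the rule whenever a tracked resource changes, so the dashboard reflects compliance continuously rather than at audit time.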

The AWS Database Migration Service makes it easy for customers to migrate production databases to AWS with minimal downtime. You can keep your applications running while you are migrating your database, and the AWS Database Migration Service ensures that data changes to the source database that occur during and after the migration are continuously replicated to the target. Migration tasks can be set up in minutes in the AWS Management Console. The AWS Database Migration Service can migrate your data to and from all widely used database platforms, such as Oracle, SQL Server, MySQL, PostgreSQL, Amazon Aurora, and MariaDB. The service supports homogenous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora or SQL Server to MySQL. The AWS Database Migration Service can be used to perform one-time migrations, or can maintain continuous replication between databases without a customer having to install or configure any complex software.

The AWS Schema Conversion Tool, a feature of the AWS Database Migration Service, ports database schemas and stored procedures from one database platform to another, so customers can move their applications from Oracle and SQL Server to Amazon Aurora, MySQL, MariaDB, and soon PostgreSQL.

The AWS Database Migration Service is low cost: you can migrate a 1TB database from on-premises to AWS for as little as $3. The AWS Schema Conversion Tool can be downloaded to your desktop and is free to use.

The AWS Database Migration Service is now in preview in US East (N. Virginia) region. Customers can sign up for Preview today and their accounts will be whitelisted as capacity becomes available. The AWS Schema Conversion Tool is available immediately for download after sign up.

Two years ago we introduced Amazon Kinesis, which we now call Amazon Kinesis Streams, to allow customers to build applications that collect, process, and analyze streaming data with very high throughput. Many customers use Amazon Kinesis Streams to capture streaming data and load it into Amazon S3 or Amazon Redshift. Until now, this required customers to manage the Amazon Kinesis data streams and write custom code to load the data. We are now introducing Amazon Kinesis Firehose, a fully managed service that makes this as easy as an API call.

Amazon Kinesis Firehose is the easiest way to load streaming data into AWS. It can capture and automatically load streaming data into Amazon S3 and Amazon Redshift, enabling near real-time analytics with existing business intelligence tools and dashboards you’re already using today. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security. You can easily create a Firehose delivery stream from the AWS Management Console, configure it with a few clicks, and start sending data to the stream from hundreds of thousands of data sources to be loaded continuously to AWS – all in just a few minutes.

With Amazon Kinesis Firehose, you only pay for the amount of data you transmit through the service. There is no minimum fee or setup cost.

Amazon Kinesis Firehose is currently available in the following AWS Regions: N. Virginia, Oregon, and Ireland. To learn more about Amazon Kinesis Firehose, see our website, this blog post, and the documentation. To get started with Amazon Kinesis Firehose, visit the AWS Management Console.

You can now request Amazon EC2 Spot instances to run continuously, for up to six hours, at a flat rate that saves you up to 50% compared to On-Demand prices. This enables you to reduce costs when running finite duration tasks such as batch processing, encoding and rendering, modeling and analysis, and continuous integration jobs.

To get started, specify the duration you want your instance(s) to run – between one and six hours – when placing a Spot instance request. When Spot instance capacity is available for the requested duration, your instances will launch and run for that duration at a flat hourly price. Once the time block ends, your instances will be terminated automatically.

To learn more about how to request Spot instances that run for a defined duration, visit Using Spot Blocks in the Amazon EC2 User Guide. Pricing is based on block duration and available capacity. To view current Spot instance prices, visit Amazon EC2 Spot Pricing.
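
The pricing model above works out as simple arithmetic; a sketch with hypothetical prices (actual Spot block prices vary by duration and available capacity):

```python
def block_cost(flat_hourly_rate: float, hours: int) -> float:
    """Cost of a Spot block: a flat rate held for the full requested duration."""
    if not 1 <= hours <= 6:
        raise ValueError("Spot blocks run for one to six hours")
    return flat_hourly_rate * hours

# Hypothetical prices: a $0.10/hour On-Demand instance at a 50% Spot block discount.
on_demand = block_cost(0.10, 6)
spot_block = block_cost(0.05, 6)
savings = 1 - spot_block / on_demand
```

Because the rate is flat for the whole block, the cost of a finite job is known up front, unlike regular Spot instances whose price can move during the run.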

An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to your use. Dedicated Hosts can help you address compliance requirements and reduce costs by allowing you to use your existing server-bound software licenses.

The AWS Partner Network (APN) is thrilled to announce the launch of the new APN DevOps Competency. The APN Competency Program is designed to highlight APN Partners who have demonstrated technical proficiency and proven customer success in specialized solution areas and categories.

APN DevOps Competency Partners provide solutions to, or have deep experience working with, businesses to help them implement continuous integration and delivery development patterns or automate infrastructure provisioning and management with configuration management tools on AWS.

DevOps Launch Partners

Congrats to our initial launch partners who have qualified for the DevOps Competency:

AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web application by defining customizable web security rules. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application. New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns. Also, AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of web security rules.
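
Conceptually, a rule combines match conditions with an allow or block action. A toy illustration of that idea (this is not the WAF API; the patterns are deliberately crude examples of the SQL injection and cross-site scripting signatures mentioned above):

```python
import re

# Example patterns in the spirit of WAF match conditions.
BLOCK_PATTERNS = [
    re.compile(r"(?i)\bunion\b.*\bselect\b"),  # crude SQL-injection signature
    re.compile(r"(?i)<script"),                # crude cross-site-scripting signature
]

def decide(query_string: str) -> str:
    """Return BLOCK if any pattern matches the request's query string."""
    for pattern in BLOCK_PATTERNS:
        if pattern.search(query_string):
            return "BLOCK"
    return "ALLOW"
```

In WAF itself, conditions can inspect headers, URIs, bodies, and source IPs, and rules are attached to a web ACL in front of your CloudFront distribution.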

You can now view the contents of an AWS CodeCommit repository from the AWS Management Console. You can select the repository and branch you want to view, browse directories, and view file contents complete with syntax highlighting and line numbers.

Amazon WorkSpaces has added three new features to improve cost, accessibility, and security of your data:

Bring Your Own Windows 7 Licenses: You can now bring your existing Windows 7 Desktop Licenses to Amazon WorkSpaces and run the Windows 7 Desktop OS on hardware that is physically dedicated to you. Eligible organizations will be entitled to a discount of $4 per WorkSpace per month, for a savings of up to 16%. To learn more about this new licensing option and eligibility requirements, please see the Amazon WorkSpaces FAQ.

Chromebooks Support: In addition to Windows and Mac computers, iPads, Kindle Fire tablets, and Android tablets, users can now access their WorkSpace from Google Chromebooks. These “thin-client” laptops are simple to manage and designed for Internet users, making them a great match for Amazon WorkSpaces.

Encryption: Amazon WorkSpaces now integrates with the AWS Key Management Service (KMS) to give you the ability to encrypt Amazon WorkSpaces storage volumes, protecting data both at rest and in transit. You simply enable encryption as part of the Amazon WorkSpaces configuration process in the AWS Management Console.

Support for log file encryption using Server-Side Encryption with AWS Key Management Service (SSE-KMS): You can add an additional layer of security for the CloudTrail log files stored in your S3 bucket by encrypting them with your KMS key. CloudTrail will encrypt the log files using the KMS key you specify.

Log File Integrity Validation: You can validate the integrity of the CloudTrail log files stored in your S3 bucket and detect whether they were deleted or modified after CloudTrail delivered them to your S3 bucket. You can use log file integrity (LFI) validation as a part of your IT security and auditing processes.

Amazon Elasticsearch Service is a managed service that makes it easy to deploy, operate, and scale Elasticsearch in the AWS Cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and click stream analytics.

AWS CloudFormation Designer is a new visual tool that shows your CloudFormation templates as a diagram and lets you edit your templates. It provides a drag-and-drop interface for adding resources to templates and when you add or remove resources, CloudFormation Designer automatically modifies the underlying JSON for you. You can also use the integrated text editor to view or specify template details, such as resource property values and input parameters.

We are happy to announce that you can now use AWS Lambda to process your Amazon CloudWatch Logs events in near real-time. This new functionality enables you to configure a Lambda function to handle your log events that are sent to CloudWatch Logs. With CloudWatch Logs, you can perform near real-time monitoring of log data from your servers or AWS resources. Integration with Lambda expands this capability to include any code that you can run in a Lambda function.
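
CloudWatch Logs delivers log events to your function as base64-encoded, gzip-compressed JSON, so the first step in such a handler is unpacking the payload. A minimal sketch (the `awslogs.data` envelope follows the CloudWatch Logs subscription format; the handler logic beyond decoding is up to you):

```python
import base64
import gzip
import json

def decode_log_event(event: dict) -> dict:
    """Unpack the base64 + gzip envelope CloudWatch Logs wraps around
    the log payload under event["awslogs"]["data"]."""
    compressed = base64.b64decode(event["awslogs"]["data"])
    return json.loads(gzip.decompress(compressed))

def lambda_handler(event, context):
    payload = decode_log_event(event)
    # payload contains logGroup, logStream, and the individual logEvents
    return [e["message"] for e in payload.get("logEvents", [])]
```

From here, the function can filter, transform, or forward the messages to any destination your code can reach.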

Information security is deeply important to our customers and one of today’s most sought after IT specialties. So today we’re launching a new AWS Training curriculum to help customers meet their cloud security objectives under the AWS Shared Responsibility Model. The curriculum’s two new classes are designed to help customers create more secure AWS architectures and solutions and address key compliance requirements.

You can now scale the target capacity of a fleet of Spot instances up or down using the ModifySpotFleetRequest API. Previously, you could not modify the fleet’s target capacity after submitting a Spot fleet request.

AWS Support announces two new AWS Trusted Advisor checks that offer best practices for using CloudFront, focusing on security enhancement and performance improvement:

CloudFront Header Forwarding and Cache Hit Ratio (Performance category): Checks for HTTP request headers that CloudFront forwards to the origin that might significantly reduce the cache hit ratio and increase the load on the origin.

CloudFront Custom SSL Certificates in the IAM Certificate Store (Security category): Checks for SSL certificates for CloudFront alternate domain names in the IAM certificate store that are expired, will soon expire, use outdated encryption, or are not configured correctly for the distribution.

For more information on Trusted Advisor and descriptions of all 43 checks, visit AWS Trusted Advisor.

You can now run Java applications with embedded servlet containers and web servers on AWS Elastic Beanstalk; you are no longer restricted to using Tomcat as the application server for your Java applications. Both Java 7 and Java 8 are supported. This is in addition to existing support for Tomcat, .NET, PHP, Node.js, Python, Ruby, Go, and Docker applications and services. Read our blog to learn more about running Java applications in Elastic Beanstalk. Please see the documentation for more information on Java support.

You can now run non-Docker Go applications on AWS Elastic Beanstalk. This is in addition to existing support for Java, .NET, PHP, Node.js, Python, Ruby, and Docker applications and services. Read our blog to learn more about running Go applications in Elastic Beanstalk. Please see the documentation for more information on Go-based applications.

In addition to offering a scalable, cost-effective email-sending platform, Amazon SES can now accept your incoming emails. You can configure Amazon SES to deliver your messages to an Amazon S3 bucket, call your custom code via an AWS Lambda function, or publish notifications to Amazon SNS. You can also configure Amazon SES to drop or bounce messages you do not want to receive. If you choose to store your messages in Amazon S3, Amazon SES can encrypt your mail using AWS Key Management Service (KMS) before writing it to the bucket.
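
When a receipt rule invokes a Lambda function, the function receives a notification describing the message. A sketch of such a handler (the `Records[0].ses.mail` shape shown here follows SES's receiving notification format, but treat the exact field names as an assumption):

```python
def lambda_handler(event, context):
    """Handle an incoming-mail notification from an SES receipt rule."""
    mail = event["Records"][0]["ses"]["mail"]
    subject = mail.get("commonHeaders", {}).get("subject", "(no subject)")
    # Custom processing goes here: routing, ticket creation, filtering, etc.
    return {"from": mail.get("source"), "subject": subject}
```

The notification carries headers and receipt metadata; the message body itself is available when you pair the rule with an S3 delivery action.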

The latest engine version of Amazon ElastiCache for Redis now comes with several enhancements:

More usable memory: You can now safely allocate more memory for your application without the risk of increased swap usage during syncs and snapshots.

Improved synchronization: Improved output buffer management provides more robust synchronization under heavy load and when recovering from network disconnections. Additionally, syncs are faster as both the primary and replicas no longer use the disk for this operation.

Smoother failovers: In the event of a failover, your cluster now recovers faster as replicas will avoid flushing their data to do a full re-sync with the primary.

AWS Device Farm helps you improve the quality of your iOS, Android and Fire OS apps by testing them against real smartphones and tablets in the AWS Cloud. You can use AWS Device Farm plugins to automatically initiate tests from continuous integration systems including Android Studio/Gradle and Jenkins.

We are excited to announce that Amazon Glacier has received a third-party assessment that details how Amazon Glacier with Vault Lock can be used to meet the electronic books and records storage requirements of SEC Rule 17a-4(f). The assessment is provided by Cohasset Associates, Inc., a highly respected consulting firm with over 40 years of experience in the legal, technical, and operational issues related to records management.

“It is Cohasset’s opinion that Glacier, when properly configured and utilized in conjunction with Vault Lock to store and retain records in non-erasable and non-rewriteable format, meets the relevant storage requirements of SEC Rule 17a-4(f) and CFTC Rule 1.31(b)-(c).”

If your organization is subject to the aforementioned SEC or CFTC regulations, you can include Cohasset Associates’ independent assessment of Amazon Glacier in your required compliance filings detailing your intended use of Amazon Glacier for regulatory compliance storage. You can download a copy of the full Cohasset Assessment and read the AWS blog to learn more.

You can now launch T2.large instances when using Amazon RDS for MySQL, PostgreSQL, Oracle and SQL Server across all commercial regions. T2 instances are the low-cost standard instances designed to provide a baseline level of CPU performance with the ability to burst above the baseline. These instances are recommended for workloads that do not use the full CPU often or consistently, but occasionally need to burst to higher CPU performance, such as small database workloads. T2.large offers double the memory of t2.medium and 50% more CPU credits per hour (8 GB and 36 CPU credits per hour). To learn more about the benefits of the T2 instance families on Amazon RDS, visit the Instance Classes page.

T2 instance types are available with MySQL database version 5.6; PostgreSQL database versions 9.3 and 9.4; Oracle database versions 11.2.0.4.v* and 12.1.0.2.v1; SQL Server Standard and Enterprise Editions Database versions 10.50 and 11.00. You can create new T2 instances with just a few clicks using the "Launch DB Instance" wizard in the AWS Management Console. To migrate an existing instance, first upgrade your instance to one of the database versions listed previously, and then restore a snapshot of that database instance to a new T2 instance.

Amazon RDS for MySQL, PostgreSQL, Oracle and SQL Server T2.large instances are now available in all Commercial AWS regions. For more information on pricing, visit the Amazon RDS pricing page.

Effective September 1, 2015, we are decreasing the price for Virtual Tape Shelf (VTS) storage by 30-36% for Gateway-Virtual Tape Library (VTL) in the US East (Northern Virginia), US West (Oregon), Europe (Ireland) regions. Data stored in VTS is backed by Amazon Glacier, and this change reflects the storage price reduction announced by Glacier on September 16, 2015.

Amazon Kinesis is a fully managed service for real-time processing of streaming data at massive scale. Amazon Kinesis can continuously capture and store terabytes of data per hour from hundreds of thousands of sources.

Amazon WorkSpaces now uses a smaller range of EC2 public IP addresses for its PCoIP gateway servers, enabling customers to set more fine-grained firewall policies for devices accessing WorkSpaces. The Amazon WorkSpaces service uses the PCoIP gateway to stream the desktop session to its client applications over port 4172.

You can now generate client-side SSL certificates in Amazon API Gateway and use their public keys to verify that HTTP requests to your backend systems originated from Amazon API Gateway. Integration endpoints for Amazon API Gateway are currently always publicly accessible to the Internet; with a client-side certificate, your backend can authenticate API requests and accept only those originating from Amazon API Gateway, even though the backend itself remains publicly accessible. Learn more about using client-side SSL certificates in the Amazon API Gateway Developer Guide.

Amazon Cognito helps developers onboard and authenticate users with public login providers such as Amazon, Facebook, Twitter, or a custom user login system. Amazon Cognito makes it easy to securely access resources in the AWS Cloud from mobile apps and client-side web apps without hardcoding credentials. Once a user's identity is established, developers can save user-specific data such as game state or user preferences without writing any backend code or managing any infrastructure. The data can be read and saved while offline and synchronized across all of the user's devices. Users can start out using the app as anonymous guests and when they login later their data will be seamlessly transitioned to their authenticated account. With Amazon Cognito, you can focus on creating great app experiences without having to worry about building and managing servers to handle user authentication, network state, storage, and sync.

Amazon Cognito is currently available in the US East (N. Virginia), EU (Ireland), and Asia Pacific (Tokyo) regions. For more information, please visit our webpage.

Today we released substantially updated English-language versions of AWS Business Professional and AWS Technical Professional, web-based accreditation courses designed to help APN Partners stay current on AWS, articulate the benefits of AWS services, and help customers make informed decisions about IT solutions.

Both courses are available to APN Partners at no cost via the APN Portal and count toward APN tier requirements that help Partners advance. The 2015 English versions of each course add coverage of key new AWS services and features; are more concise and better align AWS solutions with APN competencies; and feature improved interactivity with new video demos and exercises. Here are the course summaries:

AWS Business Professional (Released September 2015) What it covers: Foundational knowledge of key AWS services and core business value propositions, including AWS Marketplace solutions. Who it’s for: Business roles responsible for articulating the benefits of AWS services and how AWS and partner solutions help solve common business problems.

AWS Technical Professional (Released September 2015) What it covers: Key foundational technical concepts around AWS, including global infrastructure, services, common solutions, migration, security, and compliance. Who it’s for: Technical roles responsible for helping customers make informed decisions about IT solutions.

You can now access the genome sequence data of 3,024 rice varieties as an AWS Public Data Set. The data contains over 30 million genetic variations that span across all known and predicted rice genes, as well as potential regulatory regions surrounding these genes.

Through analysis of this data, researchers can identify important agronomic traits such as crop yield, climate stress tolerance, and disease resistance. Together, these genetic variations represent an unprecedented resource for advancing rice science and breeding technology. More in-depth analyses of this dataset could lead to inferences about higher yield and stress tolerance to pests, diseases, and climate change.

Learn more about how people are using the 3000 Rice Genome on AWS on our blog.

We’re excited to announce that the AWS Pop-up Loft is coming to Berlin! Located on the 5th floor in Krausenstrasse 38, 10117 Berlin Mitte, our space in the Amazon Offices in Berlin is a great place to grab a coffee, learn about AWS, or just get some work done.

We are excited to announce a new, lower-cost Amazon S3 storage class for data that is accessed less frequently. Amazon S3 Standard - Infrequent Access (Standard - IA) offers the high durability, low latency, and high throughput of Amazon S3 Standard, but with prices starting at $0.0125 per GB per month, $0.01 per GB retrieval fee, and a 30-day storage minimum. This combination of low cost and high performance makes Standard - IA ideal for long-term file storage, backups, and disaster recovery.
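
Because Standard - IA adds a per-GB retrieval fee on top of storage, it can help to estimate a month's cost before switching storage classes. The sketch below uses only the prices quoted above (the 30-day minimum is not modeled); the bucket and key names in the comments are placeholders, and the boto3 upload call is shown only as a comment.

```python
def standard_ia_monthly_cost(stored_gb, retrieved_gb):
    """One month of Standard - IA cost in USD: $0.0125/GB-month storage
    plus a $0.01/GB retrieval fee (30-day minimum not modeled)."""
    return stored_gb * 0.0125 + retrieved_gb * 0.01

# 100 GB stored for a month and fully retrieved once:
cost = standard_ia_monthly_cost(100, 100)
print(cost)

# Uploading to the new storage class is a matter of setting StorageClass,
# e.g. with boto3 (names below are placeholders):
# import boto3
# boto3.client("s3").put_object(Bucket="my-backups", Key="2015-09.tar.gz",
#                               Body=data, StorageClass="STANDARD_IA")
```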

You can now view how your Android app performs against real devices without writing any test scripts. The new Built-in: Explorer feature crawls your app and captures screenshots, logs, and performance data. All results and artifacts are collated into a Device Farm report and also made available through the API.

You can now choose to automatically provision your application’s resources across multiple Spot instance pools using the new diversified fleet option in the RequestSpotFleet API. This option enables you to maintain your fleet’s target capacity and increase your application’s availability as Spot capacity fluctuates. Running your application’s resources across diverse Spot instance pools also allows you to further reduce your fleet’s operating costs over time.
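
As a sketch of what a diversified request might look like, the helper below assembles a SpotFleetRequestConfig that spreads capacity across several instance pools. The AMI ID, IAM role ARN, and instance types are placeholders, and the actual RequestSpotFleet call appears only as a comment.

```python
def diversified_fleet_config(ami_id, fleet_role_arn, instance_types,
                             target_capacity, max_price):
    """Build a Spot fleet request config using the diversified allocation strategy."""
    return {
        "AllocationStrategy": "diversified",   # spread capacity across pools
        "TargetCapacity": target_capacity,
        "SpotPrice": max_price,                # maximum bid per instance-hour
        "IamFleetRole": fleet_role_arn,
        "LaunchSpecifications": [
            {"ImageId": ami_id, "InstanceType": t} for t in instance_types
        ],
    }

cfg = diversified_fleet_config("ami-12345678",
                               "arn:aws:iam::123456789012:role/my-fleet-role",
                               ["m3.medium", "m3.large", "c3.large"],
                               target_capacity=30, max_price="0.10")
# The request itself would be sent with boto3 (not imported here):
# boto3.client("ec2").request_spot_fleet(SpotFleetRequestConfig=cfg)
```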

We are pleased to announce that Elastic Load Balancing now supports all ports as well as additional Access Log fields that improve your visibility into application request traffic.

Load balancers within a VPC are no longer limited in the ports they can use; any port between 1 and 65535 can now be specified. As a result, Elastic Load Balancing can be used with applications and protocols that require a specific port.

We are pleased to announce that Amazon ElastiCache now supports Memcached version 1.4.24. Customers can launch new clusters with Memcached 1.4.24, as well as upgrade existing ones to the new engine version. Compared to version 1.4.14 (until now the latest version supported by ElastiCache for Memcached), this version adds support for LRU management as a background task, a new hashing algorithm, new commands, and miscellaneous bug fixes.

For the full list of improvements in Memcached 1.4.24 click here. You can easily launch an ElastiCache Memcached cluster with engine version 1.4.24 via a few clicks on the AWS Management Console.

Today, AWS Identity and Access Management added two new APIs that enable you to automate validation and auditing of permissions for your IAM users, groups, and roles. The iam:SimulatePrincipalPolicy API allows you to programmatically audit permissions in your account and validate a specific user’s permissions. The iam:SimulateCustomPolicy API provides a way to verify a new policy before applying it. These new APIs provide programmatic access to the IAM policy simulator, allowing you to test the effects of IAM access control policies from the AWS CLI or any AWS SDK before committing them to production.
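
A minimal sketch of working with the simulator's output: the helper below summarizes the EvaluationResults list that SimulateCustomPolicy returns. The sample response here is fabricated for illustration, and the real API call appears only as a comment.

```python
def summarize_simulation(response):
    """Map each simulated action to its decision
    ('allowed', 'explicitDeny', or 'implicitDeny')."""
    return {r["EvalActionName"]: r["EvalDecision"]
            for r in response["EvaluationResults"]}

# Fabricated response with the same shape as the real API's:
sample = {"EvaluationResults": [
    {"EvalActionName": "s3:GetObject", "EvalDecision": "allowed"},
    {"EvalActionName": "s3:DeleteObject", "EvalDecision": "implicitDeny"},
]}
verdicts = summarize_simulation(sample)
print(verdicts)

# A real simulation would be run with boto3 (not imported here):
# iam = boto3.client("iam")
# resp = iam.simulate_custom_policy(PolicyInputList=[policy_json],
#                                   ActionNames=["s3:GetObject", "s3:DeleteObject"])
```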

You can now use the Amazon EC2 Spot Bid Advisor to help you determine an Amazon EC2 Spot instance bid price that suits your needs. Since Spot instances are spare Amazon EC2 capacity with prices that vary as demand fluctuates, choosing your bid price carefully can help you get the compute capacity your applications need, while meeting your budget and availability requirements. Learn more about Spot instances.

You can now take advantage of the new Reserved Instances (RI) payment options on Amazon RDS for SQL Server. This simpler payment-based model consolidates reserved instances into one RI type, while introducing three payment options: No Upfront, Partial Upfront and All Upfront RIs, offering you more flexibility to decide on how you would like to pay for your Reserved Instances.

Quick Starts are automated reference deployments for key enterprise workloads on the AWS cloud. Each Quick Start launches, configures, and runs the AWS compute, network, storage, and other services required to deploy a specific workload on AWS, using AWS best practices for security and availability.

AWS Storage Gateway now allows you to tag your gateways, volumes, and virtual tapes. Tags are labels you can define and associate with your AWS resources, which provide filtering capabilities on operations such as generating AWS usage or cost reports. For example, you can use tags to allocate Storage Gateway costs and usage between departments in your organization. You can then view the Storage Gateway costs for a particular department over a period of time, or identify the department with the most Storage Gateway usage.

We are pleased to announce new APN Partners named as RDS Database Migration Partners. 2nd Watch, Apps Associates, Cloudnexa, Datapipe, Digital Edge, UST Global, Logicworks, Pythian and Slalom Consulting are ready to help customers with database migration projects using the RDS Migration Tool. Each partner brings unique capabilities within their database migration practice, from specialization in cross-platform migrations to building high availability solutions to setting up a hybrid cloud. The RDS Migration Tool is a powerful utility for migrating data with minimal downtime from on-premises and EC2-based databases to Amazon RDS, Amazon Redshift and Amazon Aurora databases. It supports not only like-to-like data migrations, e.g., Oracle-to-Oracle, but also migrations between different database platforms, e.g., SQL Server-to-MySQL. The RDS Migration Tool can capture changes on the source during the data migration and apply these changes to the target to gracefully switch over a production database to the new environment.

Now you can use the Amazon API Gateway mock integration feature to publish APIs and let other developers build apps even before your backend is ready. A mock integration allows you to use mapping templates to generate the output of your API method without communicating with a backend.
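
As a hedged sketch, the helper below assembles the parameters for API Gateway's put_integration call with a MOCK integration type; the mapping template that selects a static 200 response follows the pattern described above, and all IDs are placeholders.

```python
def mock_integration_kwargs(rest_api_id, resource_id, http_method):
    """Parameters for apigateway put_integration configuring a mock backend."""
    return {
        "restApiId": rest_api_id,
        "resourceId": resource_id,
        "httpMethod": http_method,
        "type": "MOCK",  # no backend is called
        # Mapping template that selects a static 200 response:
        "requestTemplates": {"application/json": '{"statusCode": 200}'},
    }

kw = mock_integration_kwargs("a1b2c3", "xyz789", "GET")
# The real call (boto3, not imported here):
# boto3.client("apigateway").put_integration(**kw)
```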

You can now use AWS CloudHSM Classic in Asia Pacific (Tokyo) and Asia Pacific (Singapore) AWS regions to maintain sole and exclusive control of encryption keys you use to manage Oracle Transparent Data Encryption (TDE) in Amazon RDS database instances. In addition to these two new regions, AWS CloudHSM is already integrated with Amazon RDS for Oracle TDE in US East (N. Virginia), US West (Oregon), EU (Ireland), Asia Pacific (Sydney), EU (Frankfurt), and AWS GovCloud (US) AWS regions. AWS CloudHSM offers single-tenant Hardware Security Module (HSM) appliances within the AWS cloud. You can securely generate, store, and manage the cryptographic keys used for data encryption such that they are accessible only by you.

You can now subscribe to Amazon Simple Notification Service (SNS) notifications for changes of AWS' IP address ranges. We publish the IP address ranges for AWS and specific services, like Amazon Elastic Compute Cloud (EC2), Amazon CloudFront, and Amazon Route 53 in a JSON file. You can use these ranges to update your firewalls or other configurations. Now you can use this Amazon SNS notification to automate the updates. For more information, see http://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html.
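
When a notification arrives, you would re-download and parse the published JSON file. The sketch below extracts the CIDR blocks for one service from the ip-ranges.json structure; the sample document here is a fabricated two-entry excerpt with the same shape as the published file.

```python
import json

def prefixes_for_service(ip_ranges, service):
    """Return the IPv4 CIDR blocks listed for one service in ip-ranges.json."""
    return [p["ip_prefix"] for p in ip_ranges["prefixes"]
            if p["service"] == service]

# Fabricated excerpt; the live file is at
# https://ip-ranges.amazonaws.com/ip-ranges.json
sample = json.loads("""
{"prefixes": [
  {"ip_prefix": "54.231.0.0/17", "region": "us-east-1", "service": "S3"},
  {"ip_prefix": "205.251.192.0/21", "region": "GLOBAL", "service": "ROUTE53"}
]}
""")
s3_blocks = prefixes_for_service(sample, "S3")
print(s3_blocks)
```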

You can now automatically launch Amazon EC2 Spot instances that have the lowest price per unit of capacity using the RequestSpotFleet API. You can define and set target capacity in any unit including instances, vCPUs, memory, storage, or network throughput. Since applications typically perform differently on different instance types, you can specify how much each instance type is worth, which automatically adjusts your bid price for each instance type. This enables you to bid on multiple instance types in a single request and provision cost-effective capacity available on any instance type.
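
The bid adjustment described above is simple arithmetic: if you bid per unit of capacity and assign each instance type a weight, the effective bid for one instance scales with its weight. A sketch with illustrative numbers (the weights below are hypothetical, not official values):

```python
def per_instance_bid(bid_per_unit, weighted_capacity):
    """Effective Spot bid for one instance of a type worth
    `weighted_capacity` units of capacity."""
    return bid_per_unit * weighted_capacity

# Bidding $0.05 per unit, with c3.xlarge hypothetically weighted at 4 units
# and c3.4xlarge at 16 units:
small = per_instance_bid(0.05, 4)    # bid per c3.xlarge
large = per_instance_bid(0.05, 16)   # bid per c3.4xlarge
print(small, large)
```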

You can now estimate the cost of your Amazon Machine Learning (Amazon ML) predictions through the AWS Management Console prior to requesting the predictions. For batch predictions, the Create Batch Prediction wizard will display a cost estimate based on the data source you selected. For real-time predictions, you will now have visibility into the hourly cost of operating the real-time prediction endpoint prior to creating it.

Now you can get started faster with AWS Device Farm and test your apps on real devices in the AWS Cloud. Leverage the sample Android app and scripts to create tests specific to your app, or take them out-of-box to learn more about automation best practices and AWS Device Farm. The tests you create can run both on your local development setup and on Device Farm.

The sample app and test scripts are available on GitHub. To learn more about testing your iOS, Android and Fire OS apps on real phones and tablets, visit our webpage.

AWS Config is a fully managed service that gives you an inventory of your AWS resources, notifies you when the configurations of your resources change, and lets you audit the history of the configurations for those resources.

You can now discover your active and deleted AWS resources recorded by AWS Config by simply specifying resource types. For example, you can get a list of all active and deleted EC2 instances in your account by specifying EC2 Instance as the resource type. You can also get an inventory of all your AWS resources recorded by Config by providing multiple resource types.
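
As a sketch, the helper below separates active from deleted resources in a list of Config resource identifiers, relying on the resourceDeletionTime field that appears only on deleted resources. The sample identifiers are fabricated, and the real listing call is shown only as a comment.

```python
def split_active_deleted(resource_identifiers):
    """Partition Config resource identifiers into (active, deleted) lists,
    using the resourceDeletionTime field present only on deleted resources."""
    active = [r for r in resource_identifiers
              if "resourceDeletionTime" not in r]
    deleted = [r for r in resource_identifiers
               if "resourceDeletionTime" in r]
    return active, deleted

# Fabricated identifiers with the same shape as the API's response:
sample = [
    {"resourceType": "AWS::EC2::Instance", "resourceId": "i-0123abcd"},
    {"resourceType": "AWS::EC2::Instance", "resourceId": "i-0456efgh",
     "resourceDeletionTime": "2015-08-01T00:00:00Z"},
]
active, deleted = split_active_deleted(sample)

# The real listing (boto3, not imported here):
# config = boto3.client("config")
# resp = config.list_discovered_resources(resourceType="AWS::EC2::Instance",
#                                         includeDeletedResources=True)
```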

We are concluding the Developer Preview for the AWS Mobile SDK for Xamarin and making it Generally Available. The AWS Mobile SDK for Xamarin makes it easier for you to deliver cross-platform iOS, Android, and Windows applications created with Xamarin that leverage AWS Services. With AWS Mobile SDK for Xamarin, you can simply connect your Xamarin-built apps to many AWS services, including identity management through Amazon Cognito, cloud storage via Amazon S3, a fully-managed NoSQL database with Amazon DynamoDB, mobile push notifications with SNS, and app analytics through Amazon Mobile Analytics.

The AWS Mobile SDK for Xamarin is included in the AWS SDK for .Net. To learn how to start using the SDK, please read our getting started guide.

AWS OpsWorks for Windows now supports custom AutoScaling based on Amazon CloudWatch Alarms and custom AMIs. Amazon CloudWatch alarms can be used as thresholds for AWS OpsWorks Automatic Load-based Scaling. For example, you can use ‘Disk Reads’ or ‘Network In’ as metrics to scale up or down your load-based instances.

Custom AMI support gives you the ability to use your own AMIs based on a Windows Server 2012 R2 base that have software such as Microsoft Windows Server 2012 R2 with SQL Server Express, SQL Server Standard or SQL Server Web preinstalled.

These additions to AWS OpsWorks Windows support benefit customers that want to start with custom built AMIs and customers that have the need to scale their infrastructure based on any metric available in Amazon CloudWatch alarms.

AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.

You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.

There is no additional charge for Elastic Beanstalk - you pay only for the AWS resources needed to store and run your applications. AWS Elastic Beanstalk is also available in the US East (N. Virginia), US West (Oregon), US West (N. California), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore) and Asia Pacific (Sydney) AWS regions.

Now you can use the AWS Console mobile app to add or remove an instance from a load balancer. AWS ELB customers will now see options to “Add instance” and “Remove instance” on a load balancer’s detail page. Download the app from Amazon Appstore, Google Play, and iTunes to view and manage your resources on the go.

The app features support for EC2, S3, Route 53, ELB, RDS, AWS Elastic Beanstalk, CloudFormation, DynamoDB, Auto Scaling, AWS OpsWorks, CloudWatch, and the Service Health Dashboard. The app lets you authenticate several identities, so you can easily switch between multiple accounts.

Let us know how you use the AWS Console app and tell us what features you’d like to see by using the feedback link in the app’s menu.

We are now reporting some of the most popular DynamoDB CloudWatch metrics in 1-minute intervals instead of 5-minute intervals to give you better operational insight. DynamoDB CloudWatch metrics let you monitor the performance of your DynamoDB tables. The increased granularity will let you make better decisions about operating your DynamoDB tables.

Please see the Monitoring DynamoDB with CloudWatch section of our documentation to learn how to access the new CloudWatch metrics for your tables.
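
To take advantage of the new 1-minute granularity, request metrics with a 60-second period. The sketch below builds parameters for CloudWatch's get_metric_statistics call; the table name is a placeholder and the timestamp is fixed for reproducibility.

```python
from datetime import datetime, timedelta

def dynamodb_metric_params(table_name, metric="ConsumedReadCapacityUnits",
                           minutes=60):
    """get_metric_statistics parameters requesting 1-minute DynamoDB data points."""
    end = datetime(2015, 9, 1, 12, 0)  # fixed for reproducibility; use utcnow() in practice
    return {
        "Namespace": "AWS/DynamoDB",
        "MetricName": metric,
        "Dimensions": [{"Name": "TableName", "Value": table_name}],
        "StartTime": end - timedelta(minutes=minutes),
        "EndTime": end,
        "Period": 60,              # one data point per minute
        "Statistics": ["Sum"],
    }

params = dynamodb_metric_params("MyTable")
# The real call (boto3, not imported here):
# boto3.client("cloudwatch").get_metric_statistics(**params)
```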

Amazon VPC: CloudFormation can now be used to provision VPC Endpoints. The AWS::EC2::VPCEndpoint resource in CloudFormation creates a VPC endpoint that you can use to establish a private connection between your VPC and another AWS service without requiring access over the Internet, a VPN connection, or AWS Direct Connect. Currently, Amazon VPC supports endpoints for connections with Amazon S3 within the same region only.

AWS Elastic Beanstalk: There is added support for tagging AWS Elastic Beanstalk environments. Use the Tags property to specify key-value pairs for an environment.

AWS Lambda: There is expanded coverage for configuring the Code property when specifying an AWS::Lambda::Function resource. You can use the ZipFile property to write your AWS Lambda function source code directly in an AWS CloudFormation template. Currently, you can use the ZipFile property only for the Node.js runtime environment. You can still point to a file in an Amazon S3 bucket for all runtime environments (e.g., Java 8, Node.js).

Amazon RDS: CloudFormation support for Amazon RDS now allows for creating cross-region read replicas. For the SourceDBInstanceIdentifier property, you can specify a database instance in another region to create a cross-region read replica.

Amazon S3: For versioning-enabled buckets, you can specify a version ID in an Amazon S3 template URL when you create or update a stack, such as https://s3.amazonaws.com/templates/myTemplate.template?versionId=123ab1cdeKdOW5IH4GAcYbEngcpTJTDW.
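
A brief template fragment illustrating two of the updates above, the AWS::EC2::VPCEndpoint resource and the Lambda ZipFile property. This is a sketch: the VPC, route table, and IAM role references are placeholders for resources assumed to be defined elsewhere in the template.

```json
{
  "Resources": {
    "S3Endpoint": {
      "Type": "AWS::EC2::VPCEndpoint",
      "Properties": {
        "VpcId": {"Ref": "MyVPC"},
        "ServiceName": "com.amazonaws.us-east-1.s3",
        "RouteTableIds": [{"Ref": "PrivateRouteTable"}]
      }
    },
    "InlineFunction": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Runtime": "nodejs",
        "Handler": "index.handler",
        "Role": {"Fn::GetAtt": ["LambdaExecutionRole", "Arn"]},
        "Code": {"ZipFile": "exports.handler = function(event, context) { context.succeed('ok'); };"}
      }
    }
  }
}
```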

We have updated base AMI images, including all bugfix and security updates that had been pushed to our repositories in the time since the 2015.03 release. Today's point release includes the 3.14.48 kernel and adds the nfs-utils package to the default package set.

You can now create machine learning models and obtain predictions with Amazon Machine Learning (Amazon ML) in the AWS EU (Ireland) region. When you use Amazon ML in the EU (Ireland) region and access your data from Amazon S3 buckets located in the same region, you will not incur data transfer costs. You can also use data stored in Amazon Redshift clusters and Amazon RDS instances hosted in this region. Using Amazon ML in the region nearest to your predictive applications may result in lower real-time prediction latencies, and availability in this region may help your organization meet data governance restrictions and other data control mandates tied to geographic location.

You can now connect to all Amazon RDS databases with AWS Data Pipeline. Using the new rdsInstanceId field, specify the instance ID to configure your RdsDatabase object in AWS Data Pipeline. Additionally, using the new jdbcDriverJarUri field, you can override the default JDBC driver with your own driver to connect to the database. You can also use your own JDBC driver and connect to any database that supports a JDBC connection using the JdbcDatabase object. Lastly, you can run SqlActivity against all of these databases, including Amazon RDS and other JDBC databases.
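
A sketch of the corresponding pipeline object definition, assuming hypothetical instance and driver names. The rdsInstanceId and jdbcDriverJarUri fields are the ones introduced above; the remaining fields follow the usual Data Pipeline database-object pattern.

```json
{
  "id": "MyRdsDatabase",
  "type": "RdsDatabase",
  "rdsInstanceId": "my-rds-instance",
  "username": "master_user",
  "*password": "my-password",
  "jdbcDriverJarUri": "s3://my-bucket/drivers/custom-jdbc-driver.jar"
}
```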

To learn more about configuring the RdsDatabase object in AWS Data Pipeline, please visit the documentation page.

Today we’re releasing the Amazon DynamoDB Storage Backend for Titan, enabling you to store Titan graphs of any size in fully-managed DynamoDB tables. Graph databases are optimized for fast traversal of complex relationships required for social networks, recommendation engines, fraud detection, inventory management and more. Titan is a popular graph database designed to efficiently store and traverse both small and large graphs up to hundreds of billions of vertices and edges. Titan enables scalability through a pluggable storage engine architecture. Until now, Titan required you to provision, manage, and scale the storage layer. With the DynamoDB storage backend plugin for Titan, you can now offload Titan storage management to AWS. Titan’s pluggable architecture makes it easy to start using DynamoDB without changing your application. We are releasing the code for two DynamoDB storage backend plugins that integrate with Titan versions 0.5.4 and 0.4.4.

Learn more about the DynamoDB Titan storage backend plugin by reading our blog, then get started using the DynamoDB storage backend plugin for Titan by reading our Documentation. The plugins are available on GitHub here.

The AWS Partner Network (APN) is thrilled to announce the launch of the new APN Mobile Competency. The APN Competency Program is designed to highlight APN Partners who have demonstrated technical proficiency and proven customer success in specialized solution areas and categories.

APN Mobile Competency Partners provide solutions that support developers, or have deep experience working with developers and mobile-first businesses, helping them build, test, analyze, and monitor their mobile apps on AWS.

Mobile Launch Partners

Congrats to our initial launch partners who have qualified for the Mobile Competency:

We are introducing the Amazon S3 Transfer Utility for iOS, which provides an easier and more powerful way to transfer data between your iOS app and Amazon S3. We built this utility based on customer feedback on the Amazon S3 Transfer Manager for iOS feature.

You can now use Amazon CloudWatch to monitor average and aggregate CPU and memory utilization of running tasks as grouped by Cluster or Service. You can use these metrics for many purposes, such as creating CloudWatch alarms that add capacity to your cluster when utilization crosses a threshold you define.

DynamoDB is now integrated with Elasticsearch, enabling you to perform full-text queries on your data. Elasticsearch is a popular open source search and analytics engine designed to simplify real-time search and big data analytics. Elasticsearch integration is easy with the new Amazon DynamoDB Logstash Plugin. Logstash is an open source data pipeline that works together with Elasticsearch to help you process logs and other event data. Full-text queries unlock new possibilities with your data stored in DynamoDB. Boost ad relevance by serving ads with matching search terms, tags, or descriptive keywords. Add search to mobile apps using DynamoDB content, such as messages, locations, and photo tags. Query IoT sensor status using a text search for the condition signature. Discover usage patterns in app telemetry.

With the introduction of Free Tier Usage Monitoring, all Free Tier-eligible customers can now view their month-to-date and month-end forecasted usage for Free Tier-eligible services. Usage data for your top five services (i.e., services in which current usage is closest to the Free Tier usage limits) are presented on the Billing Console Dashboard, while usage data for all Free Tier-eligible services are available by clicking the “View all” link. Free Tier data includes current and forecasted usage, Free Tier limits, and a percentage of the Free Tier limit used for each service. If your current or forecasted usage exceeds the service-specific Free Tier limit, it will be shown in red and provide a link to contact customer service and/or learn more about Free Tier limits.

You can now launch larger, more cost-efficient R3 instances and low-cost T2 instances when using Amazon RDS for Oracle. R3 instances are optimized for memory-intensive applications, have the lowest cost per GiB of RAM among Amazon RDS instance types, and are recommended for high performance database workloads. R3 instances deliver higher sustained memory bandwidth with lower network latency and jitter at prices up to 28% lower than comparable M2 database instances. T2 instances are the low-cost standard instances designed to provide a baseline level of CPU performance with the ability to burst above the baseline. These instances are recommended for workloads that do not use the full CPU often or consistently, but occasionally need to burst to higher CPU performance, such as small database workloads in test and development environments. To learn more about the benefits of the R3 and T2 instance families on Amazon RDS, visit the DB Instance Classes page.

You can now use Amazon Simple Workflow (SWF) to trigger your AWS Lambda functions. AWS Lambda is a compute service that runs your code in response to triggers and automatically manages the compute resources for you.

We are pleased to announce the availability of three new learning paths designed to teach you how to work with Microsoft technologies running in the AWS cloud. Self-paced labs help you get hands-on training in a live practice environment. The new learning paths, or qwikLABS “quests," are designed to help you learn about Microsoft corporate apps, such as Exchange and SharePoint; databases, including Microsoft SQL Server and Amazon RDS for SQL; and system administration tools for deploying and managing Microsoft Windows-based environments in the AWS cloud. Labs are developed by AWS subject matter experts and are available online through qwikLABS.com. Visit AWS for Windows Labs to learn more and get started.

AWS CloudHSM is now available in the AWS GovCloud (US) Region. AWS CloudHSM provides dedicated Hardware Security Module (HSM) appliances within the AWS cloud, helping you meet corporate, contractual and regulatory compliance requirements for data security. AWS CloudHSM is designed to enable you to maintain complete control over the use of encryption keys stored on HSM appliances.

Over the past year, we've welcomed many amazing developer experts to AWS Community Heroes. These hard-working folks were selected for the program because they routinely deliver high-quality, impactful, developer-focused activities to the AWS Community.

Amazon CloudFront is now included in the set of services that are compliant with the Payment Card Industry Data Security Standard (PCI DSS) Merchant Level 1, the highest level of compliance for service providers.

PCI DSS compliance is a requirement for any business that stores, processes, or transmits credit card data. Amazon CloudFront's PCI compliance now makes it easier for retail e-commerce, travel booking, ticket sale, or in-app purchase applications to integrate Amazon CloudFront as a part of their architecture and adhere to PCI DSS. Because Amazon CloudFront supports dynamic and static content delivery, customers such as e-commerce businesses can use the same secure service for whole site delivery to accelerate both the browsing and shopping cart experience for their site visitors.

Amazon EC2 Container Service (ECS) is now available in the US West (N. California) region.

Amazon ECS is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. Amazon ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure. With simple API calls, you can launch and stop Docker-enabled applications, query the complete state of your cluster, and access many familiar features like security groups, Elastic Load Balancing, EBS volumes, and IAM roles. You can use Amazon ECS to schedule the placement of containers across your cluster based on your resource needs and availability requirements. You can also integrate your own scheduler or third-party schedulers to meet business or application specific requirements.

Amazon EC2 Container Service is currently available in the US East (N. Virginia), US West (Oregon), US West (N. California), EU (Ireland), Asia Pacific (Tokyo), and Asia Pacific (Sydney) regions. Please visit our product page for more information.

You can now test iOS, Android and Fire OS apps against a large (and growing) collection of real phones and tablets without the complexity and expense of deploying and maintaining your own device labs and automation infrastructure. With AWS Device Farm, you simply upload your app to test it on your choice of real devices from a fleet of unique device/OS combinations, and within minutes you receive detailed reports that pinpoint bugs and performance problems.

We are excited to announce two frequently requested Amazon S3 usability enhancements:

Bucket Limit Increase: You can now increase your Amazon S3 bucket limit per AWS account. All AWS accounts have a default bucket limit of 100 buckets, and starting today you can request additional buckets by visiting AWS Service Limits.

Read-after-write Consistency: Amazon S3 now supports read-after-write consistency for new objects added to Amazon S3 in the US Standard region. Prior to this announcement, all regions except US Standard supported read-after-write consistency for new objects uploaded to Amazon S3. With this enhancement, Amazon S3 now supports read-after-write consistency in all regions for new objects. Read-after-write consistency allows you to retrieve objects immediately after creation in Amazon S3.

You can now use AWS OpsWorks to provision and manage Amazon EC2 Container Service (ECS) container instances running Ubuntu 14.04 LTS or Amazon Linux 2015.03. Previously, you had to manually perform routine tasks such as installing system and package updates or configuring EBS volumes. You can now use OpsWorks to streamline these tasks for you. Learn more about how to provision and manage container instances using AWS OpsWorks here. To learn more about how to run ECS tasks on container instances that have been provisioned by OpsWorks, read the ECS Getting Started Guide.

You can now run SQL Server Enterprise Edition as a License Included offering on Amazon RDS. The License Included offering for the Enterprise Edition is available on R3.2xlarge, R3.4xlarge, and R3.8xlarge instance types, in the US-East (Virginia), US-West (Oregon) and Europe (Ireland) regions. In the License Included service model, you do not need separately purchased Microsoft SQL Server licenses. License Included pricing is inclusive of software license, underlying hardware resources, and Amazon RDS management capabilities. This allows you to pay for the hours you use and change your instance type as needed to fit your workloads.

The AWS Mobile SDK for Xamarin is available in Developer Preview. Now you can easily deliver cross-platform iOS, Android, and Windows applications created with Xamarin that leverage AWS Services. With AWS Mobile SDK for Xamarin, you can simply connect your Xamarin-built apps to AWS for identity management through Amazon Cognito, cloud storage via Amazon S3, a fully-managed NoSQL database with Amazon DynamoDB, mobile push notifications with SNS, and app analytics through Amazon Mobile Analytics.

The AWS Mobile SDK for Xamarin is included in the AWS SDK for .Net. To learn how to start using the SDK, please read our getting started guide.

Originally launched with 10 services in 2009, the AWS SDK for .NET has been offered as a single package that includes support for all AWS services. Since that time, the number of supported AWS services has grown to 52, and the size of the SDK has become larger. We began to hear more customer requests for splitting the SDK into smaller per-service modules, and this was our primary motivation to start the Version 3 SDK, which has been in preview since April this year.

You can now configure Amazon Redshift to automatically copy snapshots of your KMS-encrypted clusters to another region of your choice. By storing a copy of your snapshots in a secondary region, you have the ability to restore your cluster from recent data if anything affects the primary region. For details on how to enable automatic cross-region backups for your KMS-encrypted clusters, refer to the Snapshots section of the Amazon Redshift management guide.
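As a sketch of what enabling this looks like programmatically, the following builds the request parameters for the Redshift EnableSnapshotCopy action; the cluster identifier and copy grant name are hypothetical placeholders, and the snapshot copy grant is what authorizes Redshift to use a KMS key in the destination region:

```python
# Sketch: enable cross-region snapshot copy for a KMS-encrypted cluster.
# Parameter names follow the Redshift EnableSnapshotCopy API; the cluster
# and grant names are hypothetical placeholders.
params = {
    "ClusterIdentifier": "my-encrypted-cluster",  # hypothetical cluster name
    "DestinationRegion": "us-west-2",             # region to copy snapshots to
    "RetentionPeriod": 7,                         # days to keep copied snapshots
    # KMS-encrypted clusters additionally need a snapshot copy grant that
    # authorizes Redshift to use a KMS key in the destination region:
    "SnapshotCopyGrantName": "my-copy-grant",     # hypothetical grant name
}
# With boto3 this would be passed as:
#   boto3.client("redshift").enable_snapshot_copy(**params)
```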

You can use AWS to build applications that are compliant with the US Health Insurance Portability and Accountability Act (HIPAA), using services that are covered under the AWS Business Associate Agreement (BAA). The AWS BAA now covers three new services: Amazon RDS (relational databases; MySQL and Oracle engines only), Amazon DynamoDB (NoSQL database), and Amazon EMR (big data processing).

Amazon Aurora is now available to all customers. Amazon Aurora is a MySQL-compatible relational database management system (RDBMS) that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora provides up to five times the performance of MySQL, and delivers performance and availability similar to high-end commercial databases at one tenth of their price.

Starting today, Spot fleet automatically launches new Spot instances into the lowest priced Availability Zone (AZ) when you specify multiple VPC subnets or AZs in the Spot fleet launch specifications. Previously, to run a Spot fleet in the cheapest of a specific set of subnets, you had to determine the AZ with the lowest Spot price yourself and request your Spot fleet in the corresponding subnet.

You can now join Linux instances running on Amazon EC2 to Simple AD directories from AWS Directory Service. This enables you to log in to all of your EC2 instances with a single set of domain credentials (no key pair needed) and set access controls, allowing you to control which users can access particular instances.

You can now make consistent reads using the Scan operation. This means that the Scan operation can include changes from all writes that are acknowledged before the Scan operation is started. Scanning a table with consistent reads makes it easy to include the latest updates on items when backing up or replicating your DynamoDB table. To make consistent reads using the Scan operation, set the ConsistentRead parameter in the Scan API call to true. Please read our documentation to learn additional details.
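A minimal sketch of such a request, assuming a hypothetical table name; `ConsistentRead` is the parameter the announcement describes, and the dict mirrors the DynamoDB Scan API shape:

```python
# Sketch: a strongly consistent Scan request. The table name is a
# hypothetical placeholder.
scan_request = {
    "TableName": "Backups",    # hypothetical table
    "ConsistentRead": True,    # include all writes acknowledged before the Scan starts
    "Limit": 100,              # optional page size
}
# With boto3: boto3.client("dynamodb").scan(**scan_request)
```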

We have released AWS Solutions Training for Partners, a free one-day training designed to teach partners techniques and best practices for AWS solutions that solve customer challenges. The first topic in this series is called “Foundations”, and it focuses on how AWS services address customer business priorities around costs, agility, security, compliance, innovation, growth, and rapid global scale. We also cover Big Data and AWS, as well as general pricing examples and TCO analysis. This in-person training complements and extends concepts from AWS Business Professional and AWS TCO and Cloud Economics, two free online courses available to all APN Partners. More information is available at Partner Training.

You can now import your Swagger API definitions into Amazon API Gateway. The Swagger importer tool allows you to easily create and deploy new APIs as well as update existing ones using Amazon API Gateway.

Amazon RDS resource tags allow you to add metadata and apply access policies to your Amazon RDS resources. Starting today, you can allow tags set on your database instances to be automatically copied to any automated or manual database snapshots that are created from your instances. This allows you to easily set metadata, including access policies, on your snapshots to match the parent instance. You may enable this functionality while creating a new instance, or on an existing database instance. You may also choose to disable it at a future date. Once enabled, tags can be copied to all future copies of a snapshot, including cross-region snapshots. Learn more about enabling copying tags to DB snapshots.
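As a sketch, the parameters below enable tag copying when creating a new instance; parameter names follow the RDS CreateDBInstance API, while the identifiers, sizes, and tag values are hypothetical placeholders:

```python
# Sketch: create an RDS instance with tag copying to snapshots enabled.
# Parameter names follow the RDS CreateDBInstance API; identifiers and
# sizes are hypothetical placeholders.
create_params = {
    "DBInstanceIdentifier": "orders-db",   # hypothetical instance name
    "Engine": "mysql",
    "DBInstanceClass": "db.r3.large",
    "AllocatedStorage": 100,
    "CopyTagsToSnapshot": True,            # copy instance tags to its snapshots
    "Tags": [{"Key": "CostCenter", "Value": "retail"}],
}
# With boto3: boto3.client("rds").create_db_instance(**create_params)
```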


Amazon DynamoDB now supports cross-region replication, a new feature that automatically replicates DynamoDB tables across AWS regions. You can use cross-region replication to build globally distributed applications with lower-latency data access, better traffic management, easier disaster recovery, and easier data migration. There is no additional charge for using the cross-region replication application. You only pay for Amazon DynamoDB resources for the replica tables, reading from Streams, the SQS queue, and the EC2 instance that runs the cross-region application. You can get started today by reading How to Set Up Cross-Region Replication in the Amazon DynamoDB Developer Guide.

We are excited to announce that you can now ingest AVRO files directly into Amazon Redshift. Use the COPY command to ingest data in AVRO format in parallel from Amazon S3, Amazon EMR, and remote hosts (SSH clients). For details, refer to the data ingestion section of the documentation.
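A minimal sketch of the COPY statement described above, with a hypothetical table, bucket, and IAM role; `'auto'` tells Redshift to map Avro fields to columns by name:

```python
# Sketch: a COPY statement that loads Avro data from S3. The table, bucket,
# and role ARN are hypothetical placeholders.
copy_sql = (
    "COPY clicks "
    "FROM 's3://my-bucket/avro/' "
    "CREDENTIALS 'aws_iam_role=arn:aws:iam::123456789012:role/RedshiftCopy' "
    "FORMAT AS AVRO 'auto';"   # 'auto' maps Avro fields to columns by name
)
```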

AWS Lambda is now integrated with Amazon API Gateway allowing you to create custom RESTful APIs that trigger Lambda functions. With API Gateway, you can create and operate APIs for backend services without developing and maintaining infrastructure to handle authorization and access control, traffic management, and monitoring and analytics. Learn more about Amazon API Gateway here.

AWS CodePipeline is now available to all customers. AWS CodePipeline is a continuous delivery service for fast and reliable application updates. CodePipeline builds, tests, and deploys your code every time there is a code change based on the release process models you define. This enables you to rapidly and reliably deliver features and updates. You can easily build out an end-to-end solution by using our pre-built plugins for popular third-party services like GitHub or integrating your own custom plugins into any stage of your release process. With AWS CodePipeline, you only pay for what you use. There are no upfront fees or long-term commitments.

Click here to learn how to get started with AWS CodePipeline. For more information about AWS CodePipeline, please visit our product page.

You can now test your Android and Fire OS apps against a large (and growing) collection of real phones and tablets without the complexity and expense of deploying and maintaining your own device labs and automation infrastructure. Simply upload your apps on AWS Device Farm and it runs your tests on a fleet of unique device/OS combinations. Within minutes you receive detailed reports that pinpoint bugs, performance problems, and other issues. You can also automatically initiate tests by integrating with your continuous integration tools such as Jenkins.

There is no setup cost to get started. And your first 250 device minutes are free. Learn more »

AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog allows you to centrally manage commonly deployed IT services, and helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need.

AWS CodeCommit is now available to all customers. AWS CodeCommit is a fully-managed source control service that makes it easy for companies to host secure and highly scalable private Git repositories. CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure. You can use CodeCommit to securely store anything from source code to binaries, and it works seamlessly with your existing Git tools.

Click here to learn how to get started with AWS CodeCommit. For more information about AWS CodeCommit, please visit our product page.

Amazon API Gateway is a fully managed service that makes it easy for developers to publish, maintain, monitor, and secure APIs at any scale. With a few clicks in the AWS Management Console, you can create an API that acts as a “front door” for applications to access data, business logic, or functionality from your back-end services, such as applications running on Amazon Elastic Compute Cloud (Amazon EC2), code running on AWS Lambda, or any web application. Amazon API Gateway handles all of the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.

Amazon API Gateway has no minimum fees or startup costs. You pay only for the API calls you receive and the amount of data transferred out; caching is optional. To learn more and try Amazon API Gateway for free, visit our website to start building and deploying APIs today.

Software as a Service (SaaS) delivery is the mechanism to deliver continuous innovation. The AWS platform provides the low cost, reliable and secure way to deliver single and multi-tenant SaaS solutions.

The launch of the global AWS SaaS Partner Program provides a series of benefits designed to help you build and grow SaaS solutions on AWS. As your business evolves, AWS will be there to provide the business and technical enablement support you need.

The benefits are structured in a maturity lifecycle: Learn, Build, Grow. Below is an outline of the benefits that we offer at each stage of the Learn, Build, Grow lifecycle.

In the Learn stage, you have access to SaaS business and technical enablement content in the form of articles, whitepapers, and reference architectures. The SaaS Partner Program Webcasts provide short video overviews to recap and extend the business and technical enablement content.

The Build stage offers SaaS Office Hours, live sessions held with AWS business and technical leaders. You can also take advantage of the Innovation Sandbox, which enables you to apply for credits to be used for new SaaS offering development and/or test environments.

The Grow stage provides you with benefits such as the SaaS Test Drive, which connects customers with you, the SaaS partner, through pre-configured Test Drive environments for Software as a Service (SaaS) solutions on AWS. Test Drive enables your customers to quickly and easily explore the benefits of using your SaaS software on AWS. The SaaS Pilot benefit enables approved SaaS Partners to request AWS service credits that your customers can use to "kick the tires" of your SaaS offering, lowering the barrier of entry for customers to try it. The AWS Marketplace SaaS Listings benefit enables you to list your SaaS offerings on AWS Marketplace, providing a channel for you to acquire new customers, grow revenue, and drive awareness of your SaaS offerings.

Overall, the AWS SaaS Partner Program is designed to help you in every stage of your SaaS business. To learn more or apply, visit the AWS SaaS Partner Program page.

Amazon Glacier Vault Lock allows you to easily set compliance controls on individual Glacier vaults and enforce them via a lockable policy. You can specify controls such as “undeletable records” or “time-based data retention” in a Vault Lock policy and lock the policy from future edits. Once locked, the policy becomes immutable and Glacier will enforce the prescribed controls to help achieve your compliance objectives.
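A sketch of a time-based retention control: the policy below denies deletion of any archive younger than 365 days. The vault ARN and account ID are hypothetical placeholders; the `glacier:ArchiveAgeInDays` condition key follows the Vault Lock documentation:

```python
import json

# Sketch: a Vault Lock policy enforcing one-year retention. Archives younger
# than 365 days cannot be deleted once the policy is locked. The vault ARN is
# a hypothetical placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "deny-early-delete",
        "Principal": "*",
        "Effect": "Deny",
        "Action": "glacier:DeleteArchive",
        "Resource": "arn:aws:glacier:us-east-1:123456789012:vaults/records",
        "Condition": {
            "NumericLessThan": {"glacier:ArchiveAgeInDays": "365"}
        }
    }]
}
policy_json = json.dumps(policy)  # this JSON is what gets attached and locked
```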

Amazon SES now enables you to authorize other AWS accounts and IAM users to send emails from your identities (email addresses and domains). With this feature, called sending authorization, you maintain complete control over your identities through the use of policies that expressly list who may send email from an identity, and under which conditions. For example, as a business owner, you might use sending authorization to enable an email marketing company to send marketing emails from an email address under your domain name, but only over a specific period of time.
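A sketch of such a policy for the marketing-company example: it allows a hypothetical delegate account (999999999999) to send from a verified domain identity, but only until a cutoff date. The ARNs and dates are illustrative placeholders:

```python
import json

# Sketch: an SES sending authorization policy letting another AWS account
# send from your verified identity until a cutoff date. Account IDs, the
# identity ARN, and the date are hypothetical placeholders.
policy = {
    "Version": "2008-10-17",
    "Statement": [{
        "Sid": "campaign-delegation",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999999999999:root"},  # delegate account
        "Action": ["ses:SendEmail", "ses:SendRawEmail"],
        "Resource": "arn:aws:ses:us-east-1:123456789012:identity/example.com",
        "Condition": {
            "DateLessThan": {"aws:CurrentTime": "2015-12-31T00:00:00Z"}
        }
    }]
}
# Attached with (a sketch):
#   boto3.client("ses").put_identity_policy(
#       Identity="example.com", PolicyName="campaign-delegation",
#       Policy=json.dumps(policy))
```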

The AWS Partner Network (APN) is thrilled to announce the launch of the new APN Security Competency. The APN Competency Program is designed to highlight APN Partners who have demonstrated technical proficiency and proven customer success in specialized solution areas and categories.

You can now use the UDP protocol with containers on Amazon EC2 Container Service (ECS). Docker enables container network connectivity by supporting the ability to expose a container port to a host port. Previously, Amazon ECS only supported TCP ports in task definitions. Now, you can also define UDP ports in your task definitions allowing you to use whichever protocol (i.e., TCP or UDP) your applications need. Learn more here.
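A minimal sketch of a container definition using the new capability; the container name and image are hypothetical, and the `portMappings` shape follows the ECS task definition parameters:

```python
# Sketch: a container definition exposing both a UDP and a TCP port.
# The container name and image are hypothetical placeholders.
container_def = {
    "name": "dns-proxy",            # hypothetical container
    "image": "my-repo/dns-proxy",   # hypothetical image
    "memory": 128,
    "portMappings": [
        {"containerPort": 53, "hostPort": 53, "protocol": "udp"},   # now supported
        {"containerPort": 8080, "hostPort": 80, "protocol": "tcp"},
    ],
}
```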

Optimizing the size of the JavaScript payload is an important part of tuning the load time of a web application. Because AWS spans many web services and exposes a very large API surface area, serving a build of the AWS SDK for JavaScript that includes support for services your application does not use can be inefficient.

Hands-on experience and on-the-job skills are the most helpful tools in preparing for AWS Certification exams. We now offer two self-paced lab learning paths to help you get hands-on practice with AWS services addressed in exams. The following “Learning Quests” are now available on the qwikLABS platform:

Cost Explorer provides you with interactive graphical reports, designed to make it easier for you to view, understand, and control your AWS costs. The data behind these reports is updated daily, so that you can view the most up-to-date information about your costs.

You can now simplify the task of managing costs on AWS with Budgets. Starting today, you can define a monthly budget for your AWS costs—whether at an aggregate cost level (i.e., “all costs”) or further refined to include only those costs relating to specific cost dimensions or groups of cost dimensions, including Linked Account, Service, Tag, Availability Zone (“AZ”), Purchase Option (e.g., Reserved), and/or API Operation.

Alexa is the cloud-based voice service that powers Amazon Echo, a new category of device designed around your voice. AWS Lambda is now integrated with the Alexa Skills Kit, a collection of self-service APIs, tools, documentation and code samples that make it fast and easy for you to create new voice-driven capabilities (or “skills”) for Alexa. You simply upload the Lambda function code for the new Alexa skill you are creating, and AWS Lambda does the rest, executing the code in response to Alexa voice interactions and automatically managing the compute resources on your behalf. No prior experience with speech recognition or natural language understanding is required—Amazon does all the work to hear, understand, and process the voice interactions.
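A minimal sketch of what such a Lambda function looks like: the response shape follows the Alexa Skills Kit JSON interface, while the skill itself (a fixed greeting) is a hypothetical example:

```python
# Sketch: a minimal Lambda handler for an Alexa skill. The response shape
# follows the Alexa Skills Kit JSON interface; the greeting is hypothetical.
def lambda_handler(event, context):
    # Alexa delivers the voice request in event["request"]; we return the
    # plain-text speech Alexa should say back, then end the session.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": "Hello from Lambda"},
            "shouldEndSession": True,
        },
    }
```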

City on a Cloud Launches at the 2015 Government, Education and Nonprofits Symposium in Washington, D.C. Through our City on a Cloud Innovation Challenge, we will recognize local and regional governments that are hubs of innovation and driving technology solutions to help improve citizens' lives. Visit our map of the "City on a Cloud" to learn what cities can do with cloud computing or see the solutions deployed by last year's winners at: aws.amazon.com/stateandlocal/cityonacloud. For details, please email the WW PS Marketing team at: aws-wwps-marketing@amazon.com.

You can now launch R3 instances, the latest generation of Amazon EC2 memory-optimized instances, in the South America (Sao Paulo) AWS region. R3 instances offer the best price point per GiB of RAM and high memory performance. They are recommended for in-memory analytics such as SAP HANA; high performance databases, including relational databases and NoSQL databases such as MongoDB; and Memcached/Redis applications. R3 instances support hardware virtual machine (HVM) Amazon Machine Images (AMIs) only.

R3 instances, powered by Intel Xeon Ivy Bridge processors, offer up to 32 vCPUs, 244 GiB of memory, and can deliver up to 150,000 4 KB random reads per second. R3 instances support Enhanced Networking for higher packet per second (PPS) performance, lower network jitter, and lower network latencies. Please refer to the R3 technical documentation for additional reference. R3 instances are available in two instance sizes in the South America (Sao Paulo) AWS region with the following specifications:

We are happy to announce support for searching your log events in the Amazon CloudWatch console. Also, you can now view log events across multiple log streams from a single log group in one place.

With this update, you can search for specific words or patterns in your logs to help you quickly find the information you need. For example, you can search for the word "Exception" in your application logs, HTTP 400 errors in your web request logs or a user name in your AWS CloudTrail logs. All of your log events can be searched no matter how old they are.
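The same search is available programmatically; as a sketch, the parameters below mirror the "Exception" example via the CloudWatch Logs FilterLogEvents action, with a hypothetical log group name:

```python
# Sketch: searching log events with a filter pattern, mirroring the console's
# search. The log group name is a hypothetical placeholder.
search_params = {
    "logGroupName": "/my-app/production",  # hypothetical log group
    "filterPattern": "Exception",          # term to search for
    "interleaved": True,                   # merge matches across log streams
}
# With boto3: boto3.client("logs").filter_log_events(**search_params)
```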

If you already have logs in Amazon CloudWatch Logs, you can get started immediately by clicking the "Search Events" button when viewing log streams in a log group in the CloudWatch console.

The AWS CodeDeploy agent is now available for Red Hat Enterprise Linux (RHEL) 7 and higher. This is in addition to Amazon Linux, Ubuntu Server, and Microsoft Windows Server operating systems that are currently supported. The CodeDeploy agent is also available as open source software here. Please see the documentation for more information on operating system support.

AWS Config is a fully managed service that gives you an inventory of your AWS resources, notifies you when the configurations of your resources change, and lets you audit the history of the configurations for those resources.

You can now set up AWS Config to record changes for specific resource types. You will only be charged for changes to resource types that are being recorded. Visit the Set Up page on the AWS Config console to start using this capability. By default, all supported resource types are recorded.
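As a sketch of recording only specific resource types, the recording group below limits AWS Config to EC2 instances and security groups; the field and resource-type names follow the AWS Config documentation, and the recorder name is a placeholder:

```python
# Sketch: record changes only for selected resource types. Field and
# resource-type names follow the AWS Config docs.
recording_group = {
    "allSupported": False,        # record only the listed types, not everything
    "resourceTypes": [
        "AWS::EC2::Instance",
        "AWS::EC2::SecurityGroup",
    ],
}
# With boto3 (a sketch):
#   boto3.client("config").put_configuration_recorder(
#       ConfigurationRecorder={"name": "default",
#                              "recordingGroup": recording_group})
```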

You can now manage updates of the agent software running on instances managed by AWS OpsWorks. Previously, OpsWorks automatically updated a stack to the latest released agent. Now, you can control whether you want to automatically or manually update the agent version of your stack. This allows you to test new agents before rolling them out to your production stacks.

New stacks you create will have the latest agent version installed, but will default to manual agent updates. Your existing stacks have been updated to reflect this configuration. You will see a notification in the management console with a link to the change log when new versions of the OpsWorks agent are made available.

Read more about the new features in our documentation. For more information about AWS OpsWorks, please visit our product page.

Today, we are announcing support for Dell NetVault Backup 10.0 with AWS Storage Gateway-Virtual Tape Library (VTL). You can now back up and archive directly to scalable, cost-effective, secure Amazon S3 and Amazon Glacier storage using Gateway-VTL with NetVault Backup.

Amazon Glacier now allows you to tag your Glacier vaults for easier resource and cost management. Tags are labels that you can define and associate with your vaults, and using tags adds filtering capabilities to operations such as AWS cost reports. For example, you can use tags to allocate Glacier costs and usage across multiple departments in your organization or by any other categorization. You can then identify the department with the most Glacier storage costs or view the storage growth of a particular cost center over a period of time. To do this, simply add “Department” or “Cost Center” tags to your vaults and use the AWS Cost Allocation Reports tool to view a breakdown of costs and usage by tag.
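A sketch of the cost-allocation example above, with a hypothetical vault name; parameter names follow the Glacier AddTagsToVault action:

```python
# Sketch: tag a Glacier vault for cost allocation. The vault name and tag
# values are hypothetical placeholders.
tag_request = {
    "vaultName": "finance-records",   # hypothetical vault
    "Tags": {"Department": "Finance", "CostCenter": "1234"},
}
# With boto3: boto3.client("glacier").add_tags_to_vault(**tag_request)
```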

Version 3 of the AWS SDK for Python, also known as Boto3, is now stable and generally available. Feedback collected from preview users as well as long-time Boto users has been our guidepost along the development process, and we are excited to bring this new stable version to our Python customers. Boto3 comes with the following key features:

Quick Starts are automated reference deployments for key enterprise workloads on the Amazon Web Services (AWS) cloud. Each Quick Start launches, configures, and runs the AWS compute, network, storage, and other services required to deploy a specific workload on AWS, using AWS best practices for security and availability.

Trend Micro Deep Security is a host-based security product that provides intrusion detection and prevention, anti-malware, host firewall, file and system integrity monitoring, and log inspection modules in a single agent running in the guest operating system.

This Quick Start describes the recommended deployment of Trend Micro Deep Security version 9.5 into an Amazon Virtual Private Cloud (Amazon VPC) using Amazon Machine Images (AMIs) from the AWS Marketplace. In addition, the Quick Start includes sample code to show how you can use both the Amazon Elastic Compute Cloud (Amazon EC2) API and the Deep Security API to ensure that every instance running within your AWS environment is being protected. To get started, use the following resources:

You can now deregister task definitions that are no longer needed. Task definitions are a description of an application that contains one or more container definitions. You create new task definition revisions as you update your application. You can now deregister task definitions to remove outdated revisions. You can also sort task definitions from oldest to newest or newest to oldest, making it easy to find specific task definition revisions.
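A sketch of cleaning up an outdated revision: list a family's revisions newest-first, then deregister one. Parameter names follow the ECS ListTaskDefinitions and DeregisterTaskDefinition actions; the family name and revision are hypothetical placeholders:

```python
# Sketch: find and remove outdated task definition revisions. The family
# name and revision number are hypothetical placeholders.
list_params = {
    "familyPrefix": "web-app",  # hypothetical task definition family
    "sort": "DESC",             # newest revision first
}
deregister_params = {
    "taskDefinition": "web-app:3",  # family:revision to deregister
}
# With boto3:
#   boto3.client("ecs").list_task_definitions(**list_params)
#   boto3.client("ecs").deregister_task_definition(**deregister_params)
```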

You can now create MySQL, PostgreSQL, and Oracle RDS database instances with up to 6TB of storage and SQL Server RDS database instances with up to 4TB of storage when using the Provisioned IOPS and General Purpose (SSD) storage types. Existing MySQL, PostgreSQL, and Oracle RDS database instances can be scaled to these new database storage limits without any downtime.

Users of large transactional databases and data warehouses can now run even larger, higher performance workloads on a single database instance, without needing to shard the data across multiple RDS instances. The new storage limit doubles the available storage for MySQL, PostgreSQL, and Oracle databases on RDS, and quadruples the former limit for RDS SQL Server. SQL Server users can now also provision up to 20,000 input/output operations per second (IOPS), double the former limit of 10,000 IOPS.

You can now choose to pay your AWS bill with Canadian dollars. If your credit card provider currently charges you expensive fees for converting currency, choosing to be billed in your preferred currency may help you to reduce these costs. You can compare our rates, which are displayed on the Account Settings page of the AWS Billing Console, with your credit card statements to determine if using our currency conversion service would benefit you.

To take advantage of this new feature, all you need to do is to specify you would like to be billed in Canadian dollars using the AWS Billing Console. Once you've specified your preferred payment currency, your future monthly invoices will be converted to this currency. We'll display Canadian dollars in the billing console, and you can always change your preferred currency at any time.

Amazon CloudFront now lets you configure a Default TTL and a Maximum time-to-live (Max TTL) to specify how long CloudFront caches your objects in edge locations. These new settings enhance the control over cache duration that you already had with the Minimum TTL setting (Min TTL). You can learn more about how to set these granular caching rules here.
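As a sketch, a cache behavior with all three settings looks like the fragment below; field names follow the CloudFront DistributionConfig API, values (in seconds) are illustrative:

```python
# Sketch: the three TTL settings on a CloudFront cache behavior, in seconds.
# Values are illustrative.
cache_behavior = {
    "MinTTL": 0,          # cache at least this long, even if the origin asks for less
    "DefaultTTL": 86400,  # used when the origin sends no Cache-Control/Expires headers
    "MaxTTL": 31536000,   # upper bound, even if the origin asks for longer
}
```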


The AWS Partner Network (APN) is thrilled to announce the launch of the new APN Marketing & Commerce Competency. The APN Competency Program is designed to highlight APN Partners who have demonstrated technical proficiency and proven customer success in specialized solution areas and categories.

You can now launch the t2.large, the newest Amazon EC2 burstable-performance instance size. The t2.large features 8 GiB of system memory and 2 vCPUs, making it well suited for workloads that require a consistent baseline of performance, the ability to burst, and more memory than is available in other T2 instances. With an On-Demand Instance price of $0.104 per hour, the t2.large is the lowest-cost Amazon EC2 instance for workloads that need 8 GiB of system memory. To learn more about Amazon EC2 T2 instances, visit the Amazon EC2 instance page.

Many applications such as web servers, developer environments and databases don’t need consistently high levels of CPU, but benefit significantly from having full access to very fast CPUs when they need them. T2 instances are engineered specifically for these use cases. T2 instances are backed by the latest Intel Xeon processors with clock speeds up to 3.3 GHz. They also work well with Amazon EBS General Purpose (SSD-backed) volumes.

T2 instances can be purchased as On-Demand Instances and Reserved Instances. The t2.large is available in the US East (N. Virginia), US West (Oregon), US West (N. California), EU (Ireland), EU (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), South America (Sao Paulo), China (Beijing), and AWS GovCloud (US) regions.

You can now run multiple Java applications on a single EC2 instance through an AWS Elastic Beanstalk Tomcat environment. Previously, you needed to create an environment for each Java application. Now, you can bundle multiple WAR files into a single ZIP file and deploy it to one Elastic Beanstalk environment. You can optimize the cost of operating several applications by letting each application share a load balancer. Please see our documentation for more information on this feature.
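The bundle layout is simple: each WAR sits at the root of the ZIP you deploy. A minimal sketch, using dummy application names and in-memory contents:

```python
import io
import zipfile

# Sketch: build a multi-WAR source bundle for a single Tomcat environment.
# Each .war file sits at the root of the ZIP; names and contents are dummies.
bundle = io.BytesIO()
with zipfile.ZipFile(bundle, "w") as zf:
    zf.writestr("storefront.war", b"dummy war bytes")  # hypothetical app 1
    zf.writestr("admin.war", b"dummy war bytes")       # hypothetical app 2

# Re-open the in-memory bundle to list what would be deployed.
with zipfile.ZipFile(io.BytesIO(bundle.getvalue())) as zf:
    names = zf.namelist()
```

Each application is then served under a path derived from its WAR file name.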

We have also added support for the new EC2 M4 instances to Elastic Beanstalk platforms in US East (N. Virginia), US West (Oregon), US West (N. California), EU (Ireland), EU (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Asia Pacific (Sydney). To learn more about M4 instances, visit the Amazon EC2 Instances page.

Apache Spark is now supported on Amazon EMR. Similar to Apache Hadoop, Apache Spark is an open-source, distributed processing system commonly used for big data workloads. Spark utilizes in-memory caching and optimized execution for fast performance, and it supports batch processing, streaming, machine learning, graph processing, and ad hoc queries. With support for Scala, Python, Java, and SQL (using the Spark SQL module), Amazon EMR makes it easy to develop Spark applications in many popular languages. Also, Spark includes several libraries to help build applications for machine learning (MLlib), stream processing (Spark Streaming), and graph processing (GraphX). You can install Spark alongside the other Hadoop applications available in Amazon EMR and leverage the EMR File System (EMRFS) to directly access data in Amazon S3.

You can create an Amazon EMR cluster with Apache Spark from the AWS Management Console, AWS CLI, or SDK by choosing AMI 3.8.0 and adding Spark as an application. Amazon EMR currently supports Spark version 1.3.1 and utilizes Hadoop YARN as the cluster manager. To submit applications to Spark on your Amazon EMR cluster, you can add Spark steps with the Step API or interact directly with the Spark API on your cluster’s master node. To learn more, visit the Apache Spark on Amazon EMR page. For instructions on how to launch an Amazon EMR cluster with Spark, click here.
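As a sketch of the CLI route described above, the command below launches a small Spark cluster on AMI 3.8.0; the cluster name, key pair, and instance settings are hypothetical placeholders:

```python
# Sketch: an AWS CLI invocation that launches an EMR cluster with Spark on
# AMI 3.8.0. Key pair and instance settings are hypothetical placeholders.
create_cluster = [
    "aws", "emr", "create-cluster",
    "--name", "SparkCluster",          # hypothetical cluster name
    "--ami-version", "3.8.0",
    "--applications", "Name=Spark",    # add Spark as an application
    "--ec2-attributes", "KeyName=my-key",  # hypothetical key pair
    "--instance-type", "m3.xlarge",
    "--instance-count", "3",
]
command = " ".join(create_cluster)
```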

Now you can run Amazon EC2 for Windows with SQL Server Enterprise Edition. You can select pre-configured Amazon Machine Images (AMIs) and launch them on r3.2xlarge, r3.4xlarge, and r3.8xlarge instance types in the US East (N. Virginia), US West (Oregon), and EU (Ireland) regions.

Microsoft SQL Server Enterprise Edition offers a number of new features including:

“AlwaysOn” high availability: You can configure up to four active, readable secondaries

Self-service business intelligence: You can use Power View to conduct interactive data exploration and visualization

Data Quality Services: You can use organizational and 3rd party reference data to profile, cleanse and match data

You can now take advantage of the new Reserved Instances (RI) payment options on Amazon RDS. This simpler model consolidates reserved instances into one RI type and introduces three payment options: No Upfront, Partial Upfront, and All Upfront, giving you more flexibility in how you pay for your Reserved Instances.

The No Upfront payment option does not require an upfront payment and provides a substantial discount (typically about 30%) compared to On-Demand. This option is offered with a one-year term. The Partial Upfront payment option balances the payments of an RI between upfront and hourly, and replaces the previous Heavy Utilization RI. You pay for a portion of the Reserved Instances upfront, and then pay for the remainder over the course of the one- or three-year term. This option provides a high discount (typically about 60% for a 3 year term) compared to On-Demand. The All Upfront payment option allows you to pay for the entire Reserved Instances term (one- or three-year) with one upfront payment and benefit from the largest discount (typically about 63% for a 3 year term) compared to On-Demand.
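To make the trade-off concrete, here is an illustrative back-of-the-envelope comparison in Python. Every dollar figure below is made up for the sketch; real discounts vary by engine, region, instance class, and term:

```python
# Illustrative comparison of the three RI payment options against On-Demand.
# All rates are hypothetical; actual pricing is on the RDS pricing page.
HOURS_PER_YEAR = 8760
on_demand_hourly = 0.100  # hypothetical On-Demand rate, $/hour

def effective_hourly(upfront, hourly, years):
    """Spread any upfront fee over the term and add the hourly rate."""
    return upfront / (years * HOURS_PER_YEAR) + hourly

# No Upfront (1-year term): no upfront fee, discounted hourly rate (~30% off).
no_upfront = effective_hourly(upfront=0.0, hourly=0.070, years=1)

# Partial Upfront (3-year term): part upfront, part hourly (~60% off overall).
partial_upfront = effective_hourly(upfront=500.0, hourly=0.021, years=3)

# All Upfront (3-year term): one payment covers the whole term (~63% off).
all_upfront = effective_hourly(upfront=972.0, hourly=0.0, years=3)

for name, rate in [("No Upfront", no_upfront),
                   ("Partial Upfront", partial_upfront),
                   ("All Upfront", all_upfront)]:
    savings = 1 - rate / on_demand_hourly
    print(f"{name}: ${rate:.4f}/hr, {savings:.0%} vs On-Demand")
```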

Amazon RDS supports these new payment options on all RDS-supported engines, including MySQL, PostgreSQL, Oracle, and SQL Server (except SQL Server License Included), in the following regions: US East (N. Virginia), US West (Oregon), US West (N. California), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), and South America (Sao Paulo). Effective August 15th, 2015, RDS will no longer offer new Light and Medium Utilization RIs. If you already own RIs, the availability of these new payment options will not affect them. For more information on these new cost-saving options, please visit the RDS pricing page.

You can now send push notifications to Mac OS desktops. We have also added support for sending notifications to VoIP apps on Apple iOS devices. You can use this feature from the Amazon SNS console or APIs. To enable this feature, simply obtain a VoIP or Mac OS push certificate from Apple and assign it to new platform applications in SNS. For more information, please visit our FAQs.

You can now develop your AWS Lambda function code using Java. AWS Lambda is a compute service that runs your code in response to events, such as S3 uploads and Kinesis updates, and automatically manages the compute resources for you. You can use your existing tools such as Eclipse to author your Java code and Maven for packaging your Java code, making it easy to integrate Lambda development into your existing development processes. You can use any Java libraries, as well as your own Java objects for processing data. You simply upload your Lambda Java artifact as a ZIP or JAR through the AWS CLI or AWS Lambda console to deploy your Java code as a Lambda function. AWS Lambda will run your code only when it’s needed and scales automatically, so there is no need to provision or continuously run servers. Read our documentation for more details. Visit our console to get started now.
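As a sketch, these are the kinds of parameters you would pass to Lambda's CreateFunction API when deploying a Java artifact. The function name, role ARN, handler class, and code bytes below are placeholders:

```python
# Hypothetical CreateFunction parameters for a Java Lambda function.
# No API call is made here; this only shows the request shape.
create_function_params = {
    "FunctionName": "ProcessKinesisRecords",
    "Runtime": "java8",
    "Role": "arn:aws:iam::123456789012:role/lambda-execution-role",  # placeholder
    # For Java functions the handler is "package.Class::method".
    "Handler": "example.Hello::handleRequest",
    # In a real deployment this is the bytes of your built JAR or ZIP.
    "Code": {"ZipFile": b"<contents of your built JAR or ZIP>"},     # placeholder
    "MemorySize": 512,
    "Timeout": 30,
}
```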

You can now launch M4 instances, the latest generation of Amazon EC2 General Purpose instances. M4 instances provide a balance of compute, memory, and network resources. They are well suited for a variety of workloads, such as web servers, databases, caching fleets, and enterprise applications. M4 instances are available in five sizes with up to 40 vCPUs and 160 GiB of system memory.

M4 instances are based on custom Intel Xeon E5-2676 v3 Haswell processors that are optimized specifically for EC2. These processors run at a base frequency of 2.4 GHz and can deliver clock speeds as high as 3.0 GHz with Intel Turbo Boost. Intel Haswell processors include AVX2, which can provide significant performance improvement for workloads that take advantage of Intel AVX or Intel SSE.

M4 instances are EBS-optimized by default at no additional cost. EBS Optimization provides 450 Mbps to 4,000 Mbps of dedicated throughput to EBS depending on the instance type in addition to the network throughput provided to the instance. M4 also features Enhanced Networking. M4 instances with Enhanced Networking deliver up to four times the packet rate of instances without Enhanced Networking, while ensuring consistent latency, even when under high network I/O. Within Placement Groups, Enhanced Networking reduces average latencies between instances by 50 percent or more.

You can purchase M4 instances as On-Demand Instances, Reserved Instances, or Spot Instances. M4 instances are available in the US East (N. Virginia), US West (N. California), US West (Oregon), EU (Ireland), Asia Pacific (Singapore), EU (Frankfurt), Asia Pacific (Sydney), and Asia Pacific (Tokyo) AWS regions, with more regions coming soon.

You can now view the Docker and Amazon ECS agent version running on each Amazon ECS Container Instance and update the agent to the latest version using the Amazon ECS console, AWS CLI, or AWS SDK. To get started, see the documentation.

Amazon VPC Flow Logs is a new feature that allows you to log traffic flows at network interfaces in your Virtual Private Cloud (VPC). You can now create a Flow Log on a VPC, a subnet or an Elastic Network Interface (ENI) in your account. Once created, the Flow Log will capture accepted and rejected traffic flow information for all network interfaces in the selected resource.
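A sketch of the request you would send to the EC2 CreateFlowLogs API (for example, via an AWS SDK); the VPC ID, log group name, and IAM role ARN are placeholders:

```python
# Hypothetical CreateFlowLogs parameters; no API call is made here.
create_flow_logs_params = {
    "ResourceIds": ["vpc-0a1b2c3d"],          # a VPC, subnet, or ENI ID (placeholder)
    "ResourceType": "VPC",                    # or "Subnet" / "NetworkInterface"
    "TrafficType": "ALL",                     # or "ACCEPT" / "REJECT"
    "LogGroupName": "vpc-flow-logs",          # placeholder
    "DeliverLogsPermissionArn":
        "arn:aws:iam::123456789012:role/flow-logs-role",  # placeholder
}
```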

We are excited to announce that AWS CloudTrail is now available in the China (Beijing) region. AWS customers can use CloudTrail to troubleshoot operational and security incidents, track changes to AWS resources, and demonstrate compliance with internal or external policies. Customers can look up API calls in their AWS account and answer important questions such as: who made a particular API call, when it was made, what the source IP address of the call was, and which resources were acted upon.

CloudTrail records API activity and delivers it to an S3 bucket, so log files are stored inexpensively and durably. Customers can specify their own S3 object expiration policies or life cycle configuration policies and move the log files to Glacier for longer term retention.

You can now launch Amazon Redshift clusters on second-generation Dense Storage (DS2) instances. DS2 has twice the memory and compute power of its Dense Storage predecessor, DS1 (formerly DW1), and the same storage capacity. DS2 also supports Enhanced Networking and provides 50% more disk throughput than DS1. On average, DS2 provides 50% better performance than DS1, but is priced the same as DS1. To move from DS1 to DS2, simply restore a DS2 cluster from a snapshot of a DS1 cluster of the same size.

AWS CloudHSM provides dedicated Hardware Security Module (HSM) appliances within the AWS cloud, helping you meet corporate, contractual and regulatory compliance requirements for data security. CloudHSM is designed to enable you to maintain complete control over the use of encryption keys stored on CloudHSM appliances.

AWS CloudHSM is also available in US East (N. Virginia), US West (Oregon), EU (Ireland), EU (Frankfurt), and Asia Pacific (Sydney) regions. Please visit our product page for more information.

We are happy to announce the immediate availability of Amazon CloudWatch Logs subscriptions.

With subscriptions, you can access a near-real time feed of the log events being delivered to your CloudWatch Logs log groups. The log events are delivered to an Amazon Kinesis stream that you provide so that you can perform your own custom processing. Log events delivered to the Kinesis stream remain stored and available in CloudWatch based on your log group retention settings.
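A sketch of the PutSubscriptionFilter request that wires a log group to a Kinesis stream; the log group name, stream ARN, and role ARN are placeholders:

```python
# Hypothetical PutSubscriptionFilter parameters (CloudWatch Logs API).
# No API call is made here; this only shows the request shape.
put_subscription_filter_params = {
    "logGroupName": "my-app-logs",                 # placeholder
    "filterName": "all-events-to-kinesis",
    "filterPattern": "",   # an empty pattern matches every log event
    "destinationArn":
        "arn:aws:kinesis:us-east-1:123456789012:stream/log-stream",  # placeholder
    "roleArn": "arn:aws:iam::123456789012:role/cwl-to-kinesis-role", # placeholder
}
```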

The AWS SDK for Go is now in Developer Preview. Since we brought full protocol and service coverage to the master branch of the project in March, we have introduced a number of improvements to the SDK including:

Upload/download manager for Amazon S3

Getting started guide

Better error handling

Complete API documentation

Better credential management

Increased test coverage

Starting today, the AWS SDK for Go will be updated at the same frequency as all other official AWS SDKs to provide our customers timely access to all service API updates. As always, we welcome your feedback on GitHub.

You can now create larger stored volumes (up to 16 TB) with AWS Storage Gateway. Gateway-Stored volumes enable durable and inexpensive off-site backups that can be recovered on-site to your hardware or within Amazon EC2. They provide low-latency access to your data, and asynchronously back up to Amazon S3 in the form of Amazon EBS snapshots. With the support for larger volumes (up from 1 TB), you will spend less time on maintenance and operational tasks like splitting datasets, striping smaller volumes, or coordinating backups across multiple volumes.

As of June 3, 2015, an updated virtual machine (VM) image is available for download from the AWS Management Console. For all existing gateways a software update has been made available containing this enhancement and other important maintenance items. It can be applied manually through the Console or automatically during the gateway’s next scheduled weekly maintenance time.

Amazon WorkDocs is now available in the Asia Pacific (Tokyo) AWS region. This additional location will reduce latency for many users, and provide more flexibility in complying with regulatory requirements that govern where data must be stored.

Amazon WorkDocs is now available in the US East (N. Virginia), US West (Oregon), EU (Ireland), Asia Pacific (Sydney), Asia Pacific (Singapore) and Asia Pacific (Tokyo) AWS regions. Users can access Amazon WorkDocs from anywhere in the world regardless of which AWS region is chosen by the administrator.

Amazon WorkDocs is available in English, Japanese, French, Brazilian Portuguese, Korean, German, Simplified Chinese, and Spanish. The language is set automatically from the user's operating system setting in the WorkDocs sync and mobile clients, and from the browser's language setting in the WorkDocs web client.

Amazon Kinesis is a fully managed service for real-time processing of streaming data at a massive scale. Amazon Kinesis can continuously capture and store terabytes of data per hour from hundreds of thousands of sources.

The Amazon Kinesis team has announced a PUT pricing change and two new capabilities.

You now have the ability to run Hadoop jobs in parallel on your Amazon Elastic MapReduce (Amazon EMR) clusters from AWS Data Pipeline, enabling you to significantly increase the utilization of your cluster. Using HadoopActivity, you can choose a fair scheduler or capacity scheduler on your Amazon EMR cluster and submit work to the cluster. HadoopActivity allows you to take advantage of scheduler pools on the cluster and assign jobs to specific queues. It provides job level monitoring, direct access to the Hadoop logs and the ability to cancel and re-run a single job. To learn more about using HadoopActivity, please visit our documentation.

Additionally, you can now use Spot Instances for the core Amazon EMR nodes, specify an Availability Zone, and configure custom security groups for your Amazon EMR cluster launched via AWS Data Pipeline. To learn more about the configuration options on the EMRCluster object, please visit our documentation here.

CodeDeploy environment variables can be accessed by deployment scripts and provide contextual information associated with the deployment such as application name, deployment group name, lifecycle event name, and deployment ID. Common use cases for environment variables include varying the application server port or the logging level based on whether the script is running in the “Staging” or “Production” deployment group. Read our blog for a walkthrough on using environment variables.
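A hook script can branch on these variables directly. The sketch below uses the DEPLOYMENT_GROUP_NAME variable that CodeDeploy exposes to lifecycle scripts; the function and group names are illustrative:

```python
import os

# Sketch of deployment-group-aware behavior in a lifecycle hook script.
# DEPLOYMENT_GROUP_NAME is one of the variables CodeDeploy sets for hooks;
# the logging_level helper is hypothetical.
def logging_level():
    group = os.environ.get("DEPLOYMENT_GROUP_NAME", "")
    # Verbose logging everywhere except the Production deployment group.
    return "WARN" if group == "Production" else "DEBUG"

os.environ["DEPLOYMENT_GROUP_NAME"] = "Staging"   # simulate CodeDeploy's env
print(logging_level())
```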

Amazon SNS makes it easy and cost effective to send push notifications to Apple, Google, Fire OS, and Windows devices, as well as to Android devices in China with Baidu Cloud Push. With the support of Amazon SNS in the AWS SDK for Unity, you can now send push notifications directly to your games that are built on Unity.

You can now deploy AWS Lambda function code by specifying the Amazon S3 bucket where your code is located. Previously, you needed to upload your function code as a ZIP through the AWS CLI, AWS SDKs, or AWS Lambda console. Now, you can also provide a S3 object location for Lambda to download your code when creating or updating your function.
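A sketch of how the new option changes the CreateFunction request: the Code parameter references an S3 object instead of carrying the ZIP bytes inline. The bucket, key, role, and handler names are placeholders:

```python
# Hypothetical CreateFunction parameters using the new S3 code location.
# No API call is made here; this only shows the request shape.
create_function_from_s3 = {
    "FunctionName": "ProcessS3Uploads",
    "Runtime": "nodejs",
    "Role": "arn:aws:iam::123456789012:role/lambda-execution-role",  # placeholder
    "Handler": "index.handler",
    # The new option: point at an S3 object rather than uploading bytes.
    "Code": {
        "S3Bucket": "my-lambda-artifacts",   # placeholder
        "S3Key": "builds/function-v2.zip",   # placeholder
    },
}
```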

Please visit our product page for more information about AWS Lambda. Visit our console to get started now.

Amazon DynamoDB is enhancing the security and visibility of DynamoDB API calls by adding support for AWS CloudTrail. AWS CloudTrail is a service that delivers log files containing records of your AWS API calls. You can use these logs for security analysis, auditing, and resource tracking. Starting today, you can receive logs for Amazon DynamoDB API calls when you turn on AWS CloudTrail for AWS services from the AWS CloudTrail console. You can learn more about AWS CloudTrail from the AWS CloudTrail details page and our documentation page. If you have any questions or feedback, email us.

The AWS Console mobile app now supports S3 object download. In addition, now EC2 volumes, EC2 security groups, and VPCs are on the app’s dashboard for easier access. Download the app from Amazon Appstore, Google Play, and iTunes to view and manage your resources on the go.

Amazon S3 customers can now use the app to generate and open a pre-signed URL for an S3 object. When viewing an object, tap “View in browser” to open a time-limited pre-signed URL. Your device’s browser will determine supported actions for the object based on the object’s content type and your device configuration.

EC2 volumes, EC2 security groups, and VPCs are now on the dashboard for easier access. Previously, these resource types were only accessible from an attached EC2 instance’s detail page. The app lets you create a snapshot of an EBS volume and modify EC2 security group rules.

The app features support for EC2, S3, Route 53, ELB, RDS, AWS Elastic Beanstalk, CloudFormation, DynamoDB, Auto Scaling, AWS OpsWorks, CloudWatch, and the Service Health Dashboard. The app lets you authenticate several identities, so you can easily switch between multiple accounts.

Let us know how you use the AWS Console app and tell us what features you’d like to see by using the feedback link in the app’s menu.

Version 3 of the AWS SDK for PHP is now generally available. Since launching the developer preview in October, we have added new features and architecture improvements, while maintaining much of the coding pattern with which users of Version 2 are already familiar.

Amazon CloudFront’s invalidation feature, which allows you to remove an object from the CloudFront cache before it expires, now supports the * wildcard character. You can add a * wildcard character at the end of an invalidation path to remove all objects that match this path. In the past, when you wanted to invalidate multiple objects, you had to list every object path separately. Now, you can easily invalidate multiple objects using the * wildcard character.
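A sketch of the CreateInvalidation request using the wildcard; the distribution ID is a placeholder, and the caller reference just needs to be unique per request:

```python
import time

# Hypothetical CreateInvalidation parameters (CloudFront API).
# The trailing * invalidates every object whose path begins with /images/.
distribution_id = "E1A2B3C4D5E6F7"   # placeholder
invalidation_batch = {
    "Paths": {"Quantity": 1, "Items": ["/images/*"]},
    "CallerReference": str(time.time()),  # unique token so retries aren't replayed
}
```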

Today, AWS Identity and Access Management (IAM) added support for roles in the policy simulator, enabling you to test the effects of your roles’ access control policies. You can now use the policy simulator to troubleshoot permissions and verify access for managed or inline policies attached to a role in your account. Additionally, you can use the policy simulator as a “playground” to help you author least privilege policies for your roles. The policy simulator helps you to author and validate your policies by enabling you to simulate a single policy or a combination of policies.

You can now generate XDCAM-compatible video and FLAC audio using Amazon Elastic Transcoder. XDCAM is popularly used both as a mezzanine format and as a format for archival purposes in professional video production workflows. FLAC is a lossless compression format typically used as an intermediate step in an audio processing workflow. You can try out these formats by using the newly available system presets.

AWS CloudTrail integration with Amazon CloudWatch Logs is now available in the US West (N. California) region. With this integration, you can monitor for specific API activity and receive email notifications when those API calls are made.

AWS CloudFormation has expanded the set of supported AWS parameter types. Parameters allow you to create templates that are customized each time you create a stack. You can now use Availability Zone, EC2 Instance Id, Image Id, Security Group Name, Volume Id, and Route 53 Hosted Zone Id as parameter types in addition to the parameter types already supported. Read more here.
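For example, a template's Parameters section can now use AWS-specific types so that CloudFormation validates the supplied values against real resources in your account. The parameter names below are illustrative:

```python
import json

# Fragment of a CloudFormation template using the new AWS-specific
# parameter types (parameter names are hypothetical).
template_parameters = {
    "Parameters": {
        "InstanceId": {"Type": "AWS::EC2::Instance::Id"},
        "ImageId":    {"Type": "AWS::EC2::Image::Id"},
        "AZ":         {"Type": "AWS::EC2::AvailabilityZone::Name"},
        "HostedZone": {"Type": "AWS::Route53::HostedZone::Id"},
    }
}

print(json.dumps(template_parameters, indent=2))
```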

You can now simplify the task of running a cost effective fleet of Amazon EC2 Spot Instances by targeting multiple instance types in a single Spot fleet request. Starting today, you can include up to 20 launch specifications using the new RequestSpotFleet action. Previously, each Spot Instance request could accept only one instance type with a single launch specification.

Including multiple instance types in your fleet request can help lower your operational costs and improve your application availability. For example, if your application typically runs on c4.xlarge compute-optimized instances, but also performs effectively on c3.2xlarge, m3.2xlarge, and c4.2xlarge instances, you can target a broader set of instance types to launch your fleet in the lowest-priced pools.

You can request a fleet of Spot Instances into EC2-Classic, your default VPC, or an individual VPC subnet, in all regions except the AWS GovCloud (US) region and on all instance types available as Spot Instances today. To get started, request a fleet using the AWS SDKs or Command-Line Tools.
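The request shape for a multi-type fleet can be sketched as follows; the fleet role ARN and AMI ID are placeholders, and the instance types echo the example above:

```python
# Hypothetical RequestSpotFleet configuration targeting several instance
# types with one request. No API call is made here.
spot_fleet_request_config = {
    "SpotPrice": "0.10",                     # maximum price, $/hour
    "TargetCapacity": 4,
    "IamFleetRole":
        "arn:aws:iam::123456789012:role/spot-fleet-role",   # placeholder
    # One launch specification per instance type (up to 20 per request).
    "LaunchSpecifications": [
        {"ImageId": "ami-12345678", "InstanceType": t}      # placeholder AMI
        for t in ["c4.xlarge", "c3.2xlarge", "m3.2xlarge", "c4.2xlarge"]
    ],
}
```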

You can now use AWS OpsWorks with Amazon EC2 instances running Windows Server (2012 R2). OpsWorks is a service that helps you automate operational tasks like software installation and configuration, code deployment, and service discovery using Chef. OpsWorks gives you the flexibility to define your application architecture and resource configuration and handles the provisioning and management of your AWS resources for you. Click here to learn more about OpsWorks.

You can now easily join your Amazon EC2 for Windows instances to a domain that you have configured with AWS Directory Service in two additional regions: EU (Ireland) and US West (Oregon). You can join an instance to an existing on-premises Active Directory using AD Connector, or to a stand-alone Simple AD directory running in the AWS Cloud. Once you configure this feature using the AWS Management Console or the EC2 API, you can choose which domain a new instance will join when it launches. For existing instances, you can use the EC2 API to seamlessly join them to a domain.

AWS Directory Service is a managed service that allows you to connect your AWS resources with an existing on-premises Microsoft Active Directory or to set up a new, stand-alone directory in the AWS cloud. Connecting to an on-premises directory is easy and once this connection is established, all users can access AWS resources and applications with their existing corporate credentials. You can also launch managed, Samba-based directories in a matter of minutes.

You can now scale AWS Elastic Beanstalk environments on a defined schedule. Previously, you could scale your Amazon EC2 instances managed by Elastic Beanstalk based on user-defined triggers such as CPU utilization or bandwidth usage. Now, you can also scale your Amazon EC2 instances based on user-defined times. Time-based scaling helps you more efficiently and easily plan for predictable load changes based on your traffic patterns. You can proactively scale your application in advance of expected traffic through time-based scaling instead of waiting for your scaling action to be triggered by metrics. This improves your customers’ experience and helps you optimize cost.

APIs are now supported for the Simple AD and AD Connector directories in AWS Directory Service, enabling you to programmatically create and configure these directories via the AWS CLI and SDKs. API actions performed via the AWS SDK and CLI or the AWS Directory Service Console can be recorded via AWS CloudTrail, and permissions for performing these actions can be controlled via an AWS IAM policy. The APIs can be used in all AWS regions in which AWS Directory Service is available.

We are concluding the Developer Preview for the AWS Mobile SDK for Unity and making it Generally Available. The AWS Mobile SDK for Unity makes it easier for you to take advantage of AWS services in your games built in Unity. Specifically, the AWS Mobile SDK for Unity supports Amazon DynamoDB for NoSQL database, Amazon S3 for storage, Amazon Cognito for user identity management and data synchronization, and Amazon Mobile Analytics for tracking user- and device-based analytics. The AWS Mobile SDK for Unity is compatible with Unity 4.0 and later, and supports both Unity Free and Unity Pro.

To download the AWS SDK for Unity, visit our webpage. For more information, read our blog.

The AWS WW Public Sector team launched the AWS Educate program on May 14th. With demand for cloud-skilled employees increasing dramatically, AWS Educate provides an academic gateway for the next generation of IT and cloud professionals. AWS Educate is Amazon's global initiative to provide students and educators with the resources needed to accelerate cloud-related learning and to help power the entrepreneurs, workforce, and researchers of tomorrow. There is no cost to join, and AWS Educate provides hands-on access to AWS technology, training resources, course content, and collaboration forums.

G2 Instances are now available in our EU (Frankfurt) region. G2 instances feature two sizes, the g2.2xlarge and g2.8xlarge. The g2.2xlarge is backed by a single high-performance NVIDIA GRID GPU, making it well suited for 3D visualizations or streaming graphics-intensive applications. The g2.8xlarge instance is backed by four high-performance NVIDIA GPUs, making it well suited for GPU compute workloads including large scale rendering, machine learning, transcoding, and other server-side workloads that require massive parallel processing power. With G2 instances, you can build high-performance DirectX, OpenGL, CUDA, and OpenCL applications and services without making expensive up-front capital investments.

G2 instances are also available in the US East (N. Virginia), US West (N. California), US West (Oregon), EU (Ireland), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) AWS regions. Customers can launch G2 instances using the AWS console, Amazon EC2 command line interface, AWS SDKs, and third-party libraries. To learn more about G2 instances, visit http://aws.amazon.com/ec2/instance-types. To get started immediately, visit the AWS Marketplace for GPU machine images from NVIDIA and other Marketplace sellers.

You can now scale Amazon EC2 instances managed by AWS OpsWorks using custom CloudWatch alarms, and you can now choose EBS block device settings for your instances.

You can already scale your load-based instances based on CPU utilization, memory utilization, or load. You can now set custom CloudWatch alarms as thresholds for scaling. For example, you can use ‘Disk Reads’ or ‘Network In’ as metrics to scale up or down your load-based instances. See the documentation for more details.

You can now choose between General Purpose (SSD) and Magnetic EBS volume types when creating an instance using the OpsWorks console. You can also specify your own custom block device mapping using our API to support all variations of EBS volume types and other characteristics such as mount points, size, and provisioned IOPS. Custom block device mapping can also be leveraged to map additional local block devices available to certain instance types. See the documentation for more details.

You can use Interleaved Sort Keys to quickly filter data without the need for indices or projections in Amazon Redshift. A table with interleaved keys arranges your data so each sort key column has equal importance. While Compound Sort Keys are more performant if you filter on the leading sort key columns, interleaved sort keys provide fast filtering no matter which sort key columns you specify in your WHERE clause. To create an interleaved sort, simply define your sort keys as INTERLEAVED in your CREATE TABLE statement.
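A minimal DDL sketch, with a hypothetical table and columns, showing the INTERLEAVED keyword:

```sql
-- Example DDL (table and column names are hypothetical): an interleaved
-- sort key gives customer_id and product_id equal weight when filtering.
CREATE TABLE sales (
    customer_id  INT,
    product_id   INT,
    amount       DECIMAL(12,2),
    sold_at      TIMESTAMP
)
INTERLEAVED SORTKEY (customer_id, product_id);
```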

The performance benefit of interleaved sorting increases with table size, and is most effective with highly selective queries that filter on multiple columns. For example, assume your table contains 1,000,000 blocks (1 TB per column) with an interleaved sort key of both customer ID and product ID. You will scan 1,000 blocks when you filter on a specific customer or a specific product, a 1000x increase in query speed compared to the unsorted case. If you filter on both customer and product, you will only need to scan a single block.

The interleaved sorting feature will be deployed in every region over the next seven days. The new cluster version will be 1.0.921.

You can now access Amazon Simple Storage Service (Amazon S3) from your Amazon Virtual Private Cloud (Amazon VPC) using VPC endpoints. Amazon VPC endpoints are easy to configure and provide reliable connectivity to Amazon S3 without requiring an internet gateway or a Network Address Translation (NAT) instance. With VPC endpoints, the data between the VPC and S3 is transferred within the Amazon network, helping protect your instances from internet traffic.

Amazon VPC endpoints for Amazon S3 provide two additional security controls to help limit access to S3 buckets. You can now require that requests to your S3 buckets originate from a VPC using a VPC endpoint. Additionally, you can control what buckets, requests, users, or groups are allowed through a specific VPC endpoint.
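A sketch of a bucket policy enforcing the first control: deny any request that does not arrive through a specific VPC endpoint, using the aws:sourceVpce condition key. The bucket name and endpoint ID are placeholders:

```python
import json

# Hypothetical S3 bucket policy restricting access to one VPC endpoint.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAccessExceptThroughVpce",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::my-bucket",      # placeholder bucket
            "arn:aws:s3:::my-bucket/*",
        ],
        # Requests not arriving via this endpoint are denied.
        "Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-1a2b3c4d"}},
    }],
}

print(json.dumps(bucket_policy, indent=2))
```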

Amazon VPC endpoints for Amazon S3 is available in the US Standard, US West (Oregon), US West (N. California), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney) regions.

On a database instance running with Amazon RDS encryption, data stored at rest in the underlying storage is encrypted, as are its automated backups, read replicas, and snapshots. Encryption and decryption are handled transparently so you don’t have to modify your application to access your data. For more information about the use of AWS Key Management Service with Amazon RDS, see the Amazon RDS User's Guide. To learn more about AWS KMS, visit the AWS KMS overview page.

AWS GovCloud (US) is an AWS region designed to allow U.S. government agencies at the federal, state and local level, along with contractors, educational institutions, enterprises and other U.S. customers to run regulated workloads in the cloud by addressing their specific regulatory and compliance requirements. Beyond the assurance programs applicable to all AWS regions, the AWS GovCloud (US) region allows you to adhere to U.S. International Traffic in Arms Regulations (ITAR) regulations, the Federal Risk and Authorization Management Program (FedRAMPSM) requirements and the Department of Defense (DoD) Cloud Security Model (CSM) Levels 3-5.

AWS CodeDeploy is now available in the EU (Ireland) and Asia Pacific (Sydney) AWS regions.

AWS CodeDeploy is a service that automates code deployments to any instance, including Amazon EC2 instances and instances running on-premises. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during deployment, and handles the complexity of updating your applications. You can use AWS CodeDeploy to automate deployments, eliminating the need for error-prone manual operations, and the service scales with your infrastructure so you can easily deploy to one instance or thousands.

AWS CodeDeploy is also currently available in the US East (N. Virginia) and US West (Oregon) AWS regions. Please visit our product page for more information.

Amazon SES now supports CloudTrail logging. CloudTrail is a service that can capture a subset of the API calls that you make from the Amazon SES console or from the Amazon SES API and deliver the log files to an Amazon S3 bucket that you specify. All Amazon SES APIs except for the email-sending APIs (SendEmail and SendRawEmail) are supported. Using the information collected by CloudTrail, you can determine what request was made to Amazon SES, the source IP address from which the request was made, who made the request, when it was made, and so on.

The AWS Key Management Service (KMS) is now available in the AWS GovCloud (US) region. KMS is a service that makes it easy for you to create and control the encryption keys used to encrypt your data and uses Hardware Security Modules (HSMs) to protect the security of your keys. This capability is a critical requirement for running regulated workloads in the cloud.

With the availability of KMS, you are now able to encrypt data in your own applications and within the following AWS services using keys under your control:

Amazon EBS volumes

Amazon S3 objects using Server Side Encryption (SSE-KMS) and client-side encryption using the S3 encryption client for the AWS SDKs

Output from your Amazon EMR cluster to Amazon S3 using the EMRFS client

In addition, AWS KMS is integrated with AWS CloudTrail to provide you with centralized logging of all key usage to help meet your regulatory and compliance needs.

With Amazon Cognito, you can easily onboard users using public login providers such as Amazon, Facebook, Google+, or any OpenID Connect compatible services. We are now adding support for Twitter and Digits – making it fast and easy for you to authenticate users with Twitter and Digits, without having to manage passwords or operate backends to map user accounts to their public authentication providers. With Digits integration, you can provide an even simpler onboarding experience for your users by enabling them to sign in using their phone number.

To learn how to use Twitter and Digits as login providers for your app, visit our blog and documentation.

You can check out our new developer guide to get started with Amazon Cognito.

AWS Elastic Beanstalk now supports PHP 5.6, and Multi-container Docker environments are now available in the Asia Pacific (Sydney) AWS region. You can now deploy your applications relying on these languages/frameworks on Elastic Beanstalk in all available service regions using the AWS Management Console and the EB CLI v3.

Quick Starts are automated reference deployments for key enterprise workloads on the Amazon Web Services (AWS) cloud. Each Quick Start launches, configures, and runs the AWS compute, network, storage, and other services required to deploy a specific workload on AWS, using AWS best practices for security and availability.

This new Quick Start reference deployment covers the implementation of SAP Business One, version for SAP HANA, on the AWS cloud, using AWS services and best practices. SAP Business One, version for SAP HANA, is an integrated enterprise resource planning (ERP) solution designed for dynamically growing small and midsize businesses. This solution is powered by SAP HANA, which is SAP’s in-memory database management system, and provides faster and more predictable performance.

The AWS cloud provides a suite of infrastructure services that enable you to deploy SAP Business One, version for SAP HANA, in an easy and affordable way. This Quick Start builds on the SAP HANA on AWS Quick Start, which has already been used successfully by many AWS customers and partners worldwide.

The Quick Start includes AWS CloudFormation templates that help you deploy SAP Business One, version for SAP HANA, either into a new Amazon Virtual Private Cloud (Amazon VPC) or into an existing Amazon VPC in your AWS account. The deployment guide provides step-by-step instructions for planning, configuring, and deploying SAP Business One, version for SAP HANA.

Co-ownership of files and folders is now possible. This means that a document or folder can be owned by more than one person. Users can enable different permissions at different levels of the folder hierarchy. Within a folder for which you have been designated a co-owner, you can re-share, rename, and delete documents and folders. For example, you can share ownership of any of your files and folders with a colleague, who would then be able to add more contributors for additional feedback or edits.

In addition, users can now share documents and folders with groups by selecting an Active Directory group. WorkDocs will check the permissions of the user against an Active Directory to maintain a high level of security. Sharing with the group will share the document or folder with all of the members inside the group.

Lastly, you can now share URL links to folders in addition to sharing URL links to files.

You can now run and test Docker-enabled applications on your local machine using the AWS Elastic Beanstalk CLI (EB CLI). This new feature (eb local) streamlines your development and testing and allows you to run and test containers used for Generic, Preconfigured, and Multi-container Docker environments directly on your local machine. Previously, you would have to type in all the option flags (e.g., ports, log volumes) specified in your Dockerrun.aws.json file in order to run your containers locally the same way as it would deploy on Elastic Beanstalk. You would also have to manually launch multiple containers to test multi-container Docker applications. Now, you just use the ‘eb local’ command and the EB CLI will run your containers locally using the same configurations specified in your Dockerrun.aws.json. You can also get the status of your containers, print the location of your logs, and open your application URL in your browser directly from the CLI.

Amazon EC2 Container Service (ECS) is now available in the Asia Pacific (Sydney) region.

Amazon EC2 Container Service is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. Amazon ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure. With simple API calls, you can launch and stop container-enabled applications, query the complete state of your cluster, and access many familiar features like security groups, Elastic Load Balancing, EBS volumes, and IAM roles. You can use Amazon ECS to schedule the placement of containers across your cluster based on your resource needs and availability requirements. You can also integrate your own scheduler or third-party schedulers to meet business or application specific requirements.

Amazon EC2 Container Service is currently available in the US East (N. Virginia), US West (Oregon), EU (Ireland), Asia Pacific (Tokyo), and Asia Pacific (Sydney) regions. Please visit our product page for more information.

Amazon Glacier now supports a new way to manage access to your individual Glacier vaults. You can now define an access policy directly on a vault, making it easier to grant vault access to users and business groups internal to your organization, as well as to your external business partners.

Previously, you have been able to assign AWS Identity and Access Management (IAM) policies to IAM users or groups to control the read, write, and delete permissions on your Glacier vaults. Now, with vault access policies, you can define a single access policy on a vault to govern access to all users. For example, to protect information in a business-critical vault from unintended deletion, you can create a vault access policy that denies delete attempts from all users. This data protection procedure can be accomplished in a matter of minutes in the AWS Management Console without having to audit and revoke delete permissions assigned to users through IAM policies.

Vault access policies also make it easier to grant cross-account access. For instance, you can grant read-only access on a vault to a business partner in a different AWS account by simply adding that account to the vault’s access policy.
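As an illustration, a vault access policy that denies delete attempts from all users might look like the following sketch; the account ID and vault name are placeholders, and the exact policy grammar should be checked against the Amazon Glacier documentation:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "deny-all-deletes",
      "Effect": "Deny",
      "Principal": "*",
      "Action": ["glacier:DeleteArchive"],
      "Resource": "arn:aws:glacier:us-east-1:123456789012:vaults/business-critical-vault"
    }
  ]
}
```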

Amazon DynamoDB supports both document (e.g., JSON) and key-value data models. Expressions for conditions simplify reading and writing JSON documents in Amazon DynamoDB. Today, we are extending expressions support to key conditions in the Query operation for even simpler queries. You can also add or edit JSON documents directly from the DynamoDB console. To learn more, please read Jeff Barr’s blog. You can also read more about expressions for key conditions on our documentation page. If you are new to document model support (JSON) on DynamoDB, get started by reading this blog. If you have any questions or feedback about document model support, email us.
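As a rough sketch, a Query using the new key condition expression syntax passes a single expression string in place of the older KeyConditions map; the table and attribute names below are invented for illustration:

```python
# Query parameters using the KeyConditionExpression syntax (the table and
# attribute names here are hypothetical).
query_params = {
    "TableName": "GameScores",
    "KeyConditionExpression": "UserId = :uid AND begins_with(GameTitle, :prefix)",
    "ExpressionAttributeValues": {
        ":uid": {"S": "alice"},
        ":prefix": {"S": "Meteor"},
    },
}
# With boto3, these would be passed as: boto3.client("dynamodb").query(**query_params)
```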

AWS Identity and Access Management (IAM) now reports the time stamp when access keys, for an IAM user or root account, were last used along with the region and the AWS service that was accessed. These details complement password last used data to provide a more thorough picture of when an IAM user or root account was last active, enabling you to rotate old keys and remove inactive users with greater confidence. You can view access key last used data interactively in the IAM console, programmatically via the API/CLI/SDK, or in the contents of an IAM credential report.
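To sketch how this data might be consumed programmatically, the snippet below parses a few credential-report-style rows to flag users whose active access keys have not been used recently; the column names mirror the credential report, but the rows and the freshness cutoff are fabricated for illustration:

```python
import csv
import io
from datetime import datetime, timezone

# Fabricated sample rows in the style of the IAM credential report.
report_csv = """user,access_key_1_active,access_key_1_last_used_date
alice,true,2015-01-02T10:00:00+00:00
bob,true,2015-04-20T08:30:00+00:00
"""

def stale_key_users(report_text, cutoff):
    """Return users whose active access key 1 was last used before `cutoff`."""
    stale = []
    for row in csv.DictReader(io.StringIO(report_text)):
        if row["access_key_1_active"] != "true":
            continue
        last_used = datetime.fromisoformat(row["access_key_1_last_used_date"])
        if last_used < cutoff:
            stale.append(row["user"])
    return stale

cutoff = datetime(2015, 3, 1, tzinfo=timezone.utc)
print(stale_key_users(report_csv, cutoff))
```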

We are pleased to announce that Amazon Web Services has opened an office in the Netherlands to help support the growth of the Amazon Web Services (AWS) cloud and its rapidly expanding customer base in the country. The office is now open and operational in The Hague and is supporting businesses of all sizes, from start-ups to Europe's oldest and most established enterprises, as they make the transition to the AWS cloud.

We are pleased to announce that Amazon Web Services has opened an office in Johannesburg to support the growth of the cloud computing business and its rapidly expanding customer base in the country. The office is now open and operational, and is supporting businesses of all sizes from start-ups to the country’s oldest and most established enterprises and public sector organizations.

This news comes as Amazon celebrates over ten years in South Africa. Amazon first established a presence in the country by opening a development center in December 2004 to help build the Amazon Elastic Compute Cloud (Amazon EC2) service and has also been the location for the engineering of many pioneering networking technologies and next generation software.

The AWS Management Console now supports the Japanese and Simplified Chinese languages for 9 additional services: EC2, Auto Scaling, VPC, S3, CloudWatch, RDS, DynamoDB, IAM, and EMR. Customers who prefer to interact with the AWS Management Console in these languages can do so using a new Language Selector in the Management Console footer.

AWS has entered into a research agreement with the US National Oceanic and Atmospheric Administration (NOAA) to explore sustainable models for increasing the amount of open NOAA data that is made available via the cloud. Under the terms of the new research agreement, AWS and our collaborators will look for ways to push more of NOAA’s data to the cloud, with a focus on spurring innovation and building a healthy and vibrant ecosystem around the data. The data NOAA already makes available to the public drives critical research efforts and multi-billion dollar industries. We anticipate that making more of NOAA’s data widely available will drive even more economic value and social good.

We are pleased to announce a new G2 Instance size, the g2.8xlarge. The new instance is backed by four high-performance NVIDIA GPUs, making it well suited for GPU compute workloads including large scale rendering, machine learning, transcoding, and other server-side workloads that require massive parallel processing power. With this new instance size, you can build high-performance CUDA, OpenCL, DirectX, and OpenGL applications and services without making expensive up-front capital investments.

You can launch G2 instances using the AWS console, Amazon EC2 command line interface, AWS SDKs and third party libraries. The new G2 instance is initially available in the US East (N. Virginia), US West (N. California), US West (Oregon), EU (Ireland), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) AWS regions. In addition to On-Demand Instances, customers can also purchase G2 instances as Reserved and Spot Instances. To learn more about G2 instances, visit the Amazon EC2 instances page. To get started immediately, visit the AWS Marketplace for GPU machine images from NVIDIA and other Marketplace sellers.

We are thrilled to announce a new AWS Direct Connect location in Beijing, China at the Sinnet JiuXianqiao IDC facility. This is the first Direct Connect location supporting our China (Beijing) region, and brings the total number of locations to 15.

This new Quick Start reference deployment covers the implementation of MongoDB in a highly available architecture on the AWS cloud, using AWS services and best practices. MongoDB is an open-source, NoSQL database that provides support for JSON-styled, document-oriented storage systems. It supports a flexible data model that enables you to store data of any structure, and provides a rich set of features, including full index support, sharding, and replication.

The MongoDB Quick Start includes an automated AWS CloudFormation template, which launches and runs a multi-node MongoDB cluster on AWS. You can launch the MongoDB cluster either into an existing Amazon Virtual Private Cloud (Amazon VPC) or into a newly created Amazon VPC. Customization options include the MongoDB version, the number of replica sets, the number of sharded clusters, the number of microshards per node, and storage types and sizes. The deployment guide discusses the MongoDB architecture and implementation in detail.

The MongoDB Quick Start takes approximately 15 minutes to deploy. You pay only for the AWS compute and storage resources you use—there is no additional cost for running the Quick Start.

This Quick Start is the latest in a series of reference deployments that describe in words and diagrams the architectures for implementing popular enterprise solutions on AWS. They also include the AWS CloudFormation templates that automate the deployments.

APIs are now supported for creating and managing Amazon WorkSpaces. You can programmatically manage WorkSpaces via the AWS CLI and SDKs. Download the latest AWS SDK or CLI or learn more about the APIs in our developer guide.

Actions performed on WorkSpaces via the AWS SDK and CLI can be recorded via AWS CloudTrail. Also, permissions for these actions and the WorkSpaces resources on which they act can be controlled via an AWS IAM policy.

AWS Mobile SDKs for iOS and Android now support AWS Lambda. You can simply invoke AWS Lambda functions from the SDK, and create scalable and secure custom backends – enabling you to build mobile apps without having to provision or manage infrastructure. To learn more, visit the AWS Mobile SDK page.

You can now invoke AWS Lambda functions by publishing notifications in Amazon SNS. This makes it easy to customize notifications before sending them to mobile devices or other destinations. Applications and AWS services that already send SNS notifications, such as Amazon CloudWatch, can now integrate with AWS Lambda through push notifications, without provisioning or managing infrastructure. You can also use SNS notification delivery to an AWS Lambda function as a way to publish to other AWS services such as Amazon Kinesis or Amazon S3. To learn more, visit our documentation.
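As a minimal sketch of the receiving side, an AWS Lambda handler invoked by SNS reads each notification from the event's Records list; the uppercase transformation below is just a placeholder for real customization logic:

```python
# Minimal sketch of a Lambda handler invoked by an SNS notification.
# SNS delivers events with a "Records" list whose entries carry an "Sns" map.
def handler(event, context):
    results = []
    for record in event.get("Records", []):
        sns = record["Sns"]
        results.append({
            # Placeholder transformation: customize the notification here.
            "subject": (sns.get("Subject") or "").upper(),
            "message": sns["Message"],
        })
    return results

# Example invocation with a hand-built event of the SNS shape:
sample_event = {"Records": [{"Sns": {"Subject": "alarm", "Message": "CPU high"}}]}
```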

You can now purchase applications for Amazon WorkSpaces from a new AWS Marketplace category called AWS Marketplace for Desktop Apps. Choose from a broad selection of more than 100 applications in eleven categories, including Security, Productivity and Collaboration, Business Intelligence, and Illustration and Design. You can purchase software quickly from within the Amazon WorkSpaces console and pay as you go on a monthly basis.

Amazon EC2 Container Service (ECS) is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. Amazon ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure. With simple API calls, you can launch and stop container-enabled applications, query the complete state of your cluster, and access many familiar features like Elastic Load Balancing, EBS volumes, security groups, and IAM roles.

AWS Lambda is now generally available for production use and is introducing new features that make it even easier to build mobile, tablet, and IoT backends that scale automatically without provisioning or managing infrastructure.

Amazon Machine Learning is a new service that makes it easy for developers of all skill levels to use machine learning technology. Amazon Machine Learning provides visualization tools and wizards that guide you through the process of creating machine learning (ML) models without having to learn complex ML algorithms and technology. Once your models are ready, Amazon Machine Learning makes it easy to get predictions for your application using simple APIs, without having to implement custom prediction generation code, or manage any infrastructure.

Amazon Elastic File System (Amazon EFS) is a file storage service for Amazon Elastic Compute Cloud (Amazon EC2) instances. Amazon EFS is easy to use and provides a simple interface that allows you to create and configure file systems quickly and easily. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it.

The Amazon Cognito Events feature allows you to run AWS Lambda functions from Amazon Cognito. You can now invoke actions with changes in user data, such as app preferences or game state – and then validate, audit, or modify that data on the Lambda backend, without provisioning or managing any backend infrastructure. To learn more, visit our blog.

You can now use AWS Lambda-backed custom resources for your AWS CloudFormation stacks. CloudFormation simplifies the provisioning of a wide range of AWS resources, and also supports ‘custom resources’, which is an extensibility mechanism that enables you to write custom provisioning logic and have it execute during a CloudFormation stack operation (e.g., creating a stack). You can write custom provisioning logic for tasks such as provisioning a third party resource or looking up the latest AMI IDs for use in your stacks.

With Lambda-backed custom resources, you can now use AWS Lambda to write the custom provisioning logic and have it called during CloudFormation stack creates, updates, or deletes. This greatly simplifies implementing and using custom resources for use cases such as dynamically looking up AMI IDs, referencing resources in other stacks, or implementing utility functions (e.g., an IP address reversal function).
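As one sketch of such a utility, a custom resource handler for the IP address reversal example might look like the following; note that a real handler must also upload the response body via HTTP PUT to the pre-signed URL in event["ResponseURL"], which is omitted here for brevity:

```python
def reverse_ip(ip):
    """Reverse the octets of a dotted-quad IPv4 address."""
    return ".".join(reversed(ip.split(".")))

def handler(event, context):
    # CloudFormation sends a RequestType of Create, Update, or Delete.
    if event["RequestType"] == "Delete":
        data = {}
    else:
        data = {"Reversed": reverse_ip(event["ResourceProperties"]["Ip"])}
    # A real handler must HTTP PUT this body to event["ResponseURL"];
    # that upload is omitted in this sketch.
    return {
        "Status": "SUCCESS",
        "PhysicalResourceId": "ip-reverser",
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data,
    }
```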

Starting today, you can search and sort your AMIs by Creation Date. AMI Creation Date serves as an additional attribute to help you identify the AMI(s) you're using to launch new instances or to manage your AMI repository. Using the new attribute, you can sort AMIs based on their creation date in ascending or descending order, or search for AMIs created before or after a particular date that you define.

AMI Creation Date is now available on the AWS Management Console. You can find this attribute by going to the EC2 console and choosing AMIs on the left navigation menu. You should see a new column on the AMIs page called "Creation Date". To sort your AMIs by creation date, simply click on the column header, or use the search box to filter your AMI results by date.
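Because the CreationDate values returned for AMIs are ISO 8601 timestamps, plain string comparison orders them chronologically; the snippet below sketches the same sort-and-filter logic over fabricated image records:

```python
# Sorting and filtering AMI records by CreationDate. ISO 8601 timestamps
# sort chronologically as strings. The image IDs and dates are fabricated.
images = [
    {"ImageId": "ami-11111111", "CreationDate": "2015-04-01T12:00:00.000Z"},
    {"ImageId": "ami-22222222", "CreationDate": "2014-11-15T08:30:00.000Z"},
    {"ImageId": "ami-33333333", "CreationDate": "2015-01-20T09:45:00.000Z"},
]

# Sort in descending order (newest AMI first).
newest_first = sorted(images, key=lambda i: i["CreationDate"], reverse=True)

# Keep only AMIs created on or after a particular date.
created_in_2015 = [i for i in images if i["CreationDate"] >= "2015-01-01"]
```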

The latest version (v1.3) of the AWS Elastic Beanstalk platforms uses Amazon Linux AMI 2015.03 and includes several new features. Please follow this guide to upgrade your environment to the latest version.

You now have the ability to deactivate a running pipeline and activate it later at a time of your choosing, in AWS Data Pipeline.

This is useful in several situations:

Pausing a pipeline while you make edits to the pipeline definition, such as an Amazon RDS username or password change.

Pausing a pipeline while you make edits or fixes to Hive Activity scripts used by Data Pipeline.

Pausing your pipeline for other scheduled system updates or maintenance.

If your pipeline processed data that is later discovered to be corrupted, you can reprocess it by resuming the pipeline from a date before the corruption and backfilling the data.

You can easily control deactivation and activation of your pipelines through the console, or through the API, SDK, or CLI. Learn more about pausing your pipeline through Deactivation/Activation in the AWS Data Pipeline documentation.

Additionally, you can now take advantage of enhanced editing capabilities on your scheduled pipeline, such as changing scheduled end dates, adding SNS alarms, and editing all fields on existing objects in your pipeline.

We have released more information about our AWS Recertification policy and how to recertify at the AWS Certified Solutions Architect – Associate level. Recertification is the process of renewing your certification credential to demonstrate your continued learning and expertise in working with AWS. AWS Certification credentials are valid for a period of 2 years. With more than 516 new feature and service releases in 2014 alone, it is important that certified individuals keep their skills and knowledge current by updating their certification, or “recertifying”, every 2 years. Recertification is now required for individuals who hold the AWS Certified Solutions Architect – Associate credential. You can recertify either by passing a recertification exam or by advancing to the Professional level. Learn more at AWS Recertification.

We have recently made several updates to our AWS Training offerings to broaden our course portfolio and better address the training needs of our customers.

AWS Business Essentials is a new workshop for IT business decision makers. It’s designed to help you understand the benefits of cloud computing and how a cloud strategy can help meet your business objectives. We also address financial benefits of AWS and examples of successful cloud adoption frameworks. AWS Business Essentials complements AWS Technical Essentials, formerly AWS Essentials, which introduces technical end users to core AWS services for compute, storage, database and networking.

Advanced Architecting on AWS – this new course, which replaces Architecting on AWS – Advanced Concepts, builds on concepts covered in Architecting on AWS. We cover how to build solutions incorporating data services, governance, and security on AWS, and discuss specialized AWS services, such as AWS Direct Connect and AWS Storage Gateway, that support hybrid architectures. This new course is also better aligned to help you prepare for the AWS Certified Solutions Architect – Professional exam.

DevOps Engineering on AWS – this new course covers core principles of the DevOps methodology and common DevOps patterns to develop, deploy, and maintain applications on AWS. We also cover the core principles of Continuous Integration and Continuous Deployment. This course replaces “Advanced Operations on AWS” and helps you prepare for the AWS Certified DevOps Engineer – Professional exam.

AWS Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. Today, AWS Config is available in nine regions.

Today, AWS Config and Logstorage, developed by Infoscience Corporation of Japan, offer an integrated solution that allows AWS customers in the Asia Pacific (Tokyo) region to capture their AWS Config data and manage it from a browser. Customers can create complex searches, analyze statistics, and set up alerts and reporting on AWS Config data. For example, customers can search for instances that are using approved AMIs, or for security groups that were recently updated. Customers also get a visual report of the configuration snapshot delivered by AWS Config. Lastly, customers can integrate their AWS Config information with AWS CloudTrail, giving them visibility into who initiated the API calls, and from what IP address, that resulted in the state captured by AWS Config. Logstorage is not limited to collecting AWS Config data; it can also handle textual logs in any format (e.g., AWS CloudTrail, Amazon CloudWatch Logs).

You can now use version 4.0.2 of the MapR Distribution including Apache Hadoop in Amazon EMR. All editions of this MapR release are supported: Community Edition (M3), Enterprise Edition (M5), and Enterprise Database Edition (M7). These versions are the first MapR releases on Amazon EMR that include Apache Hadoop 2 and YARN for resource management. Version 4.0.2 of the MapR Distribution also contains updated releases of MapR-FS and MapR-DB in the M3 and M7 Editions, and continues to offer the MapR high availability and enhanced disaster recovery feature set in the M5 and M7 Editions. To learn more about using Amazon EMR with the MapR Distribution for Hadoop, click here.

Version 4.0.2 of the MapR Distribution is only supported with Amazon EMR AMI version 3.3.2. You can launch an Amazon EMR cluster with MapR version 4.0.2 directly from the AWS Management Console, AWS CLI, or the Amazon EMR API. To view pricing information, click here. To learn more about how to launch an Amazon EMR cluster with the MapR Distribution for Hadoop, click here.

Today, we are announcing support for Symantec Backup Exec 15 with AWS Storage Gateway-Virtual Tape Library (VTL). You can now backup and archive directly to scalable, cost-effective, secure Amazon S3 and Amazon Glacier storage using Gateway-VTL with Backup Exec 15. For the complete list of supported backup applications and to get started, see the AWS Storage Gateway-VTL User Guide.

You can now launch Amazon EMR clusters on the next generation of Amazon EC2 Dense-storage (D2) instances. D2 instances offer low cost, high disk throughput, and high sequential I/O access rates. Workloads that rely on data stored in the Hadoop Distributed File System (HDFS) will benefit from the low cost per terabyte offered by D2 instances.

Amazon EMR allows you to use both HDFS, installed on instance storage, and Amazon S3 as your primary data store. Amazon EMR clusters have pre-configured defaults set for Hadoop 1.x and 2.x, to optimize performance for most workloads. You can also change these defaults using bootstrap actions.

D2 instances are available in four sizes with 6TB, 12TB, 24TB, and 48TB storage options. To learn more about the D2 instances, please visit the Amazon EC2 Instance page. Amazon EMR supports D2 instances in all regions where they are available: US East (Northern Virginia), US West (Oregon), EU (Ireland, Frankfurt), and Asia Pacific (Tokyo, Singapore, Sydney) regions. For Amazon EMR pricing of D2 instances, visit the Amazon EMR Pricing page.

You now have the ability in the AWS Command Line Interface (CLI) to set values in the AWS CLI configuration file for several Amazon EMR commands. You can set these parameters once, instead of specifying them with each command. For the Amazon EMR create-cluster command, you can now set variables for the Amazon EMR service role, Amazon EC2 instance profile, SSH key name, debugging setting, and log URI bucket in Amazon S3. You can also set the variable for the local key pair location for these Amazon EMR commands: ssh, get, put, and socks.

Using the Amazon EMR create-default-roles command, you can easily create and use default AWS Identity and Access Management (IAM) roles for your cluster’s Amazon EMR service role and EC2 instance profile. We have enhanced this command to automatically set the default Amazon EMR service role and EC2 instance profile values for their respective variables in the create-cluster command, so you do not need to specify them manually after they are created. To learn more about setting configuration variables for the Amazon EMR commands on the AWS CLI, click here.
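The variables above live in the AWS CLI configuration file. The fragment below is a sketch of what an emr section might look like; the role names are the defaults created by create-default-roles, the bucket, key name, and path are placeholders, and the exact file layout should be checked against the AWS CLI documentation:

```ini
[default]
emr =
    service_role = EMR_DefaultRole
    instance_profile = EMR_EC2_DefaultRole
    key_name = my-emr-keypair
    enable_debugging = True
    log_uri = s3://my-emr-logs/
    key_pair_file = /home/user/my-emr-keypair.pem
```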

You can now use Amazon Elastic Transcoder to apply Microsoft PlayReady DRM protection to your Smooth Streaming and HLS outputs. When you create your transcoding job, simply include the encryption key, the ID of the key on the license server, and license server URL provided by your PlayReady License Provider. Elastic Transcoder transcodes and packages your files in one simple step.

There are no additional Elastic Transcoder charges for using this new DRM packaging feature. To learn more, please consult the Securing Your Content chapter in the Elastic Transcoder Developer Guide.

You can now use AWS CodeDeploy with on-premises instances. CodeDeploy is a service that automates code deployments to instances. CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during deployment, and handles the complexity of updating your applications. Click here to learn more about CodeDeploy.

Previously, you could only deploy code to Amazon EC2 instances. Now, you can use CodeDeploy to consistently deploy your application across your development, test, and production environments on any instance including instances running in your own data centers (your instances will need to have the CodeDeploy agent installed and be able to connect to AWS public endpoints).

You pay $0.02 per on-premises instance update using AWS CodeDeploy; there are no minimum fees and no upfront commitments. Click here for full pricing details.

Data Redaction is designed to allow you to mask sensitive data fields such as credit-card numbers and personally identifiable information (like U.S. Social Security numbers). Adaptive Query Optimization allows the query optimizer to adjust execution plans based on statistics gathered at statement execution time, improving query performance. You can now define stored procedures and functions in the WITH clause and use these inline objects in your queries. You can also limit the results of your queries to the top N records more easily.

To create a new Oracle 12c database instance with just a few clicks in the AWS Management Console, use the "Launch DB Instance" wizard and select the DB engine version "12.1.0.1.v1". This DB engine version supports APEX version 4.2.6.

Amazon RDS for SQL Server and Oracle now joins Amazon RDS for MySQL and PostgreSQL databases in allowing you to encrypt your databases using keys you manage through AWS Key Management Service (KMS). On a database instance running with Amazon RDS encryption, data stored at rest in the underlying storage is encrypted, as are its automated backups, read replicas, and snapshots. Encryption and decryption are handled transparently so you don’t have to modify your application to access your data. Amazon RDS encryption will work concurrently with Oracle and SQL Server’s Transparent Data Encryption (TDE) and running Amazon RDS encryption will not affect TDE. When you create a new SQL Server and Oracle database instance, you can choose to enable encryption via the AWS Management Console or API. You may use the default RDS key automatically created in your account, or use a key you created using KMS, to encrypt your data. For more information about the use of AWS Key Management Service with Amazon RDS, see the Amazon RDS User's Guide. To learn more about AWS KMS, visit the AWS KMS overview page.
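As a sketch of enabling encryption via the API, creating an encrypted instance comes down to two extra parameters; the instance identifier, class, credentials, and key ARN below are hypothetical placeholders:

```python
# Parameters for creating an encrypted RDS instance (identifier, class,
# credentials, and KMS key ARN are hypothetical placeholders).
create_params = {
    "DBInstanceIdentifier": "example-sqlserver",
    "Engine": "sqlserver-se",
    "DBInstanceClass": "db.m3.large",
    "AllocatedStorage": 100,
    "MasterUsername": "admin",
    "MasterUserPassword": "REPLACE_ME",
    # Enable Amazon RDS encryption at rest; omit KmsKeyId to use the
    # default RDS key automatically created in your account.
    "StorageEncrypted": True,
    "KmsKeyId": "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID",
}
# With boto3: boto3.client("rds").create_db_instance(**create_params)
```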

We are pleased to announce Single Sign-On capabilities between Amazon WorkDocs and Amazon WorkSpaces, in addition to automatic session resume for Amazon WorkSpaces.

Single Sign-On (SSO) capabilities are now available for Amazon WorkSpaces and Amazon WorkDocs. With SSO, when you are signed in to your Amazon WorkSpace you will automatically be signed in to your Amazon WorkDocs sync client and web client when you access them. Additionally, if you are connecting from a device on the same domain as your Amazon WorkDocs subscription you will automatically be signed in when you access the Amazon WorkDocs sync client and web client.

The AWS Console mobile app now supports AWS CloudFormation stacks and lets users edit the min, max, and desired capacities of their Auto Scaling groups. Download the app from Amazon Appstore, Google Play, and iTunes to view your resources on the go.

AWS CloudFormation customers can use the app to view stack status and overview. To view your stacks, select CloudFormation Stacks after you sign in to the app. You can browse CloudFormation stacks and view status, overview, output, tags, parameters, events, and resources for each stack.

Auto Scaling customers can now change the number of instances in an existing Auto Scaling group by editing the min, max, and desired capacity configurations for a group.

The app features support for EC2, S3, Route 53, ELB, RDS, AWS Elastic Beanstalk, CloudFormation, DynamoDB, Auto Scaling, AWS OpsWorks, CloudWatch, and the Service Health Dashboard. The app lets you authenticate several identities, so you can easily switch between multiple accounts.

Let us know how you use the AWS Console app and tell us what features you’d like to see by using the feedback link in the app’s menu.

You can now use AWS OpsWorks with Amazon EBS Provisioned IOPS volumes that can store up to 16TB and process up to 20,000 input/output operations per second (IOPS) or Amazon EBS General Purpose (SSD) volumes that can store up to 16TB and process up to 10,000 IOPS.

These performance improvements make it even easier to run applications requiring high performance or high amounts of storage, such as large transactional databases, big data analytics, and log processing systems. Now you can run large-scale, high performance workloads on a single volume, without needing to stripe together several smaller volumes.

You can now perform in-place platform version updates to your AWS Elastic Beanstalk environments. For example, you can update an environment running version 1.0.0 of the Ruby 2.0 platform to version 1.2.1 of the Ruby 2.0 platform without having to create a clone of the environment. You click the “Change” button on the environment dashboard and select the new version of the platform using the AWS Management Console, or you can use the “eb upgrade” command using the EB CLI. Read more here.

You can also use the Elastic Load Balancing health check to determine when to continue on to the next batch of instances while performing rolling updates for configurations and platforms. Read the documentation here.

You can now abort platform, configuration, and application updates before they are complete. For example, you can abort a deployment for a bad application version that takes too long to deploy. Learn more about this feature.

AWS Config now allows you to choose whether to turn on Amazon SNS notifications for changes to your AWS resources. In addition, when you subscribe to notifications by email, you receive them in an email-friendly format, making it easier to filter relevant messages.

Resource Groups and Tag Editor now support the ability to search for resources by tag substring. AWS customers use tags to find and organize resources on a single console page using Resource Groups. The new substring search feature lets customers use Resource Groups when their environment uses a single tag key for multiple distinct values, such as “FooDB-Prod-01-jdoe”. For example, create a resource group for a production environment that displays resources with the substring “Prod” anywhere in the “Name” tag.
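
A substring match over tag values is enough to illustrate the idea. The sketch below is a hedged approximation, not the actual Resource Groups data model; the resource dictionaries and tag layout are invented for illustration.

```python
# Sketch: selecting resources whose tag value contains a substring, as the
# new Resource Groups substring search does. Resource shapes are illustrative.
def match_by_tag_substring(resources, tag_key, substring):
    """Return resources whose value for tag_key contains substring."""
    return [
        r for r in resources
        if substring in r.get("tags", {}).get(tag_key, "")
    ]

resources = [
    {"id": "i-1", "tags": {"Name": "FooDB-Prod-01-jdoe"}},
    {"id": "i-2", "tags": {"Name": "FooDB-Test-02-jdoe"}},
    {"id": "i-3", "tags": {"Name": "BarWeb-Prod-01"}},
]

prod = match_by_tag_substring(resources, "Name", "Prod")
print([r["id"] for r in prod])  # ['i-1', 'i-3']
```

Note that both "FooDB-Prod-01-jdoe" and "BarWeb-Prod-01" match, even though they share no common tag value, which is exactly what the single-tag-key, multiple-distinct-values scenario needs.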

You can now use Amazon EMR clusters to process encrypted data stored in Amazon S3 that you previously encrypted using client-side encryption. This functionality has been added to the EMR File System (EMRFS), which Amazon EMR clusters use to read from and write to Amazon S3 securely, consistently, and with high performance. When writing to Amazon S3, EMRFS now supports encrypting those objects with Amazon S3 client-side encryption in addition to Amazon S3 server-side encryption.

You can now learn what devices your end users use to access content Amazon CloudFront is delivering. The new Devices Report shows you how many requests come from mobile, tablets, desktops, smart TVs and other types of devices during a specified time period. You can access the new Devices Report via the CloudFront section of the AWS Management Console by selecting Viewers under the Reports and Analytics section in the navigation pane and then clicking the Devices tab.

You can now launch C4 instances, the latest generation of Amazon EC2 Compute-optimized instances, in the EU (Frankfurt) AWS region. C4 instances are ideal for compute-bound workloads, such as high-traffic front-end fleets, MMO gaming, media processing, transcoding, and High Performance Computing (HPC) applications.

C4 instances are available in five sizes, offering up to 36 vCPUs. C4 instances are based on Intel Xeon E5-2666 v3 (codename Haswell) processors that run at a base frequency of 2.9 GHz, and can deliver clock speeds as high as 3.5 GHz with Intel® Turbo Boost. Each C4 instance type is EBS-optimized by default and at no additional cost. This feature provides 500 Mbps to 4,000 Mbps of dedicated throughput to EBS above and beyond the general purpose network throughput provided to the instance. C4 instances also provide Enhanced Networking for higher packets per second (PPS) performance, lower network jitter, and lower network latencies.

You can now deploy and manage multiple Docker containers in a single AWS Elastic Beanstalk environment. We introduced Docker support on Elastic Beanstalk in April 2014 and have since introduced many different Docker-based Elastic Beanstalk environments (e.g., Go, GlassFish, Python).

With this new feature, AWS Elastic Beanstalk will help you provision an Amazon EC2 Container Service cluster along with the rest of your infrastructure stack (e.g., Elastic Load Balancing, CloudWatch, etc). You can create a Dockerrun.aws.json file (specifies which container images are to be deployed, CPU and memory requirements, port mappings, and container links), upload the file, and Elastic Beanstalk will schedule your containers across a cluster of Amazon EC2 instances and also monitor the health of each container. This feature will allow you to easily deploy and scale containerized web applications and avoid the complexities of provisioning the underlying infrastructure.
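
To make the Dockerrun.aws.json description concrete, here is a minimal sketch of what a version 2 (multicontainer) file might contain. The container names, image names, memory sizes, and port numbers are placeholders chosen for illustration, not recommendations.

```python
import json

# Illustrative sketch of a multicontainer Dockerrun.aws.json (version 2
# format). Images, memory sizes, and ports are hypothetical placeholders.
dockerrun = {
    "AWSEBDockerrunVersion": 2,
    "containerDefinitions": [
        {
            "name": "nginx-proxy",
            "image": "nginx",
            "memory": 128,
            "essential": True,
            "portMappings": [{"hostPort": 80, "containerPort": 80}],
            "links": ["app"],  # container link to the app container
        },
        {
            "name": "app",
            "image": "my-account/my-app:latest",  # hypothetical image name
            "memory": 256,
            "essential": True,
        },
    ],
}

# The file itself is plain JSON:
print(json.dumps(dockerrun, sort_keys=True)[:40])
```

Elastic Beanstalk reads the container definitions (images, CPU and memory requirements, port mappings, and links) from this file and schedules the containers across the cluster accordingly.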

AWS Elastic Beanstalk now supports Docker 1.5, Ruby 2.2, and Node.js 0.12.0. You can now deploy your applications relying on these languages/frameworks on Elastic Beanstalk in all service regions using the AWS Management Console and the EB CLI v3.

Amazon Elastic Transcoder has recently made changes to allow your jobs to run even faster. We have also made the job timing data and other useful information available to view in the job metadata. New timestamp information allows you to see how long each job spent queuing and transcoding. New fields on the job object and console display the input video resolution, duration, file size, and frame rate.

Amazon S3 now supports cross-region replication, a new feature that automatically replicates data across AWS regions. With cross-region replication, every object uploaded to an S3 bucket is automatically replicated to a destination bucket in a different AWS region that you choose. For example, you can use cross-region replication to provide lower-latency data access in different geographic regions. Cross-region replication can also help if you have a compliance requirement to store copies of data hundreds of miles apart. There is no additional charge for using cross-region replication. You pay Amazon S3’s usual charges for storage, requests, and inter-region data transfer for the replicated copy of data.
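
A replication setup is expressed as a configuration attached to the source bucket. The sketch below shows the general shape of such a configuration as a plain data structure; the role ARN and bucket names are hypothetical placeholders, and this is an illustration of the concept rather than a verbatim API payload.

```python
# Hedged sketch of the shape of an S3 cross-region replication configuration:
# an IAM role S3 assumes to replicate on your behalf, plus rules that say
# which objects (by prefix) replicate to which destination bucket.
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/replication-role",  # hypothetical
    "Rules": [
        {
            "ID": "replicate-everything",
            "Prefix": "",          # empty prefix: replicate all new objects
            "Status": "Enabled",
            "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket"},
        }
    ],
}

print(replication_config["Rules"][0]["Status"])  # Enabled
```

Replication applies to objects uploaded after the configuration is in place, and the destination bucket must live in a different region than the source.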

Cross-region replication is available in the US Standard, US West (Oregon), US West (N. California), EU (Ireland), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), and South America (Sao Paulo) regions.

AWS Marketplace is an online store that helps customers find, buy, and immediately start using the software and services they need to build products and run their businesses. We are pleased to announce that more than 700 AWS Marketplace software products can now be launched in the Frankfurt (FRA) region.

With both hourly and annually priced options, customers can elect to pay as they go or purchase annual subscriptions for steady-state workloads. Monthly priced listings will be available in Frankfurt by May 1. With this release, customers can use Marketplace software in the FRA region and, in the process, benefit from improved performance.

As of today, you can access over 85,000 Landsat 8 scenes through our newest Public Data Set, Landsat on AWS. The scenes are all available in the landsat-pds bucket in the Amazon S3 US West (Oregon) region. By making Landsat data readily available near our flexible computing resources, we hope to accelerate innovation in climate research, humanitarian relief, and disaster preparedness efforts around the world. Because the imagery is available on AWS, researchers and software developers can use any of our on-demand services to perform analysis and create new products without needing to worry about storage or bandwidth costs.

Starting today, you can create Amazon EBS Provisioned IOPS volumes that can store up to 16 TB, and process up to 20,000 input/output operations per second (IOPS). You can also create Amazon EBS General Purpose (SSD) volumes that can store up to 16 TB, and process up to 10,000 IOPS. These volumes are designed for five 9s of availability and up to 320 megabytes per second of throughput when attached to EBS optimized instances.
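
The new caps stated above can be restated as a small sanity-check helper. This only encodes the announced limits (16 TB and 20,000 IOPS for Provisioned IOPS, 16 TB and 10,000 IOPS for General Purpose SSD); it is not an official validation routine, and it ignores other constraints such as IOPS-to-size ratios.

```python
# Announced per-volume caps, keyed by the EBS volume type API names.
LIMITS = {
    "io1": {"max_size_tb": 16, "max_iops": 20000},  # Provisioned IOPS
    "gp2": {"max_size_tb": 16, "max_iops": 10000},  # General Purpose (SSD)
}

def within_limits(volume_type, size_tb, iops):
    """True if the requested volume fits within the announced caps."""
    limit = LIMITS[volume_type]
    return size_tb <= limit["max_size_tb"] and iops <= limit["max_iops"]

print(within_limits("io1", 16, 20000))  # True
print(within_limits("gp2", 16, 12000))  # False: exceeds the gp2 IOPS cap
```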

Larger and faster volumes are available now in all commercial AWS regions and in AWS GovCloud (US). To learn more please see the Amazon EBS details page.

You can now use Amazon Mobile Analytics for your JavaScript-enabled apps. The newly released Amazon Mobile Analytics SDK for JavaScript supports most JavaScript-based frameworks such as PhoneGap, Appcelerator, and Intel XDK.

Amazon WorkDocs is now available in the Asia Pacific (Singapore) AWS region. This additional location will reduce latency for many users, and provide more flexibility in complying with regulatory requirements that govern where data must be stored.

Amazon RDS for PostgreSQL now supports major version PostgreSQL 9.4.1. PostgreSQL 9.4.1 includes support for the JSONB datatype, which allows you to manage schemas more flexibly. JSONB items are stored in a decomposed binary format that speeds query operations. You can launch a new Amazon RDS database instance running PostgreSQL 9.4.1 with just a few clicks in the AWS Management Console. Learn more about what’s new in PostgreSQL 9.4. To learn more about PostgreSQL 9.4.1, read the PostgreSQL 9.4.1 release notes. To learn more about Amazon RDS for PostgreSQL, see the Amazon RDS User’s Guide.

You can now use Amazon Elastic Transcoder to create professional content that meets NTSC or PAL standards, package videos in an FLV or MPG container, and create animated GIF outputs. To support these new outputs, we have added additional options for color space conversions, MPEG-2 video, MP2 audio, and interlacing. Trying out these new features is as easy as selecting one of the new system presets when creating a transcoding job.

We are excited to announce support for looking up API activity in CloudTrail. Using the CloudTrail console, AWS SDKs, or AWS CLI, you can look up API activity related to creating, deleting, and updating AWS resources in your account. You can use this feature to troubleshoot operational issues or security incidents and take immediate actions such as following up with the user or opening a trouble ticket for deeper analysis.

This feature is available immediately in the following regions: US East (Northern Virginia), US West (Oregon), US West (Northern California), Europe (Ireland), Asia Pacific (Sydney), Asia Pacific (Singapore), Europe (Germany), Asia Pacific (Tokyo), and South America (Brazil). You can look up API activity that was made to create, delete, or update AWS resources in your AWS account. You can look up API activity that was captured for your account in the last 7 days for 28 AWS services including Amazon EC2, Amazon RDS, Amazon EBS, Amazon VPC, and AWS Identity and Access Management (IAM). For a list of supported services, refer to the CloudTrail documentation.

If you have already turned on CloudTrail for your account, you do not need to take any other action. Simply go to the CloudTrail console, and the API activity related to creating, deleting, and updating AWS resources will be automatically available to you. If you haven’t turned on CloudTrail for your AWS account, turn it on now from the CloudTrail console. Once you log in to the CloudTrail console, you will see the API activity history arranged in reverse chronological order with the most recent events listed at the top. You can filter the API activity to troubleshoot operational issues or security incidents. The five filters supported are: Time range, Event name, User name, Resource type, and Resource name. You can drill down into each event and review it in detail or navigate to a specific AWS service console and view additional details about a resource referenced in the event.
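
The filtering and reverse-chronological ordering described above can be sketched as a small lookup over event records. The record fields here are simplified stand-ins for illustration, not the full CloudTrail event schema.

```python
from datetime import datetime

# Simplified stand-ins for CloudTrail events; real events carry many more
# fields than these.
events = [
    {"time": datetime(2015, 3, 1, 12), "event_name": "RunInstances",
     "user_name": "alice", "resource_type": "EC2 Instance",
     "resource_name": "i-0abc"},
    {"time": datetime(2015, 3, 2, 9), "event_name": "DeleteBucket",
     "user_name": "bob", "resource_type": "S3 Bucket",
     "resource_name": "logs-bucket"},
]

def lookup(events, **filters):
    """Return events matching every given filter, newest first."""
    matched = [e for e in events
               if all(e.get(k) == v for k, v in filters.items())]
    return sorted(matched, key=lambda e: e["time"], reverse=True)

hits = lookup(events, user_name="bob")
print([e["event_name"] for e in hits])  # ['DeleteBucket']
```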

There are no additional charges for looking up API activity. For more information about this feature, go to CloudTrail documentation.

Amazon CloudFront now gives you a new way to secure your private content: CloudFront signed HTTP cookies. In the past, you could control who is able to access your CloudFront content by adding a custom signature to each object URL. Now you can get that same degree of control by including the signature in an HTTP cookie instead. This lets you restrict access to multiple objects (e.g., whole site authentication) or to a single object without needing to change URLs.

Signed HTTP cookies make it easy to restrict viewer access to your streaming media content. For example, if your media content is in HTTP Live Streaming (HLS) format, you can use Amazon Elastic Transcoder or your media server to generate the playlist and media segments. You then write your web application to authenticate each user and to send a Set-Cookie header that sets a cookie on the user's device. When a user requests a restricted object, the browser forwards the signed cookie in the request, and CloudFront checks the cookie attributes to determine whether to allow or restrict access to the HLS stream. CloudFront checks for this cookie when the player requests the playlist and when the player requests each segment, which ensures that the end-to-end stream is secured.
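
The flow above can be sketched as follows. This is a hedged illustration of building a custom policy and the cookie-safe base64 encoding CloudFront uses; the RSA-SHA1 signing step, which produces the CloudFront-Signature value with your CloudFront key pair's private key, is deliberately omitted, and the key pair ID shown is a placeholder.

```python
import base64
import json

def make_policy(resource_url, expires_epoch):
    """Build a custom policy restricting access to a resource until a time."""
    policy = {
        "Statement": [{
            "Resource": resource_url,
            "Condition": {"DateLessThan": {"AWS:EpochTime": expires_epoch}},
        }]
    }
    return json.dumps(policy, separators=(",", ":"))

def cloudfront_b64(data):
    # Base64, with characters that are invalid in cookies/URLs replaced.
    return (base64.b64encode(data).decode()
            .replace("+", "-").replace("=", "_").replace("/", "~"))

policy = make_policy("https://d111111abcdef8.cloudfront.net/video/*", 1427328000)
cookies = {
    "CloudFront-Policy": cloudfront_b64(policy.encode()),
    "CloudFront-Key-Pair-Id": "APKAEXAMPLE",  # placeholder key pair ID
    # "CloudFront-Signature": would be the RSA-SHA1 signature of the policy
}
print(sorted(cookies))  # ['CloudFront-Key-Pair-Id', 'CloudFront-Policy']
```

Your web application would send these values in Set-Cookie headers after authenticating the user; the wildcard in the Resource URL is what lets one cookie cover the playlist and every media segment beneath it.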

There are no additional charges for using private content with Amazon CloudFront. To learn more, see the Amazon CloudFront Developer Guide. We will also be showing a demo of this functionality in our next CloudFront office hours on Thursday, March 26th. You can sign up for this office hours session here.

We have introduced two new capabilities in the AWS Identity and Access Management (IAM) console that make it easier for you to author your IAM policies. To help you find and correct errors in your policies, we added a new Validate Policy button to the IAM console. This button returns a detailed JSON syntax error message that refers to the line number where the error occurs. In addition, the console now gives you the option to use JSON formatting for your IAM policies to make them easier to read and understand.

We are pleased to announce that ElastiCache for Redis now supports engine version 2.8.19. Customers can now launch new clusters with Redis 2.8.19, as well as upgrade existing ones to the new engine version.

To learn more, we encourage you to read Jeff Barr’s blog. For the full list of improvements in Redis 2.8.19 click here. You can easily launch an ElastiCache for Redis cluster with engine version 2.8.19 via a few clicks on the AWS Management Console.

You can now run and manage Ruby 2.2 applications with AWS OpsWorks. Ruby 2.2 includes a few performance improvements such as updates to the Garbage Collector. Read the release notes here.

Ruby 2.2 will now appear as a choice for your new Rails App Server and Custom layers. For existing layers, you can select Ruby 2.2 on the settings page of the layer and your instances will be updated on the next deployment. Please see the documentation for this feature to learn more.

You can now export EC2 instances you previously imported using VM Import in the AWS GovCloud (US) region. To export an instance, you can use the ec2-create-instance-export-task command. The export command captures the parameters necessary to properly export the instance to your chosen format (the instance ID, the S3 bucket to hold the exported image, the name of the exported image, and the VMDK, OVA, or VHD format). The exported file is saved in an S3 bucket that you previously created.

AWS GovCloud (US) is an isolated AWS Region designed to allow US government agencies and customers to move sensitive workloads into the cloud by addressing their specific regulatory and compliance requirements. The AWS GovCloud (US) framework adheres to U.S. International Traffic in Arms Regulations (ITAR) regulations as well as the Federal Risk and Authorization Management Program (FedRAMPSM) requirements. To find out more or request access, contact us today.

AWS Management Portal for vCenter enables you to migrate VMware VMs to Amazon EC2 and manage AWS resources from within VMware vCenter.

AWS Management Portal for vCenter now includes support for automatic upgrades, the ability to queue imports, and enhanced troubleshooting. Automatic upgrades allow you to receive updates to the portal and our on-premises connector, and take advantage of subsequent feature enhancements that we make, without having to perform updates manually. Included in this release is the ability to queue multiple migrations, effectively eliminating the limit on concurrent migration tasks. You can queue up import tasks to run in the background while you work on other things. You can also send logs to AWS for troubleshooting with the click of a button, leading to a quick resolution of issues.

We are excited to announce the availability of two new features for Amazon CloudSearch: domain metrics through Amazon CloudWatch and index field statistics.

You can now use Amazon CloudSearch domain metrics to make scaling decisions, troubleshoot issues, and better manage your search clusters. Amazon CloudSearch publishes four metrics into Amazon CloudWatch, including:

SuccessfulRequests – Number of search requests successfully processed by the search instance

SearchableDocuments – Number of documents available in the search index

In addition, you can now get aggregate field statistics along with the search results. With field statistics, you can perform charting and analytics without additional post-processing. This feature gives you functionality similar to the Apache Solr Stats component. Field statistics are available only for facet-enabled numeric fields. You can get the following statistics: count, min, max, mean, missing, stddev, sum, and sumOfSquares. To learn more about index field statistics, see Getting Statistics for Numeric Fields in Amazon CloudSearch in the Amazon CloudSearch Developer Guide.
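
The statistics listed above are straightforward to compute; the sketch below reproduces them client-side over a numeric field, purely to show what each one means. Whether CloudSearch uses the population or sample form of standard deviation is not stated here; this sketch uses the population form.

```python
import math

def field_stats(values):
    """Compute the aggregate statistics named above; None marks a missing value."""
    present = [v for v in values if v is not None]
    n = len(present)
    total = sum(present)
    mean = total / n if n else 0.0
    variance = sum((v - mean) ** 2 for v in present) / n if n else 0.0
    return {
        "count": n,
        "missing": len(values) - n,
        "min": min(present) if present else None,
        "max": max(present) if present else None,
        "sum": total,
        "sumOfSquares": sum(v * v for v in present),
        "mean": mean,
        "stddev": math.sqrt(variance),
    }

stats = field_stats([10, 20, 30, None])
print(stats["count"], stats["missing"], stats["mean"])  # 3 1 20.0
```

Getting these values back with the search results saves a post-processing pass when charting, which is the point of the feature.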

Amazon CloudSearch is a fully managed service that makes it easy to set up, manage, and scale a search solution for your website or application. To get started, open the AWS Management Console and begin your 30-day free trial. To learn more about recently released features, see the Amazon CloudSearch Developer Guide. Share your thoughts on these and any additional features you'd like to see in the CloudSearch forum. We appreciate your feedback, and we use it to help us prioritize upcoming features.

AWS CloudTrail integration with Amazon CloudWatch Logs is now available in 4 additional AWS regions: Sydney, Singapore, Frankfurt and Tokyo. With this feature, you can monitor for specific API activity and receive email notifications when those specific API calls are made.

After you configure CloudTrail integration with CloudWatch Logs, which you can do from the CloudTrail console or using the AWS SDKs or AWS CLI, CloudTrail begins to continuously and automatically deliver all the CloudTrail events associated with API activity to a CloudWatch Logs log group you specify. You can then use this CloudFormation template to create CloudWatch Alarms that monitor for critical network and security related API activity captured by CloudTrail and receive email notifications when those API calls are made. You can use the template as is or modify it to fit your own scenarios. Refer to the CloudTrail user guide for step-by-step instructions on creating CloudWatch alarms using the CloudFormation template.

To configure CloudTrail integration with CloudWatch Logs, go to the CloudTrail console. Once you configure the integration, you will incur standard CloudWatch Logs and CloudWatch charges. For more details on pricing, go to CloudWatch pricing page.

The Amazon Cognito Streams feature allows you to automatically stream user identity data from Amazon Cognito to Amazon Kinesis. By streaming user identity data to Amazon Kinesis, you can do real-time processing, create powerful dashboards and gain insights about your user data stored in Amazon Cognito. You can also move this data from Amazon Kinesis to other services such as Amazon S3, Amazon Redshift, and Amazon DynamoDB for storage.

Amazon Kinesis is a fully managed service for real-time processing of streaming data at massive scale. Amazon Kinesis can continuously capture and store terabytes of data per hour from hundreds of thousands of sources.

The Amazon Kinesis team has announced sub-second record propagation delay for all customers at no additional cost. All new and existing Kinesis streams will observe significantly reduced end-to-end record propagation delay without requiring any additional configuration.

You can now use the Amazon Mobile Analytics Auto Export feature to automatically export your app event data to Amazon Redshift. With your app event data in Amazon Redshift, you can run SQL queries, build custom dashboards, and gain deep insights about your application usage. Additionally, you can use your existing business intelligence and data warehouse tools to report on your app event data.

Amazon Redshift’s new custom ODBC and JDBC drivers make it easier and faster to connect to and query Amazon Redshift from your Business Intelligence (BI) tool of choice. Amazon Redshift’s JDBC driver features JDBC 4.1 and 4.0 support, a 35% performance gain over open source options, and improved memory management. Amazon Redshift’s ODBC drivers feature ODBC 3.8 support, a 6% performance gain, and better Unicode data and password handling, among other benefits. Additionally, AWS partners Informatica, Microstrategy, Pentaho, Qlik, SAS, and Tableau will be supporting these Redshift drivers with their solutions. For more information please see Connecting to a Cluster in our documentation. If you need to distribute these drivers to your customers or other third parties, please contact us at redshift-pm@amazon.com so we can arrange an appropriate license.

You can now assign an Amazon EMR service role and an Amazon EC2 instance profile to an EMR cluster defined in a pipeline, giving you the ability to limit the overall permissions of the EMR cluster. For example, you can control the access that the EMR service has to communicate on your behalf with other AWS services like EC2 or S3. To use this feature as an existing Data Pipeline customer, you will need to opt-in from the Data Pipeline console. To learn more about assigning an Identity and Access Management (IAM) role to an EMR cluster, visit the documentation.

Amazon EC2 Container Service now supports private Docker repositories and mounting data volumes. You can now authenticate with Docker registries such as Docker Hub, allowing you to use Docker images from private repositories in your Task Definitions. Read about it here.

Tag Editor, a cross-service and cross-region tool for managing tags, now supports the ability to search for resources that do not have a certain tag key applied or that have an empty value. AWS customers use tags to organize and find resources with Resource Groups and also to categorize and track AWS costs. The new “Not tagged” and “Empty value” search features help customers audit tags on their account’s resources and ensure that correct tags are applied to every resource. Try out the new features today when searching in the Tag Editor by selecting “Not tagged” or “Empty value” in the tag value search field.

Cost Explorer, a tool that lets you analyze your historical AWS spend data with a graphical interface, now includes additional dimensions on which to filter and group costs. These additional dimensions allow Cost Explorer users to visualize more detailed cost data and gain deeper insights into their AWS costs and cost drivers. Specifically, with this launch, Cost Explorer users can now group costs not only by Service and Linked Account, but also by Availability Zone, Purchase Option ("Reserved" or "Non-Reserved"), and API Operation. In addition, they can filter costs by all of those dimensions and multiple Tag Keys (in the past, they could only filter by one Tag Key at a time).
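
The group-by described above is conceptually a sum of costs keyed on the chosen dimension. The sketch below illustrates that with made-up sample records; the field names are invented for illustration, not Cost Explorer's actual data model.

```python
from collections import defaultdict

# Made-up cost records; real Cost Explorer data has many more dimensions.
records = [
    {"service": "AmazonEC2", "az": "us-east-1a",
     "purchase_option": "Reserved", "cost": 12.0},
    {"service": "AmazonEC2", "az": "us-east-1b",
     "purchase_option": "Non-Reserved", "cost": 8.0},
    {"service": "AmazonS3", "az": None,
     "purchase_option": "Non-Reserved", "cost": 3.5},
]

def group_costs(records, dimension):
    """Total costs per distinct value of the chosen dimension."""
    totals = defaultdict(float)
    for r in records:
        totals[r[dimension]] += r["cost"]
    return dict(totals)

print(group_costs(records, "purchase_option"))
# {'Reserved': 12.0, 'Non-Reserved': 11.5}
```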

AWS Training and Certification is pleased to announce public availability of the AWS Certified DevOps Engineer - Professional exam. We have also awarded our first DevOps Engineer certifications to those who successfully completed the beta exam. This certification helps IT professionals demonstrate their skills in the DevOps area and allows employers to identify qualified candidates to lead DevOps initiatives.

The AWS Certified DevOps Engineer - Professional certification is the next step in the path for AWS Certified Developers and SysOps Administrators. It validates technical expertise in provisioning, operating, and managing distributed application systems on the AWS platform, and it is intended to identify individuals who are capable of implementing and managing systems that are highly available, scalable, and self-healing. You must already be certified as an AWS Certified Developer - Associate or AWS Certified SysOps Administrator - Associate before you are eligible to take this exam. Find the Exam Guide, a practice exam, and other resources at aws.amazon.com/certification.

AWS Security Token Service (STS), a service that enables your applications to request temporary security credentials, is now available in every AWS region. By bringing AWS STS to a region geographically closer to your applications and services, your applications and services can call AWS STS with lower latencies and take advantage of the multiregional resiliency provided by the new regional AWS STS endpoints. Today’s launch also gives AWS account administrators greater control over where apps can request temporary security credentials by allowing administrators to activate or deactivate any of the new AWS STS endpoints.

You can now easily join your Amazon EC2 for Windows instances to a domain that you have configured with AWS Directory Service. You can join an instance to an existing, on-premises Active Directory, using AD Connector, or a stand-alone, Simple AD directory running in the AWS Cloud. Once you configure this new feature using the AWS Management Console or the EC2 API, you can choose which domain a new instance will join when it launches. For existing instances, you can use the EC2 API to seamlessly join them to a domain.

AWS Elastic Beanstalk now supports environment cloning. Previously, you would have to create a new environment and manually configure the variables and options to match your previous environment. You can now use the Management Console or CLI to clone an existing environment with a few clicks. Elastic Beanstalk will provision the exact same resources as your selected environment and configure them the same way. We release new versions of Elastic Beanstalk environments with security updates regularly. This feature will make it easier to upgrade your environments to new versions. Read more about this feature here.

You can now choose the currency that you would like to use to pay your AWS bill. You can select from a choice of Australian Dollars, Swiss Francs, Danish Kroner, Euros, British Pounds, Hong Kong Dollars, Japanese Yen, Norwegian Kroner, New Zealand Dollars, Swedish Kronor, and South African Rand. If your credit card provider currently charges you expensive fees for converting currency, choosing to be billed in your preferred currency may help you to reduce these costs. You can compare our rates, which are displayed on the Account Settings page of the AWS Billing Console, with your credit card statements to determine if using our currency conversion service would benefit you.
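
The comparison suggested above boils down to simple arithmetic: your bill converted at AWS's displayed rate versus the same bill converted at your card provider's rate plus its fee. Every number in the sketch below is hypothetical, chosen only to illustrate the comparison.

```python
# Hypothetical figures for illustration only; substitute the rates shown on
# your Account Settings page and your card provider's actual rate and fee.
usd_bill = 100.0
aws_rate_eur_per_usd = 0.92    # hypothetical AWS displayed rate
card_rate_eur_per_usd = 0.91   # hypothetical card provider rate
card_fee_pct = 0.03            # hypothetical 3% card conversion fee

cost_via_aws = usd_bill * aws_rate_eur_per_usd
cost_via_card = usd_bill * card_rate_eur_per_usd * (1 + card_fee_pct)

print(cost_via_aws < cost_via_card)  # True: AWS conversion is cheaper here
```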

Today, AWS Identity and Access Management (IAM) added support for managed policies, an easier way to manage permissions. When you attach a managed policy to multiple IAM entities (users, groups, and roles), the permissions specified in that policy and any subsequent updates apply to all IAM entities to which the policy is attached. Managed policies enable you to see the history of each of your policies and roll back to previous versions of policies.

You can now run your Amazon CloudSearch domains on M3 instances, which provide higher, more consistent compute performance for most use cases than previous generation instances. Starting today, you can create new CloudSearch domains on M3 instances. You can convert your existing CloudSearch domains to the M3 instance type by modifying the “Desired Instance Type” setting using the AWS Console, SDK, or CLI.

To learn more about using M3 instances with CloudSearch domains, see Configuring Scaling Options in the Amazon CloudSearch Developer Guide. For more information about pricing for M3 instance types, please refer to the Amazon CloudSearch pricing page.

Amazon CloudSearch is a fully managed search service that makes it easy to set up, manage, and scale a search solution for your website or application. To learn more, see Amazon CloudSearch overview.

Amazon DynamoDB allows you to retrieve all items from a table by using the Scan operation. With Secondary Index Scan, you can now use the Scan operation on secondary indexes and retrieve all data from select attributes and items that are projected on a secondary index. Secondary Index Scan works on global and local secondary indexes. Secondary Indexes can be scanned from the DynamoDB console or by calling the Scan API with an additional parameter to specify the index. To learn more, please read our blog on Secondary Index Scan. Also, visit our documentation page to learn additional technical and operational details.
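
Conceptually, scanning a secondary index returns only the key attributes and the attributes projected onto that index, not every attribute of the base table item. The sketch below models that projection in plain Python; the table data and the projected attribute set are invented for illustration.

```python
# Illustrative items from a hypothetical game-scores table; "notes" is a
# large attribute NOT projected onto the index.
items = [
    {"user_id": "u1", "game": "chess", "score": 120, "notes": "long text..."},
    {"user_id": "u2", "game": "chess", "score": 95,  "notes": "more text..."},
]

def scan_index(items, projected_attributes):
    """Return each item restricted to the attributes projected on the index."""
    return [{k: v for k, v in item.items() if k in projected_attributes}
            for item in items]

results = scan_index(items, {"user_id", "game", "score"})
print(results[0])  # {'user_id': 'u1', 'game': 'chess', 'score': 120}
```

In the real API the projection is defined when the index is created, and the Scan call simply names the index via its extra parameter.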

All API calls made to AWS Config are now logged through AWS CloudTrail. Customers can easily determine who made changes to AWS Config settings, and who is accessing historical configuration data recorded by AWS Config.

AWS Config is now available for customers and partners to use in US East (N. Virginia), US West (Oregon), EU (Ireland), and Asia Pacific (Sydney). Customers can now monitor resources for configuration changes in these regions and choose to aggregate configuration data into a common Amazon S3 bucket, or aggregate configuration changes delivered to different Amazon SNS topics into a common Amazon SQS queue.

We are pleased to announce that you can now add tags to your Amazon ElastiCache clusters and snapshots. A tag is a user-defined label expressed as a key-value pair that helps organize AWS resources, and identify them as business-specific groupings such as application names or projects. For example, developers can use tags to make it easier to allocate costs and optimize spending by tagging their ElastiCache resources with details such as a cost center, or the name of an administrator or project.

Version 2 of the AWS SDK for Ruby is now stable and generally available. We have received great feedback from customers and made a number of improvements on the SDK since its initial preview release. Now fully equipped with stable Resource APIs, waiters, response pagination, and more, the AWS SDK for Ruby will help you write scalable Ruby applications that can be integrated with AWS services with ease.

You can now capture information on success rates, failure rates and dwell times for mobile push notifications using Amazon SNS Delivery Status. These metrics provide insights such as whether your push notifications have been successfully delivered to the intended messaging platform and the time it took for the notifications to be delivered. In addition to collecting status information, you can also trigger alerts based on metrics you define in Amazon CloudWatch.

Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory key-value store in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.

Amazon Zocalo is now Amazon WorkDocs. The name of the service has changed, but all of the functionality remains the same. Today, we introduced Amazon WorkMail, a secure, managed business email and calendaring service with support for existing desktop and mobile email clients. We now offer three enterprise IT applications for end-users that all work together: Amazon WorkMail (secure email and calendaring), Amazon WorkSpaces (virtual desktops), and Amazon WorkDocs (file storage and sharing). We are renaming Amazon Zocalo to make our service naming more consistent and familiar to you.

You will start to see the name change to Amazon WorkDocs in the console today, and changes to other components, such as the mobile apps, will be complete in the next few weeks.

Amazon WorkMail is a secure, managed business email and calendaring service with support for existing desktop and mobile email clients. Amazon WorkMail gives users the ability to seamlessly access their email, contacts, and calendars using Microsoft Outlook, their web browser, or their native iOS and Android email applications. You can integrate Amazon WorkMail with your existing corporate directory and control both the keys that encrypt your data and the location where your data is stored.

Amazon DynamoDB previously allowed you to create Global Secondary Indexes (GSIs) only at table creation time. GSIs enable you to write rich queries with filters. With online indexing, you can now add GSIs to, or delete them from, a DynamoDB table at any time using the DynamoDB console or a simple API call. While a GSI is being added or deleted, the DynamoDB table can still handle live traffic and provide continuous service at the provisioned throughput level. To learn more, please read Jeff Barr’s blog post on Online Indexing. You can also learn more about online indexing by reading our documentation page.
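
The "simple API call" is UpdateTable with a GlobalSecondaryIndexUpdates element. The sketch below builds those parameters as a plain dictionary, assuming a string hash key; the table, index, and attribute names are hypothetical examples.

```python
# Sketch: UpdateTable parameters that add a GSI to a live table online.
# Table, index, and attribute names below are hypothetical examples.

def build_create_gsi_params(table_name, index_name, hash_key, read_cap, write_cap):
    """Return keyword arguments for DynamoDB's UpdateTable that create a
    new global secondary index on a table that is serving live traffic.
    Assumes a string ("S") hash key for simplicity."""
    return {
        "TableName": table_name,
        "AttributeDefinitions": [
            {"AttributeName": hash_key, "AttributeType": "S"},
        ],
        "GlobalSecondaryIndexUpdates": [
            {
                "Create": {
                    "IndexName": index_name,
                    "KeySchema": [{"AttributeName": hash_key, "KeyType": "HASH"}],
                    "Projection": {"ProjectionType": "ALL"},
                    "ProvisionedThroughput": {
                        "ReadCapacityUnits": read_cap,
                        "WriteCapacityUnits": write_cap,
                    },
                }
            }
        ],
    }

params = build_create_gsi_params("Orders", "CustomerIndex", "CustomerId", 5, 5)
```

Deleting an index online is the mirror image: a `"Delete": {"IndexName": ...}` entry in the same `GlobalSecondaryIndexUpdates` list.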

Amazon Kinesis is a fully managed service for real-time processing of streaming data at massive scale. Amazon Kinesis can continuously capture and store terabytes of data per hour from hundreds of thousands of sources. Developers can write stream processing applications with the Kinesis Client Library that take action on real-time data such as website clickstreams, financial transaction data, social media feeds, IT logs, location-tracking events, and more. Amazon Kinesis-enabled applications can power real-time dashboards, generate alerts, drive real-time business decisions such as changing pricing and advertising strategies, or emit data to other big data services such as Amazon Simple Storage Service (Amazon S3), Amazon Elastic MapReduce (Amazon EMR), and Amazon Redshift.

We are excited to announce a new edge location in Seoul, Korea for Amazon CloudFront and Route 53. This is the second edge location in Seoul, Korea and brings the total number of worldwide edge locations to 53. The new edge location helps improve performance and availability for end users of your application and supports all Amazon CloudFront and Amazon Route 53 features at no additional cost.

We are pleased to announce Amazon RDS is now integrated with AWS CloudTrail in the AWS GovCloud (US) region. AWS CloudTrail is a service that records AWS API calls for your account and delivers log files to you. With CloudTrail, you can get a history of AWS API calls for your account, including API calls made via the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services (such as AWS CloudFormation). The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing.

You can now control the security rules for your cluster by specifying your own Amazon EC2 security groups. By customizing the security groups for Amazon EMR, you can prevent communication between clusters, grant an external application access to one cluster but not another, and apply multiple security groups to a given cluster. To learn more, visit the documentation.
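
A minimal sketch of the instance-configuration fragment of an EMR RunJobFlow request that supplies custom security groups. All group IDs are hypothetical, and the field names assume the JobFlowInstancesConfig structure exposed by the EMR API.

```python
# Sketch: security-group fields of an EMR JobFlowInstancesConfig.
# All security-group IDs below are hypothetical examples.

def build_emr_security_groups(master_sg, slave_sg, extra_master=None):
    """Return the security-group portion of a JobFlowInstancesConfig,
    which would be merged into the Instances parameter of RunJobFlow."""
    instances = {
        "EmrManagedMasterSecurityGroup": master_sg,
        "EmrManagedSlaveSecurityGroup": slave_sg,
    }
    if extra_master:
        # e.g. a group that grants an external application access to
        # this cluster's master node but not to other clusters
        instances["AdditionalMasterSecurityGroups"] = list(extra_master)
    return instances

instances = build_emr_security_groups(
    "sg-11111111", "sg-22222222", extra_master=["sg-33333333"]
)
```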

AWS Trusted Advisor is pleased to announce the expanded availability of the Action Link feature. Action links are hyperlinks to the AWS Management Console, where you can take action on the Trusted Advisor recommendations. Action links were introduced in July 2014 on a limited number of checks. Action links are now available on all checks where links are supported by the relevant service.

Amazon SWF helps developers build, run, and scale background jobs that have parallel or sequential steps. If your app's steps take more than 500 milliseconds to complete, if you need to track the state of processing, or if you need to recover or retry when a task fails, Amazon SWF can help.

Now you can import on-premises virtual machines to the AWS Cloud and create Linux and Microsoft Windows EC2 instances using the AWS Systems Manager for Microsoft System Center Virtual Machine Manager (SCVMM) v1.5.

This new Quick Start reference deployment covers the implementation of Microsoft Exchange Server 2013 in a highly available architecture on the AWS cloud, using AWS services such as Amazon EC2 and Amazon VPC. The automated AWS CloudFormation templates included in the Quick Start build the minimal infrastructure required to run Microsoft Exchange Server 2013 on AWS with high availability for a small deployment supporting 250 mailboxes, following AWS best practices. We also provide guidance for additional deployment scenarios aligned with the Microsoft preferred architecture for Exchange Server, supporting 250, 2,500, and 10,000 mailboxes. The deployment guide discusses how the Microsoft Exchange Server environment is built so that you can deploy the automated solution or customize the provided AWS CloudFormation template to meet your own requirements.

We are happy to announce support for monitoring JSON-formatted logs with CloudWatch Logs. This capability enables you to create graphs and receive notifications when your JSON-formatted log events contain terms or match conditions that you choose. For example, you can use CloudWatch to notify you when specific fields occur in your JSON log events or to create time-series graphs of values from your JSON log events.
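
To make the JSON pattern syntax concrete, here is a sketch of the parameters for a CloudWatch Logs PutMetricFilter call, built as a plain dictionary. The log group, metric, and namespace names are hypothetical examples; the pattern `{ $.eventType = "Error" }` matches log events whose JSON `eventType` field equals `"Error"`.

```python
# Sketch: PutMetricFilter parameters using a JSON filter pattern.
# Log group, metric, and namespace names are hypothetical examples.

def build_json_metric_filter(log_group, name, pattern, metric_name, namespace):
    """Return keyword arguments for CloudWatch Logs' PutMetricFilter."""
    return {
        "logGroupName": log_group,
        "filterName": name,
        "filterPattern": pattern,
        "metricTransformations": [
            {
                "metricName": metric_name,
                "metricNamespace": namespace,
                "metricValue": "1",  # count one per matching log event
            }
        ],
    }

params = build_json_metric_filter(
    "my-app-logs",
    "error-events",
    '{ $.eventType = "Error" }',  # matches JSON events with eventType == "Error"
    "ErrorCount",
    "MyApp",
)
```

With the resulting `ErrorCount` metric, you can graph error rates over time or attach a CloudWatch alarm for notifications.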

AWS Training & Certification has released a new half-day workshop to help individuals who are preparing for the AWS Certified Solutions Architect – Associate exam. In this workshop, we review what to expect at the testing center and while taking the exam. We walk you through how the exam is structured, including question formats, content domains, and the breakdown of questions across those domains. We also teach you how to interpret the concepts being tested by a question so that you can better eliminate incorrect responses. During the workshop, you will have the chance to apply knowledge and test concepts through a series of practice exam questions. This workshop is designed to complement the Architecting on AWS course, and we recommend completing that course before attending. Learn more and register for the AWS Certification Exam Readiness Workshop.

Today we are announcing the following features and workflow improvements for Amazon Cognito.

We added the ability to write to the Cognito sync store for Developer Credentials. For example, game developers can run backend processes to give game players prizes by setting a flag in the player’s Cognito profile.

We added a new console interface that allows you to browse and edit identity data in the console.

We simplified the process of setting up roles for your identities. You can now access the starter code from the identity pool dashboard in the Management Console and initialize the Cognito SDK without passing the role Amazon Resource Names (ARNs).

Amazon ElastiCache is now available in the EU (Frankfurt) region. Customers can use the service to create fast, managed, and reliable in-memory key-value stores in the second European AWS location. To learn more about ElastiCache, please visit the product page.

Auto Scaling and Elastic Load Balancing now support ClassicLink, enabling you to launch EC2-Classic instances that are linked to a VPC into an Auto Scaling group. EC2-Classic instances can also be registered with Elastic Load Balancers in the VPC.

AWS Directory Service is now available in the Asia Pacific (Singapore) region. Customers in this region can now create Simple AD and AD Connector directories in their VPCs, and can access the AWS Management Console with their Simple AD directory credentials or with on-premises directory credentials connected through AD Connector. AWS Directory Service is now available in the US East (N. Virginia), US West (Oregon), EU (Ireland), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Singapore) AWS Regions.

The AWS Lambda Preview is now available to all AWS customers in the supported regions with no additional sign-up. Now you can just sign in to the AWS Management Console and get started using AWS Lambda.

You can now easily generate protected HLS streams with Amazon Elastic Transcoder and deliver them with Amazon CloudFront. With content protection for HLS, Elastic Transcoder uses encryption keys supplied by you, or generates keys on your behalf. Both methods use the AWS Key Management Service to protect the security of your keys.

Reserved capacity offers significant savings over the normal price of DynamoDB provisioned throughput capacity. When you buy reserved capacity, you pay a one-time upfront fee and commit to paying for a minimum usage level, at much lower hourly rates.

Auto Recovery is a new Amazon EC2 feature designed to increase instance availability. Supported instances can now be recovered automatically when a system impairment is detected. Auto Recovery restarts your instance on new underlying hardware if needed, while retaining the instance's identity, so you generally do not need to migrate to a new instance.
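
Auto Recovery is driven by a CloudWatch alarm on the instance's system status check whose action is the EC2 recover automate action. A minimal sketch of those alarm parameters, built as a plain dictionary; the instance ID is a hypothetical example, and the alarm thresholds (two failed one-minute checks) are illustrative choices, not required values.

```python
# Sketch: a CloudWatch PutMetricAlarm request that triggers EC2 Auto
# Recovery. The instance ID is a hypothetical example; the action ARN
# uses the documented "automate ... ec2:recover" form for the region.

def build_recovery_alarm(instance_id, region="us-east-1"):
    """Return keyword arguments for PutMetricAlarm that recover an
    instance after two consecutive failed system status checks."""
    return {
        "AlarmName": "recover-" + instance_id,
        "Namespace": "AWS/EC2",
        "MetricName": "StatusCheckFailed_System",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Minimum",
        "Period": 60,               # evaluate the check once per minute
        "EvaluationPeriods": 2,     # require two consecutive failures
        "Threshold": 0,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": ["arn:aws:automate:%s:ec2:recover" % region],
    }

alarm = build_recovery_alarm("i-0abcd1234")
```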

You can now launch C4 instances, the latest generation of Amazon EC2 Compute-optimized instances. C4 instances are designed for compute-bound workloads, such as high-traffic front-end fleets, MMO gaming, media processing, transcoding, and High Performance Computing (HPC) applications.

Amazon Virtual Private Cloud (VPC) ClassicLink allows Amazon Elastic Compute Cloud (EC2) instances in the EC2-Classic platform to communicate with instances in a VPC using private IP addresses. ClassicLink lets you associate VPC Security Groups with instances on EC2-Classic. All the rules of your VPC Security Group will apply to communications between instances in EC2-Classic and instances in the VPC.

AWS CloudHSM is now integrated with Amazon RDS for Oracle. With this new capability, you can let AWS operate your Oracle databases while maintaining control of the master encryption keys. The AWS CloudHSM service helps you meet compliance requirements for data security by making dedicated, single tenant Hardware Security Module (HSM) appliances available within the AWS cloud. This feature allows you to maintain control of the master encryption keys in CloudHSM instances when encrypting Amazon RDS databases with Oracle Transparent Data Encryption (TDE).

Using AWS CloudHSM Classic, you can now maintain sole and exclusive control of the encryption keys you use to manage Oracle Transparent Data Encryption (TDE) in Amazon RDS database instances. AWS CloudHSM Classic offers single-tenant Hardware Security Module (HSM) appliances within the AWS Cloud. You can securely generate, store, and manage the cryptographic keys used for data encryption such that they are accessible only by you. By protecting your keys in hardware and preventing them from being accessed by third parties, AWS CloudHSM Classic can help you comply with the most stringent regulatory and contractual requirements for key protection.

The new Resource APIs for the AWS SDK for PHP are an extension library to the SDK that introduces PHP developers to a more intuitive, object-oriented way of working with AWS resources. While the SDK's traditional request-response or RPC-style clients provide granular and explicit control over the network calls made by the SDK, Resource APIs represent AWS resources as PHP objects with methods and attributes that can be called and accessed directly on the object rather than passing parameters to and from client operations.

The new Resource APIs for AWS SDK for .NET introduce a more intuitive and object-oriented way of working with AWS services for .NET developers. Instead of the traditional request-response coding pattern, Resource APIs represent AWS resources as .NET objects with methods that map to API actions and attributes that are automatically loaded from the service upon first-time access. Also, Resource APIs abstract out pagination in "List*" operations and allow you to easily iterate over large resource sets.

We are pleased to announce that you can now switch between accounts in the AWS Management Console without providing a user name and password each time you switch. To get started, you create an AWS Identity and Access Management (IAM) role in the account you want to switch to. Then, when you want to manage that account, you specify the account/IAM role combination on the “Switch Role” page in the AWS Management Console and you are immediately switched to that account.
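
Switching works by assuming the IAM role in the target account, so that role must trust the account you switch from. A minimal sketch of such a trust policy, assuming 111122223333 is the hypothetical account ID you sign in to first:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

With this trust policy attached to the role in the target account, users from account 111122223333 who have permission to call sts:AssumeRole on that role can use the Switch Role page.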

Amazon RDS now allows you to encrypt your MySQL or PostgreSQL databases using keys you manage through AWS Key Management Service (KMS). On a database instance running with Amazon RDS encryption, data stored at rest in the underlying storage is encrypted, as are its automated backups, read replicas, and snapshots. Encryption and decryption are handled transparently so you don’t have to modify your application to access your data. When you create a new MySQL or PostgreSQL database instance, you can choose to enable encryption via the AWS Management Console or API. You may use the default RDS key automatically created in your account or use a key you created using KMS to encrypt your data. For more information about the use of AWS Key Management Service with Amazon RDS, see the Amazon RDS User's Guide. To learn more about AWS KMS, visit the AWS KMS overview page.
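
As a sketch of the API shape, the request that enables encryption is CreateDBInstance with StorageEncrypted set, plus an optional KmsKeyId. The helper below builds those parameters as a plain dictionary; the identifiers, instance class, and credentials are hypothetical placeholders.

```python
# Sketch: CreateDBInstance parameters with encryption at rest enabled.
# Identifiers, instance class, and credentials are hypothetical placeholders.

def build_encrypted_db_params(identifier, kms_key_id=None):
    """Return keyword arguments for RDS's CreateDBInstance that enable
    encryption of the underlying storage, backups, and snapshots."""
    params = {
        "DBInstanceIdentifier": identifier,
        "Engine": "mysql",                   # PostgreSQL works the same way
        "DBInstanceClass": "db.m3.medium",
        "AllocatedStorage": 100,
        "MasterUsername": "admin",
        "MasterUserPassword": "change-me",   # placeholder only
        "StorageEncrypted": True,
    }
    if kms_key_id:
        # Omit KmsKeyId to use the default RDS key created in your account.
        params["KmsKeyId"] = kms_key_id
    return params

params = build_encrypted_db_params("my-encrypted-db")
```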

You can now receive two-minute warnings before your Amazon EC2 Spot Instances are due to be terminated. Spot Instance termination notices are a new feature that can help you manage Spot interruptions by giving your applications time to prepare for a graceful shut down (e.g., by checkpointing important data to persistent storage).
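
On an instance, the notice appears at the metadata path `spot/termination-time`, which returns HTTP 404 until a termination is scheduled and a timestamp afterwards. The sketch below injects the fetch function so the logic can run anywhere; on EC2 the fetch would GET http://169.254.169.254/latest/meta-data/spot/termination-time.

```python
# Sketch: polling for a Spot termination notice. The fetch callable is
# injected so the logic can be exercised off-instance; on EC2 it would
# GET http://169.254.169.254/latest/meta-data/spot/termination-time
# and return None on HTTP 404.

def termination_pending(fetch):
    """Return the scheduled termination time string if a notice is
    present, or None when the metadata endpoint has no notice yet."""
    value = fetch("spot/termination-time")
    return value or None

# Stubs standing in for the metadata service:
print(termination_pending(lambda path: None))                    # None
print(termination_pending(lambda path: "2015-01-05T18:02:00Z"))  # 2015-01-05T18:02:00Z
```

A worker might poll this every few seconds and, when a timestamp appears, checkpoint its state to persistent storage before the instance is reclaimed.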