With Amazon EC2 support for CORS requests, you can now build rich two-tier web applications to manage your instances, VPCs, and more using the AWS SDK for JavaScript in the Browser. Check out our API documentation for details about how to use the API.

We hope you are excited to use the Amazon EC2 API directly from the browser. We’re eager to know what you think, so leave us a comment or tweet about it @awsforjs.

Amazon CloudFront allows you to use signed URLs to restrict access to content. This allows you to use CloudFront to securely serve private content, or content intended only for selected users. Read more about how CloudFront signed URLs work.

This article describes how to generate Amazon CloudFront signed URLs in Node.js.

Amazon Cognito simplifies the authentication workflow. This article describes authenticating the SDK in the browser using Amazon Cognito and supported public identity providers like Google, Facebook, and Amazon.

Step 1 and Step 2 outline registering your application with a public identity provider, and creating a Cognito identity pool. These steps typically need to be performed only once.

Step 3 and Step 4 describe the authentication workflow of a client application using a public identity provider with Amazon Cognito.

Step 1: Set up a public identity provider

Amazon Cognito supports Facebook, Google, Amazon, and any other OpenID Connect compliant provider. As a first step, register your application with a public identity provider such as Login with Amazon, Facebook, or Google.

Step 2: Create a Cognito Identity Pool

To begin using Amazon Cognito you will need to set up an identity pool. An identity pool is a store of user identity data specific to your account. The easiest way to set up an identity pool is to use the Amazon Cognito console.

The New Identity Pool wizard will guide you through the configuration process. When creating your identity pool, make sure that you enable access to unauthenticated identities. At this time, you can also configure any public identity providers that you set up in Step 1.

The wizard will then create authenticated and unauthenticated roles for you with very limited permissions. You can edit these roles later using the IAM console. Note that Amazon Cognito will use these roles to grant authenticated and unauthenticated access to your resources, so scope them accordingly.

Step 3: Starting with Unauthenticated Access to Resources

You may want to grant unauthenticated users read-only access to some resources. These permissions should be configured in the IAM role for unauthenticated access (the role created in Step 2).
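As an illustration, the unauthenticated role’s permissions policy might grant read-only access to public objects like this (the bucket name and prefix are placeholders for your own resources):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::examplebucket/public/*"
    }
  ]
}
```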

Step 4: Switching to Authenticated Access

You can also use the public identity provider configured in Step 1 to provide authenticated access to your resources.

When a user of your application authenticates with a public identity provider, the response contains a login token that must be supplied to Amazon Cognito in exchange for temporary credentials.

For Facebook and Amazon, this token is available at the access_token property of the response data. For Google and any other OpenID provider, this token is available at the id_token property of the response.

Refreshing credentials

After a user has authenticated with a public identity provider, you will need to update your credential provider with the login token from the authentication response.

// access_token received in the authentication response
// from Facebook
creds.params.Logins = {};
creds.params.Logins['graph.facebook.com'] = access_token;
// Explicitly expire credentials so they are refreshed
// on the next request.
creds.expired = true;

The credential provider will refresh credentials when the next request is made.

Persisting authentication tokens

In most cases, the SDKs of public identity providers have built-in mechanisms for caching the login tokens for the duration of the session. For example, the Login with Amazon SDK for JavaScript will cache the access_token and subsequent amazon.Login.authorize() calls will return the cached token as long as the session is valid.

The Facebook SDK for JavaScript exposes a FB.getLoginStatus() method which allows you to check the status of the login session.

The Google APIs Client Library for JavaScript automatically sets the OAuth 2.0 token for your application with the gapi.auth.setToken() method. This token can be retrieved using the gapi.auth.getToken() method.

You can also implement your own caching mechanism for login tokens, if these default mechanisms are insufficient for your use case.

Wrapping up

This article describes how to grant access to your AWS resources by using Amazon Cognito with public identity providers. We hope this article helps you easily authenticate users in your web applications with Amazon Cognito. We’d love to hear more about how you’re using the SDK in your browser applications, so leave us a comment or tweet about it @awsforjs.

The AWS SDK for JavaScript now has support for Amazon S3 Requester Pays buckets.

With Requester Pays buckets, the requester instead of the bucket owner pays the cost of the request and the data download from the bucket. The bucket owner always pays the cost of storing data. This allows bucket owners to share the operational cost of their buckets.

Support for Requester Pays buckets was recently added in version 2.1.19 of the SDK. This article describes how to use Requester Pays buckets with the AWS SDK for JavaScript.

Example

The only valid value for the RequestPayer parameter is requester. If the RequestPayer parameter is not set for a request made on a Requester Pays bucket, then Amazon S3 returns a 403 error and the bucket owner is charged for the request.
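A sketch of what such a request looks like follows; the bucket and key names are placeholders, and the commented-out lines show how the params map would be passed to an AWS.S3 client from the SDK.

```javascript
// Parameters for a GetObject call against a Requester Pays bucket.
// Bucket and Key are placeholders; RequestPayer must be the literal
// string 'requester', confirming you accept the request and transfer charges.
var params = {
  Bucket: 'examplebucket',
  Key: 'data/report.csv',
  RequestPayer: 'requester'
};

// With the aws-sdk package this would be used as:
//   var s3 = new AWS.S3();
//   s3.getObject(params, function (err, data) { /* ... */ });
```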

Conclusion

The AWS SDK for JavaScript supports Requester Pays for all operations on objects. We hope this feature in the SDK allows you to more easily manage your bucket permissions and cost. We’re eager to know what you think, so leave us a comment or tweet about it @awsforjs.

The AWS SDK for JavaScript can be configured to work from behind a network proxy. In browsers, proxy connections are transparently managed, and the SDK works out of the box without any additional configuration. This article focuses on using the SDK in Node.js from behind a proxy.

Node.js itself has no low-level support for proxies, so in order to configure the SDK to work with a proxy, you will need to override the default http.Agent.

This article shows you how to override the default http.Agent with the proxy-agent npm module. Note that some http.Agent modules may not support all proxies. You can visit npmjs.com for a list of available http.Agent libraries that support proxies.
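A sketch of the override follows. The agent is injected as a parameter so the example carries no external dependencies; in a real application `config` would be `require('aws-sdk').config` and `agent` would come from the proxy-agent module, for example `require('proxy-agent')('http://internal.proxy.example.com:8080')` (the proxy URL is a placeholder).

```javascript
// Sketch: point every SDK request at a proxy-aware http.Agent.
// `config` stands in for AWS.config and `agent` for a proxy-agent instance.
function useProxyAgent(config, agent) {
  // The SDK hands httpOptions.agent to Node's http/https modules, so any
  // proxy-capable Agent implementation can be plugged in here.
  config.update({ httpOptions: { agent: agent } });
}
```

Every client created after the update (for example, `new AWS.S3()`) will then route its requests through the proxy.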

Overriding the default http.Agent is simple, and allows you to configure proxy settings that SDK can use. We hope you find this information useful and are able to easily use the SDK in Node.js from behind a proxy!

Today’s release of the AWS SDK for JavaScript (v2.1.0) contains support for a new uploading abstraction in the AWS.S3 service that allows large buffers, blobs, or streams to be uploaded more easily and efficiently, both in Node.js and in the browser. We’re excited to share some details on this new feature in this post.

The new AWS.S3.upload() function intelligently detects when a buffer or stream can be split up into multiple parts and sent to S3 as a multipart upload. This provides a number of benefits:

It enables much more robust upload operations, because if uploading of a single part fails, that individual part can be retried separately without requiring the whole payload to be resent.

Multiple parts can be queued and sent in parallel, allowing for much faster uploads when enough bandwidth is available.

Most importantly, due to the way the managed uploader buffers data in memory using multiple parts, this abstraction does not need to know the full size of the stream, even though Amazon S3 typically requires a content length when uploading data. This enables many common Node.js streaming workflows (like piping a file through a compression stream) to be used natively with the SDK.

A Node.js Example

Let’s see how you can leverage the managed upload abstraction by uploading a stream of unknown size. In this case, we will be piping contents from a file (bigfile) from disk through a gzip compression stream, effectively sending compressed bytes to S3. This can be done easily in the SDK using upload():

A Browser Example

Important Note: In order to support large file uploads in the browser, you must ensure that your CORS configuration exposes the ETag header; otherwise, your multipart uploads will not succeed. See the guide for more information on how to expose this header.

All of this works in the browser too! The only difference here is that, in the browser, we will probably be dealing with File objects instead:
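A hedged browser-side sketch: it assumes `s3` is an AWS.S3 client from the browser build of the SDK and `fileInput` is an `<input type="file">` element; the bucket name is a placeholder. File objects can be passed directly as the Body.

```javascript
// Browser sketch: upload the first file chosen in a file input.
// `s3` stands in for an AWS.S3 client; Bucket is a placeholder.
function uploadSelectedFile(s3, fileInput, done) {
  var file = fileInput.files[0];
  s3.upload({ Bucket: 'myBucket', Key: file.name, Body: file }, done);
}
```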

Tracking Total Progress

In addition to simply uploading files, the managed uploader can also keep track of total progress across all parts. This is done by listening to the httpUploadProgress event, similar to the way you would do it with normal request objects:
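A sketch of the listener follows. It assumes `upload` is the request object returned by `s3.upload(params)`, and `report` is a placeholder for whatever UI code renders the progress.

```javascript
// Sketch: report aggregate progress for a managed upload.
// `upload` stands in for the object returned by s3.upload(params).
function trackProgress(upload, report) {
  upload.on('httpUploadProgress', function (evt) {
    report(evt.loaded, evt.total);
  });
}
```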

Note that evt.total might be undefined for streams of unknown size until the entire stream has been chunked and the last parts are being queued.

Configuring Concurrency and Part Size

Since the uploader provides concurrency and part size management, these values can also be configured to tune performance. In order to do this, you can provide an options map containing the queueSize and partSize to control these respective features. For example, if you wanted to buffer 10 megabyte chunks and reduce concurrency down to 2, you could specify it as follows:
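A sketch of that configuration (the values come from the text above; `params` stands in for your Bucket/Key/Body map):

```javascript
// Sketch: tune the managed uploader for 10 MB parts with at most
// 2 parts in flight at once.
var options = { partSize: 10 * 1024 * 1024, queueSize: 2 };

// With the real SDK the options map is the second argument:
//   s3.upload(params, options, function (err, data) { /* ... */ });
```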

Handling Failures

Finally, you can also control how parts are cleaned up in the managed uploader when a failure occurs using the leavePartsOnError option. By default, the managed uploader will attempt to abort a multipart upload if any individual part fails to upload, but if you would prefer to handle this failure manually (i.e., by attempting to recover more aggressively), you can set leavePartsOnError to true:
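A sketch of that option (`params` again stands in for your Bucket/Key/Body map):

```javascript
// Sketch: keep already-uploaded parts intact on failure so the
// application can decide how to recover.
var options = { leavePartsOnError: true };

// With the real SDK:
//   s3.upload(params, options, function (err, data) {
//     if (err) { /* parts are left intact; retry or abort manually */ }
//   });
```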

AWS re:Invent 2014 concluded on Friday, November 14, 2014; here is a summary of an action-packed, fun-filled week!

New features and services

There were four new services and features announced at the Day 1 Keynote, two of which are available to all customers effective November 12, 2014. These include:

Amazon RDS for Aurora (a service in preview), a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases.

AWS CodeDeploy, a service that automates code deployments to Amazon EC2 instances, thereby eliminating the need for error-prone manual operations.

AWS Key Management Service, a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data, and uses Hardware Security Modules (HSMs) to protect the security of your keys.

AWS Config (a service in preview), a managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance.

There were more new features and services announced at the Day 2 Keynote. These include:

Amazon EC2 Container Service (a service in preview), a high-performance container management service that supports Docker containers and allows you to easily run distributed applications on a managed cluster of EC2 instances.

AWS Lambda (a service in preview), a compute service that runs your code in response to events and automatically manages the compute resources for you, making it easy to build applications that respond quickly to new information.

Breakout sessions

AWS re:Invent is a learning conference at its core, and there were over 200 sessions spread over three days, on a variety of topics ranging from application deployment and management and architecture, to education, gaming, and financial services.

I had the privilege of presenting one such session, “Building Cross-Platform Applications Using the AWS SDK for JavaScript.” The goal of this talk was to highlight the portability of the AWS SDK for JavaScript and how the SDK’s features support cross-platform application development. You can watch the talk and refer to the posted slides.

Separating the application’s business logic from presentation promotes code reuse, and allows you to easily port the application to multiple platforms by changing only the presentation layer.

To demonstrate this, I wrote an application in Node.js and ported it to a Google Chrome extension, as well as a Windows RT application. The source code for this application is available here.

There were many more fantastic presentations at re:Invent 2014, for which the videos, slides, and audio podcasts are available online.

See you again next year

We hope re:Invent 2014 presented an opportunity for you to learn from the many talks and workshops that happened over the course of a week. We will continue innovating on your behalf and will have more exciting stuff to show you, so stay tuned by subscribing to email updates on the latest AWS re:Invent announcements!

Amazon Cognito is a great new service that enables a much easier workflow for authenticating with your AWS resources in the browser. Although web identity federation still works directly with identity providers, using the new AWS.CognitoIdentityCredentials gives you the ability to provide access to customers through any identity provider using the same simple workflow and fewer roles. In addition, you can also now provide access to unauthenticated users without having to embed read-only credentials into your application. Let’s look at how easy it is to get set up with Cognito!

Setting Up Identity Providers

Cognito still relies on third-party identity providers to authenticate the identity of the user logging into your site. Currently, Cognito supports Login with Amazon, Facebook, and Google, though you can also set up developer authenticated identities or OpenID Connect. You can visit the respective links to set up applications with these providers, which you will then use to get tokens for your application as normal. After you have created the applications with the respective providers, you will want to create an identity pool with Amazon Cognito.

Creating an Identity Pool

Cognito’s “identity pool” is the core resource that groups the users in your application. Effectively, each user logging into your application will get a Cognito ID; these IDs are all created and managed from your identity pool. The identity pool needs to know which identity providers it can hand out IDs to, so this is something we configure when creating the pool.

The easiest way to create an identity pool is through the Amazon Cognito console, which offers a good walkthrough of the necessary parameters (the name, your identity provider application IDs, and whether or not you want to support unauthenticated login). In our example we will use the SDK to create an identity pool with support for Amazon logins and unauthenticated access only. We will call the pool “MyApplicationPool”:
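A sketch of the createIdentityPool call follows. It assumes `cognitoidentity` is an AWS.CognitoIdentity client and `amazonAppId` is the application ID you registered with Login with Amazon; both are injected here for illustration.

```javascript
// Sketch: create "MyApplicationPool" with unauthenticated access and
// Login with Amazon as the only supported provider.
function createPool(cognitoidentity, amazonAppId, callback) {
  cognitoidentity.createIdentityPool({
    IdentityPoolName: 'MyApplicationPool',
    AllowUnauthenticatedIdentities: true,
    SupportedLoginProviders: { 'www.amazon.com': amazonAppId }
  }, function (err, data) {
    if (err) return callback(err);
    // data.IdentityPoolId is the ID we use for the rest of the setup.
    callback(null, data.IdentityPoolId);
  });
}
```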

This will print an IdentityPoolId, which we can now use to get a Cognito ID for our user when logging in.

Authenticated vs. Unauthenticated Roles

Now that we’ve configured our pool, we can start talking about logging the user in. There are two ways for a user to log in to your application when you enable unauthenticated access:

Authenticated mode — this happens when a user goes through the regular Amazon, Facebook, or Google login flow. This type of login is considered authenticated because the user has proven their identity to the respective provider.

Unauthenticated mode — this happens when a user attempts to get a Cognito ID for your pool but without having authenticated with an identity provider. If you enabled AllowUnauthenticatedIdentities, this will be allowed in your application. This is useful when your application has features you want to expose to users without requiring login, like, say, being able to view comments in a blog.

The way we separate these two “modes” in AWS is through IAM roles. We are going to create two roles for our application matching these two types of users. First, the authenticated role:

Authenticated Role

The easiest way to create a role is through the IAM Console (click “Create Role”). Let’s create a role called “MyApplication-CognitoAuthenticated” and choose a “Role for Identity Provider Access”, granting access to web identity providers. You should now be in “Step 3: Establish Trust”. This is the most important part; it is where we limit this role only to authenticated Cognito users. Here’s how we do it:

Select Amazon Cognito and enter the Identity Pool Id

Be sure to click Add Conditions to add an extra condition.

Set the following fields:

Condition: “StringEquals”

Key: “cognito-identity.amazonaws.com:amr”

Condition Prefix: “For any value”

Value: “authenticated”

This will permit only authenticated users to use this role. The following example is what your trust policy should look like after clicking Next:
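Under these assumptions (the identity pool ID is a placeholder for the one printed earlier), the trust policy should look roughly like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Federated": "cognito-identity.amazonaws.com" },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "cognito-identity.amazonaws.com:aud": "YOUR_IDENTITY_POOL_ID"
        },
        "ForAnyValue:StringEquals": {
          "cognito-identity.amazonaws.com:amr": "authenticated"
        }
      }
    }
  ]
}
```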

Next, set the correct role permissions for this authenticated user. This step will depend heavily on your application and the actual resources you want to expose to authenticated users. You can use the policy generator here to help you create a policy that is scoped to specific actions and resources.

This should give you the correct authenticated role for use in your application.

Unauthenticated Role

This role should be configured almost like the authenticated role except for a few differences:

We give it a separate name like “MyApplication-CognitoUnauthenticated”

We set the Trust Policy condition value to “unauthenticated” instead of “authenticated”.

The permissions on the role should be much more restrictive, typically exposing only read-only operations (like reading comments in a blog).

Once you’ve created these roles you should be able to copy down both role ARNs. These two ARNs will be used when logging into the application under the respective “modes”.

Logging In

We’ve now created all the resources we need to log in to our application, so let’s see the code used in the AWS SDK for JavaScript to enable this login flow.

Initially, you will likely want all users to login under “unauthenticated” mode. This can be done up front when configuring your application:
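A sketch of that startup configuration follows. The SDK global is injected as a parameter for illustration; the region and identity pool ID are placeholders for your own values.

```javascript
// Sketch: start every visitor in unauthenticated mode.
// `AWS` stands in for the SDK global loaded via script tag.
function configureUnauthenticated(AWS, identityPoolId) {
  AWS.config.region = 'us-east-1';
  AWS.config.credentials = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: identityPoolId
  });
}
```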

Eventually your customer will authenticate with the associated Amazon application. In that case, you will want to update your credentials to reflect the new role and provide the web token. A portion of the Login with Amazon code might look like:
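A sketch of that handoff follows. It assumes the Login with Amazon SDK is loaded as `amazon` and `credentials` is the AWS.CognitoIdentityCredentials instance configured earlier; both are injected here for illustration.

```javascript
// Sketch of the Login with Amazon handoff to Cognito.
function loginWithAmazon(amazon, credentials, done) {
  amazon.Login.authorize({ scope: 'profile' }, function (response) {
    if (response.error) return done(response.error);
    // Hand the access_token to Cognito and force a credential refresh.
    credentials.params.Logins = { 'www.amazon.com': response.access_token };
    credentials.expired = true;
    done(null);
  });
}
```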

Getting the Identity ID

Occasionally, you will need the user’s Cognito identity ID to provide to requests. Since the ID is unique per user, it’s likely you will use it as the user’s identifier in your application. In general, you can read the ID from the identityId property on the credentials object, but you will want to use a recently refreshed credentials object before accessing the property. To verify that the ID is up to date, wrap this access inside the get() call on the credentials object, for example:
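A small sketch of that pattern (the function name is illustrative; `credentials` stands in for AWS.config.credentials):

```javascript
// Sketch: read the Cognito identity ID only after get() confirms the
// credentials (and therefore the ID) are up to date.
function withIdentityId(credentials, callback) {
  credentials.get(function (err) {
    if (err) return callback(err);
    callback(null, credentials.identityId);
  });
}
```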

Wrapping Up

That’s all there is to using Amazon Cognito for authenticated and unauthenticated access inside of your application. Using Cognito gives your application plenty of other benefits, such as the ability to merge a user’s identity if they choose to login with multiple identity providers (a common feature request), as well as the ability to take advantage of the Amazon Cognito Sync Manager to easily sync any user data inside of your application.

Take a look at Cognito login in the AWS SDK for JavaScript and let us know what you think!

AWS re:Invent is just around the corner, and we are excited to meet you.

I will be presenting DEV 306 – Building cross platform applications using the AWS SDK for JavaScript on November 13, 2014. This talk will introduce you to building portable applications using the SDK and outline some differences in porting your application to multiple platforms. You can learn more about the talk here. Come check it out!

We will also be at the AWS Booth in the Expo Hall (map). Come talk to us about how you’re using AWS services, ask us a question, and learn about how to use our many AWS SDKs and tools.

Introducing the AWS SDK for JavaScript Blog

Today we’re announcing a new blog for the AWS SDK for JavaScript. On this blog, we will be sharing the latest tips, tricks, and best practices when using the SDK. We will also keep you up to date on new developments in the SDK and share information on upcoming features. Ultimately, this blog is a place for us to reach out and get feedback from you, our developers, in order to make our SDK even better.

We’re excited to finally start writing about the work we’ve done and will be doing in the future; there’s a lot of content to share. In the meantime, here’s a little primer on the AWS SDK for JavaScript, if you haven’t had the chance to kick its tires.

Works in Node.js and Modern Browsers

The SDK is designed to work seamlessly across Node.js and browser environments. With the exception of a few environment-specific integration points (like streams and file access in Node.js, and Blob support in the browser), we attempt to make all SDK API calls work the same way across all of your different applications. One of my favorite features is the ability to take snippets of SDK code and move them from Node.js to the browser and back with at most a few changes in code.

In the browser, you can use a hosted script tag to install the SDK or build your own version. More details on this can be found in our guide.

Full Service Coverage

The SDK has support for all the AWS services you want to use, and we keep it up to date with new API updates as they are released. Although some services require CORS support to be used from a web page, we are continually working to expand the list of CORS-supported services. You can also take advantage of JavaScript in various local environments (Chrome and Firefox extensions; iOS, Android, WinRT, and other mobile applications) that do not enforce CORS, and develop your AWS-backed applications there today with a custom build of the SDK.

Open Source

Finally, the thing about the SDK that excites me the most is the fact that the entire SDK is openly developed and shared on GitHub. Feel free to check out the SDK code, post issue reports, and even submit pull requests with fixes or new features. Our SDK depends on feedback from our developers, so we love to get reports and pull requests. Send away!

More to Come

We will be posting much more information about the SDK on this blog. We have plenty of exciting things to share with you about new features and improvements to the library. Bookmark this blog and check back soon as we publish more information!