What the AWS Summit Chicago Updates mean for you

Amazon just wrapped up their AWS Chicago Summit, and they’ve dropped a few important notes that developers should be aware of. As I speculated previously, this year Amazon is choosing to focus on improving existing services, of which they arguably already have too many, instead of launching new ones.

AWS Elastic Beanstalk is a great service that automatically manages auto-scaling, provisions load balancers, and can even provision databases for your application. It sits a step below Lambda, allowing developers to bring almost any application, in any programming language (through the use of Docker), without having to worry about managing servers or even thinking about infrastructure.
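For a single-container Docker deployment, Beanstalk looks for a Dockerrun.aws.json file describing the image to run. A minimal sketch, with a placeholder image name and port:

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "my-registry/my-app:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": "8080" }
  ]
}
```

Beanstalk pulls the named image and routes traffic from its load balancer to the exposed container port, so the application itself never has to know about the surrounding infrastructure.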

Because it still runs on traditional EC2 instances that are started automatically, the system does occasionally need to perform updates. In the past, to upgrade to a new AMI, developers had to create a new environment, verify it, swap the URLs, confirm everything still worked, and terminate the old environment. Every one of those steps had to be performed manually by a developer through the AWS console or API.

Next, Amazon allowed developers to update an environment with just one click. When a new AMI became available, a developer could click “change” on the environment and choose the most recent AMI. Amazon would automatically launch new instances, verify they were working, then terminate the old ones. While significantly better, this still required manual interaction, even for critical security patches. A developer not paying attention could easily miss an update and leave a system vulnerable to attack.

Today, Amazon announced “Managed Platform Updates”. Effectively, the same level of automatic updates and patches that an RDS environment has is now available for every Beanstalk instance. This means developers can specify a maintenance window, when the servers are allowed to be upgraded, and have Amazon automatically apply patches and updates. This makes Beanstalk fully managed, just like RDS but for an entire application stack.
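One way to turn this on is an .ebextensions option-settings file in your application bundle. A sketch, assuming the aws:elasticbeanstalk:managedactions namespaces (exact option names may vary by platform version):

```yaml
option_settings:
  aws:elasticbeanstalk:managedactions:
    ManagedActionsEnabled: true
    PreferredStartTime: "Sun:02:00"   # weekly maintenance window (UTC)
  aws:elasticbeanstalk:managedactions:platformupdate:
    UpdateLevel: patch                # apply patch-level updates only
```

With this in place, Beanstalk applies qualifying platform updates on its own during the chosen window, rather than waiting for a developer to click through the console.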

Anyone who has ever worked with PHP, Apache, or MySQL web applications knows how quickly vulnerabilities can appear and become massively widespread. Security and vulnerability testing is standard practice for these kinds of applications precisely because they are so widely used, which also makes them quick targets for attackers. Every type of system is vulnerable, and it’s nearly impossible to keep up with all of the various security updates manually.

To solve this problem, many developers rely on third-party software that automatically checks their systems for vulnerabilities, similar to how consumers run virus scanning software to protect against known threats. Amazon Inspector is the official AWS product for automatically detecting common security vulnerabilities, as announced by the security community, in your web application. Since it’s managed by AWS, you can simply point it at your application and have it check against the known vulnerabilities that may impact your service.

Today, Amazon announced that Inspector has entered general availability, which means anyone can use it. It also means it’s officially supported by Amazon and considered “Production Ready”.

Amazon Cognito is a beast of a service, allowing developers to build effectively “serverless” backend systems that connect directly to Amazon’s services, without even needing backend code running in Lambda. The power of Cognito is that developers can grant temporary, limited credentials to a front-end application (mobile, desktop, web) that sits on an untrusted client system. Those credentials need to be very specific in what they allow, and must assume the client is not secure (meaning the end-user may have tampered with it). Cognito’s advantage is that developers grant credentials for very specific purposes and can enforce that access with IAM Roles.
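As a sketch of how narrow those credentials can be, the IAM policy attached to a Cognito role can scope DynamoDB reads to rows keyed by the caller's own identity, using the cognito-identity.amazonaws.com:sub policy variable (the table ARN here is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:Query"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/UserData",
      "Condition": {
        "ForAllValues:StringEquals": {
          "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
        }
      }
    }
  ]
}
```

Even if an end-user extracts the temporary credentials from the client, the policy only lets them read their own partition of the table.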

However, Cognito was always difficult to use: developers had to manage users on the backend or tie into a third-party login system like Google+. While this worked, it didn’t really allow developers to fully manage users directly in Cognito; there was always the need to store a user record in something like DynamoDB before granting any credentials to the client.

With the new “Your User Pool” concept, this is now completely solved within Cognito itself. It allows you to specify what information you need from a user and process signup and identity management entirely within Cognito. This means there is no more need to store a separate “User” record in DynamoDB, since all of those properties can be stored within the user database in Cognito. You can even specify custom attributes, like the type of user or the user’s regional currency. Best of all, it’s by far the easiest way to require multi-factor authentication (MFA) for your end users, which is very important in highly secure applications such as banking, finance, and medical services.
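A minimal sketch of what such a pool definition might look like through boto3's cognito-idp client. The helper below only builds the create_user_pool request (the pool name and the custom attribute are made up for illustration); the actual API call is left commented out since it requires AWS credentials:

```python
# Build a CreateUserPool request with required MFA and a custom attribute.
# Names here are illustrative, not from the original post.

def build_user_pool_request(pool_name):
    return {
        "PoolName": pool_name,
        "MfaConfiguration": "ON",            # require MFA for every user
        "AutoVerifiedAttributes": ["email"],
        "Schema": [
            {"Name": "email",
             "AttributeDataType": "String",
             "Required": True},
            # Custom attributes come back prefixed with "custom:".
            {"Name": "regional_currency",
             "AttributeDataType": "String",
             "Mutable": True},
        ],
    }

request = build_user_pool_request("MyAppUsers")
# import boto3
# boto3.client("cognito-idp").create_user_pool(**request)
```

Once the pool exists, signup, login, and MFA challenges all happen against Cognito directly, with no separate user table to maintain.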

Popular Posts

Ever wonder how sites like battle.net support things like this in Google Chrome?

Well I did, so I did a little bit of digging. It turns out Google Chrome supports an open standard called Open Search. This format is relatively simple, and very easy to add to your own site. I just added it to some of our systems in under 5 minutes.

Adding OpenSearch to your site is incredibly simple: you just add a link tag to your index HTML page, plus a small XML file that it points to. The link tag looks like this:
<link rel="search" type="application/opensearchdescription+xml" href="http://my-site.com/opensearch.xml" title="MySite Search" />
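The XML file itself follows the OpenSearch description format. A minimal sketch, with the site name and URL as placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>MySite Search</ShortName>
  <Description>Search MySite</Description>
  <Url type="text/html" template="http://my-site.com/search?q={searchTerms}"/>
</OpenSearchDescription>
```

Chrome substitutes the user's query for {searchTerms} in the template, which is all it needs to offer tab-to-search for your site.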

For a while, I have been creating command line tools provided right with boto which I used to manage AWS. Recently, others have become interested in these tools as well, and I've seen several other contributors adding to these tools to make them even more useful to others. One recent submission by Ales Zoulek added some nice features to my list_instances command, which I use on a regular basis to list out the instances that are currently active for my account in EC2.

Amazon now lets you add Tags to EC2 objects such as Instances and Snapshots. This allows you to actually "Name" your EC2 instance, as well as add some metadata that could be used for AMI initialization, etc. Ales added the ability to list these tags by name within the list_instances command line application:

Last week, Amazon announced the launch of a new product, DynamoDB. Within the same day, Mitch Garnaat quickly released support for DynamoDB in Boto. I quickly worked with Mitch to add on some additional features, and work out some of the more interesting quirks that DynamoDB has, such as the provisioned throughput, and what exactly it means to read and write to the database.

One very interesting and confusing part that I discovered was how Amazon actually measures this provisioned throughput. When creating a table (or at any time in the future), you set up a provisioned amount of "Read" and "Write" units individually. At a minimum, you must have at least 5 Read and 5 Write units provisioned. What isn't as clear, however, is that read and write units are measured in terms of 1KB operations. That is, if you're reading a single value that's 5KB, that counts as 5 Read units (same with Write). If you choose to operate in eventually consistent mode, you'r…
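The accounting described above can be sketched in a few lines: item sizes round up to the nearest 1KB unit per strongly consistent read (the sizes below are illustrative):

```python
import math

def read_units(item_size_kb):
    """Read units consumed by one strongly consistent read of an item."""
    # Each read unit covers up to 1KB, so sizes round up,
    # and even a tiny item costs a full unit.
    return max(1, math.ceil(item_size_kb))

print(read_units(0.5))  # → 1 (anything under 1KB still costs a full unit)
print(read_units(5))    # → 5 (a 5KB value counts as 5 Read units)
```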