You can now use the AWS Import/Export service to import and export data into and out of Amazon S3 buckets in the Asia Pacific (Singapore) Region via portable storage devices with eSATA, USB 2.0, and SATA interfaces.

You can ship us a device loaded with data and we'll copy it to the S3 bucket of your choice. Or you can send us an empty device and we'll copy the contents of one or more buckets to it. Either way, we'll return the device to you. We routinely work with devices that store up to 8 TB, and can work with larger devices by special arrangement. We can handle data stored on NTFS, ext2, ext3, and FAT32 file systems.

Our customers in the US and Europe have used this service to take on large-scale data migration, content distribution, backup, and disaster recovery challenges.

For example, Malaysia-based AMP Radio Networks runs a web platform for 9 FM radio stations. They host this platform and the associated audio streaming on EC2 and make use of CloudWatch, Auto Scaling, and CloudFront. AWS Import/Export allows AMP Radio Networks to transfer huge amounts of data directly from their facilities to AWS. They save time and money, and can bring new content online more quickly than they could if they had to upload it to the cloud in the traditional way.

Do you own the rights to some interesting structured or semi-structured data? If so, you may find AWS-powered WebServius to be of interest. They make it easy for you to monetize your data by providing you the ability to sell access via pay-per-use data access APIs and a bulk download process.

Their system is optimized for use with data stored in Amazon SimpleDB. WebServius handles every aspect of the monetization process including developer signup, API key management, usage metering, quota enforcement, usage-based pricing, and billing. In short, it takes care of all of the messy and somewhat mundane details that you must address before you can start making money from your data. You can access the data in three forms (normal and simplified XML formats or JSON).

As a data vendor you have a lot of flexibility with your pricing. You can price your data per row and/or by column. WebServius offers a free plan for low-traffic (up to 50 subscribers or 10,000 calls to the access APIs, rate-limited to 5 calls per second) access and several usage and revenue-based pricing plans for higher traffic and/or paid access to data.

You can see WebServius in action at several sites including Mergent (historical securities pricing, company fundamentals and executives, annual reports, and corporate actions and dividends), Retailigence (retail intelligence and product data), and Compass Marketing Solutions (rich data on over 16 million business establishments in the US).

We've made some improvements to the AWS Portal to make it easier for you to find and access the information that you need to build even more AWS-powered applications. We've added content in several new languages, improved the search features, revamped the discussion forums, and added a number of features to the AWS resource catalogs for AMIs, source code, customer applications, and so forth.

Most of the technical and marketing pages (including the AWS case studies) on the AWS Portal have been translated into French, German, Japanese, and Spanish. You can use the drop-down menu at the top right of each portal page to switch languages:

The portal's brand-new search function is contextual and faceted. You can choose to search the entire AWS site or you can limit your search to just one area using the Search menu:

The search results now include information about where each match was found. You can use the box on the left to further refine your search (searching within results):

The AWS Discussion forum now includes a tag cloud and the editor now supports the use of Markdown (simple text-to-HTML markup). Performance has been markedly improved as well.

The AWS resource catalogs (EC2 AMIs, Sample Code, Developer Tools and so forth) now make use of Amazon S3 and Amazon SimpleDB internally. This work was a prerequisite to some larger and more visible changes that are already on the drawing board:

We have a lot of plans for the AWS Portal in 2011 and beyond, so stay tuned to the blog (or better yet, subscribe to the RSS feed using the big icon at the top right corner).

Here's a tour. There's a new Reserved DB Instance item in the navigation area:

You can see all of your Reserved DB Instances, you can see the active or inactive instances, and you can also filter by name:

You can initiate your purchase by clicking on the Purchase Reserved DB Instances button and making your selection from the form. It is important to note that you are purchasing the instance for use within a particular AWS Region.

Of course, you have the opportunity to confirm your purchase before it is final:

This new UI should make it even easier for you to use the Relational Database Service and to benefit from the Reserved DB Instance pricing. What are you waiting for?

Whereas prior transcoding solutions waited for the entire video to be uploaded, Transloadit runs the encoding process directly on the incoming data stream, using the popular node.js platform to capture each chunk of the file as it is uploaded and piping it directly into the equally popular ffmpeg encoder. Because transcoding is almost always faster than uploading, the video is ready to go shortly after the final block has been uploaded. As Felix noted in his email to me, "Since Ec2 can encode videos much faster than most people can upload them, that essentially cuts the encoding time to 0."
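To sketch the idea (the URL and the encoding flags below are purely illustrative, not Transloadit's actual pipeline), you can feed an incoming stream into ffmpeg's standard input so that encoding begins long before the transfer finishes:

curl -sN https://example.com/incoming-upload | ffmpeg -i pipe:0 -vcodec libx264 encoded.mp4

Because ffmpeg consumes the bytes as they arrive, the encoded output trails the upload by only a small margin.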

Felix told me that they implemented Transloadit using a number of AWS services including EC2, Elastic Load Balancing, Elastic Block Storage, and Amazon S3. They have found that the c1.medium instance type delivers the best price/performance for their application, and are very happy that their Elastic Load Balancer can deliver data to the instances with minimal delay. They are able to deploy data directly to a customer's S3 bucket, and are looking into Multipart Upload and larger objects.

Located in picturesque Jacksonville, Florida, this new edge location enjoys long walks on the beach and responding to requests for content (CloudFront) and IP addresses (Route 53) from requesters in the southeast United States. This location brings the total number of US locations to 10 and the world-wide total to 17.

You don't need to make any changes to your application or your system configuration in order to benefit from this new piece of AWS infrastructure.

Earlier this year I discussed our plans to allow you to run a wide variety of Oracle applications on Amazon EC2 in the near future. The future is finally here; the following applications are now available as AMIs for use with EC2:

Oracle PeopleSoft CRM 9.1 PeopleTools

Oracle PeopleSoft CRM 9.1 Database

Oracle PeopleSoft ELM 9.1 PeopleTools

Oracle PeopleSoft ELM 9.1 Database

Oracle PeopleSoft FSCM 9.1 PeopleTools

Oracle PeopleSoft FSCM 9.1 Database

Oracle PeopleSoft PS 9.1 PeopleTools

Oracle PeopleSoft PS 9.1 Database

Oracle E-Business Suite 12.1.3 App Tier

Oracle E-Business Suite 12.1.3 DB

JD Edwards Enterprise One - ORCLVMDB

JD Edwards Enterprise One - ORCLVMHTML

JD Edwards Enterprise One - ORCLVMENT

The application AMIs are all based on Oracle Linux and run on 64-bit high-memory instances atop Oracle VM. You can use them as-is or you can create derivative versions tuned to your particular needs. We'll start out in one Region and add more in the near future.

As I noted in my original post, you can use your existing Oracle licenses at no additional license cost or you can acquire new licenses from Oracle. We implemented Oracle VM support on Amazon EC2 with hard partitioning so Oracle's standard partitioned processor licensing models apply.

All of these applications are certified and supported by Oracle. Customers with active Oracle Support and Amazon Premium Support will be able to contact either Amazon or Oracle for support.

If you have invested in virtualization to meet IT security, compliance, or configuration management requirements and are now looking at the cloud as the next step toward the future, I've got some good news for you.

VM Import lets you bring existing VMware images (VMDK files) to Amazon EC2. You can import "system disks" containing bootable operating system images as well as data disks that are not meant to be booted.

This new feature opens the door to a number of migration and disaster recovery scenarios. For example, you could use VM Import to migrate from your on-premises data center to Amazon EC2.

You can start importing 32- and 64-bit Windows Server 2008 SP2 images right now (we support the Standard, Enterprise, and Datacenter editions). We are working to add support for other versions of Windows including Windows Server 2003 and Windows Server 2008 R2. We are also working on support for several Linux distributions including CentOS, RHEL, and SUSE. You can even import images into the Amazon Virtual Private Cloud (VPC).

The import process can be initiated using the VM Import APIs or the command line tools. You'll want to spend some time preparing the image before you upload it. For example, you need to make sure that you've enabled remote desktop access and disabled any anti-virus or intrusion detection systems that are installed (you can enable them again after you are up and running in the cloud). Other image-based security rules should also be double-checked for applicability.

The ec2-import-instance command is used to start the import process for a system disk. You specify the name of the disk image along with the desired Amazon EC2 instance type and parameters (security group, Availability Zone, VPC, and so forth) and the name of an Amazon S3 bucket. The command will provide you with a task ID for use in the subsequent steps of the import process.

The ec2-upload-disk-image command uploads the disk image associated with the given task ID. You'll get upload statistics as the bits make the journey into the cloud. The command will break the upload into multiple parts for efficiency and will automatically retry any failed uploads.

The next step in the import process takes place within the cloud; the time it takes will depend on the size of the uploaded image. You can use the ec2-describe-conversion-tasks command to monitor the progress of this step.
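Putting these commands together, a complete system disk import might look something like this (the flag names and the task ID here are illustrative; consult the command line tool documentation for the exact syntax):

C:\> ec2-import-instance windows.vmdk -f VMDK -t m1.large -b my-import-bucket
C:\> ec2-upload-disk-image windows.vmdk -t import-i-abcd1234 -b my-import-bucket
C:\> ec2-describe-conversion-tasks import-i-abcd1234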

When the upload and subsequent conversion are complete you will have a lovely, gift-wrapped EBS-backed EC2 instance in the "stopped" state. You can then use the ec2-delete-disk-image command to clean up.

The ec2-import-volume command is used to import a data disk, in conjunction with ec2-upload-disk-image. The result of this upload process is an Amazon EBS volume that can be attached to any running EC2 instance in the same Availability Zone.
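A data disk import follows the same pattern (again, the flags and the task ID are illustrative rather than exact):

C:\> ec2-import-volume data.vmdk -f VMDK -z us-east-1a -b my-import-bucket
C:\> ec2-upload-disk-image data.vmdk -t import-vol-efgh5678 -b my-import-bucket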

There's no charge for the conversion process. Upload bandwidth, S3 storage, EBS storage, and Amazon EC2 time (to run the imported image) are all charged at the usual rates. When you import and run a Windows server you will pay the standard AWS prices for Windows instances.

As is often the case with AWS, we have a long roadmap for this feature. For example, we plan to add support for additional operating systems and virtualization formats along with a plugin for VMware's vSphere console (if you would like to help us test the plugin prior to release, please let us know at ec2-vm-import-plugin-preview@amazon.com). We'll use your feedback to help us to shape and prioritize our roadmap, so keep those cards and letters coming.

I became aware of DNS30 during the private beta test of Amazon Route 53. I own quite a few personal domains for use by me and my family and I was looking forward to managing all of the DNS entries from one location.

I tried it out and it worked just fine. In order to use DNS30 you must supply it with a set of AWS credentials. I am generally reluctant to advise others to do this because the AWS Account credentials have complete access to all of the AWS resources and API calls.

After a recent hallway conversation with the manager responsible for AWS Identity and Access Management (IAM), I decided to see if I could use IAM to enable DNS30 to use Route 53 but nothing else. It turned out to be really easy: IAM gives me the ability to create users under my AWS Account and to manage the permissions for each of those users.

I installed the IAM command line toolkit and used the iam-usercreate command to create a new user (dns30) and a set of keys (which I copied into a local file for safekeeping):
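If memory serves, the invocation looks like this (the -k option asks the toolkit to generate and print an access key pair for the new user):

C:\> iam-usercreate -u dns30 -k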

IAM users have no permissions by default, so I created a policy (I called it DNS_ACCESS) and attached it to the dns30 user. The policy allows access to all of the Route 53 APIs (route53:*) on all of my Route 53 resources (*); with it in place, the dns30 user can use its credentials to access Route 53, and nothing else.
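The policy can be attached from the command line as well; the invocation is along these lines (the flag names here are from memory, so double-check them against the toolkit's documentation):

C:\> iam-useraddpolicy -u dns30 -p DNS_ACCESS -e Allow -a route53:* -r *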

I then created an account on DNS30 and entered my IAM user credentials (not my AWS Account credentials) when the site asked for them:

I was then able to create a Hosted Zone and populate it with the records needed to host a domain:

I could easily use IAM to create additional users for other applications. I could also create multiple users for the same application, with varied permissions.

To prove to myself that nothing funny was going on behind the scenes, I deleted the DNS_ACCESS policy like this:

C:\> iam-userdelpolicy -u dns30 -p DNS_ACCESS

Then I refreshed the page on DNS30 and received (as expected) an error message:

I also tried to use my dns30 credentials to access my S3 resources using S3Fox and was (as expected) unable to do so.

I hope that you enjoy this brief introduction to IAM (I've got a more detailed post in the works) and that you can put it to use in interesting ways. What do you think?

Colin Percival (developer of Tarsnap) wrote to tell me that the FreeBSD operating system is now running on Amazon EC2 in experimental fashion.

According to his FreeBSD on EC2 blog post, version 9.0-CURRENT of FreeBSD is now available in the US East (Northern Virginia) region and can be run on t1.micro instances. Colin expects to be able to expand to other regions and EC2 instance types over time.

The AMI is stable enough to build and run Apache under light load for several days. FreeBSD 9.0-CURRENT is a bleeding-edge snapshot release. Plans are in place to back-port the changes made to this release to FreeBSD 8.0-STABLE in the future.

Congratulations to Colin and to the rest of the FreeBSD team for making this happen. I have received a number of requests for this operating system over the years and I am happy to see that this community-driven effort has made so much progress.