Microsoft VP Scott Guthrie announced a range of updates to Windows Azure that fill in platform gaps while leapfrogging market leader AWS in one particular area. The new database export service provides a much-needed backup capability – albeit with controversial pricing – and the updated Traffic Manager delivers a cross-region load balancing experience that appears superior to what AWS offers.

One commonly requested feature we’ve heard has been the ability for customers to perform recurring, fully automated exports of a SQL Database to a Storage account. Starting today this is now a built-in feature of Windows Azure. You can now export transactionally consistent copies of your SQL Databases, in an automated recurring way, to a .bacpac file in a Storage account using any schedule you wish to define.

This new export service can be scheduled to run as frequently as once a day, and users can specify how many days to keep each export. In addition, new databases can be created from any of the exported copies.
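The retention setting can be pictured as a simple pruning rule. The sketch below is illustrative only: the dates are invented, and the `retention_days` parameter is a hypothetical stand-in for the service's "days to keep each export" setting.

```python
from datetime import date, timedelta

def exports_to_delete(export_dates, today, retention_days):
    """Return the export dates that fall outside the retention window.

    export_dates: a date for each stored .bacpac export
    retention_days: how many days of exports to keep (hypothetical knob
    mirroring the service's "days to keep each export" setting)
    """
    cutoff = today - timedelta(days=retention_days)
    return [d for d in export_dates if d < cutoff]

# Ten daily exports, pruned with a seven-day window
dates = [date(2013, 7, 20) + timedelta(days=i) for i in range(10)]
stale = exports_to_delete(dates, today=date(2013, 7, 29), retention_days=7)
```

With ten daily exports and a seven-day window, the two oldest exports are flagged for deletion.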

Windows Azure SQL Database is a managed service for running SQL Server databases without administering the underlying servers. However, the service has been missing sophisticated administrative capabilities such as automated backup and restore. While there have been manual ways to copy databases, and reference material for fault-tolerance planning, this new functionality gives Windows Azure customers some relief when defining disaster recovery plans. However, the design of this solution comes at a cost, literally. Guthrie explained the architecture and its impact on the user’s bill.

When an automated export is performed, Windows Azure will first do a full copy of your database to a temporary database prior to creating the .bacpac file. This is the only way to ensure that your export is transactionally consistent (this database copy is then automatically removed once the export has completed). As a result, you will be charged for this database copy on the day that you run the export. Since databases are charged by the day, if you were to export every day, you could in theory double your database costs. If you run every week then it would be much less.

On top of paying for the temporary database that facilitates the backup process, customers also pay storage costs for the backup itself and network bandwidth charges (if the database resides in a different region than the Windows Azure Storage account). Commenters on Guthrie’s own post expressed displeasure at the high costs associated with this feature, calling it “disappointing” and “unreasonable.” The pricing model for this service differs from that of the Amazon Relational Database Service (RDS). Amazon RDS – also a managed database platform – provides significantly more backup and restore functionality at a much lower price. An RDS database has backups enabled by default and supports point-in-time restore. AWS doesn’t charge for the backup service itself, and doesn’t charge for storage as long as the backups total no more than 100% of the size of the provisioned database.
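Guthrie's warning that daily exports could double database costs is easy to verify with back-of-the-envelope arithmetic. The sketch below uses an invented $1/day database rate, not a real Windows Azure price, to show how the per-export copy charge adds up.

```python
def export_overhead(exports_per_month, daily_db_rate):
    """Extra monthly charge from the temporary database copy.

    Each export makes a full copy of the database that is billed for
    the day it exists, so the overhead is one day's database charge
    per export. The rate is illustrative, not a real price.
    """
    return exports_per_month * daily_db_rate

RATE = 1.0                                   # hypothetical $1/day database charge
base_monthly = 30 * RATE                     # 30 billed days for the database itself
daily_exports = export_overhead(30, RATE)    # exporting every day
weekly_exports = export_overhead(4, RATE)    # exporting roughly once a week
```

At the hypothetical rate, daily exports add $30 on top of the $30 base charge, doubling the bill, while weekly exports add only $4.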

Microsoft also revealed a new “premium” tier of Windows Azure SQL Database that uses reserved capacity to provide predictable performance for cloud applications. Guthrie pointed out three key reasons customers would purchase this tier.

High Peak Load – An application that requires a lot of CPU, Memory, or IO to complete its operations. For example, if a database operation is known to consume several CPU cores for an extended period of time, it is a candidate for using a Premium database.

Many Concurrent Requests – Some database applications service many concurrent requests. The normal Web and Business Editions in SQL Database have a limit of 180 concurrent requests. Applications requiring more connections should use a Premium database with an appropriate reservation size to handle the maximum number of needed requests.

Predictable Latency – Some applications need to guarantee a response from the database in minimal time. If a given stored procedure is called as part of a broader customer operation, there might be a requirement to return from that call in no more than 20 milliseconds 99% of the time. This kind of application will benefit from a Premium database to make sure that dedicated computing power is available.
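The latency requirement in that last example can be checked with a percentile calculation. This is a minimal sketch using the nearest-rank method and made-up sample data, not anything specific to SQL Database.

```python
import math

def p99(latencies_ms):
    """99th-percentile latency using the nearest-rank method."""
    ordered = sorted(latencies_ms)
    rank = math.ceil(0.99 * len(ordered)) - 1  # index of the nearest-rank sample
    return ordered[rank]

# 100 hypothetical calls: 99 complete in 5 ms, one straggler takes 40 ms
samples = [5.0] * 99 + [40.0]
meets_target = p99(samples) <= 20.0  # the "20 ms, 99% of the time" requirement
```

With a single slow call out of 100, the 99th percentile is still 5 ms and the target is met; a second slow call would push the percentile to 40 ms and break it.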

This service, available now in preview at a discounted price, offers two database reservation sizes. The P1 size provides 1 CPU core, 150 IOPS, 8 GB of memory, and up to 2,000 active sessions. The larger P2 size delivers 2 CPU cores, 300 IOPS, 16 GB of memory, and up to 4,000 active sessions. Microsoft has published whitepapers to help customers select and manage a Premium database. In comparison, a SQL Server database in Amazon RDS can have up to 10,000 “provisioned IOPS” for databases that need consistent performance.

One area where Microsoft is establishing a strong position is in enabling geo-distributed, highly available web applications. The Windows Azure Traffic Manager – which has been around for nearly two years, but is just now available in the Windows Azure Portal – offers a sophisticated load balancer with more functionality than AWS Elastic Load Balancing (ELB). Unlike ELB, which only supports round-robin load balancing across availability zones within a single region, the Windows Azure Traffic Manager lets users direct traffic to cloud services in any Windows Azure region. Microsoft also supports three distinct routing methods for a given Traffic Manager profile: Performance, Failover, and Round Robin. The Performance method routes traffic to the closest geographic location hosting the application, the Failover method routes traffic to a single data center while keeping another as backup, and the Round Robin method distributes load evenly across all the included cloud services. To be fair, AWS offers the aggregate of this capability through a combination of ELB and the Amazon Route 53 (DNS) service, but Microsoft has simplified it, giving developers a single, easy-to-access location for configuring cloud services and the policies for routing traffic to them.
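The three routing methods can be illustrated with a small selection sketch. The endpoint names, health flags, and latencies below are invented, and the logic is a simplification of what Traffic Manager actually does at the DNS level.

```python
# Hypothetical endpoints: (name, is_healthy, latency_ms_from_client)
ENDPOINTS = [
    ("us-east", True, 40),
    ("eu-west", True, 110),
    ("asia-east", False, 190),   # currently failing health checks
]

def performance(endpoints):
    """Performance: route to the lowest-latency healthy endpoint."""
    healthy = [e for e in endpoints if e[1]]
    return min(healthy, key=lambda e: e[2])[0]

def failover(endpoints):
    """Failover: route to the first healthy endpoint in priority order."""
    for name, is_healthy, _ in endpoints:
        if is_healthy:
            return name
    raise RuntimeError("no healthy endpoint available")

def round_robin(endpoints, request_number):
    """Round Robin: spread requests evenly over healthy endpoints."""
    healthy = [e[0] for e in endpoints if e[1]]
    return healthy[request_number % len(healthy)]
```

All three methods skip the unhealthy endpoint; they differ only in how they choose among the healthy ones.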

Finally, Microsoft made a pair of improvements to its recently announced Auto Scale service. First, Windows Azure now supports Auto Scaling for its mobile-backend-as-a-service. Guthrie explained how Mobile Services scales up based on need, and resets itself every morning.

When this feature is enabled, Windows Azure will periodically check the daily number of API calls to and from your Mobile Service and will scale up by an additional unit if you are above 90% of your API quota (until reaching the set maximum number of instances you wish to enable).

At the beginning of each day (UTC), Windows Azure will then scale back down to the configured minimum. This enables you to minimize the number of Mobile Service instances you run – and save money.
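The scale-up rule and daily reset Guthrie describes reduce to a small amount of logic. The quota figures and unit counts below are invented for illustration; they are not real Mobile Services limits.

```python
def scale_up_check(api_calls_today, quota_per_unit, current_units, max_units):
    """Add one unit once usage passes 90% of the current capacity's
    daily API quota, never exceeding the configured maximum."""
    daily_quota = quota_per_unit * current_units
    if api_calls_today > 0.9 * daily_quota and current_units < max_units:
        return current_units + 1
    return current_units

def daily_reset(configured_minimum):
    """At the start of each UTC day, return to the configured minimum."""
    return configured_minimum

# Hypothetical day: 95,000 calls against a 100,000-call quota per unit
units = scale_up_check(95000, 100000, current_units=1, max_units=3)
```

Crossing the 90% threshold grows the service one unit at a time, and the reset each UTC morning keeps the instance count, and the bill, from ratcheting upward permanently.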

The Windows Azure Auto Scale service also supports rules tied to the depth of Windows Azure Service Bus Queues. If a queue gets too full – which may indicate that there are too few backend servers online to process the message backlog – Auto Scale can spin up new Virtual Machines or Cloud Services to handle the workload.
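A queue-depth rule like the one described can be sketched as follows. The messages-per-instance capacity and the instance bounds are assumptions for illustration, not actual Auto Scale defaults.

```python
import math

def instances_for_queue(queue_length, messages_per_instance,
                        min_instances, max_instances):
    """Size the backend from Service Bus queue depth: assume each
    instance can drain a fixed backlog, then clamp to the configured
    bounds. All thresholds here are hypothetical."""
    if queue_length <= 0:
        return min_instances
    needed = math.ceil(queue_length / messages_per_instance)
    return max(min_instances, min(needed, max_instances))
```

For example, a backlog of 2,500 messages with an assumed 1,000-message-per-instance capacity calls for three instances, while an empty queue falls back to the configured minimum.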
