In the application, Windows Azure Tables are the least prevalent aspect of storage and primarily address the cross-cutting concerns of logging/diagnostics and maintaining a history of application utilization. In the diagram below, the two tables – jobs and status – are highlighted, and the primary integration points are shown. The status table in particular is accessed from just about every method in the roles running on Azure.

Windows Azure Table Storage Primer

Before I talk about the specific tables in use for the application, let’s review the core concepts of Windows Azure tables.

Size limitations

max of 100TB (this is the maximum for a Windows Azure Storage account, and can be subdivided into blobs, queues, and tables as needed). If 100TB isn’t enough, simply create an additional storage account!

unlimited number of tables per storage account

up to 252 user-defined properties per entity

maximum of 1MB per entity (referencing blob storage is the most common way to overcome this limitation).

up to 5,000 entities per second across a given table or tables in a storage account

up to 500 entities per second in a single table partition (where partition is defined by the PartitionKey value, read on!)

latency of about 100ms when access is via services in the same data center (a code-near scenario)

Structured but non-schematized. This means that tables have a structure, rows and columns if you will, but ‘rows’ are really entities and ‘columns’ are really properties. Properties are strongly typed, but there is no requirement that each entity in a table have the same properties, hence non-schematized. This is a far stretch from the relational world many of us are familiar with, and in fact, Windows Azure Table Storage is really an example of a key-value implementation of NoSQL. I often characterize it like an Excel spreadsheet: the row and column structure is clear, but there’s no rule that says column two has to include integers or row four has to have the same number of columns as row three.
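The spreadsheet analogy can be sketched in a few lines. This is a language-neutral Python illustration of the shape of the data, not the actual storage API; the property names here are invented for the example:

```python
# Two entities in the same "table": they share only the required key
# properties; the rest of their property sets differ freely.
entities = [
    {"PartitionKey": "job-1", "RowKey": "0001", "Status": "queued"},
    {"PartitionKey": "job-1", "RowKey": "0002", "TileSize": 16, "Slices": 4},
]

def validate(entity):
    """An entity only has to carry PartitionKey and RowKey."""
    return "PartitionKey" in entity and "RowKey" in entity

assert all(validate(e) for e in entities)
assert set(entities[0]) != set(entities[1])  # no shared schema enforced
```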

Singly indexed. Every entity in Windows Azure Table Storage must have three properties:

PartitionKey – a string value defining the partition to which the data is associated,

RowKey – a string value which, when combined with the PartitionKey, provides the one unique index for the table, and

Timestamp – a DateTime value, maintained by the system, that records when the entity was last modified.

The selection of the partition key (and row key) is the most important decision you can make regarding the scalability of your table. Data in each partition is serviced by a single processing node, so extensive concurrent reads and writes within the same partition can become a bottleneck for your application. For guidance, I recommend consulting the whitepaper on Windows Azure Tables authored by Jai Haridas, Niranjan Nilakantan, and Brad Calder. Jai also has a number of presentations on the subject from past Microsoft conferences that are available on-line.

In the Storage Client API there are four primary classes you’ll use. Three of these have analogs or extend classes in the WCF Data Services Client Library, which means you’ll have access to most of the goodness of OData, entity tracking, and constructing LINQ queries in your cloud applications that access Windows Azure Table Storage.

If you’ve built applications with WCF Data Services (over the Entity Framework targeting on-premises databases), you’re aware of the first-class development experience in Visual Studio: create a WCF Data Service, point it at your Entity Framework data model (EDM), and all your required classes are generated for you.

It doesn’t work quite that easily for you in Windows Azure Table Storage. There is no metadata document produced to facilitate tooling – and that actually makes sense. Since Azure tables have flexible schemas, how can you define what column 1 is versus column 2 when each row (entity) may differ?! To use the WCF Data Services Client functionality you have to programmatically enforce a schema to create a bit of order over the chaos. You could still have differently shaped entities in a single table, but you’ll have to manage how you select data from that table and ensure that it’s selected into an entity definition that matches it.

The TableServiceContext brokers access between the data source (Windows Azure Table Storage) and the in-memory representation of the entities, tracking changes made so that the requisite commands can be formulated and dispatched to the data source when an update is requested.

You will also typically use the TableServiceContext to help implement a Repository pattern to decouple your application code from the backend storage scheme, which is particularly helpful when testing.

In the Azure Image Processor, the TableAccessor class (through which the web and worker roles access storage) is essentially a Repository interface, and encapsulates a reference to the TableServiceContext. Since this application only has two tables, the context class itself is quite simple:
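Though TableAccessor itself is C#, the repository idea can be sketched in a few lines of Python. The class names here (StatusRepository, InMemoryStatusRepository) are illustrative, not from the application; the point is that roles depend on the abstraction, so tests can substitute a fake for real table storage:

```python
from abc import ABC, abstractmethod

class StatusRepository(ABC):
    """Hypothetical repository interface the roles would code against."""
    @abstractmethod
    def status_for_request(self, request_id):
        ...

class InMemoryStatusRepository(StatusRepository):
    """Test double standing in for the TableServiceContext-backed version."""
    def __init__(self, entries):
        self._entries = entries

    def status_for_request(self, request_id):
        # Mirrors a query filtered on PartitionKey (the requestId).
        return [e for e in self._entries if e["PartitionKey"] == request_id]

repo = InMemoryStatusRepository([
    {"PartitionKey": "req-1", "RowKey": "a", "Message": "started"},
    {"PartitionKey": "req-2", "RowKey": "b", "Message": "started"},
])
assert len(repo.status_for_request("req-1")) == 1
```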

TableServiceEntity

What’s missing here? Well in the code snippet above, it’s the definition of StatusEntry, and of course, there’s the JobEntry class as well. Both of these extend the TableServiceEntity class, which predefines those three required properties of every Windows Azure Table: PartitionKey, RowKey, and Timestamp. Below is the definition for StatusEntry, and you can crack open the code to look at JobEntry.

Note that I’ve set the PartitionKey to be the requestId; that means all of the status entries for a given image processing job are within the same partition. In the Windows Forms client application, this data is queried by requestId, so the choice is logical, and the query will return quickly since it’s being handled by a single processing node associated with the given partition.

Where this choice could be a poor one, though, is if there are rapid-fire inserts into the status table. Assume, for instance, that every line executed in the web and worker role code results in a status update. Since the table is partitioned by the job id, only one processing node services those writes, so a bottleneck may occur and performance suffers. An alternative would be to partition based on a hash of, say, the tick count at which the status message was written, thus fanning out the handling of status messages to different processing nodes.
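That fan-out alternative can be sketched as follows. This is an illustrative Python sketch, not code from the application, and the partition count of 16 is an arbitrary assumption:

```python
import hashlib

def fanout_partition_key(ticks, partitions=16):
    """Derive a partition key from a hash of the tick count so writes
    spread across several partitions instead of one partition per job."""
    digest = hashlib.md5(str(ticks).encode()).hexdigest()
    return f"{int(digest, 16) % partitions:02d}"

# Consecutive writes land in assorted partitions rather than a single one.
keys = {fanout_partition_key(t) for t in range(1000)}
assert len(keys) > 1
```

The trade-off, of course, is on the read side: fetching all the status entries for one job now means querying every partition rather than one.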

In this application, we don’t expect the status table to be a hot spot, so it’s not of primary concern, but I did want to underscore that how your data is used contextually may affect your choice of partitioning. In fact, it’s not unheard of to duplicate data in order to provide different indexing schemes for different uses of that data. Of course, in that scenario you bear the burden of keeping the data in sync to the degree necessary for the successful execution of the application.

For the RowKey, I’ve opted for a concatenation of the Ticks value and a GUID. Why both? First of all, the combination of PartitionKey and RowKey must be unique, and there is a chance, albeit slim, that two different roles processing a given job will write a message at the exact same tick value. As a result, I brought in the GUID to differentiate the two. The Ticks value also enforces the default sort order, so that (more-or-less) the status entries appear in order. This will certainly be the case for entries written from a given role instance, but clock drift across instances could result in out-of-order events. For this application, exact order is not required, but if it is for you, you’ll need to consider an alternative synchronization mechanism, or better yet (in the world of the cloud) reconsider whether that requirement is really a ‘requirement’.
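A quick sketch of that key scheme (Python for illustration; the 19-digit zero padding is my assumption, since keys are compared as strings and the padding keeps lexicographic order aligned with numeric tick order):

```python
import uuid

def make_row_key(ticks):
    """Zero-pad Ticks so string comparison matches numeric order, then
    append a GUID to break ties when two writers share a tick value."""
    return f"{ticks:019d}_{uuid.uuid4()}"

keys = [make_row_key(t) for t in range(634500000000000000, 634500000000000005)]
assert keys == sorted(keys)      # default sort order is chronological
assert len(set(keys)) == 5       # GUID suffix guarantees uniqueness
```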

The clientId is currently a SID based on the execution of the Windows Forms client, but would be extensible to any token, such as an e-mail address that might be used in an OAuth-type scenario. The RowKey is a concatenation of a Ticks value and the requestId (a GUID). Strictly speaking, the GUID value is enough to guarantee uniqueness – each job has a single entity (row) in the table – but I added the Ticks value to enforce a default sort order, so that when you select all the jobs of a given client they appear in chronological order versus GUID order (which would be non-deterministic).

The requested operation is not implemented on the specified resource.
RequestId:ddf8af86-521e-4c5e-b817-2b3a9c07007e
Time:2011-08-16T00:54:42.7881423Z

Windows Azure Table Storage supports only a limited set of property types (Binary, Boolean, DateTime, Double, GUID, Int32, Int64, and String); try to persist an unsupported type, such as Byte, and you’ll get an error like the one above. In my JobEntry class, for instance, that’s why you’ll see the properties TileSize and Slices typed as Int32, whereas through the rest of the application they are of type Byte.
Curiously, I thought the same would be true of the Uri data, which I redefined to String explicitly, but on revisiting this, Uri properties seem to work. I’m assuming there’s some ToString conversion going on under the covers to make them fly.

DataServiceQuery/CloudTableQuery

Now that we’ve got the structure of the data defined and the context to map our objects to the underlying storage, let’s take a look at the query construction. In an excerpt above, you saw the following query:

That bit in the middle looks like a standard LINQ query to grab from the StatusEntries collection only those entities with a given RequestId, and that’s precisely what it is (and more specifically it’s a DataServiceQuery). In Windows Azure Table Storage though, a DataServiceQuery isn’t always sufficient.

When issuing a request to Windows Azure Table Storage (it’s all REST under the covers, remember), you will get at most 1,000 entities returned in response. If there are more than 1,000 entities fulfilling your query, you can get the next batch, but it requires an explicit call along with a continuation token that is passed as part of the header and tells the Azure Storage engine where to pick up returning results. There are actually other instances where continuation tokens enter the picture even with fewer than 1,000 entities, so it’s a best practice to always handle continuation tokens.
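The paging loop looks roughly like this. It's a generic Python sketch with a fake data source standing in for the REST calls; the real Storage Client API (via CloudTableQuery) wraps this pattern for you:

```python
def query_all(fetch_page, query):
    """Accumulate every page, passing the continuation token from each
    response into the next request until no token is returned."""
    results, token = [], None
    while True:
        page, token = fetch_page(query, token)  # <= 1,000 entities per call
        results.extend(page)
        if token is None:
            return results

# Fake data source returning three pages, chained by tokens.
pages = {None: ([1, 2], "t1"), "t1": ([3, 4], "t2"), "t2": ([5], None)}
def fake_fetch(query, token):
    return pages[token]

assert query_all(fake_fetch, "RequestId eq 'req-1'") == [1, 2, 3, 4, 5]
```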

Execute runs the query and traverses all of the continuation tokens to return all of the results. This is a convenient method to use, since it handles the continuation tokens transparently, but it can be dangerous in that it will return all of the results requested from 1 to 1 million (or more)!

From the qry.Execute() line above, you can see I took the easy way out by letting CloudTableQuery grab everything in one fell swoop. In this context, that’s fine, because the number of status entries for a given job will be on the order of 10-50 versus thousands.

That’s pretty much it as far as the table access goes; next time we’ll cover the queues used by the Azure Photo Mosaic application.

This section provides information to help you decide whether to use the Generic Channel or the OData channel of SAP NetWeaver Gateway.

Use the Generic Channel if:

The interfaces for data provisioning on a backend system already match the requirements, for example, if the RFCs already exist. In this case, a complete adaptation on SAP NetWeaver Gateway is feasible, as the adaptation wraps remote calls to the backend and converts data between the RFC’s tables and the SAP NetWeaver Gateway API.

You do not wish to write any code. In this case you can use one of the content generators (BOR, RFC, or Screen Scraping).

Rapid prototyping is required.

Use the OData Channel if:

Required remote interfaces do not exist, that is, adequate RFCs for data provisioning need to be developed on the backend. In this case adaptation would occur on both sides of the RFC: on the backend to create an RFC and on the Gateway to wrap and map that RFC to the GW API.

The remote interface between both components creates strong dependencies for development, versioning, and deployment, and hence increases cost.

You wish to have code only in the backend.

The developer requires more flexibility, for example, they do not have to rely on existing interfaces based on RFC or web services, but can fetch data locally in the Business Suite system.

You want to leverage the lifecycle management benefits, because all objects created can reside in the same software component and follow existing paths.

I often get asked about how we are using Windows Azure internally and under NDA I can share some of the details – but it’s great to be able to point publicly at some of the excellent work that has been going on. And they are genuine technical case studies … hurrah! :-)

How Microsoft IT Deployed a Customer Facing Application to Windows Azure in Six Weeks
Learn how the Microsoft IT Volume Licensing team gained experience with Windows Azure by focusing on a straightforward, isolated customer-facing application that allowed them to architect and redeploy to Windows Azure in six weeks with immediate cost savings.
Technical Case Study


Architecting and Redeploying a Business Critical Application to Windows Azure
The Microsoft IT Volume Licensing team architected and redeployed a business critical application, with full security review and approval, to Windows Azure. The resulting solution delivers lower cost and improved scalability, performance, and reliability. IT Pro Webcast | Technical Case Study

The Visual Studio LightSwitch Team (@VSLightSwitch) described on 8/16/2011 how to “Build useful and user-friendly applications that look and function like professional software” that Wow Your End Users:

Take your application beyond users' expectations

Microsoft Visual Studio LightSwitch 2011 offers tools and options that help users create applications that rival off-the-shelf solutions by using both included features and downloadable extensions.

Quickly and easily create common screens

Use screen templates as a guide to lay out your content, providing a professional look and maintaining consistency among your forms.

Write the code that only you can write

Build your application to do exactly what you need it to do. LightSwitch lets you create custom business logic and rules unique to your business, which means that you can provide a tailored experience that best meets your users' needs.

Extend and customize your application

Take advantage of components, data sources, and services that add functionality. LightSwitch applications are built on a set of extensible templates, so it's easy to share components from one application to the next, or to expand your application’s capabilities using LightSwitch extensions.

Video: Wow your end users

David Mills of the System Center Team announced the final release version of the System Center Operations Manager (SCOM) Monitoring Pack for Windows Azure applications is available in a Hey! You! Get ON My Cloud! post of 8/15/2011:

The final release version of the System Center Monitoring Pack for Windows Azure applications is now available. This monitoring pack enables you to monitor the availability and performance of applications that are running on Windows Azure. Previously available as a release candidate (RC), this new Operations Manager monitoring pack enables an integrated view into Windows Azure based applications running in your public cloud environment.

After configuration, this monitoring pack enables you to:

Discover Windows Azure applications.

Provide status of each role instance.

Collect and monitor performance information.

Collect and monitor Windows events.

Collect and monitor the .NET Framework trace messages from each role instance.

Software licensing is way more complicated than it needs to be, and moving to the cloud, especially using your existing apps, offers a whole new wrinkle.

Microsoft, which is "all in the cloud," wants its customers equally in, and is tweaking its Software Assurance volume licensing program to ease the transition. The basic idea is through "license mobility" you can use what you already paid for to run on your servers and move that software to the cloud.

Analyst firm Directions on Microsoft analyzes license mobility, and their analyst John Cullen spoke to Microsoft watcher and Redmond magazine columnist Mary Jo Foley about all the gory details. Licensing comes easy to Cullen, who for half a decade crafted volume programs in Redmond.

According to Cullen, mobility is an attempt to lure IT to the cloud, but also a lifeline for Software Assurance, which could end up irrelevant as computing shifts off site.

The Microsoft side of the equation is not the most complicated part. The tricky area is continuing to pay Microsoft fees while at the same time negotiating new fees with a hosting company. It is unclear whether, in the final analysis, you'll save or lose money on this deal.

While I may have mentioned the Microsoft side is a bit less hairy than with hosters, it ain't exactly second grade math. Here's an example from Cullen:

"A scenario where you 'win' (licenses let you do more in the cloud than on-premises): We're running one SQL Server workload on a dual proc on-premises server licensed with two SQL Enterprise proc licenses. You can move the workload up to a multitenant hoster with a quad proc box, at times using more proc 'horsepower' than you did when on premises, and yet you only need to allocate ONE of your two SQL Enterprise proc licenses to do so."

Not exactly nuclear science, but not simple either, especially when you have multiple servers, multiple apps and myriad VMs to match. Break out your HP EasyCalc 300 to figure all that out!

Beginning October 1, 2011, we will make two billing related updates to the Windows Azure Platform to increase flexibility and simplicity for our customers.

First, the price of extra small compute will be reduced by 20 percent. Additionally, the compute allocations for all of our offers will be simplified to small compute hours. To deliver additional flexibility to our customers, these hours can also be used for extra small compute at a ratio of 3 extra small compute hours per 1 hour of small compute. Customers can also utilize these hours for other compute sizes at the standard prescribed ratios noted in their rate plan. Finally, current Introductory Special offer customers and customers who sign up for this offer prior to October 1 will receive both 750 extra small compute hours and 750 small compute hours for the months of August and September to ensure maximum value and flexibility in advance of this enhanced offer.

Details on compute allotment by offer can be found below:

                          Prior to October 1          Beginning on October 1
Offer                    Extra Small    Small    Extra Small    Small    Extra Small Equivalent
Introductory Special*        750         750          -          750            2,250
Cloud Essentials             750          25          -          375            1,125
MSDN Professional            750           -          -          375            1,125
MSDN Premium               1,500           -          -          750            2,250
MSDN Ultimate                  -       1,500          -        1,500            4,500

*Note: On August 1, we increased the number of small hours included in this offer from 25 to 750. For the months of August and September, Introductory Special users will get both 750 extra small compute hours and 750 small compute hours. Once small hours and extra-small hours are swappable beginning on October 1, Introductory Special will only include 750 small hours.
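The "Extra Small Equivalent" figures in the table above are just the stated 3:1 ratio applied to each offer's small-hour allotment, which a couple of lines of arithmetic confirm:

```python
RATIO = 3  # extra small compute hours granted per small compute hour

def extra_small_equivalent(small_hours):
    return small_hours * RATIO

# Reproduces the "Extra Small Equivalent" column in the table above.
assert extra_small_equivalent(750) == 2250
assert extra_small_equivalent(375) == 1125
assert extra_small_equivalent(1500) == 4500
```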

We are also simplifying our data transfer meters to utilize only two zones, “Zone 1” and “Zone 2”. The zone meter system will simplify the current meter system that includes multiple regions and separate meters for both standard data transfers and CDN. Data centers in Europe and North America will be reported and charged under Zone 1, and those for the rest of the world will be classified as Zone 2. This change will ease customers’ ability to monitor data transfers and understand billing charges. The price per GB for outbound data transfers will not change. Customers will also gain the flexibility to utilize CDN data transfers against any data transfer amounts included with their offer. For billing periods that overlap September and October, customers will see both the current regional and new Zone 1 and 2 meters on their invoice.

These changes are part of our ongoing commitment to deliver world class services in a simple and flexible way to customers.

This information was delivered to Windows Azure Platform subscribers by email on 8/15/2011.

The new allocation for the Cloud Essentials offer still won’t let me run a high-availability application, which requires two instances (~1,500 hours/month.)

While the hype rages around cloud computing, most cloud implementations go the way of the private cloud and avoid the public clouds for now. Private clouds are exactly what they sound like. Your own instance of SaaS, PaaS, or IaaS that exists in your own data center, all tucked away, protected and cozy. You own the hardware, you can hug your server.

However, what defines a private cloud these days could also mean systems that are remotely hosted but dedicated to a single enterprise, and, in some cases, provided out of a public cloud data center as a virtual private cloud. Thus any cloud infrastructure that's dedicated to a single organization is getting the "private cloud" label. This includes the emerging relabeling of existing enterprise software and hardware solutions, looking to deliver cloud-in-a-box private clouds.

If this sounds confusing, it is. The technology vendors and the hype clearly load up the term "private cloud" with everything and anything. However, the concept of private cloud computing has the potential to bring a huge amount of value to enterprise IT. That is, if we understand the right approach, and how to leverage the right technology to create the building blocks of the private cloud.

Why Go Private?

Most enterprises are eager to leverage cloud computing, but not so eager to place core business processing and critical business data on public clouds. Indeed, there may even be legal restrictions on where data may exist, as we have seen in the financial and health verticals, where some types of data may not exist outside of the enterprise. Or, the risk of compromised or lost data outweighs the value that public cloud computing will bring.

While the regulations are real, most of those who select private over public cloud computing do so around control issues. Many in enterprise IT don't like to give up control of core business systems since that is where they may place their own value. If these systems are controlled and managed by others outside of the enterprise, they feel their value will be diminished. In most cases these are false perceptions.

Security is another reason to go private cloud. Public clouds provide rudimentary security subsystems that have thus far had a good track record. However, most enterprises do not consider public clouds as secure as systems that exist on site or as those remotely hosted but completely under the enterprise's control. While public cloud security is getting better, private clouds do offer fewer security risks.

Finally, there are performance issues with public clouds that include the natural latency of leveraging the Internet. This is a matter of how the applications and systems are designed more than limitations of the clouds, but in some instances these are valid concerns in problem domains with a high amount of data transfer between the data server and the consumer.

First you'll notice that virtualization is not on the list despite the fact that those who leverage virtualization often call clusters of virtualized servers a private cloud. The reality is that virtualization is often used when building a private cloud, and it is described below as a building block. But simple virtualization does not a private cloud make, and you can choose to leverage it or not. For example, Google's cloud systems do not leverage virtualization but Amazon's AWS does. …

David continues with definitions of the items in the preceding list and describes “Building Blocks of Private Cloud.” He concludes:

Best Practices

While private clouds are still very new in our world, some best practices are beginning to emerge around how to define, design, and implement a private cloud.

The first best practice is to focus on the requirements before you begin your journey to a private cloud solution. Many tasked to deploy private clouds often skip the requirements, and thus take a shot in the dark around the best architecture and technology requirements, and thus they often miss the mark. As a rule, make sure to move from the requirements, to the architecture, and then to the solution. While the lure of a private cloud-in-a-box is sometimes too difficult to resist, most solutions require a bit more complex planning process to deliver the value.

Also recommended is the use of service oriented architecture (SOA) approaches around the definition and architecture of private clouds. Many find that the use of SOA concepts, which can deliver solutions as sets of services that can be configured into solutions, is a perfect match for those who design, build, and deploy private clouds.

The second best practice is to define the business value of the private cloud before the project begins. There should be a direct business benefit that is gained from this technology. Many private cloud deployments will cost many millions of dollars, and will thus draw questions from management. You need to be prepared to provide solid answers as to the ROI.

The final best practice is to work in small increments. While it may seem a good idea to fill half the data center with your new private cloud ... you'll need the capacity at some point right? Not now. You should only create private cloud instances with the capacity requirements for the next year. If you've designed your private cloud right, and have leveraged the right vendors, increasing capacity should be as easy as adding additional servers as needed.

In Your Future?

Private clouds are really a direct copy of the efficiency of public cloud computing architectures, repurposed for internal use within enterprises. The benefits are somewhat different, as is the technology, architecture, and the way private clouds are deployed. In many respects private clouds are just another internal system, but it's the patterns of use where the value of private clouds really shines through, including access to shared resources that can be allocated on-demand.

Challenges that exist include the confusion around the term "private cloud," which is overused simply as way to push an existing software or hardware product as something that's now "a cloud," and thus relevant and cool. This cloud washing has been going on for some time with everything from disk drives, printers, and scanners being positioned within the emerging space of the private cloud as "clouds."

The only way to counter this confusion is to stick to our guns in terms of what a private cloud is, including its attributes and building blocks as discussed in this article. Without a clear understanding of the concept of a private cloud, and the best practices and approaches to build a private cloud, it won't provide the value we expect.

IT must deal with an increasing number of regulations, many of which come with stiff legal and financial penalties for noncompliance. As cloud computing comes on the scene, it's no wonder that many in IT push back on its use, which in many instances forces you to give up direct control of systems that have to be maintained with these regulations in mind. As one client put it, "Why would I let somebody who does not work here get me arrested?"

But there's another, better way to think about this issue. There is no legal reason why the systems that have to maintain compliance can't exist in the cloud. In fact, it could be better to have some of those systems in the cloud. Unfortunately, many in IT don't see the possibility because of nightmares about a cloud provider's mistake leading to big trouble.

The trouble with regulations is that they constantly change, and thus need to be managed as if they were a consistently shifting set of users and/or business requirements. This affects how security subsystems function and how information is tracked around the interpretation of government or legal mandates. Therefore, many hundreds of IT shops figure out ways to maintain compliance, perhaps not all resulting in the same solutions -- and that means mistakes, inconsistencies, and wasted effort.

That's where cloud computing provides an opportunity. In many instances, the ability to comply with existing regulations or keep up with changing regulations can be outsourced to a cloud computing provider that can solve these problems for all subscribers. For example, a provider could offer a type of encryption that's now a government mandate or log transactions in specific ways to meet the letter of the law.

It's much cheaper and perhaps safer to use cloud providers for many of the services required of you to maintain compliance in your industry. Such centrally managed compliance based on the same rules is more effective and efficient. There are already some examples of this cloud-based compliance today, such as in industry-specific cloud services for health care, finance, and government.

More of these features should be provided from cloud services precisely because they can be centrally managed. That means better consistency and assurance about your actual compliance -- and less work to get there.

Calling all race fans: Don’t miss the live chat, “Andretti Live & Global” on Ortsbo, powered by Windows Azure, with three generations of the legendary Andretti racing family – Mario, Michael and Marco – Saturday, August 27, 2011 at 9:00 am PT. During the live chat, fueled by Ortsbo’s Live & Global platform, the Andretti racing family will chat live with racing fans and media around the world in up to 53 languages.

A division of Intertainment Media, Ortsbo allows users around the world to communicate with family, friends and colleagues by enabling them to break down language and cultural barriers through an easy to use, language-centric interface.

Ortsbo’s new HTML alpha platform offers users the ability to connect to Facebook, MSN and Google Talk; other social networks will be added quickly. The HTML version of Ortsbo, combined with the power of Windows Azure, allows both commercial and consumer use of Ortsbo without the need to download a plug-in, facilitating real time translated chat for virtually all worldwide browser-enabled devices.

Presented by Andretti Autosport, together with INDYCAR and Ortsbo, the chat will be broadcast live to fans around the world from Infineon Raceway in Sonoma, Calif., which was the site of Marco Andretti's first win.

In September we will start to deliver monthly workshops on the Windows Azure Platform to help Microsoft partners who are developing software products and services and would like to explore the relevance and opportunities presented by the Windows Azure Platform for Cloud Computing.

Overview:

The workshops are designed to help partners such as yourself understand what the Windows Azure Platform is, how it is being used today, what resources are available and to drill into the individual technologies such as SQL Azure and Windows Azure. The intention is to ensure you leave with a good understanding of if, why, when and how you would take advantage of this exciting technology plus answers to all (or at least most!) of your questions.

Who should attend:

These workshops are aimed at technical decision makers including CTOs, Technical Directors, senior architects and developers. Attendees should be from companies who create software products or services used by other organisations. For example Independent Software Vendors.

There are a maximum of 12 spaces per workshop and one space per partner.

Format:

This format is designed to encourage discussion and feedback and ensure you get any questions you have about the Windows Azure platform answered. There will be the opportunity for more detailed one to one conversations over lunch and into the afternoon.

Topics covered will include:

Understanding Microsoft’s Cloud Computing Strategy

Just what is the Windows Azure platform?

Exploring why software product authors should be interested in the Windows Azure Platform

Understanding the Windows Azure Platform Pricing Model

How partners are using the Windows Azure platform today

Getting started building solutions that utilise the Windows Azure Platform

Registration is 9:30 for a 10am start. There will be lunch at around 1pm after which the formal part of the workshop will finish. The good news is the Microsoft team will remain to continue the discussion in a more informal format.

The new features include a new AWS Explorer, support for multiple AWS accounts and identities, new editors for Amazon S3, Amazon SNS, and Amazon SQS, a new SimpleDB query editor, remote debugging for Elastic Beanstalk environments, and support for creating connections to databases hosted on Amazon RDS.

Here's a tour. The AWS Explorer displays all of your AWS resources in a single hierarchy:

You can expand any service node to see what's inside:

You can now add multiple AWS accounts or IAM user credentials, and you can easily activate any one of them as needed. You can now manage development, test, staging, and production services within a single session, using IAM users to control access to each:

You can now view and edit the contents of any of your S3 buckets:

You can also take a look at any of your SQS queues:

You can also view your SNS topics and subscriptions:

You can query any of your SimpleDB domains:

You can debug an Elastic Beanstalk application from within Eclipse. The toolkit will even automatically open up the proper remote debugging port for you:

You can also connect to RDS Database Instances:

The newest AWS Toolkit for Eclipse can be downloaded here. We've also put together a brand new version of the Getting Started Guide.

The dual Web role application has been running in Microsoft's South Central US (San Antonio) data center since September 2009. I believe it is the oldest continuously running Windows Azure application.

About Me

I'm a Windows Azure Insider, a retired Windows Azure MVP, the principal developer for OakLeaf Systems and the author of 30+ books on Microsoft software. The books have more than 1.25 million English copies in print and have been translated into 20+ languages.

Full disclosure: I make part of my livelihood by writing about Microsoft products in books and for magazines. I regularly receive free evaluation software from Microsoft and press credentials for Microsoft Tech•Ed and PDC. I'm also a member of the Microsoft Partner Network.