We’re excited to announce that, as part of version 2012-02-12, we have introduced Table Shared Access Signatures (SAS), Queue SAS and updates to Blob SAS. In this blog, we will highlight usage scenarios for these new features along with sample code using the Windows Azure Storage Client Library v1.7.1, which is available on GitHub.

Shared Access Signatures allow granular access to tables, queues, blob containers, and blobs. A SAS token can be configured to grant specific access rights (such as read, write, update, or delete) to a specific table, key range within a table, queue, blob, or blob container, either for a specified time period or without any limit. The SAS token appears as part of the resource’s URI as a series of query parameters. Prior to version 2012-02-12, Shared Access Signatures could only grant access to blobs and blob containers.

SAS Update to Blob in version 2012-02-12

In the 2012-02-12 version, Blob SAS has been extended to allow unbounded access time to a blob resource instead of the previously limited one hour expiry time for non-revocable SAS tokens. To make use of this additional feature, the sv (signed version) query parameter must be set to "2012-02-12" which would allow the difference between se (signed expiry, which is mandatory) and st (signed start, which is optional) to be larger than one hour. For more details, refer to the MSDN documentation.
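To make the moving parts concrete, here is a rough sketch (in Python, since the idea is language-neutral) of how a SAS token is assembled: a string-to-sign is HMAC-SHA256-signed with the account key and the result is appended to the resource URI as query parameters. The exact field order of the string-to-sign and the parameter set shown are simplified assumptions; consult the REST documentation for the authoritative layout.

```python
import base64
import hashlib
import hmac
import urllib.parse

def make_blob_sas(account_key_b64, canonical_resource,
                  permissions, start, expiry, version="2012-02-12"):
    # String-to-sign: newline-separated fields signed with HMAC-SHA256
    # using the (base64-decoded) account key. The field order here is
    # illustrative; the REST reference defines the authoritative layout.
    string_to_sign = "\n".join(
        [permissions, start, expiry, canonical_resource, "", version])
    key = base64.b64decode(account_key_b64)
    signature = base64.b64encode(
        hmac.new(key, string_to_sign.encode("utf-8"),
                 hashlib.sha256).digest()).decode()
    # The token travels as query parameters appended to the resource URI.
    return urllib.parse.urlencode({
        "sv": version,      # signed version: 2012-02-12 enables > 1 hour expiry
        "st": start,        # signed start (optional in practice)
        "se": expiry,       # signed expiry (mandatory)
        "sp": permissions,  # signed permissions
        "sig": signature,
    })
```

Because the signature covers `se` and `sv`, a holder cannot extend the token's lifetime or reinterpret it under a different version without invalidating the signature.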

Best Practices When Using SAS

The following are best practices to follow when using Shared Access Signatures.

Always use HTTPS when making SAS requests. SAS tokens are sent over the wire as part of a URL, and can potentially be leaked if HTTP is used. A leaked SAS token grants access until it either expires or is revoked.

Use server stored access policies for revocable SAS. Each container, table, and queue can now have up to five server stored access policies at once. Revoking one of these policies invalidates all SAS tokens issued using that policy. Consider grouping SAS tokens so that logically related tokens share the same server stored access policy. Avoid inadvertently reusing revoked access policy identifiers by including a unique string in them, such as the date and time the policy was created.

Don’t specify a start time or allow at least five minutes for clock skew. Due to clock skew, a SAS token might start or expire earlier or later than expected. If you do not specify a start time, then the start time is considered to be now, and you do not have to worry about clock skew for the start time.

Limit the lifetime of SAS tokens and treat them as leases. Clients that need more time can request an updated SAS token.

Be aware of the version: starting with version 2012-02-12, SAS tokens contain a new version parameter (sv). sv defines how the various parameters in the SAS token must be interpreted, as well as the version of the REST API to use when executing the operation. This implies that services responsible for providing SAS tokens to client applications should generate tokens for the version of the REST protocol that their clients understand. Make sure clients understand the REST protocol version specified by sv when they are given a SAS to use.

Table SAS

Table SAS allows account owners to grant SAS token access by defining the following restrictions on the SAS policy:

1. Table granularity: users can grant access to an entire table (tn) or to a table range defined by a table (tn) along with a partition key range (startpk/endpk) and row key range (startrk/endrk).

To better understand the range to which access is granted: the permission is specified as a range of rows from (startpk, startrk) through (endpk, endrk), with keys compared lexicographically.

2. Access permissions (sp): users can restrict the token to a subset of table operations: query (r), add (a), update (u), and delete (d).

3. Time range (st/se): users can limit the SAS token access time. Start time (st) is optional but Expiry time (se) is mandatory, and no limits are enforced on these parameters. Therefore a SAS token may be valid for a very large time period.

4. Server stored access policy (si): users can either generate offline SAS tokens, where the policy permissions described above are part of the SAS token, or they can choose to store specific policy settings associated with a table. These policy settings are limited to the time range (start time and end time) and the access permissions. A stored access policy provides additional control over generated SAS tokens: policy settings can be changed at any time without re-issuing a new token. In addition, it makes it possible to revoke SAS access without changing the account’s key.
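Conceptually, the table range restriction behaves like a lexicographic tuple comparison on (PartitionKey, RowKey). The following hypothetical sketch (not part of the storage library; for illustration only) shows the containment check a token's range implies:

```python
def entity_in_sas_range(pk, rk, startpk=None, startrk=None,
                        endpk=None, endrk=None):
    """Does entity (pk, rk) fall within the range granted by a Table SAS?

    Keys are compared lexicographically as (PartitionKey, RowKey) tuples,
    mirroring the ordering the table service uses. Omitted bounds leave
    that side of the range open.
    """
    if startpk is not None:
        # An omitted startrk starts at the beginning of the partition.
        lower = (startpk, startrk if startrk is not None else "")
        if (pk, rk) < lower:
            return False
    if endpk is not None:
        # An omitted endrk covers the whole end partition; "\uffff" is a
        # sentinel approximating "greater than any practical row key".
        upper = (endpk, endrk) if endrk is not None else (endpk, "\uffff")
        if (pk, rk) > upper:
            return False
    return True
```

For example, a token with startpk="P1", startrk="R2", endpk="P1", endrk="R9" grants access only to rows R2 through R9 within partition P1.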

For more information on the different policy settings for Table SAS and the REST interface, please refer to the SAS MSDN documentation.

Though a non-revocable Table SAS can grant access for a long period of time, we highly recommend that you always limit its validity to the minimum required amount of time, in case the SAS token is leaked or the holder of the token is no longer trusted. In that case, the only way to revoke access is to rotate the account’s key that was used to generate the SAS, which also revokes any other SAS tokens already issued and currently in use. In cases where long-lived access is needed, we recommend that you use a server stored access policy as described above.

Most Shared Access Signature usage falls into two different scenarios:

A service granting access to clients, so those clients can access their parts of the storage account or access the storage account with restricted permissions. Example: a Windows Phone app for a service running on Windows Azure. A SAS token would be distributed to clients (the Windows Phone app) so it can have direct access to storage.

A service owner who needs to keep production storage account credentials confined to a limited set of machines or Windows Azure roles that act as a key management system. In this case, a SAS token is issued on an as-needed basis to worker or web roles that require access to specific storage resources. This allows services to reduce the risk of their keys being compromised.

Along with the different usage scenarios, SAS token generation usually follows the models below:

A SAS Token Generator or producer service responsible for issuing SAS tokens to applications, referred to as SAS consumers. The SAS token generated is usually for a limited amount of time, to control access. This model usually works best with the first scenario described earlier, where a phone app (SAS consumer) requests access to a certain resource by contacting a SAS generator service running in the cloud. Before the SAS token expires, the consumer contacts the service again for a renewed SAS. The service can refuse to produce further tokens for certain applications or users, for example when a user’s subscription to the service has expired. Diagram 1 illustrates this model.

Diagram 1: SAS Consumer/Producer Request Flow

The communication channel between the application (SAS consumer) and SAS Token Generator could be service specific where the service would authenticate the application/user (for example, using OAuth authentication mechanism) before issuing or renewing the SAS token. We highly recommend that the communication be a secure one in order to avoid any SAS token leak. Note that steps 1 and 2 would only be needed whenever the SAS token approaches its expiry time or the application is requesting access to a different resource. A SAS token can be used as long as it is valid which means multiple requests could be issued (steps 3 and 4) before consulting back with the SAS Token Generator service.
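The renew-before-expiry behavior in steps 1 and 2 reduces to a simple client-side check. A minimal sketch, assuming a five-minute skew allowance in line with the best practices above:

```python
from datetime import datetime, timedelta, timezone

def should_renew(token_expiry, clock_skew=timedelta(minutes=5), now=None):
    """Return True once the SAS token is within the skew window of expiry.

    The consumer calls this before each storage request; while it returns
    False, the cached token can keep being reused (steps 3 and 4) without
    contacting the SAS Token Generator service.
    """
    now = now or datetime.now(timezone.utc)
    return now >= token_expiry - clock_skew
```

A consumer would check `should_renew(expiry)` before each request and, when it returns True, go back to the generator service for a fresh token before retrying.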

A one-time generated SAS token tied to a signed identifier controlled as part of a stored access policy. This model works best in the second scenario described earlier, where the SAS token could either be part of a worker role configuration file, or issued once by a SAS token generator/producer service where the maximum access time could be provided. In case access needs to be revoked, or the permissions and/or duration changed, the account owner can use the Set Table ACL API to modify the stored policy associated with the issued SAS token. …

With the preview of Windows Azure Virtual Machines, we have two new special types of blobs stored in Windows Azure Storage: Windows Azure Virtual Machine Disks and Windows Azure Virtual Machine Images. And of course we also have the existing preview of Windows Azure Drives. In the rest of this post, we will refer to these as storage, disks, images, and drives. This post explores what drives, disks, and images are and how they interact with storage.

Virtual Hard Drives (VHDs)

Drives, disks, and images are all VHDs stored as page blobs within your storage account. There are actually several slightly different VHD formats: fixed, dynamic, and differencing. Currently, Windows Azure only supports the format named ‘fixed’. This format lays the logical disk out linearly within the file format, such that disk offset X is stored at blob offset X. At the end of the blob, there is a small footer that describes the properties of the VHD. Everything stored in the page blob adheres to the standard VHD format, so you can take this VHD and mount it on your on-premises server if you choose to. Often, the fixed format wastes space, because most disks have large unused ranges in them. However, we store our ‘fixed’ VHDs as page blobs, which are a sparse format, so we get the benefits of both the ‘fixed’ and ‘dynamic’ (expandable) formats at the same time.

Uploading VHDs to Windows Azure Storage

You can upload your VHD into your storage account to use it for either PaaS or IaaS. When you are uploading your VHD into storage, you will want to use a tool that understands that page blobs are sparse, and only uploads the portions of the VHD that have actual data in them. Also, if you have dynamic VHDs, you want to use a tool that will convert your dynamic VHD into a fixed VHD as it is doing the upload. CSUpload will do both of these things for you, and it is included as part of the Windows Azure SDK.

Persistence and Durability

Since drives, disks, and images are all stored in storage, your data will be persisted even when your virtual machine has to be moved to another physical machine. This means your data gets to take advantage of the durability offered by the Windows Azure Storage architecture, where all of your non-buffered and flushed writes to the disk/drive are replicated 3 times in storage to make it durable before returning success back to your application.

Drives (PaaS)

Drives are used by the PaaS roles (Worker Role, Web Role, and VM Role) to mount a VHD and assign a drive letter. There are many details about how you use these drives here. Drives are implemented with a kernel mode driver that runs within your VM, so your disk IO to and from the drive in the VM will cause network IO to and from the VM to your page blob in Windows Azure Storage. The following diagram shows the driver running inside the VM, communicating with storage through the VM’s virtual network adapter.

PaaS roles are allowed to mount up to 16 drives per role.

Disks (IaaS)

When you create a Windows Azure Virtual Machine, the platform will attach at least one disk to the VM for your operating system disk. This disk will also be a VHD stored as a page blob in storage. As you write to the disk in the VM, the changes to the disk will be made to the page blob inside storage. You can also attach additional disks to your VM as data disks, and these will be stored in storage as page blobs as well.

Unlike for drives, the code that communicates with storage on behalf of your disk is not within your VM, so doing IO to the disk will not cause network activity in the VM, although it will cause network activity on the physical node. The following diagram shows how the driver runs in the host operating system, and the VM communicates through the disk interface to the driver, which then communicates through the host network adapter to storage.

There are limits to the number of disks a virtual machine can mount, varying from 16 data disks for an extra-large virtual machine, to one data disk for an extra small virtual machine. Details can be found here.

IMPORTANT: The Windows Azure platform holds an infinite lease on all the page blobs that it considers disks in your storage account, so that you don’t accidentally delete the underlying page blob, its container, or the storage account while the VM is using the VHD. If you want to delete the underlying page blob, the container it is within, or the storage account, you will need to detach the disk from the VM first, as shown here:

And then select the disk you want to detach and then delete:

Then you need to remove the disk from the portal:

and then you can select ‘delete disk’ from the bottom of the window:

Note: when you delete the disk in the portal, you are not deleting the VHD page blob in your storage account. You are only disassociating the blob from the disks that can be attached to Windows Azure Virtual Machines. After you have done all of the above, you will be able to delete the page blob from your storage account, using the Windows Azure Storage REST APIs or a storage explorer.

Images (IaaS)

Windows Azure uses the concept of an “Image” to describe a template VHD that can be used to create one or more Virtual Machines. Windows Azure and some partners provide images that can be used to create Virtual Machines. You can also create images for yourself by capturing an image of an existing Windows Azure Virtual Machine, or you can upload a sysprep’d image to your storage account. An image is also in the VHD format, but the platform will not write to the image. Instead, when you create a Virtual Machine from an image, the system will create a copy of that image’s page blob in your storage account, and that copy will be used for the Virtual Machine’s operating system disk.

IMPORTANT: Windows Azure holds an infinite lease on all the page blobs that it considers images in your storage account, as well as on their blob container and the storage account itself. Therefore, to delete the underlying page blob, you need to delete the image from the portal by going to the “Virtual Machines” section and clicking on “Images”:

Then you select your image and press “Delete Image” at the bottom of the screen. This will disassociate the VHD from your set of registered images, but it does not delete the page blob from your storage account. At that point, you will be able to delete the page blob from your storage account.

Temporary Disk

There is another disk present in all web roles, worker roles, VM Roles, and Windows Azure Virtual Machines, called the temporary disk. This is a physical disk on the node that can be used for scratch space. Data on this disk will be lost when the VM is moved to another physical machine, which can happen during upgrades, patches, and when Windows Azure detects something is wrong with the node you are running on. The sizes offered for the temporary disk are defined here.

The temporary disk is the ideal place to store your operating system’s pagefile.

IMPORTANT: The temporary disk is not persistent. You should only write data onto this disk that you are willing to lose at any time.

Billing

Bandwidth

We recommend mounting drives from within the same location (e.g., US East) as the storage account they are stored in, as this offers the best performance and will not incur bandwidth charges. Disks are required to be used within the same location as the storage account in which they are stored.

Transactions

When connected to a VM, disk IOs from both drives and disks will be satisfied from storage (unless one of the layers of cache described below can satisfy the request first). Small disk IOs will incur one Windows Azure Storage transaction per IO. Larger disk IOs will be split into smaller IOs, so they will incur more transaction charges. The breakdown for this is:

Drives

IO < 2 megabytes will be 1 transaction

IO >= 2 megabytes will be broken into transactions of 2MBs or smaller

Disks

IO < 128 kilobytes will be 1 transaction

IO >= 128 kilobytes will be broken into transactions of 128KBs or smaller

In addition, operating systems often perform a little read-ahead for small sequential IOs (typically less than 64 kilobytes), which may result in larger sized IOs to drives/disks than the IO size being issued by the application. If the prefetched data is used, then this can result in fewer transactions to your storage account than the number of IOs issued by your application.
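The split rules above amount to ceiling division of the IO size by the chunk size. A quick sketch of the transaction count per IO:

```python
import math

def storage_transactions(io_bytes, resource="disk"):
    """Transactions billed for a single IO, per the split rules above.

    Drives split IOs into 2 MB chunks; disks split into 128 KB chunks.
    Each chunk incurs one Windows Azure Storage transaction.
    """
    chunk = 2 * 1024 * 1024 if resource == "drive" else 128 * 1024
    return max(1, math.ceil(io_bytes / chunk))
```

For example, a single 512 KB IO against a disk is billed as four 128 KB transactions, while the same IO against a drive fits in one 2 MB transaction.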

Storage Capacity

Windows Azure Storage stores page blobs, and thus VHDs, in sparse format, and therefore only charges for data within the VHD that has actually been written during the life of the VHD. For this reason, we recommend using ‘quick format’, because it avoids storing large ranges of zeros within the page blob.

It is also important to note that when you delete files within the file system used by the VHD, most operating systems do not clear or zero these ranges, so you can still be paying capacity charges within a blob for the data that you deleted via a disk/drive.

Caches, Caches, and more Caches

Drives and disks both support on-disk caching and some limited in-memory caching. Many layers of the operating system as well as application libraries do in-memory caching as well. This section highlights some of the caching choices you have as an application developer.

Caching can be used to improve performance, as well as to reduce transaction costs. The following table outlines some of the caches that are available for use with disks and drives. Each is described in more detail below the table.

FileStream (applies to both disks and drives)

The .NET Framework’s FileStream class will cache reads and writes in memory to reduce IOs to the disk. Some of the FileStream constructors take a buffer size, and others choose the default 8 KB buffer size for you. You cannot specify that the class use no memory cache, as the minimum buffer size is 8 bytes. You can force the buffer to be written to disk by calling the FileStream.Flush(bool) API.
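FileStream itself is .NET-specific, but the buffer-then-flush behavior is easy to observe in any language. A small Python sketch of the same idea, using an 8 KB user-space buffer analogous to FileStream’s default:

```python
import os
import tempfile

def buffered_write_demo():
    """Write through a user-space buffer, then flush to disk explicitly."""
    path = os.path.join(tempfile.mkdtemp(), "demo.bin")
    f = open(path, "wb", buffering=8192)   # 8 KB buffer, like FileStream's default
    f.write(b"x" * 100)                    # sits in the buffer, not yet on disk
    size_before_flush = os.path.getsize(path)
    f.flush()                              # push the buffer to the OS...
    os.fsync(f.fileno())                   # ...and the OS cache to the media
    size_after_flush = os.path.getsize(path)
    f.close()
    return size_before_flush, size_after_flush
```

Until the flush, the 100 bytes live only in the application's buffer, so the file on disk is still empty; the flush plus fsync plays the role of FileStream.Flush(true).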

Operating System Caching (applies to both disks and drives)

The operating system itself will do in-memory buffering for both reads and writes, unless you explicitly turn it off when you open a file using FILE_FLAG_WRITE_THROUGH and/or FILE_FLAG_NO_BUFFERING. An in-depth discussion of the in-memory caching behavior of Windows is available here.

Windows Azure Drive Caches

Drives allow you to choose whether to use the node’s local temporary disk as a read cache, or to use no cache at all. The space for a drive’s cache is allocated from your web role or worker role’s temporary disk. This cache is write-through, so writes are always committed immediately to storage. Reads will be satisfied either from the local disk, or from storage.

Using the drive local cache can improve sequential IO read performance when the reads ‘hit’ the cache. Sequential reads will hit the cache if:

The data has been read before. The data is cached on the first time it is read, not on first write.

The cache is large enough to hold all of the data.

Access to the blob can often deliver a higher rate of random IOs than the local disk. However, these random IOs will incur storage transaction costs. To reduce the number of transactions to storage, you can use the local disk cache for random IOs as well. For best results, ensure that your random writes to the disk are 8KB aligned, and the IO sizes are in multiples of 8KB.

Windows Azure Virtual Machine Disk Caches

When deploying a Virtual Machine, the OS disk has two host caching choices:

Read/Write (Default) – write back cache

Read - write through cache

When you set up a data disk on a virtual machine, you get three host caching choices:

None (Default) – no caching

Read – write through cache

Read/Write – write back cache

The read cache is stored both on disk and in memory in the host OS. The write cache is stored in memory in the host OS.

WARNING: If your application does not use FILE_FLAG_WRITE_THROUGH, the write cache could result in data loss because the data could be sitting in the host OS memory waiting to be written when the physical machine crashes unexpectedly.

Using the read cache will improve sequential IO read performance when the reads ‘hit’ the cache. Sequential reads will hit the cache if:

The data has been read before.

The cache is large enough to hold all of the data.

The cache’s size for a disk varies based on instance size and the number of disks mounted. Caching can only be enabled for up to four data disks.

No Caching for Windows Azure Drives and VM Disks

Windows Azure Storage can provide a higher rate of random IOs than the local disk on your node that is used for caching. If your application needs to do lots of random IOs, and throughput is important to you, then you may want to consider not using the above caches. Keep in mind, however, that IOs to Windows Azure Storage do incur transaction costs, while IOs to the local cache do not.

To disable your Windows Azure Drive cache, pass ‘0’ for the cache size when you call the Mount() API.

For a Virtual Machine data disk, the default behavior is to not use the cache. If you have enabled the cache on a data disk, you can disable it using the Update Data Disk service management API, or the Set-AzureDataDisk PowerShell cmdlet.

For a Virtual Machine operating system disk the default behavior is to use the cache. If your application will do lots of random IOs to data files, you may want to consider moving those files to a data disk which has the caching turned off.

Looking for your first cloud computing project? Chances are you're considering a very small, very low-risk application to create on a public PaaS or IaaS cloud provider.

I get the logic: It's a low-value application. If the thing tanks or your information is hacked, no harm, no foul. However, I assert that you could move backward by hedging your bets, retrenching further and further into the data center and missing out on the game-changing advantages of the cloud.

You need to bite the bullet, update that résumé (in case your superiors don't agree), and push your strategic corporate data to the public cloud.

Using the public cloud lets you leverage this data in new ways, thanks to new tools -- without having to pay millions of dollars for new infrastructure to support the database processing. When you have such inexpensive capacity, you'll figure out new ways to analyze your business using this data, and that will lead to improved decisions and -- call me crazy -- a much better business. Isn't that the objective?

Of course, the downside is that your data could be carted off by the feds in a data center raid or hacked through an opening your cloud provider forgot to close. Right? Wrong. The chances of those events (or similar events) occurring are very slim. Indeed, your data is more vulnerable where it now exists, but you have a false sense of security because you can hug your servers.

If you're playing with the public clouds just to say you're in them, while at the same time avoiding any potential downside, you're actually doing more harm than good. Cloud technology has evolved in the last five years, so put aside those old prejudices and assumptions. Now is the time to take calculated risks and get some of your data assets out on the cloud. Most of the Global 2000 will find value there.

After all the years of working on enterprise Analysis Services implementations, there were definitely some raised eyebrows when I had started running around with my MacBook Air on the merits of Hadoop – and Hadoop in the Cloud for that matter.

The reason for my personal interest in Big Data isn’t just my web analytics background from my days at digiMine or Microsoft adCenter. In fact, it was spurred by my years of working on exceedingly complex DW and BI implementations during the awesome craziness as part of the SQL Customer Advisory Team.

Saying all of this, after this fun ride, I am both excited and sad to announce that I will be leaving the SQL Customer Advisory Team and joining the SQL BI organization. It’s a pretty cool opportunity, as I will get to live the theme that Hadoop and BI are better together by helping to build some internet-scale Hadoop and BI systems – and all within the Cloud! I will reveal more later, eh?!

Meanwhile, I will still be blogging and running around talking about Hadoop and BI – so keep on pinging me, eh?! And yes, SSAS Maestros is still very much going to continue – in its new home as part of the SQL BI Org.

Two weeks ago we announced many new upcoming Windows Azure features that are now in public preview. One of these is the new Windows Azure Virtual Machine (VM), which makes it very easy to deploy dedicated instances of SQL Server in the Windows Azure cloud. You can read more about this new capability here. SQL Server running in a Windows Azure VM can serve as the backing database to both cloud-based applications, as well as on-premise applications, much like Windows Azure SQL Database (formerly known as “SQL Azure”). This capability is our implementation of “Infrastructure as a Service” (IaaS).

Windows Azure SQL Database, which is a commercially released service, is our implementation of “Platform as a Service” (PaaS) for a relational database service in the cloud. The introduction of new IaaS capabilities for Windows Azure leads to an important question: when should I choose Windows Azure SQL Database, and when should I choose SQL Server running in a Windows Azure VM when deploying a database to the cloud? In this blog post, we provide some early information to help customers understand some of the differences between the two options, and their relative strengths and core scenarios. Each of these choices might be a better fit than the other depending on what kind of problem you want to solve.

The key criteria in determining which of these two cloud database choices will be the better option for a particular solution are:

Full compatibility with SQL Server box product editions

Control vs. cost

Database scale-out requirements

In general, the two options are optimized for different purposes:

SQL Database is optimized to reduce costs to the minimum amount possible. It provides a very quick and easy way to build a scale-out data tier in the cloud, while lowering ongoing administration costs since customers do not have to provision or maintain any virtual machines or database software.

SQL Server running in a Windows Azure VM is optimized for the best compatibility with existing applications and for hybrid applications. It provides full SQL Server box product features and gives the administrator full control over a dedicated SQL Server instance and cloud-based VM.

Compatibility with SQL Server Box Product Editions

From a features and compatibility standpoint, running SQL Server 2012 (or earlier edition) in a Windows Azure VM is no different than running full SQL Server box product in a VM hosted in your own data center: it is full box product, and the features supported just depend on the edition of SQL Server you deploy (note that AlwaysOn availability groups are targeted for support at GA but not the current preview release; and that Windows Clustering will not be available at GA). The advantage of running SQL Server in a Windows Azure VM is that you do not need to buy or maintain any infrastructure whatsoever, leading to lower TCO.

Existing SQL Server-based applications will “just work” with SQL Server running in a Windows Azure VM, as long as you deploy the correct edition. If your application requires full SQL Server Enterprise Edition, your existing applications will work as long as you deploy SQL Server Enterprise Edition to the Windows Azure VM(s). This includes features such as SQL Server Integration Services, Analysis Services and Reporting Services. No code migration will be required, and you can run your applications in the cloud or on-premise. Using the new Windows Azure Virtual Network, also announced this month, you will even be able to domain-join your Windows Azure VM running SQL Server to your on-premise domain(s).

This is critical to enabling development of hybrid applications that can span both on-premises and off-premises under a single corporate trust boundary. Also, VM images with SQL Server can be created in the cloud from stock image galleries provided within Windows Azure, or created on-premises from existing deployments and uploaded to Windows Azure. Once deployed, VM images can be moved between on-premises and the cloud with SQL Server License mobility, which is provided for those customers that have licensed SQL Server with Software Assurance (SA).

Windows Azure SQL Database, on the other hand, does not support all SQL Server features. While a very large subset of features is supported (and this set is growing over time), it is not full SQL Server Enterprise Edition, and differences will always exist, based on the different design goals for SQL Database pointed out above. A guide is available on MSDN that explains the important feature-level differences between SQL Database and the SQL Server box product. Even with these differences, however, tools such as SQL Server Management Studio and SQL Server Data Tools can be used with SQL Database as well as with SQL Server running on-premises or in a Windows Azure VM.

In a nutshell, running SQL Server in a Windows Azure VM will most often be the best route to migrate existing applications and services to Windows Azure given its compatibility with the full SQL Server box product, and for building hybrid applications and services spanning on-premises and the cloud under a single corporate trust boundary. However, for new cloud-based applications and services, SQL Database might be the better choice for reasons discussed further below.

Control vs. Cost

While SQL Server running in a Windows Azure VM offers the same database features as the box product, SQL Database aims, as a service, to minimize costs and administration overhead. With SQL Database, for example, you do not pay for compute resources in the cloud. Rather, you just pay a consumption fee per database based on the size of the database—from as little as $5.00 per month for a 100MB database, to $228.00 per month for a 150GB database (the current size limit for a single SQL Database).

And while SQL Server running in a Windows Azure VM will offer the best application compatibility, there are two important features of SQL Database that customers should understand:

High Availability (HA) and 99.9% database uptime SLA built-in

SQL Database Federation

With SQL Database, high availability is a standard feature at no additional cost. Each time you create a Windows Azure SQL Database, that database is actually operating across a primary node and multiple online replicas, such that if the primary fails, a secondary node automatically replaces it within seconds, with no application downtime. This is how we are able to offer a 99.9% uptime SLA with SQL Database at no additional charge.

For SQL Server in a Windows Azure VM, the virtual machine instance will have an SLA (99.9% uptime) at commercial release. This SLA is for the VM itself, not the SQL Server databases. For database HA, you will be able to configure multiple VMs running SQL Server 2012 and setup an AlwaysOn Availability Group; but this will require some manual configuration and management, and you will pay extra for each secondary you operate—just as you would for an on-premises HA configuration.

With SQL Server running in a Windows Azure VM, you not only control the operating system and database configuration (since it’s your dedicated VM to configure), but it is also up to you to configure and maintain the VM over time, including patching and upgrading the OS and database software, and installing any additional software such as anti-virus and backup tools. With SQL Database, you are not running in a VM and have no control over a VM configuration. However, the database software is automatically configured, patched, and upgraded by Microsoft in the data centers, which lowers administration costs.

With SQL Server in a Windows Azure VM, you can also control the size of the VM, providing some level of scale up from smaller compute, storage and memory configurations to larger VM sizes. SQL Database, on the other hand, is designed for a scale-out vs. a scale-up approach to achieving higher throughput rates. This is achieved through a unique feature of SQL Database called Federation. Federation makes it very easy to partition (shard) a single logical database into many physical nodes, providing very high throughput for the most demanding database-driven applications. The SQL Database Federation feature is possible because of the unique PaaS characteristics of SQL Database and its almost friction-free provisioning and automated management. SQL Database Federation is discussed in more detail below.

Database Scale-out Requirements

Another key evaluation criterion for choosing SQL Server running in a Windows Azure VM vs. SQL Database will be performance and scalability. Customers will always get the best vertical scalability (aka ‘scale up’) when running SQL Server on their own hardware, since customers can buy hardware that is highly optimized for performance. With SQL Server running in a Windows Azure VM, performance for a single database will be constrained to the largest virtual machine image possible on Windows Azure—which at its introduction will be a VM with 8 virtual CPUs, 14GB of RAM, 16 TB of storage, and 800 MB/s network bandwidth. Storage will be optimized for performance and configurable by customers. Customers will also be able to configure and run AlwaysOn Availability Groups (at GA, not for preview release), and optionally get additional performance by using read-only secondaries or other scale out mechanisms such as scalable shared databases, peer-to-peer replication, Distributed Partitioned Views, and data-dependent routing.

With SQL Database, on the other hand, customers do not choose how many CPUs or how much memory they get: SQL Database operates across shared resources that do not need to be configured by the customer. We strive to balance the resource usage of SQL Database so that no one application continuously dominates any resource. However, this means a single SQL Database is inherently limited in its throughput capabilities, and will be automatically throttled if a specific database is pushed beyond certain resource limits. But via a feature called SQL Database Federation, customers can achieve much greater scalability through native scale-out capabilities. Federation enables a single logical database to be easily partitioned across multiple physical nodes.

This native feature in SQL Database makes scale-out much easier to set up and manage. For example, with SQL Database, you can quickly partition a database into a few or even hundreds of nodes, with each node adding to the overall capacity of the data tier (note that applications need to be specifically designed to take advantage of this feature). Partitioning operations are as simple as one line of T-SQL, and the database remains online even during re-partitioning. More information on SQL Database Federation is available here.
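Federation itself is managed with T-SQL, but the underlying scale-out idea can be illustrated with a short sketch: route each request to the federation member that owns its key. This is a minimal illustration of data-dependent routing, not the SQL Database Federation API; the member names and key ranges are made up.

```python
# Illustrative sketch of data-dependent routing (NOT the SQL Database
# Federation API): each federation member covers a contiguous range of
# the federation key, and queries are routed to the member that owns it.
# Member names and ranges here are hypothetical.

# Each entry: (low key, inclusive; high key, exclusive; member name)
FEDERATION_MEMBERS = [
    (0, 100_000, "member_0"),
    (100_000, 200_000, "member_1"),
    (200_000, 1_000_000, "member_2"),
]

def route(customer_id: int) -> str:
    """Return the federation member holding this customer's rows."""
    for low, high, member in FEDERATION_MEMBERS:
        if low <= customer_id < high:
            return member
    raise ValueError(f"no federation member covers key {customer_id}")

print(route(42))       # member_0
print(route(150_000))  # member_1
```

Splitting a member then amounts to replacing one range entry with two; in SQL Database Federation that split happens online, with the database remaining available.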

Summary

We hope this blog has helped to introduce some of the key differences and similarities between SQL Server running in a Windows Azure VM (IaaS) and Windows Azure SQL Database (PaaS). The good news is that in the near future, customers will have a choice between these two models, and the two models can be easily mixed and matched for different types of solutions.

Accurate estimation of the progress of database queries can be crucial to a number of applications, such as administration of long-running decision support queries. As a consequence, the problem of estimating the progress of SQL queries has received significant attention in recent years [6, 13, 14, 5, 12, 16, 15, 17]. The key requirement for all of these techniques (aside from a small overhead and memory footprint) is robustness, meaning that the estimators need to be accurate across a wide range of queries, parameters and data distributions.

Unfortunately, as was shown in [5], the problem of accurate progress estimation for arbitrary SQL queries is hard in terms of worst-case guarantees: none of the proposed techniques can guarantee any but trivial bounds on the accuracy of the estimation (unless some common SQL operators are not allowed). While the work of [5] is theoretical and mainly interested in the worst case, the property that no single proposed estimator is robust in general holds in practice as well. We find that each of the main estimators proposed in the literature performs poorly relative to the alternative estimators for some (types of) queries.

To illustrate this, we compared the estimation errors for three major estimators proposed in the literature (DNE [6], the estimator of Luo et al. (LUO) [13], and the TGN estimator based on the Total GetNext model [6], which tracks the GetNext calls at each node in a query plan) over a number of real-life and benchmark workloads (described in detail in Section 6). We use the average absolute difference between the estimated progress and the true progress as the estimator error for each query, and then compare the ratio of this error to the minimum error among all three estimators. The results are shown in Figure 1, where the Y-axis shows the ratio and the X-axis iterates over all queries, ordered by ascending ratio for each estimator (note that the Y-axis is in log scale). As we can see, each estimator is (close to) optimal for a subset of the queries, but also degrades severely (in comparison to the other two), with an error ratio of 5x or more for a significant fraction of the workload. No single existing estimator performs sufficiently well across the spectrum of queries and data distributions to rely on it exclusively.
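The error metric and the Figure 1 ratio can be made concrete with a small sketch. The estimator names follow the text, but the progress samples below are invented numbers for illustration only.

```python
# Sketch of the evaluation described above. Estimator names follow the
# text (DNE, LUO, TGN), but the progress samples are invented numbers
# for illustration only.

def avg_abs_error(estimated, true):
    """Average absolute difference between estimated and true progress."""
    return sum(abs(e - t) for e, t in zip(estimated, true)) / len(true)

true_progress = [0.1, 0.3, 0.5, 0.7, 0.9]
estimates = {
    "DNE": [0.2, 0.4, 0.5, 0.6, 0.9],
    "LUO": [0.1, 0.2, 0.4, 0.7, 0.8],
    "TGN": [0.3, 0.5, 0.8, 0.9, 1.0],
}

errors = {name: avg_abs_error(est, true_progress)
          for name, est in estimates.items()}
best = min(errors.values())

# The ratio plotted on the Y-axis of Figure 1: each estimator's error
# relative to the best of the three for this query.
ratios = {name: err / best for name, err in errors.items()}
print(ratios)
```

An estimator that is optimal for this query has ratio 1; a large ratio means it was badly beaten by an alternative, which is exactly the degradation Figure 1 shows for each estimator on some subset of queries.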

However, the relative errors in Figure 1 also suggest that by judiciously selecting the best among the three estimators, we can reduce the progress estimation error. Hence, in absence of a single estimator that is always accurate, an approach that chooses among them could go a long way towards making progress estimation robust.

Unfortunately, there appears to be no straightforward way to precisely state simple conditions under which one estimator outperforms another. While we know, for example, that the TGN estimator is more sensitive to cardinality estimation errors than DNE but more robust with regard to variance in the number of GetNext calls issued in response to input tuples, neither of these effects can be reliably quantified before a query starts execution. Moreover, a large number of other factors, such as tuple spills due to memory contention, certain optimizations in the processing of nested iterations (see Section 5.1), etc., all impact which progress estimator performs best for a given query.

From Proceedings of the VLDB Endowment, Vol. 5, No. 4. Bolin Ding is at the University of Illinois at Urbana-Champaign; the other three authors are at Microsoft Research.

Progress estimation is becoming more important with increasing adoption of Big Data technologies. Similar work is going on for estimating MapReduce application progress.

Competition is heating up for Platform as a Service (PaaS) providers such as Microsoft Windows Azure, Google App Engine, VMware Cloud Foundry and Salesforce.com Heroku, but cutting compute and storage charges no longer increases PaaS market share. So traditional Infrastructure as a Service (IaaS) vendors, led by Amazon Web Services (AWS) LLC, are encroaching on PaaS providers by adding new features to abstract cloud computing functions that formerly required provisioning by users. For example, AWS introduced Elastic MapReduce (EMR) with Apache Hive for big data analytics in April 2009. In October 2009, Amazon added a Relational Database Services (RDS) beta to its bag of cloud tricks to compete with SQL Azure.

Microsoft finally countered with a multipronged Apache Hadoop on Windows Azure preview in December 2011, aided by Hadoop consultants from Hortonworks Inc., a Yahoo! Inc. spin-off. Microsoft also intends to enter the highly competitive IaaS market; a breakout session at the Microsoft Worldwide Partner Conference 2012 will unveil Windows Azure IaaS for hybrid and public clouds. In late 2011, Microsoft began leveraging its technical depth in business intelligence (BI) and data management with free previews of a wide variety of value-added Software as a Service (SaaS) add-ins for Windows Azure and SQL Azure (see Table 1).

Codename: “Social Analytics”

Description: Summarizes big data from millions of tweets and other unstructured social data provided by the “Social Analytics” Team

Table 1. The SQL Azure Labs team and the StreamInsight unit have published no-charge previews of several experimental SaaS apps and utilities for Windows Azure and SQL Azure. The Labs team characterizes these offerings as "concept ideas and prototypes," and states that they are "experiments with no current plans to be included in a product and are not production quality."

In this article, I'll describe how the Microsoft Hadoop on Windows Azure project eases big data analytics for data-oriented developers and provide brief summaries of free SaaS previews that aid developers in deploying their apps to public and private clouds. (Only a couple require a fee for the Windows Azure resources they consume.) I'll also include instructions for obtaining invitations for the previews, as well as links to tutorials and source code for some of them. These SaaS previews demonstrate to independent software vendors (ISVs) the ease of migrating conventional, earth-bound apps to SaaS in the Windows Azure cloud.

…

This article went to press before the Windows Azure Team’s Meet Azure event on 6/7/2012, where the team unveiled the “Spring Wave” of new features, upgrades and updates to Windows Azure, including Windows Azure Virtual Machines, Virtual Networks, Web Sites and other new and exciting services. Also, the team terminated the Codename “Social Analytics” and “Data Transfer” projects in late June. However, as of 6/27/2012, the “Social Analytics” data stream from the Windows Azure Marketplace Data Market was still operational, so the downloadable C# code for the Microsoft Codename “Social Analytics” Windows Form Client still works.

Note: I modified my working version of the project to copy the data from about a million rows in the DataGridView to a DataGrid.csv file, which can be loaded on demand. Copies of this file and the associated source file for the client’s chart are available from my SkyDrive account. I will update the sample code to use the DataGrid.csv file if the Data Market stream becomes unavailable.

I have been spending time here at TechEd EMEA, and one of the topics I presented on this week was how you can build hybrid applications using SharePoint Online and Windows Azure. I think there’s an incredible amount of power here for building cloud apps; it represents a great cloud story and one that complements the O365 SaaS capabilities very well.

I’ve not seen a universally agreed-upon definition of hybrid, so in the talk we started by defining a hybrid application as follows:

Within this frame, I then discussed four hybrid scenarios that enable you to connect to SharePoint Online (SPO) in some hybrid way. These scenarios were:

Leveraging Windows Azure SQL Data Sync to synchronize on-premises SQL Server data with Azure SQL Database. With this mechanism, you can sync your data from on-premises to the cloud and then consume it using a WCF service and BCS within SPO, or wrap the data in a REST call and project it to a device.

Service-mediated applications, where you can connect cloud-to-cloud systems (in this case I used an example with Windows Azure Data Marketplace) or on-premises-to-cloud systems (where I showed an on-premises LOB system connecting to SPO). Here, we discussed WCF, REST, and the Service Bus as endpoints and transport vehicles for data/messages.

Cloud and Device apps, which is where you can take a RESTified service around your data and expose it to a device (in this case a WP7 app).

Running a SharePoint instance on the new Windows Azure Virtual Machines (IaaS), to show how you can pull on-premises data using the Service Bus, interact with PaaS applications built using WCF and Windows Azure, and expose those in SharePoint.

You can view the deck for the session below. (The recording isn’t up yet, but you should be able to view the session on Channel 9 soon.)

These scenarios represent four patterns for how you can integrate cloud and on-premises systems to build some really interesting hybrid applications, and then leverage the collaborative power of SPO.

Some things we discussed during the session that are worth calling out here:

In many cases, when building hybrid cloud apps that integrate with SPO, you’ll be leveraging some type of ‘service.’ This could be WCF, REST, or Web API. Each has its own merits and challenges. If you’re like me and don’t like spending time debugging XML config files, then I would recommend you take a look at the new Web API option for building services. It uses the MVC approach, and you can use the Azure SDK to build mobile apps as well as vanilla Web API apps.

I’ve seen some discussion of the JSONP method for issuing cross-domain calls to services. I would argue this is okay for endpoints/domains you trust; however, always take care when leveraging methods that inject script into your page, since this allows malicious code to run. And given you’re executing code on the client, malicious code could be run that pooches your page; imagine a hack that attempts to use the SharePoint client object model to do something malicious to your SPO instance. Setting header formatting in your service code can also be a chore.
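To make the risk concrete, here is a sketch of what a JSONP endpoint actually returns: executable script, not inert data. The function and callback names are made up for illustration.

```python
# Server side of JSONP (illustrative): wrap the JSON payload in a call to
# the caller-supplied callback. The browser loads this as a <script>, so
# whatever the endpoint returns runs with full access to your page.
import json

def jsonp_response(callback, payload):
    """Emit executable script rather than plain JSON."""
    return f"{callback}({json.dumps(payload)});"

body = jsonp_response("handleTitles", {"count": 2, "items": ["a", "b"]})
print(body)  # handleTitles({"count": 2, "items": ["a", "b"]});
```

That is the whole trust problem in one line: the endpoint, not your page, decides what code executes.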

Cross-origin resource sharing (CORS) is an area I’m looking into as a more browser-supported method of making cross-domain calls. With CORS, the server specifies which origins are allowed (or sets a wildcard “*”) in a response header, and the browser uses that header to decide whether to accept the cross-domain call.
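As a rough sketch of the server side of CORS (the allowed-origin list is a made-up example, and a real handler would also deal with preflight OPTIONS requests):

```python
# Minimal sketch of CORS response headers: echo back an allowed origin
# (or the "*" wildcard); the browser blocks the cross-domain read if the
# header is absent. ALLOWED_ORIGINS is hypothetical.
ALLOWED_ORIGINS = {"https://contoso.sharepoint.com"}
ALLOW_ANY = False  # set True to send the "*" wildcard instead

def cors_headers(request_origin):
    """Return CORS response headers for a given Origin header value."""
    if ALLOW_ANY:
        return {"Access-Control-Allow-Origin": "*"}
    if request_origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": request_origin}
    return {}  # no header: the browser will block the cross-domain read

print(cors_headers("https://contoso.sharepoint.com"))
print(cors_headers("https://evil.example.com"))
```

Unlike JSONP, no script injection is involved: the response stays data, and the browser enforces the policy.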

JSON is increasingly being used in building web services, so ensure you’re up to speed with what jQuery has to offer. There are lots of great plug-ins, plus you then have a leg up when looking at building apps through, say, jQuery Mobile.

All in all, there’s a ton of options available for you when building SPO apps, and I believe that MS has a great story here for building compelling cloud applications.

In this OData 101, we will build a trivial OData consumption app that displays some titles from the Netflix OData feed along with some of the information that corresponds to those titles. Along the way, we will learn about:

Adding service references and how adding a reference to an OData service is different in Visual Studio 2012

NuGet package management basics

The LINQ provider in the WCF Data Services client

Getting Started

Let’s get started!

First we need to create a new solution in Visual Studio 2012. I’ll just create a simple C# Console Application:

From the Solution Explorer, right-click the project or the References node in the project and select Add Service Reference:

This will bring up the Add Service Reference dialog. Paste http://odata.netflix.com/Catalog in the Address textbox, click Go and then replace the contents of the Namespace textbox with Netflix:

Notice that the service is recognized as a WCF Data Service (see the message in the Operations pane).

Managing NuGet Packages

Now for the exciting part: if you check the installed NuGet packages (right-click the project in Solution Explorer, choose Manage NuGet Packages, and select Installed from the left nav), you’ll see that the Add Service Reference wizard also added a reference to the Microsoft.Data.Services.Client NuGet package!

This is new behavior in Visual Studio 2012. Any time you use the Add Service Reference wizard or create a WCF Data Service from an item template, references to the WCF Data Services NuGet packages will be added for you. This means that you can update to the most recent version of WCF Data Services very easily!

NuGet is a package management system that makes it very easy to pull in dependencies on various libraries. For instance, I can easily update the packages added by Add Service Reference (the 5.0.0.50403 versions) to the most recent version by clicking Updates on the left or issuing the Update-Package command in the Package Manager Console:

NuGet has a number of powerful management commands. If you aren’t familiar with NuGet yet, I’d recommend that you browse their documentation. Some of the most important commands are:

In this sample, we start with all of the titles, filter them down using a compound where clause, order the results, take the top ten, and create a projection that returns only portions of those records. Then we write titles.ToString() to the console, which outputs the URL used to query the OData service. Finally, we iterate the actual results and print relevant data to the console:
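For reference, the query operators the LINQ provider translates (filter, order, top ten, projection) map directly onto OData system query options in the URL. The sketch below builds a URL of that shape by hand; the property names are illustrative stand-ins for the Netflix Title properties, not the exact query from this sample.

```python
# Sketch of the OData URL shape produced by a filter/order/top/projection
# query. Property names are illustrative, not the exact sample query.
from urllib.parse import quote

base = "http://odata.netflix.com/Catalog/Titles"
options = {
    "$filter": "ReleaseYear ge 2000 and AverageRating gt 4",  # compound where
    "$orderby": "AverageRating desc",                         # order results
    "$top": "10",                                             # take the top ten
    "$select": "Name,ReleaseYear,AverageRating",              # projection
}
url = base + "?" + "&".join(f"{k}={quote(v)}" for k, v in options.items())
print(url)
```

This is essentially what titles.ToString() prints: the composed query never pulls the whole catalog, because the server applies the options before returning results.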

Summary

Here’s what we learned in this post:

It’s very easy to use the Add Service Reference wizard to add a reference to an OData service

In Visual Studio 2012, the Add Service Reference wizard and the item template for a WCF Data Service add references to our NuGet packages

Shifting our distribution vehicle to NuGet allows people to easily update their version of WCF Data Services simply by using the Update-Package NuGet command

The WCF Data Services client includes a powerful LINQ provider that makes it easy to compose OData queries

Good news for all you PHP developers out there: I am happy to share with you the availability of Windows Azure SDK for PHP, which provides PHP-based access to the functionality exposed via the REST API in Windows Azure Service Bus. The SDK is available as open source and you can download it here.

This is an early step as we continue to make Windows Azure a great cloud platform for many languages, including .NET, Java, and PHP. If you’re using Windows Azure Service Bus from PHP, please let us know your feedback on how this SDK is working for you and how we can improve it. Your feedback is very important to us!

Openness and interoperability are important to Microsoft, our customers, partners, and developers. We believe this SDK will enable PHP applications to more easily connect to Windows Azure, making it easier for applications written on any platform to interoperate with one another through Windows Azure.

Hi all! I am typing this post from Schiphol, Amsterdam’s airport, where I am waiting to fly back to Seattle after a super-intense two days at TechEd Europe.

As is by now tradition for Microsoft’s big events, the video recordings of the breakout sessions are available on Channel9 within 24 hours of delivery. Yesterday I presented “A Lap Around Active Directory”, and the recording punctually popped up: check it out!

The fact that the internet connectivity was down for most of the talk is unfortunate, although I am told it made for a good comic relief. Sorry about that, guys!

Luckily I managed to go through the main demo, which is what I needed for making my point. The other demos I planned were more of a nice-to-have.

I wanted to query the Directory Graph from Fiddler or the RestClient Firefox plugin, to show how incredibly easy it is to connect with the directory and navigate relationships. However, I did have a backup slide showing a prototypical query and its results in JSON; albeit less spectacular, it hopefully conveyed the point.

The other thing I wanted to show you was a couple of projects which demonstrate web sign-on with the directory from PHP and Java apps. Given that I had those running on a remote machine (my laptop’s SSD does not have all that room), the absence of connectivity killed the demo from the start. Once again, though, those would have demonstrated the same SSO feature I showed with the expense reporting app, and I would not have been able to show the differences in code anyway, given that the session was a 200-level talk. So, all in all, a lot of drama but not a lot of damage after all.

Thanks again for having shown up at the session, and for all the interesting feedback at the book signing. Windows Azure Active Directory is a Big Deal, and I am honored to have had the chance to be among the first to introduce it to you. The developer preview will come out real soon, and I can’t wait to see what you will achieve with it!

WAAD is the cloud complement to Microsoft’s Active Directory directory service. Here’s more about Microsoft’s thinking about WAAD, based on the first of Shewchuk’s posts. It is already being used by Office 365, Windows Intune and Windows Azure. Microsoft’s goal is to convince non-Microsoft businesses and product teams to use WAAD, too.

This is how the identity-management world looks today, in the WAAD team’s view:

And this is the ideal and brave new world they want to see, going forward.

“Because Windows Azure Active Directory integrates with both consumer-focused and enterprise-focused identity providers, developers can easily support many new scenarios—such as managing customer or partner access to information—all using the same Active Directory–based approach that traditionally has been used for organizations’ internal identities.”

Microsoft announced the developer preview for WAAD on June 7. This preview includes two capabilities that are not currently in WAAD as it exists today, Shewchuk noted:

1. The ability to connect to and use information in the directory through a REST interface.

2. The ability for third-party developers to connect to the SSO the way Microsoft’s own apps do.

The preview also will “include support for integration with consumer-oriented Internet identity providers such as Google and Facebook, and the ability to support Active Directory in deployments that span the cloud and enterprise through synchronization technology,” he blogged.

The June 7th update to Windows Azure introduced two new services (Windows Azure Web Sites and persistent VMs) that beg the question “When should I use a Windows Azure Web Site vs. a Web Role vs. a VM?” That’s exactly the question I’ll try to help you answer in this post. (I say “help you answer” because there is no simple, clear-cut answer in all cases. What I’ll try to do here is give you enough information to help you make an informed decision.)

The following table should give you some idea of what each option is ideal for:

Actually, I think the use cases for VMs are wide open. You can use them for just about anything you could imagine using a VM for. The tougher distinction (and decision) is between Web Sites and Web Roles. The following table should give you some idea of what Windows Azure features are available in Web Sites and Web Roles:

* Web or Worker Roles can integrate MySQL-as-a-service through ClearDB's offerings, but not as part of the Management Portal workflow.

As I said earlier, it’s impossible to provide a definitive answer to the question of which option you should use (Web Sites, Web Roles, or VMs). It really does depend on your application. With that said, I hope the information in the tables above helps you decide what is right for your application. Of course, if you have any questions and/or feedback, let us know in the comments.

With the new Windows Azure Web Sites, it is very easy to use your favorite deployment tool or technology to deploy your web solution to Windows Azure. You can choose continuous delivery with Git or TFS, or use tools like FTP or WebDeploy.

As long as you have at least the June 2012 Windows Azure tools update for Visual Studio 2010 (or 2012), using WebDeploy to deploy a Windows Azure Web Site is really easy. Simply import the .publishsettings file; Visual Studio reads the pertinent data and populates the WebDeploy wizard. If you’re not familiar with this process, please have a look at https://www.windowsazure.com/en-us/develop/net/tutorials/web-site-with-sql-database/#deploytowindowsazure, where the process is explained very well (and even has nice pictures).
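For the curious, the .publishsettings file is just XML. The sketch below pulls out the WebDeploy settings; the attribute names reflect the Web Sites publish profile format as commonly seen, but verify them against your own downloaded file, and note that the sample values are obviously fake.

```python
# Sketch of the manual step Visual Studio performs for you: reading the
# WebDeploy endpoint and credentials out of a .publishsettings file.
# Attribute names should be verified against a real downloaded file;
# the sample values below are fake.
import xml.etree.ElementTree as ET

sample = """<publishData>
  <publishProfile profileName="mysite - Web Deploy"
                  publishMethod="MSDeploy"
                  publishUrl="waws-prod-xyz.publish.example.net:443"
                  msdeploySite="mysite"
                  userName="$mysite"
                  userPWD="not-a-real-password" />
</publishData>"""

root = ET.fromstring(sample)
profile = next(p for p in root.iter("publishProfile")
               if p.get("publishMethod") == "MSDeploy")
settings = {k: profile.get(k)
            for k in ("publishUrl", "msdeploySite", "userName", "userPWD")}
print(settings["publishUrl"])  # the WebDeploy service endpoint
print(settings["userName"])    # deployment user
```

These are the same values you would type into the WebDeploy wizard by hand if you did not have the tools update installed.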

But what if you don’t have the latest tools update? Or, what if you don’t have any Windows Azure tools installed? After all, why should you have to install Windows Azure tools to use WebDeploy?

You can certainly use WebDeploy to deploy a web app to Windows Azure Web Sites; you just have to do a little more manual configuration, doing by hand what Visual Studio does for you as part of the latest Windows Azure tools update.

To get started with Windows Azure Web Sites, if you don’t already have Windows Azure, sign up for a FREE Windows Azure 90-day trial account. To start using Windows Azure Web Sites, request access on the ‘Preview Features’ page under the ‘account’ tab, after you log into your Windows Azure account.

Azure Virtual Machines are still new to everyone, and I got a great question from a partner a few days ago: “I have an Azure Virtual Machine set up just the way I want it, now I want to spin up multiple instances, how do I do that?”

In the “picture is worth 1,000 words” category (and I don’t have time to write 1,000 words), please see the following sequence of screen shots for the answer.

Things to note:

run sysprep (%windir%\system32\sysprep) with “generalize” so each machine will have a unique SID (security ID)

when you “capture” the VM, it will be deleted. You can re-create it from the image as shown below

In one of my previous blog articles, I demonstrated how to build a demonstration or development environment for Microsoft Dynamics CRM 2011 using Windows Azure Virtual Machine technology. Once you get a basic virtual machine installed, you will likely want to back it up, or clone it. This article will show you how to manage your virtual machines once you get them set up.

Background

Before I go into some of the details, I want to give a little background on how Windows Azure manages the hard disk images that it uses for virtual machines. When you first create a virtual machine, you start with an image, and that image can be one that you have 'captured', or it can be one from the Azure gallery of images. Microsoft provides basic installs of Windows Server 2008 R2, various Linux distros, and even Windows Server 2012 Release Candidate. These base images are basically unattended installs that Microsoft (or you) maintain. When you create a new virtual machine, the Azure fabric controller will start the machine up in provisioning mode, which allows Azure to specify the password for your virtual machine. The machine will initialize itself when it boots for the first time. The new virtual machine will have a single 30GB hard drive that is attached as the C: drive and used for the system installation, as well as a D: drive that is used for the swap file and temporary storage.

One of the new features that makes Azure Virtual Machines possible is that the hard drive is now durable; this is achieved by storing the hard drive’s blocks in Azure blobs. This means that your hard drive can now benefit from the redundancy and durability that is baked into the Azure blob storage infrastructure, including multiple geo-distributed replica copies. The downside is that you are dealing with blob storage, which is not quite as fast as a physical disk. The operating system volume (C: drive) is stored in Azure blobs, but the temporary volume (D: drive) is not. As such, the D: drive is a little faster, but it is not durable and should not be used to install applications or to store their permanent data.

You can create additional drives in Windows Azure and attach them to your Azure virtual machine. The number of drives that can be attached to a virtual machine is determined by the virtual machine size. For most real-world installations, you are going to want to create an additional data drive and attach it to your Azure virtual machine. Keep in mind that your C: drive is only 30GB and it will fill up when Windows applies updates or other middleware components get installed. If you have a choice of where to install any application, choose your additional permanent data drive over the system drive whenever possible. …

Shan continues with a detailed, illustrated description of the cloning process.

Following are links to Shan’s two earlier posts on related Virtual Machine topics (missed when posted):

Editor’s Note: Today’s guest blog post comes from Matthew Pardee, Developer Evangelist at Cloud9 IDE, which is an online platform for development, where all the code is open-source, free to adapt and use for everyone, anywhere, anytime.

Running production code on different computers and software stacks is a pain. Worse, it can be a huge distraction for developers where they would otherwise focus on a singular goal.
Yet many teams still work this way. We suffer through the long periods of managing configurations and reconciling platform differences so everyone on the team can start doing... what exactly? Oh yeah, what they do best: code.

There is an opportunity beyond just having the same IDE that everyone uses to get work done. And that opportunity is where Cloud9 has maintained its focus, and where it continues to innovate. It is the potential of a workflow that exists entirely in the cloud.

Cloud9 is the only cloud-based development platform that offers Windows Azure and Windows Azure Web Sites integration. Here are the features Cloud9 released this week that support this vision and make developing applications for Windows Azure even easier:

Collaboration

Now developers around the world can edit the same code and chat together in real-time. Think of how productive pair programming and code reviews will be, how much more effective presentations are when an audience is truly involved. And how rewarding it is to teach a group of students the art of programming.

Your Workspace in the Cloud

This is the feature that powers every project with its own runtime environment, and it’s the platform professional developers have been waiting for. Now you can compile with gcc and run the Python and Ruby interpreters. And remember those platform differences you had to maintain for every member of your team? Now when your team works, everyone is running on the same OS and software stack. Premium accounts get a full-blown terminal to interact with their server as they would with their local system.

Sync Locally, Work Offline

Cloud9 IDE is now installable as a small app for your desktop - but it’s much more than a desktop IDE. You can keep your hosted projects synced locally so you can keep developing, even when offline. And those desktop projects you had before using Cloud9? They can be pushed to c9.io, giving them all the power and freedom of coding and collaborating in the cloud.

Code Completion

The depth and sophistication of JavaScript analysis - once thought the domain of desktop IDEs - is now on Cloud9. As you type, code suggestions appear below your code. Plus hovering over suggestions shows helpful JavaScript and Node.js documentation. Type Ctrl-Shift-E or Cmd-Shift-E to open the outline view, and quickly navigate to methods in the active file.

Refined Tooling

These features are in addition to the finessed tooling that Cloud9 has been refining for the past year: extremely quick file access, robust search-in-files, in-browser debugger for Node.js, a capable console for running SCM and IDE commands, a rich and full-featured editor, and a beautiful UI. And there is a lot more Cloud9 has to offer.

Deployment: See your Code Come to Life in The Cloud

Cloud9 has been in lockstep with Windows Azure since January of this year, when we released support for Windows Azure at Node Summit. And we worked early with Microsoft to integrate Windows Azure Web Sites, unveiling support for the platform right out of the gate.

Windows Azure leads the Platform-as-a-Service (PaaS) field in SLA and latency response. They have data centers in America, Europe and Asia, and they offer an integrated suite of service offerings that make application development on their platform a natural choice for any application.

With this release Cloud9 is introducing a new way of getting work done. We are excited to get your feedback as you try these features, and more, at c9.io.

Editor’s Note: Today’s post, written by Linx e-Commerce Program Manager Fernando Chaves [pictured at right], describes how the company uses Windows Azure to scale out its LinxWeb Point-of-Sale system for its customers.

Linx is a 26-year-old ISV and a leader in ERP technologies for the retail market in Latin America. We have more than 7,500 customers in Brazil, Latin America and Europe, with more than 60,000 installed Point of Sale (POS) systems. Our company has more than 1,800 employees at our headquarters and branches, and a network of partners spread throughout Brazil and abroad.

LinxWeb is a white-label B2C e-Commerce solution that our customers can use as a new POS system in their sales environment. It's integrated with customer on-premises ERP environments, and can be managed just like a traditional POS system while allowing specific customizations such as promotions.

Setting the Stage: Before Windows Azure

Before migrating to Windows Azure, LinxWeb operated on virtual machines (VMs) running at a traditional hosting provider. Though in theory this kind of deployment could scale out, doing so was neither easy nor fast, and often we needed to scale up instead, adding more memory, computing power, or network bandwidth to the VM.

LinxWeb was originally single-tenant: every customer had a dedicated deployment and environment. Customization was done directly on the customer’s web content files, which could lead to security issues, quality-control issues, and excessive support requests caused by customization errors.

Before the Windows Azure migration, the web site was responsible for every processing task: generating product image thumbnails, sending e-mails, and communicating with third-party systems. Every task was done synchronously, hurting the e-commerce site’s performance and availability for end customers.

The Migration to Windows Azure

When we decided to migrate LinxWeb to Windows Azure, some refactoring was needed to make it compatible with the stateless nature of Windows Azure web roles and the load balancer.

Since each web request could be sent to any web server instance, we needed to externalize session data. We chose Windows Azure SQL Database for our session storage.
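For illustration, an ASP.NET Web.config fragment along these lines (using the Universal Providers of that era; the provider and connection-string names are ours, not Linx’s) routes session state to SQL Database:

```xml
<!-- Web.config: store session state in SQL Database instead of in-process
     memory, so any web role instance can serve any request -->
<system.web>
  <sessionState mode="Custom" customProvider="DefaultSessionProvider">
    <providers>
      <add name="DefaultSessionProvider"
           type="System.Web.Providers.DefaultSessionStateProvider, System.Web.Providers"
           connectionStringName="SessionDb" />
    </providers>
  </sessionState>
</system.web>
```

With this in place, session reads and writes go to the shared database, so the load balancer is free to route each request to any instance.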

We had to remove all file writing to the local disk, since local disk storage isn’t shared between server instances. Additionally, the local disk is not durable, unlike Blob storage or SQL Database, which are replicated. Local disks are designed for speed and temporary use, not permanent storage.

Media content, initially saved in SQL Server (in BLOB columns), is now stored in Windows Azure Blob storage, allowing better scalability for the website, since blob content can be cached on the Windows Azure Content Delivery Network (CDN) edge cache. Also, by storing only a blob reference in the SQL Database, rather than the entire media object, we keep our SQL Database much smaller, helping us avoid the storage limits on individual SQL databases (up to 150 GB at the time).

Since blobs (and the CDN) are referenced with a URL, browser requests for media now go directly to the CDN, bypassing our web role instances (and taking load off IIS and the database). This change yielded an average 75% reduction in database size and also saved money on storage costs, since blob storage is much cheaper than SQL Database. We also saw response times improve on our web role instances, since considerable load was taken off those servers.
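A minimal sketch of the upload path using the Storage Client Library 1.7 of that era (container, class, and method names are illustrative, not Linx’s actual code):

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class MediaStore
{
    // Uploads a product image to blob storage and returns the URL to persist
    // in SQL Database instead of the binary content itself.
    public static string SaveProductImage(string connectionString,
                                          string fileName,
                                          byte[] imageBytes)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobClient client = account.CreateCloudBlobClient();

        CloudBlobContainer container = client.GetContainerReference("product-images");
        container.CreateIfNotExist();

        CloudBlob blob = container.GetBlobReference(fileName);
        blob.Properties.ContentType = "image/jpeg";
        blob.UploadByteArray(imageBytes);

        // Only this URL (or its CDN-mapped equivalent) goes into the database.
        return blob.Uri.AbsoluteUri;
    }
}
```

The returned URL is what gets written to the database row, which is why the database shrinks so dramatically: the binary payload never touches SQL.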

To make better use of environment resources, the Windows Azure version was built with multi-tenancy in mind: multiple customers share compute resources, reducing hosting costs. Understanding that some customers may want an isolated environment, we also have a premium offer in which a customer receives a dedicated deployment. In this new version, the customer no longer edits ASP.NET pages directly to change the site layout and look and feel. Instead, they edit templates stored in Windows Azure Blob storage, and the ASP.NET pages process those templates to render updated HTML to the end user.

Worker roles handle background tasks such as generating picture thumbnails and sending e-mails. Those tasks are queue-driven, using Windows Azure Queues. The worker roles are also responsible for running scheduled tasks, mainly for communication with third-party systems. To manage scheduling, we used the Quartz.Net framework, which has the option to run synchronized across multiple worker role instances. This is a very important point: if a scheduler is set up to run in a worker role, that scheduler runs in all instances. Quartz.Net ensures that only one scheduler instance runs a given task at any given time.
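A sketch of how Quartz.Net can be configured for this coordination; the clustering property names come from the Quartz.Net documentation, while the instance name and connection details are illustrative:

```csharp
using System.Collections.Specialized;
using Quartz;
using Quartz.Impl;

// Configure a clustered Quartz.Net scheduler backed by a shared database,
// so multiple worker role instances coordinate and a trigger fires on
// exactly one instance.
NameValueCollection props = new NameValueCollection();
props["quartz.scheduler.instanceName"] = "LinxWebScheduler";
props["quartz.scheduler.instanceId"]   = "AUTO";  // unique id per role instance
props["quartz.jobStore.type"] =
    "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz";
props["quartz.jobStore.driverDelegateType"] =
    "Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz";
props["quartz.jobStore.dataSource"] = "default";
props["quartz.jobStore.clustered"]  = "true";     // coordinate via the shared store
props["quartz.dataSource.default.connectionString"] =
    "...";                                        // SQL Database connection string
props["quartz.dataSource.default.provider"] = "SqlServer-20";

ISchedulerFactory factory = new StdSchedulerFactory(props);
IScheduler scheduler = factory.GetScheduler();
scheduler.Start();
```

With `quartz.jobStore.clustered` set to `true`, every instance runs the same scheduler code, but the shared job store arbitrates which instance actually fires each trigger.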

Some customers may also want to host a company website or a blog together with their e-commerce site. To meet this need, we use WordPress as our blog engine. WordPress is PHP-based, and by default the PHP runtime libraries are not installed in Windows Azure web or worker roles. Since our WordPress blog runs on Windows Azure web roles, we needed to install the required PHP components as well as WordPress itself. We did this with startup tasks and the Web Platform Installer command line to set up the PHP runtime on IIS. A Windows Azure SQL Database is used as persistent storage, as is Windows Azure Blob storage, so we also installed the Windows Azure Storage plugin for WordPress, which uploads users’ files directly to blob storage.
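A sketch of the relevant ServiceDefinition.csdef fragment; the role and script names are illustrative, and the script would invoke the Web Platform Installer command line to install the PHP runtime:

```xml
<!-- ServiceDefinition.csdef: run an elevated startup task before the role
     starts, e.g. a .cmd that calls WebPICmd to install PHP on IIS -->
<WebRole name="BlogWebRole">
  <Startup>
    <Task commandLine="install-php.cmd"
          executionContext="elevated"
          taskType="simple" />
  </Startup>
</WebRole>
```

Because the task runs with `executionContext="elevated"` and `taskType="simple"`, the role instance waits for the installation to finish before IIS begins serving the WordPress site.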

Conclusion

For us, the main benefit of migrating our solution to Windows Azure is how easily and quickly the application scales out. This lets us focus on our customers’ business needs and support marketing campaigns that drive a large number of requests from end users.

For our customers, a big benefit is that they no longer need to worry about infrastructure and operating-system management.

As pointed out, we had a few technical challenges to solve, none of them insurmountable:

Moving from single- to multi-tenancy

Moving local storage and SQL storage to blob storage and CDN

Scheduling tasks with Quartz.net across multiple role instances

Installing PHP runtime and WordPress

Refactoring web request handling to be stateless and scalable across multiple instances

We were able to handle all of these challenges and now have a very efficient application running in Windows Azure!

When MOC1 Solutions wanted to move their applications supporting automotive dealerships to the cloud, they chose Windows Azure.

In this video, Software Development Manager Alex Hatzopoulos and Architect Greg Cannon speak with Microsoft Principal Architect Evangelist Brian Loesgen. In this wide-ranging conversation, they cover their experiences in ramping up their team and setting up their environments, and share other first-hand application-migration experience gained while moving their flagship Wireless Service Advisor™ (WSA™) product to Windows Azure.

WSA uses wireless and mobile technologies to streamline and standardize the Repair Order (RO) write-up process. WSA enables a service advisor to greet customers at their vehicle when they arrive at the dealership service department. Using a tablet PC, the service advisor scans or hand-writes the Vehicle Identification Number (VIN) or license plate number and transmits the information to multiple databases to retrieve critical customer and vehicle data related to that particular vehicle identifier – the critical data includes repair history, recommended services, warranty and recall information, and customer contact information.

Additionally, the WSA allows the service advisor to complete a full inspection process, handle customers' questions, and provide maintenance recommendations in a timely, interactive fashion, all while standing at the customer's vehicle. The customer can provide service authorization by signing the RO on the tablet PC, avoiding the wait for a printed copy. The WSA also allows for the preparation of a printed repair order as well as updating of the DMS database. The WSA presents a user-friendly front-end application that effectively represents, and efficiently standardizes, the entire Repair Order write-up process. The WSA™ accomplishes all this via an Azure-based backend.

About MOC1 Solutions

Based in Glendora, CA, MOC1 Solutions is a traditional ISV that was founded in 2005 and incubated within MOC Products until June 2006, when the company was spun out as an independent private entity. MOC1 offers software applications used by automotive dealership service departments and vehicle service facilities.

Join Tim Huckaby, Founder of InterKnowlogy and Actus Interactive Software, and Steve Fox, Director of Global Windows Azure Center of Excellence, as they discuss trends in big data, cloud and devices. Steve unveils his thoughts on the practical side of the cloud as well as some interesting stories about emerging cloud uses. Great Interview!

Steve Fox has worked at Microsoft for 12 years across a number of different technologies including natural language, search, social computing, and more recently Office, SharePoint and Windows Azure development. He is a Director in MCS and regularly speaks to many different audiences about building applications on Microsoft technology, with a specific focus on the cloud. He has spoken at several conferences, contributed to technical publications, and co-wrote a number of books including Beginning SharePoint 2010 Development (Wrox), Developing SharePoint Solutions using Windows Azure (MSPress), and the forthcoming Professional SharePoint 2010 Cloud-Based Solutions.

Some of you may remember a little project I created last year called RubyRole, which let you host Ruby applications as a Windows Azure hosted service (now called a cloud service). I updated it the other day to fix a few issues with the recent spring update to Windows Azure, and discovered some interesting things with the new Windows Azure PowerShell.

Windows Azure PowerShell and Ruby

The Windows Azure PowerShell is included with the Windows version of the Windows Azure Node.js SDK and PHP SDK, but I've discovered that it's generically useful for deploying and managing cloud services like RubyRole (or really anything that's a cloud service, such as Rob Blackwell's AzureRunme.) The trick is that the project has to have updated ServiceConfiguration and ServiceDefinition files for the spring update, and that there has to be a deploymentSettings.json file in the root of the project.

Emulation

With the Other SDK, I had to use run.cmd to launch the application in the emulator. It worked, but there was no corresponding stop command, you had to be in the right directory when you ran it, and so on. Windows Azure PowerShell provides the following cmdlets for working with the emulator:

Start-AzureEmulator - Starts the emulator. The optional -launch switch will launch your browser to the URL of the application in the emulator once it's started.

Stop-AzureEmulator - Stops the emulator.

Importing Subscription Info

While the emulator commands work out of the box, the rest of the commands I talk about below require some information about your subscription. This is a pretty painless process, and you only have to do it once. Here are the steps:

Download a .publishsettings file that contains your subscription information plus a management certificate that lets you manage your Windows Azure services from the command line. You can do this using the Get-AzurePublishSettingsFile cmdlet.

This will launch the browser and prompt you to login, then download the .publishsettings file. Save this file somewhere secure, as it contains information that can be used to access and manage your subscription.

Import the .publishsettings file by using the Import-AzurePublishSettingsFile cmdlet as follows:

Import-AzurePublishSettingsFile <path-to-file>

This will import the information contained in the file and store it in the .azure directory under your user directory.

Note: You should delete the .publishsettings file after the import, as anyone can import it and use it to manage your subscription.

That's it. Once you do those steps, you're set for publishing/managing your services from PowerShell.

Note: If you have multiple Windows Azure Subscriptions associated with your login, the .publishsettings file will contain information for all of them and will default to one of them. You can see them all by using Get-AzureSubscription and can set the default by using Set-AzureSubscription. Many commands also allow you to use a -subscription parameter and specify the subscription name to indicate which subscription to perform the action against.
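For example, a short session along these lines (the subscription name is illustrative; check Get-Help Set-AzureSubscription for the exact parameters in your version of the cmdlets):

```powershell
# List every subscription imported from the .publishsettings file
Get-AzureSubscription

# Make a different subscription the default for subsequent commands
Set-AzureSubscription -DefaultSubscription "MySecondSubscription"
```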

Deployment

Prior to the spring update the only way to deploy the RubyRole project was to use the pack.cmd batch file, which only packaged up the service; you still had to manually upload it. Windows Azure PowerShell provides functionality to pack and deploy the application straight from the command line.

The cmdlet to deploy the project is Publish-AzureServiceProject. This packages up the project, creates a Windows Azure storage account if one isn't already available, and uploads the project to it. Then it creates a cloud service out of the project and starts it.

This command also takes a -launch parameter, which will launch your browser and navigate to the hosted application after the cloud service is up and running.
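Putting it together, a typical deploy from the project root is a single command; the storage account and cloud service are provisioned automatically on the first publish:

```powershell
# Package the project, provision services if needed, upload, start the
# cloud service, and open a browser to the running application
Publish-AzureServiceProject -launch
```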

Remote Desktop

So you've got an application in RubyRole and it works in the emulator, but blows up in the cloud. What to do? Your first step should probably be to use the Enable-AzureServiceProjectRemoteDesktop cmdlet to turn on remote desktop for the project. Once deployed, you can then use the Windows Azure portal to remote into the virtualized environment, look at logs, debug, etc. To turn this off, just use Disable-AzureServiceProjectRemoteDesktop.

Unfortunately this still only works with the Windows remote desktop client.

Management

There are several other management style things you can do with Windows Azure PowerShell, such as:

Stop-AzureService - stop a running cloud service

Remove-AzureService - removes a cloud service

Start-AzureService - starts a stopped cloud service

You can get a full list of the basic developer cmdlets by running help node-dev or help php-dev and a full list of the Windows Azure cmdlets by running help azure.

Summary

As you can see, these cmdlets are much better than the simple run.cmd and pack.cmd scripts available in the Other SDK. They make it much easier to work with projects like RubyRole from the command line. For more information on Windows Azure PowerShell, see How to Use Windows Azure PowerShell.

Note: These cmdlets won't work with the older version of RubyRole, both because of changes to the ServiceDefinition and ServiceConfiguration structure and because they rely on a deploymentSettings.json file in the root of your web project.

In part 1 of a three-article series, Alessandro Del Sole, author of Microsoft Visual Studio LightSwitch Unleashed, describes a useful addition to LightSwitch: support for Open Data Protocol (OData) data sources. Learn how to work with OData services from your LightSwitch apps.

This article describes a new, important addition to Visual Studio LightSwitch 2012, which is the support for data sources of type Open Data Protocol (OData). I'll explain how to consume OData services in your LightSwitch applications. In part 2 of this series, I'll show you how to expose LightSwitch data sources as OData services to other clients. …

Background

I develop applications for the Global Security, Investigations and Legal Departments of a Fortune 100 company in Silicon Valley with offices all over the world. Lately, almost all the applications I develop involve looking up employee information, and I'm always asked, "Can you include the ability to look up the 1st- and 2nd-level managers?"

Add a "List and Details Screen" and for Screen Data choose the EmployData Table and name this new screen EmployeeDetails.
Your screen should look similar to this:
Notice that I'm leaving the First Level and Second Level Manager fields in place. If you do not use Active Directory, you are still free to enter this information manually; if you do use Active Directory, these values will be overwritten by the values returned from AD.

We now need to switch to File View and add two files from the Active Directory Sample: ActiveDirectoryHelper.cs and ApplicationDataService.cs. Right-click the Server folder and select Add > Existing Item, then navigate to ActiveDirectoryHelper.cs, located in LightSwitch Active Directory Sample\C#\LDAP_CS_Demo\Server. We will need to make a couple of changes to this file shortly.

If the UserCode folder has not been created yet, create a new folder under the Server folder and name it UserCode. Right-click UserCode and select Add > Existing Item, then navigate to ApplicationDataService.cs, located at LightSwitch Active Directory Sample\C#\LDAP_CS_Demo\Server\UserCode. We will need to make a couple of changes to this file as well.

Open ActiveDirectoryHelper.cs. This file contains the most common Active Directory fields, but if your company has custom fields, you will need to add them to this file if you intend to use them as search parameters. Add the following fields to the String Constants region. Keep in mind these values may be different in your company's Active Directory; consult your administrator or, if you have the appropriate privileges, use an LDAP browser.
Add the following:
public const string EMPLOYEEID = "sAMAccountName";
public const string HIREDATE = "whenCreated";
public const string COUNTRY = "co";
The last two entries are not strictly necessary for this tutorial, but you may find them useful if you wish to expand on it. You can now close ActiveDirectoryHelper.cs; we are finished with this file for now.

Add two references to your Server project: one for System.DirectoryServices and another for System.Configuration (we will use ConfigurationManager to read values from our Web.config file, as you'll see later on).

Build the project, then open the EmployeeData table in the designer, click the Write Code dropdown, and select EmployeeDatas_Inserting. This opens the ApplicationDataService.cs file, where the EmployeeDatas_Inserting method stub has been added for you. Important tip: you can't call server code directly from the client; in LightSwitch applications, all interactions between the client and server happen within the Save, Inserting, and Updating pipelines.

OK, in the default ApplicationDataService.cs, the LDAP directory is hard-coded. That's not ideal if you wish to sell or give away your application to other clients, because this value will change from client to client. To let the client's administrator modify this setting, and to allow our code to read in whatever the current value may be, we need to modify our Web.config file. You can find this file using Windows Explorer under ADEmployeeSearch\ADEmployeeSearch\Server.
Add the following two entries after the <appSettings> XML tag:
<add key="LDAPAVAILABLE" value="true"/>
<add key="LDAPDirectory" value="LDAP://dc=[your domain],dc=com"/>
Note: You need to replace [your domain]; consult your network administrator.

In ApplicationDataService.cs delete the line
string domain = @"LDAP://mydomain.foo.com";
and add the following two declarations:
// We use ConfigurationManager to read in the values we added to our Web.config file
string domain = ConfigurationManager.AppSettings["LDAPDirectory"];
Boolean ActiveDirectoryAvailable = Convert.ToBoolean(ConfigurationManager.AppSettings["LDAPAVAILABLE"]);
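Putting it together, the Inserting method might look like the following sketch; LookupManagers and its result properties are hypothetical stand-ins for the sample's actual helper calls, which may differ:

```csharp
partial void EmployeeDatas_Inserting(EmployeeData entity)
{
    // Read the directory location and availability flag from Web.config
    string domain = ConfigurationManager.AppSettings["LDAPDirectory"];
    bool activeDirectoryAvailable =
        Convert.ToBoolean(ConfigurationManager.AppSettings["LDAPAVAILABLE"]);

    // If the administrator has disabled LDAP, keep the manually entered values
    if (!activeDirectoryAvailable)
        return;

    // LookupManagers is a hypothetical wrapper around the sample's
    // ActiveDirectoryHelper search logic
    var managers = ActiveDirectoryHelper.LookupManagers(domain, entity.EmployeeID);
    entity.FirstLevelManager  = managers.FirstLevel;
    entity.SecondLevelManager = managers.SecondLevel;
}
```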

Delete the methods DistributionLists_Inserting and CreateMembers; we won't be using them in this example.

Build the project once again and correct any errors. If all is well, launch the application, enter an EmployeeID, click OK, and click Save. You should now have access to the FirstLevelManager and SecondLevelManager information.

From here you can expand on this tutorial to bring back additional data as required.

Since we announced the HTML client a couple weeks ago, the community has been very anxious to try out the bits (understandably!) so I’m super excited that the release is available today. Please keep up the great conversations and provide us feedback in the LightSwitch HTML Client forum.

The new HTML5 and JavaScript-based client is an important companion to our Silverlight-based desktop client that addresses the increasing need to build touch-oriented business applications that run well on modern mobile devices. With this download you will be able to set up a Virtual Hard Disk (VHD) for evaluation purposes that contains all you need to build, publish, and run touch-centric business applications using LightSwitch. The VHD contains a tutorial to help guide you through the available features. A setup doc is included in the download that contains instructions on how to set up the VHD.

This morning I presented the keynote at TechEd Europe 2012 in Amsterdam, and shared some updates on our tools. If you’re not attending the event in person, you can still tune in online. The keynote video recording is available via live streaming on Channel9, and will be posted on-demand to the TechEd Europe 2012 event page.

The first announcement you’re likely to hear about is our LightSwitch HTML Client Preview release...

LightSwitch HTML Client Preview Availability

At TechEd North America 2012, I showed how LightSwitch is embracing a standards based approach with HTML5, JavaScript and CSS, so you can build companion touch-centric apps that run on multiple devices. This approach allows you to take advantage of the same backend services you’re using across your applications, as well as the productivity gains of LightSwitch.

We’re excited to announce that the LightSwitch HTML Client Preview will be available later today for MSDN subscribers (I’ll update this post once the bits are live), and will be available publicly on Thursday June 28th! To learn more about the release, provide feedback, or ask questions, please visit the LightSwitch Developer Center, team blog, and forums.

Visual Studio 2012 Tools for SharePoint 2010

This morning I also demoed SharePoint tools. With Visual Studio 2012 RC, we’re delivering another compelling release for writing SharePoint 2010 solutions. We’ve developed a rich experience for creating SharePoint lists and content types, so that you no longer need to deal with the complex schema or error-prone hand-editing of XML. Our new SharePoint List Designer allows you to visually and accurately define new lists and content types:

We’re also working hard to make sure that you get the most accurate IntelliSense when working with SharePoint solutions. When developing a sandboxed solution, we now filter to the APIs that are available in production, so that you get immediate feedback on the right APIs to use. We’ve also augmented IntelliSense to parse JavaScript files that are stored in the SharePoint content database, and now provide IntelliSense for the functions and members in those files.

Visual Studio 2012 RC includes several enhancements for Office 365 development, where SharePoint solutions run in a sandboxed process. For example, the Visual Web Part template has been updated to be compatible with the sandbox and can now be safely deployed to Office 365. We’ve also introduced a new Silverlight Web Part template, in case you prefer to define your Web Parts in XAML. Finally, we’ve improved the experience of deploying sandboxed solutions with a new Publish dialog, which allows you to directly publish to Office 365 or any other remote SharePoint Server.

ALM support for SharePoint development continues to improve in the Visual Studio 2012 RC. We’ve expanded our profiling support so that you can get rich information about the bottlenecks in both farm and sandboxed solutions . I also announced previously that we’ll continue to add ALM support in the first Ultimate Feature Pack, which will feature unit testing support as well as support for SharePoint load testing.

Conclusion

I look forward to hearing from you as you have an opportunity to try out these features.

As of 6/26/2012 at 10:00 AM PDT, the four parts of the LightSwitch HTML Client Preview for Visual Studio 2012 were available for download by MSDN subscribers here:

The Details:

We are pleased to announce that the Microsoft LightSwitch HTML Client Preview for Visual Studio 2012 is now available for download. The preview provides an early look at our upcoming support for building cross-browser, mobile web clients with LightSwitch in Visual Studio 2012. The new HTML5 and JavaScript-based client is an important companion to our Silverlight-based desktop client that addresses the increasing need to build touch-oriented business applications that run well on modern mobile devices.

With this download you will be able to set up a Virtual Hard Disk (VHD) for evaluation purposes that contains all you need to build, publish, and run touch-centric business applications using LightSwitch. The VHD contains a tutorial to help guide you through the available features.

As soon as I started writing LightSwitch applications, I noticed that I was repeating the same code over and over for trivial tasks. So after all this time I have collected a number of extension methods that I use widely in my apps.
For me, reusing code is a must, and although the implementation of LS (IMHO) does not provide for this out of the box, the underlying framework is ideal for writing extension classes and methods that are a major step toward code reusability. If you have downloaded any of my samples from MSDN or have seen my Application Logo post, you already suspect I am an “extension method fanatic”.

So I will present a small series (I don’t know how small) of posts with extension methods from my Base.LightSwitch.Client library.

The first method is one of the first (if not the first) extension methods I wrote. As soon as you want to override the code for default commands like Edit and Delete for a collection (let’s name it MyCollection), you have to write something like this:
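A sketch of the standard pattern (the collection name MyCollection follows the text; the bodies are illustrative):

```csharp
// Typical hand-written override of a screen collection's Delete command
partial void MyCollectionDelete_CanExecute(ref bool result)
{
    // Only enable the command when something is actually selected
    result = this.MyCollection.SelectedItem != null;
}

partial void MyCollectionDelete_Execute()
{
    if (this.MyCollection.SelectedItem != null)
        this.MyCollection.SelectedItem.Delete();
}
```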

This is the minimum code you need to write (it can be written more elegantly, I know, but this is the concept). I don’t take the permissions issue into account.
A similar chunk of code has to be written for Edit.
Isn’t the code listed below easier to read:
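A hypothetical sketch of such an extension method; the actual Base.LightSwitch.Client implementation may differ:

```csharp
public static class VisualCollectionExtensions
{
    // Extends the collection itself, so no cast of SelectedItem
    // to IEntityObject is needed at the call site
    public static bool CanDeleteSelected<T>(this VisualCollection<T> collection)
        where T : class, IEntityObject
    {
        return collection != null && collection.SelectedItem != null;
    }

    public static void DeleteSelected<T>(this VisualCollection<T> collection)
        where T : class, IEntityObject
    {
        if (collection.CanDeleteSelected())
            collection.SelectedItem.Delete();
    }
}
```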

This version is more concise, is generic, and also does not have to do the (out-of-thin-air) conversion of SelectedItem to IEntityObject. If you use this version, though, you have to change your partial method, as you cannot extend null:

In the previous post I presented an extension method used mostly for overriding the edit and delete commands of a collection. One may ask “why do I want to do this?”. Apart from any other requirements/business logic dependent reason one might want to implement, for me there is one simple yet important reason: I don’t like at all (to be kind) the default add/edit modal windows when adding or editing an entry. It’s not a coincidence that the FIRST sample I wrote for LightSwitch and posted in the Samples of msdn.com/lightswitch was a set of extension methods and contracts to easily replace standard modal windows with custom ones.

Most of the times when I have an editable grid screen, selecting Add or Edit I DON’T want the modal window to pop-up, I just want to edit in the grid. Or in list and details screen I want to edit the new or existing entry in the detail part of the screen.

This is the main reason I most of the times override the default Add/Edit command behavior. And for this reason I created and use the next two extension methods. …

In the first half of 2011, Microsoft made a series of changes at the top of the team running Windows Azure, its cloud.

“A large group of new people came into the Azure team,” general manager Bill Hilf said at a Microsoft cloud event in London last week. “Satya Nadella came over, Scott [Guthrie] came over, I came over at the same time.”

Nadella is now president of server and tools, while corporate vice president Guthrie, co-inventor of ASP.NET, moved from his job running .NET technology.

The executive shuffle paved the way for an epiphany over the state of Windows Azure and ushered in a period of big changes for Redmond's cloud, Guthrie told The Reg in London during his trip last week for a couple of Windows Azure events.

“We did an app building exercise about a year ago, my second or third week in the job, where we took all the 65 top leaders in the organisation and we went to a hotel and spent all day building on Azure," said Guthrie.

"We split up everyone into teams, bought a credit card for each team, and we said: ‘You need to sign up for a new account on Azure and build an app today.’"

“It was an eye-opening experience. About a third of the people weren’t able to actually sign up successfully, which was kind of embarrassing. We had billing problems, the SMS channel didn’t always work, the documentation was hard, it was hard to install stuff.

“We used that [experience] to catalyze and said: 'OK, how do we turn this into an awesome experience?' We came up with a plan in about four to five weeks and then executed.”

The changes were fundamental. Azure now offers Amazon-like Infrastructure as a Service (IaaS). Previously, Azure virtual machines (VMs) were always stateless. Applications could write to the local drive or registry, but those changes could revert at any time.

The new Azure supports durable VMs alongside the old model. It also has a new admin portal based mostly on HTML rather than Silverlight; new command line tools; a new hosted website offering which starts from free; new virtual networking that lets you connect Azure to your on-premise network; new SDKs for .NET, Node.js, PHP, Java and Python; and performance features including a distributed cache and solid-state (SSD) storage.

What was required to enable stateful VMs?

“A lot of the work comes down to storage,” said Guthrie. “Making a VM work is relatively easy. Making it work reliably is hard. We’ve spent a lot of time on the storage system, architecting it so you could run VM disks and VM images off our storage system, which gives us much more scale, much more reliability, much more consistent performance.

“There was a lot of work at the networking layer. VMs want to be able to use UDP in addition to TCP. It was a pretty massive effort that consumed a lot of the last year. The result is an environment where you can literally stand up a VM and install anything you want in it.”

Azure supports Linux as well as Windows and multiple platforms including Java, PHP, Python and more. Why the proliferation?

“In a cloud environment, especially for enterprise customers, you don’t typically category shop," says Guthrie. "You’re not going to buy your load balancer from cloud vendor A, and VMs from cloud vendor B. Instead you are going to go to a vendor shop and pick a cloud platform, and run all your infrastructure in it. We can now be the vendor that someone bets on for the cloud.

“Office 365 and Azure run in the same data centres, so traffic between them is fast and secure. We can run any workload that an enterprise has: whether it is big data, whether it is Java app server, whether it is .NET, email, SharePoint – we got it.”

Azure supports resiliency – through availability sets that run on separate hardware in Microsoft’s data centres – and scaling, through a load-balancing service to which you can add VMs. Elasticity is not yet fully automatic, however.

“You could use our dashboard or you could use our command-line console app to spin-up or spin-down instances," says Guthrie. "We also have something called WASABI [Windows Autoscaling Application Block], which is a pre-built set of scripts that does that automatically. We support that with a pre-packaged project. Or you can just write your own... Long term you’ll see us add – directly in the portal – the ability to set up step functions based on load.”

El Reg asked Guthrie how Azure storage uses SSDs.

“It’s for journaling. It’s not so much storing your bits; it’s making sure that read and write operations are really smooth and fast. The biggest benefit is consistency. Writing an app to handle multi-second variance is hard. We try to have our standard deviation be low.”

Amazon price beater

Is Microsoft aiming to be price-competitive with Amazon? Guthrie prevaricated a little. “Our retail hourly prices I think are the same as Amazon’s. We are looking to be cost effective. More than price though, it’s really value of service. I don’t typically run into people saying cost is the biggest barrier to cloud, and those people include both Amazon and Azure customers. It’s more that it all fits together, there’s one REST API to manage it, you can use System Center, you can use a web portal, you can use any language. We’d like to be the Mercedes of the cloud business, as opposed to the cheapest.”

What are the implications for Microsoft’s partners as the company takes on more of its own cloud hosting?

“There is plenty of opportunity in the market for both of us,” Guthrie insists. “We love the cloud, we love the server business. We make most of our money in the server business. The approach that we’re taking with Azure is that we want the two to work together.”

It's a good line, but it is hard to see how Microsoft can avoid cannibalising its own business. Then again, from Microsoft’s point of view, better to cannibalise that business than see it go to Amazon.

The Windows Azure Service Dashboard reported Limitations on Compute and Storage Accounts for New Users of the North Central or South Central US Data Centers in late May 2012:

North Central US and South Central US regions are no longer accepting Compute or Storage deployments for new customers. Existing customers as of June 24th (for North Central US) and May 23rd (for South Central US) are not impacted. All other services remain available for deployment, and new regions "West US" and "East US" are now available to all customers with the full range of Windows Azure Services.

Editor’s Note: This post was updated on May 29, 2012 to reflect availability of SQL Azure in the “West US” Region.

People’s ears usually perk up when they hear Windows Azure uses more server compute capacity than was used on the planet in 1999. We are excited and humbled by the number of new customers signing up for Windows Azure each week and the growth from existing customers who continue to expand their usage. Given the needs of both new and existing customers, we continue to add capacity to existing datacenters and expand our global footprint to new locations across the globe.

To anticipate the capacity needs of existing customers, we closely monitor our datacenter capacity trends. To ensure customers can grow their usage in datacenters in which they are already deployed, datacenters that hit certain thresholds are removed as options for new customers. Today, we are removing compute and storage services as options for new customers in the South Central US region. Existing customers already deployed into South Central are not impacted. SQL Azure, Service Bus, Caching, and Access Control remain available in South Central to new customers.

As we announced in a recent blog post, two new US datacenter options ("West US" and "East US") are available to Windows Azure customers. Today we are announcing the availability of SQL Azure in the "East US" and "West US" Regions to complement existing compute and storage services.

We appreciate the incredible interest our customers are showing in Windows Azure, and will communicate future news around our growing footprint of global datacenters as new options come online. As always, the best way to try Windows Azure is with the free 90-day trial.

Mark Brown (@markjbrown) of the Windows Azure Team described in a 6/28/2012 message an Update to Storage/Transactions benefits for Azure Insiders (Azure Pass) and MSDN subscribers, as well as clarified restrictions on creating new Cloud Services in Microsoft’s North Central or South Central US data centers:

So all customers will now get the following, depending on their offer:

| Offer | Storage | Storage Transactions |
| --- | --- | --- |
| 90-day Free Trial / Azure Pass | 35GB | 50M |
| MSDN Professional / Cloud Essentials | 35GB | 50M |
| MSDN Premium | 40GB | 75M |
| MSDN Ultimate / BizSpark | 45GB | 100M |

Customers with an existing presence in the North Central or South Central US datacenters can add services to a subscription that already includes use of North/South Central, or create new subscriptions and deploy to North Central.

I would have appreciated a bump in the number of free hours for Cloud Services, also.

In this tutorial we are going to see how to programmatically connect to Windows Azure Media Services using the Windows Azure Media Services SDK for .NET. In our earlier articles we saw what Windows Azure Media Services is and the steps to configure a Windows Azure Media Services account. Now we will write some code using the Visual Studio 2010 IDE, connect to Windows Azure Media Services, and create a cloud context, which is the key object holding all the necessary information about the entities used with Media Services from an application-development perspective.

Open the Visual Studio 2010 IDE and create a new Windows Application or WPF project with a valid project name, which will be used throughout this series to explore the Windows Azure Media Services core features one by one, as shown in the screen below.

Now let us design the page with the controls that are basically required to connect to Windows Azure Media Services using the CloudMediaContext class. This server context provides complete access to all the entities required to work with media objects such as assets, files, jobs, and tasks. Once we have designed our screen, it looks like the one below.

The next step is to add the Media Services reference. The reference DLL (Microsoft.WindowsAzure.MediaServices.Client.dll) is available in the SDK installation location in the development environment, i.e. C:\Program Files (x86)\Microsoft SDKs\Windows Azure Media Services\Services\v1.0, as shown in the screen below.

The next step is to add an App.config file where we provide the Account Name and Account Key as configuration values that can be changed later as needed, as shown in the screen below.
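A minimal App.config of the kind described might look like the following; the key names (`accountName`, `accountKey`) and placeholder values are our own choice, not mandated by the SDK:

```xml
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <!-- Values come from the Media Services account setup; replace the placeholders. -->
    <add key="accountName" value="YOUR_MEDIA_SERVICES_ACCOUNT_NAME" />
    <add key="accountKey" value="YOUR_MEDIA_SERVICES_ACCOUNT_KEY" />
  </appSettings>
</configuration>
```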

Now, in the code-behind, declare private variables and read the account name and account key, which are used when creating an instance of CloudMediaContext, as shown in the code below. The CloudMediaContext class provides complete access to the entities the application can use to manipulate media objects as needed.

Code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;
using System.Configuration;
using Microsoft.WindowsAzure.MediaServices.Client;

namespace MediaServicesDemo
{
    public partial class MainWindow : Window  // window and control names are illustrative
    {
        private readonly string accountName = ConfigurationManager.AppSettings["accountName"];
        private readonly string accountKey = ConfigurationManager.AppSettings["accountKey"];
        private CloudMediaContext context;

        private void btnConnect_Click(object sender, RoutedEventArgs e)
        {
            // CloudMediaContext is the entry point to assets, files, jobs and tasks.
            context = new CloudMediaContext(accountName, accountKey);
        }
    }
}

Now that we are done with the code, we can build and run the project; the application runs without errors. We will not see any particular output, since we are not surfacing any details, but we have created a CloudMediaContext holding all the entities that can be utilized as required; the list of entities can be inspected in debugging mode, as shown in the screen below.

So in this tutorial we have seen how to programmatically connect to Windows Azure Media Services and create a context holding the entities used to manipulate media objects as required.

In our earlier article we saw what Windows Azure Media Services is and the different terminologies and operations involved. In this tutorial we are going to see how to set up a Windows Azure Media Services Preview account so we can start developing applications with the Visual Studio 2010 IDE and deploy them to the Azure environment. To get a clear idea of Windows Azure Media Services, I suggest first reading the article that describes the basics of Media Services step by step: “Getting Started with Windows Azure Media Services – #Meet Azure Edition”. [Link to earlier article added.]

Windows Azure Media Services is currently in Preview. In order to use the environment, we need to complete a few steps that provide us with the required keys, chiefly the Account Key, which is needed to start using Media Services from code.

Step 1 – Sign in to the Windows Azure Portal at http://Windows.Azure.com with a valid subscription and register for the Media Services preview, which is available under the Preview Features section of the Account tab as shown in the screen below. Since my subscription is already registered, it shows You are active; otherwise a default Try it now message appears.

Step 2 – Clicking Try it Now posts the request to the Windows Azure team, and we can see the request is queued. Once we get the approval mail and the status shows You are Active, we can start using Media Services from code. Next, check whether all the prerequisites are installed correctly (see the prerequisites specified in the article “Getting Started with Windows Azure Media Services – #Meet Azure Edition”). Install any missing software in order to avoid interruptions while setting up the Windows Azure Media Services account.

Step 3 – Create a Windows Azure Storage account (basically used to store the media content) in one of the regions where Windows Azure Media Services is available: West Europe, Southeast Asia, East Asia, North Europe, West US, and East US. For the steps to create a storage account, refer to the article “Windows Azure – Creating New Storage Account”. Once the storage account is created, it is listed as shown in the screen below.

Step 4 – Install the Windows Azure Media Services SDK, which can be downloaded from the link “Windows Azure Media Services SDK 1.0”. If the SDK is already installed as part of the prerequisites, skip this step and proceed to the next one.

Step 5 – Open Windows PowerShell v2.0 or greater (a Windows 8 machine has PowerShell v3.0 built in). Open the PowerShell ISE in administrator mode by right-clicking it and selecting “Run as Administrator”, which opens PowerShell as an administrator as shown in the screen below.

Step 6 – Change the directory to the path where we installed the Windows Azure Media Services SDK, using the script below. Note: keep the path in quotes so that you do not get an error.

Step 7 – This step is very important: we activate the Windows Azure Media Services account using the script below. The script first creates a Management Certificate internally and uploads it to the media server once executed. If the script runs correctly, a new browser window opens with the steps on how to proceed after installation, as shown in the screens below.

Once this is done correctly, a file (PublishSettings) is prompted for download to the local machine. This file holds the information required when retrieving the account key, so save it to the local machine; it contains the management certificate details as shown in the screen below.

Step 8 – Next, get the endpoint information that the service points to. Use the script below (which includes the path to the downloaded Management Certificate) and execute it in Windows PowerShell as shown in the screen below. On successful execution we get the management service endpoint, the certificate thumbprint, and the subscription ID of the Windows Azure account.

Step 9 – Next, check in which region we are going to create the account. Windows Azure Media Services is currently available in only a few regions, and Microsoft keeps working to add availability zones one by one. To get the list, execute the script below; select one region from the results and keep it aside, as shown in the screen below.

Step 10 – This step is important: before giving a Media Services account name for our application, we first need to check whether the name is available. Since account names are global, someone in a different region may already have used the name. To check availability, run the script below with your preferred name in the string, as shown in the script and screen below. The result is True or False based on availability.

Step 11 – Once we see the account name is available, we are ready with all the required information. Execute the script below, which creates the account with Windows Azure Media Services, as shown in the screen below.

Step 12 – If the information is provided correctly and the scripts execute without errors, the account is created and we can see the Account ID and subscription details as shown in the screen below.


Step 13 – Finally, retrieve the Account Key, which is what we will use to connect to Media Services from the code-behind. To get the Media Services account key, run the script below as shown in the screen below. (Both scripts need to be executed.)

Now we are done with all the necessary steps to register the account, and we have also obtained the Account Name and Account Key used to connect to Windows Azure Media Services from code. We will use these details to connect to the media server from code in our next tutorial. Until then, Happy Programming!!!

Much of my blog deals with Windows Azure SQL Database (the database previously known as "SQL Azure"), and especially Federations. In that light, I want to call out a great video of a presentation on Federations by Cihan Biyikoglu at the 2012 US TechEd conference in Orlando. Cihan's blog is also a great source of information about Federations.

Federations illustrate an interesting paradox concerning Windows Azure performance. Individual operations in Windows Azure are often slower than the corresponding operations on an on-premises server, for a number of reasons, including latency and automated failover (nodes, including persistent data, are generally replicated a number of times, which takes time; and if a node goes down and fails over to a secondary node, that also takes time). But this can be more than compensated for by scaling out resources, and Federations are an outstanding example of this. The result can be a massive parallelization of work, which can greatly increase performance.

While database scale-out is the commonest example, the same techniques can be applied to other Windows Azure resources.
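The payoff of this fan-out pattern is easy to sketch: if each shard answers its part of a query in parallel, elapsed time is roughly that of the slowest shard rather than the sum of all of them. A minimal illustration in Python, with simulated shards standing in for federation members (this is not the Federations API itself):

```python
from concurrent.futures import ThreadPoolExecutor

def query_shard(shard):
    # Stand-in for a per-federation-member query; each shard
    # returns a partial aggregate (here, a simple sum).
    return sum(shard)

shards = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]

# Fan out to all members in parallel, then combine the partial results.
with ThreadPoolExecutor(max_workers=len(shards)) as pool:
    total = sum(pool.map(query_shard, shards))

print(total)  # 45
```

The same fan-out/combine shape applies whether the parallel resource is a federated database, blob storage, or worker instances.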

Another important aspect of Windows Azure performance is the need to test your architectural design by creating a proof-of-concept application early on. Windows Azure contains a lot of "moving parts", and it is in a state of rapid evolution and development. Your exact combination of parts has not necessarily been tested by Microsoft (a Computer Science class in Combinatorics will clarify why; hint: look at the factorial function...), and you should validate your architecture early in the development cycle, rather than trying to fix performance problems when about to deploy your application, or even worse, after it's in production (this has of course never happened to me :-)).

Finally, Eric Lippert has a video where he talks briefly about performance as an engineering discipline. Actually most of the video is about C#'s new features and also about Roslyn... But his remarks about performance are worth hearing.

I want to give a shout-out to another topic in the Best Practices series, that was published along with the Performance and Trouble-Shooting ones: Windows Azure Security Guidance.

Finally, as always I welcome feedback! Windows Azure performance is a vast area, and there are undoubtedly areas that could use more detail. And it is a continuously evolving platform, so new issues are likely to appear.

Gartner just released their 2012 Magic Quadrant for x86 Server Virtualization Infrastructure*, and I am very happy to report that Microsoft is listed as a leader. You can download the full report at the link above, and I hope you read the research in detail. The report reviews Hyper-V in Windows Server 2008 R2 – keep in mind, the virtualization and private cloud capabilities in Windows Server 2012 are even better! You don’t have to wait for the general availability of Windows Server 2012 later this year. You can download the release candidate right now.

Last year marked a significant milestone. There are now more virtualized operating systems installed globally than there are non-virtual instances.

But this is just the beginning. We know that IT leaders want to move beyond virtualization and need the flexibility to use capacity from multiple clouds. So at our MMS conference in April, I announced the availability of Microsoft System Center 2012, a solution that lets you manage your applications wherever they are, across physical, virtual, private cloud and public cloud environments. Together, System Center 2012 and Windows Server are optimized to help businesses explore cloud computing easily and more affordably, making cloud computing approachable for almost all organizations. At MMS, one of my favorite demos showed how System Center 2012 can set up a basic private cloud infrastructure using existing servers in less than a minute. That’s fast.

I encourage you to read this article by our Director of Product Marketing Edwin Yuen, which will go into even more detail on reasons to choose Hyper-V for your virtualization and private cloud infrastructure.

*Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Pricing for virtual machine resources

Google Compute Engine currently offers the following 4 machine types. We will be offering additional configurations in the future including smaller types to help developers get started easily, as well as larger types to support more powerful scaling of applications.

| Configuration | Virtual Cores | Memory | GCEU* | Local Disk | Price/Hr | $/GCEU/Hr |
| --- | --- | --- | --- | --- | --- | --- |
| n1-standard-1-d | 1 | 3.75GB ** | 2.75 | 420GB ** | $0.145 | 0.053 |
| n1-standard-2-d | 2 | 7.5GB | 5.5 | 870GB | $0.29 | 0.053 |
| n1-standard-4-d | 4 | 15GB | 11 | 1770GB | $0.58 | 0.053 |
| n1-standard-8-d | 8 | 30GB | 22 | 2 x 1770GB | $1.16 | 0.053 |

* GCEU is Google Compute Engine Unit -- a measure of computational power of our instances based on industry benchmarks; review the GCEU definition for more information
** 1GB is defined as 2^30 bytes …
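The $/GCEU/Hr column is simply Price/Hr divided by the GCEU rating, and the table is internally consistent on that point; a quick check in Python using the published figures:

```python
# Price/Hr and GCEU values taken from the pricing table above.
machine_types = {
    "n1-standard-1-d": (0.145, 2.75),
    "n1-standard-2-d": (0.29, 5.5),
    "n1-standard-4-d": (0.58, 11),
    "n1-standard-8-d": (1.16, 22),
}
for name, (price_hr, gceus) in machine_types.items():
    # Every type works out to the same unit price.
    print(name, round(price_hr / gceus, 3))  # each prints 0.053
```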

GCEU (Google Compute Engine Unit), or GQ for short, is a unit of CPU capacity that we use to describe the compute power of our instance types. We chose 2.75 GQ’s to represent the minimum power of one logical core (a hardware hyper-thread) on our Sandy Bridge platform.

Today we’ll be demoing RightScale managing a deployment on Google Compute Engine during the launch presentation at Google I/O at 1:30pm (PT). With the release of Google Compute Engine, the year 2012 is becoming a turning point in the evolution of cloud computing. There are now multiple public megaclouds on the market, and public cloud computing is set to become the dominant form of business computing (mobile arguably becoming the dominant form of consumer computing). I’ll come back to why I am convinced of this at the end, but first let’s focus on Google Compute Engine.

We’ve been working for months with the team at Google building out Google Compute Engine to ensure that everything is ready for our customers to leverage it. We realized very quickly that Google Compute Engine is an all-out effort to build a world-class cloud on one of the most awesome global computing infrastructures.

It also became clear that Google Compute Engine is comparable to the most successful infrastructure clouds in the market but not a clone in any way. The team at Google has leveraged the depths of Google’s engineering treasure trove to bring us their take on how a cloud platform ought to look. Yes, this means that Google Compute Engine is not API compatible with any other cloud. Yes, it also means that resources in Google Compute Engine behave slightly differently from other clouds. However, to RightScale users this will not be an obstacle as our platform takes care of the API differences and our ServerTemplates accept and even leverage the more important resource differences. We actually welcome these differences. …

And continues with a 00:04:02 video featuring Thorsten and a signup offer for a private beta:

Overall, Google Compute Engine has been a pleasure to work with, which is perhaps best summed up by RightScale customer Joe Emison who says, “[we] have found the performance of the Google Compute Engine VMs to be the most consistent of any other virtualized architecture we’ve used.” Joe is VP of Research and Development at BuildFax and a long-time RightScale customer who helped us test drive Google Compute Engine. We now look forward to onboarding many more customers, and invite you to sign up for the Google Compute Engine with RightScale private beta. …

The post contains some interesting commentary about Google’s new IaaS offering. I’ve signed up for the beta.

But that wasn’t plan A. Google was way ahead of everybody with a PaaS solution, Google App Engine, which was the embodiment of “forward compatibility” (rather than “backward compatibility”). I’m pretty sure that the plan, when they launched GAE in 2008, didn’t include “and in 2012 we’ll start offering raw VMs”. But GAE (and PaaS in general), while it made some inroads, failed to generate the level of adoption that many of us expected. Google smartly understood that they had to adjust.

“2012 will be the year of PaaS” returns 2,510 search results on Google, while “2012 will be the year of IaaS” returns only 2 results, both of which relate to a quote by Randy Bias which actually expresses quite a different feeling when read in full: “2012 will be the year of IaaS cloud failures”. We all got it wrong about the inexorable rise of PaaS in 2012.

But saying that, in 2012, IaaS still dominates PaaS, while not wrong, is an oversimplification.

At a more fine-grained level, Google Compute Engine is just another proof that the distinction between IaaS and PaaS was always artificial. The idea that you deploy your applications either at the IaaS or at the PaaS level was a fallacy. There is a continuum of application services, including VMs, various forms of storage, various levels of routing, various flavors of code hosting, various API-centric utility functions, etc. You can call one end of the spectrum “IaaS” and the other end “PaaS”, but most Cloud applications live in the continuum, not at either end. Amazon started from the left and moved to the right, Google is doing the opposite. Amazon’s initial approach was more successful at generating adoption. But it’s still early in the game.

As a side note, this is going to be a challenge for the Cloud Foundry ecosystem. To play in that league, Cloud Foundry has to either find a way to cover the full IaaS-to-PaaS continuum or it needs to efficiently integrate with more IaaS-centric Cloud frameworks. That will be a technical challenge, and also a political one. Or Cloud Foundry needs to define a separate space for itself. For example in Clouds which are centered around a strong SaaS offering and mainly work at higher levels of abstraction.

A few more thoughts:

If people still had lingering doubts about whether Google is serious about being a Cloud provider, the addition of Google Compute Engine (and, earlier, Google Cloud Storage) should put those to rest.

Here comes yet-another-IaaS API. And potentially a major one.

It’s quite a testament to what Linux has achieved that Google Compute Engine is Linux-only and nobody even bats an eye.

In the end, this may well turn into a battle of marketplaces more than a battle of Cloud environment. Just like in mobile.

I batted my eye when I learned that Google Compute Engine was Linux-only. That cuts the number of potential enterprise users by a substantial percentage.

Today at Google I/O we were pleased to announce a new service, Google Compute Engine, to provide general purpose virtual machines (VMs) as part of our expanding set of cloud services. Google App Engine has been at the heart of Google’s cloud offerings since our launch in 2008, and we’re excited to begin providing developers more flexible, generalized VMs to complement our fully-managed, autoscaling environment.

App Engine has been growing rapidly since leaving preview, and we’re excited about the benefits that Google Compute Engine brings to developers who want to combine the advantages of App Engine’s easy-to-use, scalable, managed platform with the flexibility of VMs.

If you are interested in using VMs with your App Engine applications in the future, let us know by signing up here.

Signed up, but I’m not sanguine about my chances of being onboarded quickly. I also bought a 16GB Nexus 7 out of curiosity and to compare with the forthcoming Windows Surface tablet when it becomes available for purchase.

Stay tuned to the OakLeaf blog for more posts about the Google Cloud Platform and Google Compute Engine today and over the weekend.

Each release is special in its own way, but this time we can’t help but be extra proud. From San Francisco to Sydney we’ve taken an extra week to pack in some of our most widely requested features and prepare a host of talks and announcements for Google I/O.

We’ll be bringing you more information about this release and the future of Google App Engine platform, as well as some exciting announcements from our I/O YouTube live stream. We’ll also be posting highlights from I/O on our blog and Google+, so tune in here for updates the rest of this week.

Server Name Indication (SNI):

SNI allows multiple domains to share the same IP address while still allowing a separate certificate for each domain. SNI is supported by the majority of modern web browsers, and is priced at $9/month, which includes the serving of 5 certificates.

Virtual IP (VIP):

A dedicated IP address is assigned to you for use with your applications. VIP is supported by all SSL/TLS compatible web clients and each VIP can serve a single hostname, wildcard or multi domain certificate. A VIP will cost $99/month.

Google App Engine’s additional location - the EU
For the past four years, App Engine applications have been served from North America. However, we understand that every ms of latency counts so we’ve turned up an App Engine cluster in the European Union so that our developers with customers primarily in Europe can have confidence that their site will look as fast as they’ve designed it.

PageSpeed - Making the Google App Engine Powered Web Faster
At Google we’ve had an ongoing commitment to making the web faster and for almost a year the PageSpeed Service team has been enabling websites to optimize their static content for delivery to end users at lightning fast speed. Today we’re making this service available to our HRD applications with just a few clicks. Use of the PageSpeed Service is priced at $0.39 per GB of outgoing bandwidth, in addition to standard App Engine outgoing bandwidth price.

GeoPoint Support in Search
Our Search team deserved a break after releasing the Search API a month and a half ago, but instead they’ve worked hard to announce exciting improvements at Google I/O. You can now store latitude and longitude as a GeoPoint in a GeoField, and search documents by distance from that GeoPoint.
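Under the hood, "search documents by distance from a GeoPoint" boils down to a great-circle distance computation between coordinate pairs. A plain-Python sketch of that math (the haversine formula, not the App Engine Search API itself):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points,
    using a mean Earth radius of 6,371 km."""
    R = 6371000
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# One degree of latitude is roughly 111 km.
print(round(haversine_m(0, 0, 1, 0)))  # 111195
```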
Other Service Updates

Here are some other amazing updates we have this release:

Blob Migration Tool now Generally Available - Since the deprecation announcement for Master/Slave Datastore (M/S), we’ve been continually improving the experience for apps migrating from M/S to HRD. We’re happy to announce that the Blob Migration tool is now generally available, so you can migrate both your Blobstore and Datastore data in one easy step.

Application Code Limits Raised from 150MB/version to 1 GB/application - For those of you biting your fingernails every time you update your application, wondering if today will be the day you finally reach the 150MB application version limit, fret not! We’ve updated the application size limit to be 1GB total for all versions of your application. You can check your app’s Admin Console to see the total size of all your application versions. In the future, you’ll be able to purchase more quota to store additional files.

Logs API Updates - Paid applications will now be able to specify a logs retention time frame of up to 1 year for their application logs, provided that the logs storage size specified is sufficient for that period. Additionally, we’re introducing some Logs API billing changes so that you can pay to read application logs after the first 100MB. Reading from the Logs API will cost $0.12/gigabyte for additional data over the first 100MB.
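The billing arithmetic for log reads is straightforward; a small Python sketch of it (the function name and parameter defaults are our own, derived from the prices quoted above):

```python
def logs_read_cost(mb_read, free_mb=100, price_per_gb=0.12):
    """Cost of reading application logs via the Logs API:
    the first 100 MB is free, then $0.12 per GB of additional data."""
    billable_gb = max(0, mb_read - free_mb) / 1024.0
    return billable_gb * price_per_gb

print(logs_read_cost(100))   # 0.0  -- entirely within the free tier
print(logs_read_cost(1124))  # 0.12 -- 1 GB beyond the free 100 MB
```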

Go SDK for Windows - We’ve published an experimental SDK for Windows for the Go runtime.

Don’t think these are all the new features we’ve introduced with 1.7.0; we’ve got so much more than just the highlights above. Make your way to our release notes for Java, Python, and Go straightaway to read about 1.7.0. If you have any feedback, we’d love to hear it in our Google Group. We and the whole Stack Overflow community for App Engine have been working hard to answer all your technical questions on the App Engine platform.

Google is holding their annual I/O Conference at San Francisco’s Moscone Hall on 6/27 through 6/29/2012. Here’s the schedule of sessions related to App Engine in the Cloud Platform track:

Rumor has it that Google will announce its entry into the Infrastructure as a Service (IaaS) cloud-provider sweepstakes during Wednesday’s keynote, which will be streamed live starting at 9:30 AM on 6/27/2012.

Most of the noise coming out of Google I/O this week will be around the company’s long-percolating infrastructure as a service plan. But many developers who have banked on Google App Engine, the company’s platform as a service, will be looking for other things.

Microsoft has always understood that to win a platform war, you must engage the developer community. More than engage: Energize. Empower. Aggressive support for developers, through great tools, outstanding technical guidance and marketing assistance, propelled Windows past OS/2 so many years ago. It’s how Microsoft has remained the desktop leader for so many years.

But the world has changed, and in important new spaces (smartphones, tablets and the cloud), Microsoft clearly is lagging. Not only are consumers voting for non-Microsoft products, but developers are as well.

Take phones. Apple and Google, with iOS and Android, have raced out of the starting gate and left Microsoft plodding, well up the track and way off the pace, in smartphones and tablets. Microsoft will need a game-changer here to make headway, as the number of apps available on the other platforms far exceeds the number available for Windows Phone, which has finally reached 100,000. But it’s more than quantity: many popular apps simply aren’t on Windows Phone 7.5.

Developers clearly are elsewhere. Meanwhile, Amazon and Heroku, among others, are off and running into the cloud, with applications written for those platforms already deployed and working. [*]

Microsoft, of course, has responded. At the recent TechEd conference, high-level executives gave presentations on Windows Azure and Windows Server 2012, claiming the two make up what the company is calling “the cloud OS.” Windows 8, the new operating system that brings Metro app styling to the desktop and tablet, was demoed on a Samsung device and looked as if it will be competitive in the tablet arena.

(Since then, of course, Microsoft unveiled its own tablet, called Surface, marking the company’s move into hardware as well as software. Surprisingly, it didn’t make the announcement four days earlier, when more than 10,000 dedicated Microsoft administrators and developers were gathered in Orlando, waiting for some kind of direction.)

What struck us most about TechEd was the lack of meat for developers, and we’re not just talking about the assembly-line lunches.

In an interview with Visual Studio honcho Jason Zander after the second-day keynote, we were told that among the biggest takeaways for developers was the news that LightSwitch, a RAD tool for Web development, now supports HTML5.

That’s it?

We even got a chuckle from the fellow who walked up to the SD Times booth at TechEd, noticed the June issue of the magazine, and shouted delightedly, “At last! I’ve found something here for developers.”

Microsoft should not lose sight of “who brung ‘em” to the big dance. If developers aren’t writing applications for the company’s phones, tablets, laptops and desktops, end users will have nothing to use, and whenever possible they will pick up other devices.

The software giant still has an opportunity to compete. The Surface device has generated a good deal of early buzz and even drove the stock price up for a day.

But will that displace the iPad?

The Windows Phone situation is murkier. To win against iOS and Android (and, we suppose, BlackBerry 10), Microsoft will need the ISV Army working overtime to create the compelling apps that will power Metro tiles and take advantage of all that back-end cloud connectivity displayed at TechEd.

Finally: Two big unknowns are Windows 8 and Internet Explorer 10. Will developers embrace the new version of Windows? And will they customize their websites for Microsoft’s latest browser? Much depends on whether Microsoft can energize them. Based on what we saw at TechEd, we are not optimistic. …

Anybody who was there knows the demo in the talk I gave timed out. So I’ve done it again and added it to the talk using the magic of video editing!

With all the excitement around the IaaS features such as Virtual Machines and Virtual Network, and then the added cool of Azure Websites, it seemed to me that the very basis of Windows Azure’s success so far – as a PaaS platform – was left behind.

In this talk, I show how to get started writing apps, using the local compute emulator and then deploying them to staging and eventually on to production. I use VS2012/Windows 8, but the principles are the same for VS2010 and Windows 7. I show how to code up a multi-instance web role and worker role app that uses blob storage and load-balances work to the back end using queues.
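The queue-based load-balancing pattern the talk describes (a front-end web role enqueues work items and several worker-role instances drain a shared queue) can be sketched in miniature. The sketch below uses Python's standard-library queue and threads as stand-ins for Azure queue storage and role instances; the names and message contents are illustrative assumptions, not taken from the talk:

```python
import queue
import threading

# Stand-in for an Azure storage queue shared by all role instances.
work_queue = queue.Queue()
results = []
results_lock = threading.Lock()

def web_role(num_items):
    """Front end: enqueue work items (e.g. names of blobs to process)."""
    for i in range(num_items):
        work_queue.put(f"image-{i}.png")

def worker_role():
    """Back end: pull messages until a poison pill signals shutdown."""
    while True:
        item = work_queue.get()
        if item is None:              # poison pill: no more work
            work_queue.task_done()
            break
        with results_lock:            # record the processed item
            results.append(f"processed {item}")
        work_queue.task_done()

# Three "worker role instances" share the load from one "web role".
workers = [threading.Thread(target=worker_role) for _ in range(3)]
for w in workers:
    w.start()

web_role(10)                          # enqueue 10 work items
for _ in workers:
    work_queue.put(None)              # one poison pill per worker
for w in workers:
    w.join()

print(len(results))                   # prints 10: every item was handled
```

Because every worker pulls from the same queue, work distributes itself to whichever instance is free, which is exactly why the queue acts as the load balancer between the front and back ends.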

Suddenly the clouds opened and angels sang….at least that was the sense I got at the TechEd Developer’s conference last week.

You see, one of the huge problems IT has been having is users breaking out their credit cards and using them to get hosted services from non-compliant vendors for critical line-of-business projects. The IT-approved and certified services are simply too difficult to get and set up, so the users just dance around them and go web shopping.

Well, Microsoft got IT’s attention when they launched their Azure Hybrid Cloud offering because it appears to not only be better certified than other services and more easily integrated, it is also easier to set up and use.

In fact one story I was told at the show was about a bunch of Linux folks working on a major collaborative project using this service and having it fully provisioned and running in under 15 minutes. I was also told I couldn’t write about it in detail because none of them wanted to get hate mail. But when hard-core Linux folks start to prefer a Microsoft service this becomes a “man bites dog” story and vastly more interesting.

The Service

Currently in beta, this service consists of Windows Azure Virtual Machines, Windows Azure Virtual Network, Windows Azure Web Sites, and – omg – that’s enough Azure already. It has a web front end, critical for a tool like this, and it covers a variety of popular tools like WordPress and a large number of platforms, including an impressive selection of Linux distributions. Not Red Hat, though Microsoft is trying to work out an agreement with them. (Red Hat and Microsoft haven’t historically been the best of buddies.)

Whether it is setting up a web site or extending an on-premises resource into the cloud because of emergency capacity issues, setup and use is amazingly easy. I say that because I haven’t done this kind of work for years and I found I could figure out most things very quickly (guys don’t like to ask questions and we have really short attention spans at my age). I’ve actually seen games that were harder to set up and play.

Now the first ten instances are free, though these are shared so performance could be iffy and eventually there may be a minor charge for this entry service (that hasn’t been worked out yet). And you can easily provision for extra resources and services (with a fee) if you need more. But for small projects, the kind that employees are currently using credit cards for, this could be ideal and certainly worth checking out.

There is an interesting case study on the Azure-based Harry Potter site, and if you are interested in security and compliance, the details for that are available. Some of the audit and certification standards that have been met include ISO/IEC 27001:2005 and SSAE 16/ISAE 3402 attestation, for example.

Wrapping Up: And Angels Sang

The positive reaction this service got was impressive; for a Microsoft event, it was almost as if the audience suddenly heard angels singing. As noted, this service is still in beta. But for a bunch of IT folks struggling to avoid major compliance problems associated with employees bypassing IT, the offering of an approved, secure, and easy-to-use service – one that works with their Microsoft infrastructure – must have been like words from heaven.

Surprisingly enough, a bunch of folks were actually similarly excited about Windows 8 on tablets, which would address similar concerns about out of control and non-compliant platforms and services flowing into companies.

This may have been the most exciting TechEd in years; who knew? I guess sometimes Christmas comes in June.

Rob is one of the most experienced and trusted computer industry analysts. I remember his work during my use of MS-DOS 2.0 after my company migrated to Wintel from Commodore CBMs as business computers.

Red Hat, Inc. (NYSE: RHT), the world’s leading provider of open source solutions, today announced it has expanded its Red Hat Enterprise Linux Developer Program with enhancements to its Developer Suite, including a new toolset for software developers worldwide. Through the Red Hat Enterprise Linux Developer Suite, Red Hat delivers the latest, stable open source developer tool versions at a faster cadence than that of Red Hat Enterprise Linux. Developers now have access to a robust suite of tools with synchronized availability on Red Hat Enterprise Linux and Red Hat OpenShift™, allowing developers to deploy applications freely to either environment.

“For Linux programmers, having ready access to the latest, stable development tools is key to taking advantage of new Linux advancements,” said Jim Totton, vice president and general manager, Platform Business Unit, Red Hat, Inc. “The Red Hat Enterprise Linux Developer Program makes it easy for developers to access industry-leading developer tools, instructional resources and an ecosystem of experts to help Linux programmers maximize productivity in building great Red Hat Enterprise Linux applications.”

Designed for many types of Linux developers, including Independent Software Vendors (ISVs), software solution providers, Systems Integrators (SIs), enterprise, and government software developers, the Red Hat Enterprise Linux Developer Suite enhances developer productivity and improves time to deployment by providing affordable access and updates to essential development tools. The latest, stable tooling can be used to develop applications on Red Hat Enterprise Linux whether on-premise or off-premise in physical, virtual and cloud deployments, and on OpenShift, the leading open Platform-as-a-Service (PaaS).

The Red Hat Enterprise Linux Developer Toolset is a collection of development tools for creating highly scalable applications. For these tools, which are delivered as part of the Developer Suite, Red Hat plans to accelerate the release cadence, delivering the latest, stable open source developer tool versions on a separate life cycle from Red Hat Enterprise Linux releases.

The first version of the Red Hat Enterprise Linux Developer Suite includes a toolset that makes developing Linux software applications faster and easier by allowing users to compile once and deploy to multiple versions of Red Hat Enterprise Linux. Using the developer toolset, software developers can now develop Linux applications using the latest C and C++ upstream tools. These tools include the latest GNU Compiler Collection (GCC 4.7) with support for C and C++; the latest version of the GNU Project Debugger (GDB 7.4) with improvements to aid the debugging of applications; and the GNU binutils collection of binary developer tools, version 2.22, for the creation and management of Linux applications.

“The velocity of development is as high today as it has ever been, which means that developers are putting a premium on a toolchain that is current from libraries to compiler,” said Stephen O’Grady, Principal Analyst with RedMonk. “With its expanded Red Hat Enterprise Linux Developer Program and toolset, Red Hat aims to provide developers with just that.”

David Linthicum (@DavidLinthicum) asserted “Up against overly complex services from Amazon.com, Microsoft, and Rackspace, Google could strike gold by simplifying” as a deck for his Google: The great hope for IaaS post of 6/26/2012 to InfoWorld’s Cloud Computing blog:

It's almost a certainty that Google will announce an enhanced IaaS offering at its developer conference this week in San Francisco. Most industry analysts, and yours truly, have been expecting this move -- and hoping it would happen. It will build on Google's existing PaaS product, Google App Engine, as well as Google storage services.

This is a sound decision on Google's part. It needs to provide an IaaS option that supports its popular PaaS offering to achieve parity with both Amazon Web Services and Microsoft's combo of Azure and Office 365. But it could have a benefit beyond the competitive landscape: It could help simplify the overly complex IaaS market.

If the Google offering is easier to use than existing IaaS wares, such as those provided by Amazon.com and Rackspace, Google may finally find a way to penetrate the large enterprise market that has largely pushed back on the use of public IaaS.

The horsepower of the Google brand name combined with an IaaS setup built more for line managers than developers could address a need that's unmet today: the ability to quickly provision storage and compute resources, migrate to those resources, and manage them in a turnkey fashion.

Certainly, IaaS offerings from Amazon.com and Rackspace are powerful. But they're daunting for nongeeks. A less technical IaaS from Google could help small businesses that have few IaaS offerings they can afford and actually handle. And it could help enterprises adopt IaaS more quickly by getting IaaS out of the IT project queue and into local business units' laps.

Google could provide the path of least resistance to IaaS for both small businesses and enterprises. When the formal news hits later this week, that should be the yardstick by which to measure the offering.

I’m not sanguine about the prospect of business unit managers (BUMs?) setting up and using virtual machines in anyone’s cloud, although Microsoft’s new Windows Azure Management Portal makes it easier than it once was.

The dual Web role application has been running in Microsoft's South Central US (San Antonio) data center since September 2009. I believe it is the oldest continuously running Windows Azure application.

About Me

I'm a Windows Azure Insider, a retired Windows Azure MVP, the principal developer for OakLeaf Systems and the author of 30+ books on Microsoft software. The books have more than 1.25 million English copies in print and have been translated into 20+ languages.

Full disclosure: I make part of my livelihood by writing about Microsoft products in books and for magazines. I regularly receive free evaluation software from Microsoft and press credentials for Microsoft Tech•Ed and PDC. I'm also a member of the Microsoft Partner Network.