Microsoft Enhances High Performance Computing for Windows Azure

Microsoft announced two high-performance computing (HPC) enhancements this month for its Windows Azure Cloud Services and Windows Server 2012 R2 products.

Windows Azure Cloud Services now offers two new virtual machine sizes, which Microsoft describes as "compute-intensive instances." The new A8 instance provides eight virtual cores with 56 GB of RAM, while the A9 instance supports 16 virtual cores with 112 GB of RAM. Both use a 32-gigabits-per-second InfiniBand network for "low-latency and high-throughput communication," according to Microsoft's announcement.

The new compute-intensive instances might be used by organizations that need to scale their high-performance computing clusters by tapping Windows Azure resources. They are designed for compute-intensive workloads, such as modeling and simulation tasks.

Right now, access to the new compute-intensive instances is limited. Microsoft didn't offer availability details, but it is gradually building out the capability across its Windows Azure service regions. The compute-intensive instances are also planned for use with Windows Azure Infrastructure Services, Microsoft's infrastructure-as-a-service offering for running virtual machines.

Microsoft also cautioned that the compute-intensive instances currently can't be used outside their own "affinity group." Affinity groups are a way to group Windows Azure services to improve their performance, such as by locating a service's data and code within the same cluster, according to an MSDN library description.
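In practice, an affinity group is a named grouping that a service references when it is created. Below is a minimal sketch of how a hosted service might be placed into an existing affinity group through the classic Service Management REST API's Create Hosted Service operation; the service name and the `hpc-affinity` group name are hypothetical, chosen only for illustration.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Request body for:
     POST https://management.core.windows.net/<subscription-id>/services/hostedservices -->
<CreateHostedService xmlns="http://schemas.microsoft.com/windowsazure">
  <ServiceName>hpc-cluster-svc</ServiceName>
  <!-- Label must be base64-encoded (here, "hpc-cluster-svc") -->
  <Label>aHBjLWNsdXN0ZXItc3Zj</Label>
  <!-- Referencing an existing affinity group (instead of a Location)
       co-locates this service with the other services in that group -->
  <AffinityGroup>hpc-affinity</AffinityGroup>
</CreateHostedService>
```

A service specifies either a `Location` or an `AffinityGroup` at creation time, which is why workloads pinned to an affinity group can't simply span resources outside it.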

The second HPC enhancement announced by Microsoft is the availability of HPC Pack 2012 R2. This free update adds support for some node roles on Windows Server 2012 R2 and Windows 8.1, and it adds Windows Azure integration for job scheduling and cluster management. Additionally, it supports running parallel Message Passing Interface (MPI) applications on Windows Azure via the new A8 and A9 compute-intensive instances, according to a TechNet library document.