Virtual Machine Manager (VMM) is my favorite part of the System Center family; it’s not a beast to install (looking at you Configuration Manager), nor does it take hours to fine-tune until it’s actually useful (that’s you Operations Manager). VMM is relatively easy to get going, and it provides value even with a basic setup through VM templates, host and cluster management, and a library to store templates, scripts, and applications.

It’s also more capable with each release, and it now manages your storage fabric (both Microsoft’s software-defined storage and third-party SANs/NASes), your network fabric (virtual switches as well as physical top-of-rack switches through Open Management Infrastructure (OMI)), your compute fabric (VMware and Hyper-V), and of course all the VMs. VMM is a bit like Borg technology, taking control of everything; a more appropriate name would be Datacenter Manager.

When Microsoft released the Technical Preview of Windows Server (I’ll call it Server vNext), it also released Technical Preview versions of most of the System Center products. These are even more “alpha” code than Server vNext, and really the only scenarios that can be tested are compatibility with Server vNext and SQL 2014. The advantage of these early releases is that testers can actually have an impact on what features end up in the final version, unlike with normal “almost finished” previews. Word on the street is that Microsoft is listening to users more than ever.

Overall in System Center we know more about what’s not coming than we do about what is: App Controller is no more (replaced by the infinitely more capable Windows Azure Pack), and Server App-V (a part of VMM that no one used) and the IT GRC Process Management Pack (again, something nobody used) are also gone. Notable on the VMM front is that Citrix XenServer is no longer supported; only VMware vCenter 5.5 and 5.8 are (4.1 and 5.1 support bit the dust too). Clearly Microsoft sees the virtualization race as a two-horse game at this stage. Full release notes can be found here.

Storage Management (SM) API is getting a makeover in the next version of VMM and NAS devices are now supported natively (in 2012 R2 there was a special mode called Pass Through to allow VMM to manage NASs).

You can now use VMM to classify local storage in your Hyper-V hosts just like you can create service classes of SAN / SOFS storage today (bronze, silver, gold for example). VMM will also manage Shared Nothing storage, the new take on file server clusters that uses internal disks in each host instead of a shared SAS fabric. Configuration of storage tiering and deduplication can now be controlled from VMM; previously it had to be created on the file server cluster side.
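As a rough sketch of what classification looks like, here’s how storage classes are created with VMM’s existing PowerShell cmdlets in 2012 R2 (the names "Gold"/"Bronze" and the pool name are illustrative; local and Shared Nothing storage in vNext is expected to plug into the same classification model, though the exact syntax may change before release):

```powershell
# Create tiered storage classifications (tier names are illustrative)
$gold   = New-SCStorageClassification -Name "Gold" -Description "SSD-backed tier"
$bronze = New-SCStorageClassification -Name "Bronze" -Description "Capacity tier"

# Assign a classification to a storage pool VMM has discovered
# ("Pool01" is a placeholder for your own pool)
$pool = Get-SCStoragePool -Name "Pool01"
Set-SCStoragePool -StoragePool $pool -StorageClassification $gold
```

Once classified, templates and clouds can request capacity by class ("Gold") rather than by a specific array or pool.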

Adding Local Storage in VMM

I’m keen to see two blockbuster features that are coming to VMM but are not in the current TP: the central policy engine and GUI for managing Storage QoS and the GUI for the new Network Controller. The storage QoS policy engine is a clustered resource itself, so it can fail over between nodes.

Storage QoS policies can be set in tiers with parent-child policies for exception VMs; VMM will tag policies so that a Hyper-V host that’s moved from one cluster to another can pick up the right policies. Storage Replica, the new generic, block-level replication engine in Server vNext, can also be managed by VMM vNext.
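The VMM GUI for this isn’t in the TP yet, but the underlying policy engine lives on the Scale-Out File Server cluster, and a minimal sketch of the Server vNext cmdlets looks like the following (IOPS values and names are made up for illustration, and the syntax could still change before release):

```powershell
# On the Scale-Out File Server cluster: define a tiered policy
# (numbers are illustrative, not recommendations)
$gold = New-StorageQosPolicy -Name "Gold" -MinimumIops 200 -MaximumIops 2000

# On a Hyper-V host: bind a VM's virtual disk to the policy by ID
Get-VM -Name "Web01" |
    Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $gold.PolicyId
```

When VMM vNext picks this up, the expectation is that you’ll assign and tag these policies centrally instead of per host.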

The only new feature that’s been demonstrated for VMM is Consistent Device Naming (CDN). CDN has been available in the physical world for some time from different server hardware vendors. Basically, it means you can identify a particular NIC by looking at the back of the box (NIC1, 2, 3, etc.), and the same name will be assigned in the OS. This makes it possible to automate deployment; before CDN, there was no way to know which NIC name the OS would assign to a particular physical NIC.

CDN—Setting Network Name

Hyper-V in vNext takes CDN into the virtual world and allows you to define a name which is then passed into the VM so that scripts can assign the right settings to the right virtual NIC. Currently this is only supported on Generation 2 VMs, and it’s only applicable during the guest OS setup; you can’t change the NIC identifier afterwards. Once you have applied CDN through a VM template to a new VM, it can only run on vNext Hyper-V hosts; 2012 R2 and earlier don’t support it. You can use either a custom string or pass in the name of the virtual switch on your hosts.
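A sketch of what this looks like from PowerShell, using the device-naming syntax as it appears in later previews (the VM, switch, and adapter names are placeholders, and the parameter shape may differ in the current TP):

```powershell
# On the vNext Hyper-V host: add a vNIC to a Generation 2 VM and
# surface its name inside the guest via CDN
Add-VMNetworkAdapter -VMName "Web01" -SwitchName "Datacenter" `
    -Name "Backend" -DeviceNaming On

# Inside the guest during OS setup, a script can then find the
# right adapter by the name the host assigned
Get-NetAdapterAdvancedProperty -DisplayName "Hyper-V Network Adapter Name" |
    Where-Object DisplayValue -eq "Backend"
```

This is what lets a deployment script reliably put the management, storage, and tenant settings on the correct virtual NICs.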

Update Release 5 (UR5) for VMM 2012 R2, currently in public beta and due for final release in January 2015, contains an interesting new feature which will of course also make it into vNext: VMM management of SAN replication.

This is an extension to Azure Site Recovery that not only allows orchestration of Hyper-V Replica between two of your datacenters but also lets VMM manage the replication of data from one SAN to another across those datacenters. Currently eight partners have been named: EMC (VMAX, VNX, and VNXe), NetApp (FAS), HP (3PAR), Hitachi Data Systems (VSP), Fujitsu (ETERNUS), Dell (Compellent), Huawei (OceanStor), and IBM (XIV), with the first three supported in the beta of UR5.

You can do a test failover in a similar fashion to what Hyper-V Replica allows, but here SAN snapshots and VM cloning are used to create the test VM in the replica datacenter.

SAN replication also allows guest clusters (two or more VMs) connected to a SAN (via Virtual Fibre Channel or iSCSI) to fail over between your two datacenters in an orchestrated fashion.

There’s no doubt that VMM will continue to be a very important part of Microsoft’s Cloud OS vision, and I’m looking forward to more complete releases in the new year to be able to test the Storage QoS Controller and the Network Controller.
