The Private Cloud is Windows Server & System Center. Virtualisation alone is not cloud; P2V didn't change how anything was managed. Look at the traits of a cloud in the NIST definition: it's cloud-centric management layers that turn virtualisation into a cloud. That's what System Center 2012 and later do to virtualisation layers: they create clouds.

Microsoft's public cloud is Azure, which is powered by Hyper-V – a huge performance and scalability stress test for any hypervisor.

Hosting companies can also use Windows Azure Pack on Windows Server & System Center to create a cloud. That closes the loop … creating one consistent platform across public and private cloud: on-premises, in Microsoft's data centres, and at hosting partners. The customer can run their workloads anywhere.

Performance

The absolute best way to deploy Microsoft business applications is on Hyper-V: testing, support, validation, optimisation, and more testing. Microsoft tests everything on Hyper-V and Azure, every single day – 25,000 VMs are created every day to run automated unit tests of Windows Server.

In stress tests, Exchange running beyond its recommended scale still performed well within Exchange's requirements on Hyper-V. One stress test pushed over 1,000,000 IOPS from a single Hyper-V VM.

Storage

If you own a SAN, running WS2012 or newer is a no-brainer: you get TRIM, UNMAP, and ODX.

Storage QoS: you can cap the storage IOPS of a VM on a per-virtual-hard-disk basis.
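Conceptually, a per-disk IOPS cap behaves like a token bucket: the disk gets a fixed budget of I/Os per second, and anything beyond that is held back. Here's a toy Python sketch of that behaviour – the class and names are hypothetical, and this is not how Hyper-V implements Storage QoS internally:

```python
class IopsCap:
    """Toy token-bucket model of a per-disk IOPS cap (illustrative only;
    not Hyper-V's actual Storage QoS implementation)."""

    def __init__(self, max_iops):
        self.max_iops = max_iops
        self.tokens = max_iops  # replenished once per simulated second

    def tick(self):
        # A new second begins: the disk may issue up to max_iops again.
        self.tokens = self.max_iops

    def try_io(self):
        # Each I/O consumes one token; with no tokens left, the I/O waits.
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False


cap = IopsCap(max_iops=500)
# A VM tries to issue 2,000 I/Os in one second; only 500 get through.
issued = sum(1 for _ in range(2000) if cap.try_io())
print(issued)  # 500
```

The real feature is set per virtual hard disk, so a noisy tenant's data disk can be throttled without capping its OS disk.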

Linux has full Dynamic Memory support on WS2012 R2. We can now do file-system-consistent backups of Linux VMs without pausing them. Don't confuse this with VSS – Linux does not have VSS; it's done using a file system freeze.

You can use shared VHDX to create 100% virtual, production-ready guest clusters. The shared VHDX appears as a SAS-connected disk in the guest OSs. This is great for cloud service providers because it enables 100% self-service. Store the VHDX on shared storage, e.g. a CSV or an SMB 3.0 share, to support Live Migration … best practice is to place the guest cluster nodes on different hosts.

That's the end of Ben's part of the session.

Demystifying Storage Spaces and SOFS

I'll recommend you watch the session. Jeff uses a storage appliance to explain a file server with Storage Spaces. He'll probably do the same with a classic SAN and the Scale-Out File Server.

Matt McSpirit comes up.

He's using VMM to deploy a new file server cluster – no Failover Cluster Manager or Server Manager required. He can provision bare-metal cluster members, much like the process of deploying bare-metal Hyper-V hosts. The shares can be provisioned and managed through VMM, as in 2012 SP1, and you can add new bare-metal hosts. There is a configurable thin provisioning alert in the GUI – OpsMgr with the VMM management pack will alert on this too.

Back to Jeff.

Changes in Guest Clustering

Guest clustering is a problem for service providers because you previously needed to present a LUN to the customer. Hosters just can't do that because of the customisation required; the hoster can't pierce the hosting boundary, and the customer is left unhappy. With shared VHDX, the shared storage resides outside the hoster's boundary, in the tenant's domain. It's completely virtualised and perfect for self-service.

SDN

The real question should be: why deploy software-defined networking (Hyper-V Network Virtualization)? The primary answer is that you're a hosting company that wants multi-tenancy with abstracted networking, giving seamless network convergence for hybrid clouds. It should be a rare deployment in the private cloud – unless you're friggin' huge or in the acquisition business.

You can use OMI-based rack switches, e.g. from Arista, to allow VMM to configure your top-of-rack (TOR) switches.

Hyper-V Replica

HVR broadens your replication options … maybe you keep synchronous replication for some workloads if you've made that investment, but you can use HVR for everything else – it's hardware agnostic at both ends. Customers love it, and service providers should offer it as a service. But service providers also want to replicate.

Hyper-V Recovery Manager gives you automation and orchestration of VMM-managed HVR. You install a provider on the VMM servers in site A and site B, then enable replication in the VMM console. Replication goes directly from site A to site B. Hyper-V Recovery Manager gives you the tools to create, implement, and monitor failover plans.

You can now choose your replica interval, which defaults to every 5 minutes. The alternatives are 30 seconds and 15 minutes.

Scenario 1: a customer replicates from their primary hosts (A) to hosts (B) across the campus. There's lots of bandwidth on campus, so they use a 30-second replica interval. They then replicate from the primary DR site (B) to a secondary, remote DR site (C). That link has latency and bandwidth constraints, so they go for every 15 minutes.

Scenario 2: an SME replicates to a hosting company every 5 minutes. The hosting company then replicates to another location that is far away.
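When you chain replication hops like this, the worst-case data loss (RPO) at the end of the chain is roughly the sum of the intervals along the way, because a change made just after one cycle starts must wait a full interval at each hop. A toy calculation, ignoring transfer and apply time:

```python
# Rough worst-case RPO for chained Hyper-V Replica: a change written just
# after a replication cycle begins waits up to one full interval at each
# hop before it lands at the end of the chain. This ignores transfer and
# apply time, so real numbers will be somewhat higher.
def worst_case_rpo(intervals_seconds):
    return sum(intervals_seconds)

# Scenario 1: 30-second hop across campus, then a 15-minute hop to the
# remote DR site.
print(worst_case_rpo([30, 15 * 60]))       # 930 seconds (~15.5 minutes)

# Scenario 2: 5 minutes to the hoster, then a 15-minute long-distance hop.
print(worst_case_rpo([5 * 60, 15 * 60]))   # 1200 seconds (20 minutes)
```

The point of the tiered intervals is that the fast local hop protects you against host failure, while the slower remote hop protects against site loss at a tolerable bandwidth cost.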

Michael Leworthy comes up to demo HRM. We get a demo of the new HVR wizards, and then HRM is shown. HRM workflows allow you to add manual tasks, e.g. "turn on the generator".
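A failover plan of the kind demoed here is essentially an ordered list of automated and manual steps, where manual steps pause the workflow for operator confirmation. A toy Python sketch of that orchestration idea – every name here is hypothetical, and this is in no way the HRM API:

```python
# Toy model of a recovery plan: an ordered mix of automated steps and
# manual steps that wait for an operator before the plan continues.
# All names are hypothetical; this is not Hyper-V Recovery Manager code.
def run_plan(steps, confirm):
    completed = []
    for name, kind in steps:
        if kind == "manual":
            # Block until the operator signs off, e.g. the generator is on.
            confirm(name)
        # An automated step would invoke VMM/HVR here instead.
        completed.append(name)
    return completed


plan = [
    ("Fail over domain controllers", "auto"),
    ("Turn on the generator", "manual"),
    ("Fail over application VMs", "auto"),
]
completed = run_plan(plan, confirm=lambda task: print(f"Waiting on: {task}"))
print(completed)
```

The useful property is ordering: dependent workloads only fail over after their prerequisites (including the human ones) are done.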
