SMB has been a huge topic since Windows Server 2012, and together with the concept of Converged Networking there was one very important feature missing: QoS (Quality of Service) for SMB traffic. With Windows Server 2012 R2 Microsoft addressed that and added a new feature called SMB Bandwidth Limits. SMB Bandwidth Limits allow you to separate different types of SMB traffic and limit them.

There are three default SMB traffic categories:

Default – Access to file servers for library storage, for example when System Center Virtual Machine Manager copies files to the Hyper-V server.

VirtualMachine – The traffic between the Hyper-V hosts and the storage of the Virtual Machines

LiveMigration – In Windows Server 2012 R2 Live Migration can make use of SMB, so you can also set a limit on Live Migration traffic.

For example you could limit the Default SMB traffic and the Live Migration traffic and leave the Virtual Machine Storage traffic unlimited.

Enable SMB Bandwidth Limit

To set this up you first have to enable the feature via Server Manager or Windows PowerShell.
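Here is a minimal PowerShell sketch of the scenario from above, limiting Default and Live Migration traffic and leaving Virtual Machine traffic unlimited (the 100 MB and 750 MB per second values are just placeholder numbers, not recommendations):

# Install the SMB Bandwidth Limit feature on the Hyper-V host (one time)
Install-WindowsFeature FS-SMBBW

# Limit Default and LiveMigration SMB traffic, leave VirtualMachine traffic unlimited
Set-SmbBandwidthLimit -Category Default -BytesPerSecond 100MB
Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 750MB

# Verify the configured limits
Get-SmbBandwidthLimit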

At the moment I spend a lot of time working with Hyper-V Network Virtualization, System Center Virtual Machine Manager and the new Network Virtualization Gateway. I am also creating some architecture design references for hosting providers which are going to use Hyper-V Network Virtualization and SMB as storage. If you are going for any kind of network virtualization (Hyper-V Network Virtualization with NVGRE, or VXLAN), you want to make sure you can offload the encapsulated traffic to the network adapter.

Well the great news here is that the Mellanox ConnectX-3 Pro not only offers RDMA (RoCE), which is used for SMB Direct, the adapter also offers hardware offloads for NVGRE and VXLAN encapsulated traffic. This is great and should improve the performance of Network Virtualization dramatically.

More information on the Mellanox ConnectX-3 Pro:

ConnectX-3 Pro 10/40/56GbE adapter cards with hardware offload engines to Overlay Networks (“Tunneling”), provide the highest performing and most flexible interconnect solution for PCI Express Gen3 servers used in public and private clouds, enterprise data centers, and high performance computing.

Virtualized Overlay Networks — Infrastructure as a Service (IaaS) cloud demands that data centers host and serve multiple tenants, each with their own isolated network domain over a shared network infrastructure. To achieve maximum efficiency, data center operators are creating overlay networks that carry traffic from individual Virtual Machines (VMs) in encapsulated formats such as NVGRE and VXLAN over a logical “tunnel,” thereby decoupling the workload’s location from its network address. Overlay Network architecture introduces an additional layer of packet processing at the hypervisor level, adding and removing protocol headers for the encapsulated traffic. The new encapsulation prevents many of the traditional “offloading” capabilities (e.g. checksum, TSO) from being performed at the NIC. ConnectX-3 Pro effectively addresses the increasing demand for an overlay network, enabling superior performance by introducing advanced NVGRE and VXLAN hardware offload engines that enable the traditional offloads to be performed on the encapsulated traffic. With ConnectX-3 Pro, data center operators can decouple the overlay network layer from the physical NIC performance, thus achieving native performance in the new network architecture.

Software Support – All Mellanox adapter cards are supported by Windows, Linux distributions, VMware, FreeBSD, Ubuntu, and Citrix XenServer. ConnectX-3 Pro adapters support OpenFabrics-based RDMA protocols and software and are compatible with configuration and management tools from OEMs and operating system vendors.

On some community pages my blog post started some discussions about why you should use SMB 3.0 and why you should use Windows Server as a storage solution. Let me be clear here: you don't need Windows Server as storage to make use of the Hyper-V over SMB 3.0 scenario, you can use storage from vendors like NetApp or EMC as well. But in my opinion you can get a huge benefit by using Windows Server in different scenarios.

First, you can use Windows Server together with Storage Spaces, which offers you a really great, scalable enterprise storage solution at low cost.

Second, you can use Windows Server to mask your existing storage by building a layer between the Hyper-V hosts and your storage. This way you can easily extend your storage, even with other vendors.

At the moment there are not a lot of vendors out there which offer SMB 3.0 in their storage solutions. EMC was one of the first to support SMB 3.0, and with ONTAP 8.2 NetApp is now supporting SMB 3.0 as well. But if you want to build an SMB layer for storage which does not support SMB 3.0, to mask your storage so you can mix it with different vendors or use it with Windows Server 2012 Storage Spaces, the solution would be the Scale-Out File Server cluster. Microsoft has offered file server clusters for a while now, but since this was an active/passive cluster, it was not really a great solution for a Hyper-V storage environment (even if a lot of small iSCSI storage boxes are active/passive as well).

Basically, the Scale-Out File Server lets you cluster up to 8 file servers which all share CSVs (Cluster Shared Volumes), like you know from Hyper-V hosts, and present SMB shares which are created on the CSV volumes. And the great thing about that: every node can offer the same share, so this is an active/active solution with up to 8 nodes. Together with SMB Transparent Failover the Hyper-V host does not really get any storage downtime if one of the SOFS nodes fails.
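As a rough PowerShell sketch of how such a role and share could be created (the role name SOFS01, the share name VMs01, the CSV path and the CONTOSO groups are just example names, not from a real setup):

# Add the Scale-Out File Server role to an existing file server failover cluster
Add-ClusterScaleOutFileServerRole -Name SOFS01

# Create a folder on a CSV volume and publish it as a continuously available SMB share
New-Item -Path C:\ClusterStorage\Volume1\VMs01 -ItemType Directory
New-SmbShare -Name VMs01 -Path C:\ClusterStorage\Volume1\VMs01 -ContinuouslyAvailable $true -FullAccess "CONTOSO\HyperVHosts", "CONTOSO\HyperVAdmins"

# Mirror the share permissions to the NTFS folder
Set-SmbPathAcl -ShareName VMs01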

For the storage guys out there, think about the cluster nodes as your storage controllers. Most of the time you will have 2 controllers for failover and a little bit of manual load balancing, where one LUN is offered by controller 1 and the other LUN is offered by controller 2. With the Scale-Out File Server you don't really have that problem, since the SMB share is offered on all hosts at the same time, with up to 8 "controllers". With Windows Server 2012, one Hyper-V host connected to one of the SOFS nodes and used multiple paths to this node via SMB Multichannel, while the other Hyper-V host connected automatically to the second SOFS node, so both nodes were active at the same time. In case one of the SOFS nodes dies, the Hyper-V host fails over to the other SOFS node without any downtime for the Hyper-V Virtual Machines.

In Windows Server 2012 R2, Microsoft worked really hard to make this scenario even better. In Windows Server 2012 R2 a Hyper-V host can be connected to multiple SOFS nodes at the same time, which means that VM1 and VM2 running on the same Hyper-V host can be served by two different SOFS nodes.

And it is the same Windows Server Failover Cluster technology, with the same management tools.

Storage Spaces

As already mentioned, you can use your existing storage appliance as storage for your Scale-Out File Server CSVs, or you can use Windows Server Storage Spaces, which allows you to build a great storage solution for a lot less money. Again, the Scale-Out File Server cluster and Windows Server Storage Spaces are two separate things: you don't need a SOFS cluster for Storage Spaces and you don't need Storage Spaces for a SOFS cluster, but of course both solutions work absolutely great together.

Microsoft first released its Software Defined Storage solution called Storage Spaces in Windows Server 2012, and it basically allows you to build your own storage solution based on simple JBOD hardware. Storage Spaces is a really cost-effective storage solution which allows companies to save up to 75% of storage costs compared to traditional SAN storage. It allows you to pool disks connected via SAS (in Windows 8 and Windows 8.1 USB works as well for home users) and create different virtual disks (not VHDs) on these storage pools. The virtual disks, also called Storage Spaces, can have different resiliency levels like Simple, Mirror or Parity, and you can also create multiple disks on one storage pool and even use thin provisioning. This sounds a lot like a traditional storage appliance, right? True, this is not something totally different, this is something storage vendors have been doing for a long time. But of course you pay a lot of money for the black box the storage vendors offer you. With Windows Server Storage Spaces Microsoft allows you to build your "own storage" on commodity hardware, which will save you a lot of money.
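A minimal sketch of what this looks like in PowerShell (the pool and disk names and sizes are just examples):

# Pool all physical disks that are available for pooling
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName Pool01 -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

# Create a thinly provisioned, mirrored virtual disk (Storage Space) on the pool
New-VirtualDisk -StoragePoolFriendlyName Pool01 -FriendlyName VDisk01 -ResiliencySettingName Mirror -Size 2TB -ProvisioningType Thin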

This is not just a "usable" solution; it comes with some high-end storage features which make Storage Spaces and the Windows file server a perfect storage solution at low cost.

Windows Server Storage Spaces lets you use cheap hardware

Offers you different types of resiliency, like Simple (Stripe), Mirror or Parity (also 3-way Mirror and Parity)

Flexible resiliency options – In Windows Server 2012 you could create a Mirror Space with a two-way or three-way mirror, a Parity Space with single parity, and a Simple Space with no data resiliency. New in R2, parity spaces can now be used in clustered pools and there is also a new dual parity option. (enhanced in 2012 R2)

Storage Tiering – Windows Server 2012 R2 allows you to use different kinds of disks and automatically moves "hot" data from SAS disks to fast SSD storage (see the sketch after this list). (new in 2012 R2)

Write-Back Cache – This feature allows data to be written to SSD first and moved to the slower SAS tier later. (new in 2012 R2)

Data Deduplication – Data Deduplication was already included in Windows Server 2012 but it is enhanced in Windows Server 2012 R2, and allows you to use it together with Cluster Shared Volumes (CSV) and supports VDI virtual machines. (enhanced in 2012 R2)
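For the Storage Tiering and Write-Back Cache items above, here is a rough PowerShell sketch of how this could be set up on an existing pool (the pool, tier and disk names as well as the sizes are examples):

# Define an SSD tier and an HDD tier on an existing storage pool
$ssd = New-StorageTier -StoragePoolFriendlyName Pool01 -FriendlyName SSDTier -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName Pool01 -FriendlyName HDDTier -MediaType HDD

# Create a tiered, mirrored virtual disk with a 1 GB write-back cache
New-VirtualDisk -StoragePoolFriendlyName Pool01 -FriendlyName TieredDisk01 -ResiliencySettingName Mirror -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 900GB -WriteCacheSize 1GB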

As mentioned, both of these technologies do not require each other, but if you combine them you get a really great solution. You can build your own storage based on Windows Server, which not only allows you to share storage via SMB 3.0, it also allows you to share storage via NFS or iSCSI.

A lot of the concerns I have heard were about the scalability of Storage Spaces. But as far as I can see, scale is absolutely no problem for Windows Server Storage Spaces. First of all you can build up to 8 nodes in a single cluster, which basically means you create an 8-node active/active solution. With SMB Multichannel you can use multiple NICs, for example 10GbE, InfiniBand, or even faster network adapters. You can also make use of RDMA, which brings latency down to a minimum.

To scale this even bigger you can go two ways: you could set up a new Scale-Out File Server cluster and create new file shares where virtual machines can be placed, or you could extend the existing cluster with more servers and more shared SAS disk chassis, which don't have to be connected to the existing servers. This is possible because of features like CSV redirected mode: hosts can access disks from other hosts even if they are not connected directly via SAS; instead, the node uses the Ethernet connection between the hosts.

New features and enhancements in Windows Server 2012 R2 and System Center 2012 R2

With the 2012 R2 releases of Windows Server and System Center, Microsoft made some great enhancements to Storage Spaces, Scale-Out File Server, SMB, Hyper-V and System Center. So if you have the chance to work with R2, make sure you check out the following:

Flexible resiliency options – In Windows Server 2012 you could create a Mirror Space with a two-way or three-way mirror, a Parity Space with single parity, and a Simple Space with no data resiliency. New in R2, parity spaces can now be used in clustered pools and there is also a new dual parity option. (enhanced in 2012 R2)

Storage Tiering – Windows Server 2012 R2 allows you to use different kinds of disks and automatically moves "hot" data from SAS disks to fast SSD storage. (new in 2012 R2)

Write-Back Cache – This feature allows data to be written to SSD first and moved to the slower SAS tier later. (new in 2012 R2)

Data Deduplication – Data Deduplication was already included in Windows Server 2012 but it is enhanced in Windows Server 2012 R2, and allows you to use it together with Cluster Shared Volumes (CSV) and supports VDI virtual machines. (enhanced in 2012 R2)

Rebalancing of Scale-Out File Server clients – SMB client connections are tracked per file share (instead of per server), and clients are then redirected to the cluster node with the best access to the volume used by the file share. This improves efficiency by reducing redirection traffic between file server nodes.

Improved performance of SMB Direct (SMB over RDMA) – Improves performance for small I/O workloads by increasing efficiency when hosting workloads with small I/Os.

Shared VHDX files – Simplifies the creation of guest clusters by using shared VHDX files for shared storage inside the virtual machines. This also masks the storage from customers if you are a service provider.

Hyper-V Live Migration over SMB – Enables you to perform a live migration of virtual machines by using SMB 3.0 as a transport. This allows you to take advantage of key SMB features, such as SMB Direct and SMB Multichannel, providing high-speed migration with low CPU utilization (see the sketch after this list).

SMB bandwidth management – Enables you to configure SMB bandwidth limits to control different SMB traffic types. There are three SMB traffic types: default, live migration, and virtual machine.

Multiple SMB instances on a Scale-Out File Server – Provides an additional instance on each cluster node in Scale-Out File Servers specifically for CSV traffic. A default instance can handle incoming traffic from SMB clients that are accessing regular file shares, while another instance only handles inter-node CSV traffic.
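For the Live Migration over SMB item above, a minimal sketch of how the transport is selected on a Hyper-V host (run this on every host that takes part in the migration):

# Use SMB as the Live Migration transport so SMB Direct and SMB Multichannel can be leveraged
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

# Verify the setting
Get-VMHost | Select VirtualMachineMigrationPerformanceOption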

Another important part of SMB 3.0 and Hyper-V over SMB is performance. In the past you could use iSCSI, Fiber Channel or FCoE (Fiber Channel over Ethernet). Now SMB 3.0 brings a lot of performance improvements to make the Hyper-V over SMB scenario work well. But if you need even more performance, you can use a feature which came with Windows Server 2012 and is of course also present in Windows Server 2012 R2, called SMB Direct, which supports the use of network adapters that have Remote Direct Memory Access (RDMA) capability. Network adapters with RDMA offer some great enhancements such as very low latency, increased throughput and low CPU utilization, since the functionality is offloaded to the network card.

Advantages

Increased throughput: Leverages the full throughput of high speed networks where the network adapters coordinate the transfer of large amounts of data at line speed.

Low CPU utilization: Uses fewer CPU cycles when transferring data over the network, which leaves more power available to server applications.

(Source TechNet)

Technology and Requirements

At the moment there are three types of network adapters with RDMA capability: iWARP, InfiniBand and RoCE.

iWARP is a simple solution which does not really need any additional configuration.

InfiniBand, which requires an InfiniBand fabric with switches and a subnet manager.

RoCE (RDMA over Converged Ethernet), which also needs the switches to be configured correctly for bandwidth management (DCB/PFC); a minimal configuration sketch follows below.
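A minimal DCB/PFC sketch for RoCE on the server side, assuming the RDMA adapters are named RDMA1 and RDMA2 and that SMB traffic is tagged with priority 3 (the matching PFC/ETS settings also have to be configured on the physical switches):

# Install Data Center Bridging on the server
Install-WindowsFeature Data-Center-Bridging

# Tag SMB Direct traffic (port 445) with priority 3 and enable PFC only for that priority
New-NetQosPolicy -Name "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve bandwidth for SMB and apply the DCB settings to the RDMA adapters
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetAdapterQos -Name "RDMA1","RDMA2"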

On the software side you need Windows Server 2012 or Windows Server 2012 R2 with SMB 3.0. SMB Direct is not supported in previous versions of SMB and Windows Server.

Setup of SMB Direct

Well, SMB Direct, or RDMA if you will, is enabled by default, so Windows Server will make use of it whenever possible. But there are some things you have to make sure of:

Know which type of RDMA you are using: is it iWARP, InfiniBand or RoCE? Some of them may require additional configuration on the network. If you are using RoCE, RDMA seems to work without configuration, but you can run into performance issues, as my fellow Microsoft MVP Didier Van Hoye describes in his blog post.

Install the latest NIC drivers

Install the latest firmware

Enable SMB Multichannel if you disabled it. SMB Direct will also be disabled when you disable Multichannel.

In a Failover Cluster make sure that the RDMA NICs are also marked as client access adapters.

SMB Direct doesn’t work with NIC Teaming or Virtual Switches

On the file server you should also tune performance by disabling Hyper-Threading, disabling processor C-states, and setting the power profile to full power.

Blogs

Jose Barreto wrote some great blog posts on how you can set up Hyper-V over SMB using different RDMA network adapters. Check out these blog posts:

Verify SMB configuration

Verify that RDMA is enabled: the first cmdlet checks if it's enabled on the server itself, the second one checks if it's enabled on the network adapters, and the third checks if the hardware is RDMA capable.

Get-NetOffloadGlobalSetting | Select NetworkDirect

Get-NetAdapterRDMA

Get-NetAdapterHardwareInfo

Verify that SMB Multichannel is enabled, which confirms the NICs are being properly recognized by SMB and that their RDMA capability is being properly identified.

On the client:

Get-SmbClientConfiguration | Select EnableMultichannel

Get-SmbClientNetworkInterface

On the server:

Get-SmbServerConfiguration | Select EnableMultichannel

Get-SmbServerNetworkInterface

netstat.exe -xan | ? {$_ -match "445"}

And as already mentioned in the SMB Multichannel blog post, you can verify the SMB connections:

Get-SmbConnection

Get-SmbMultichannelConnection

netstat.exe -xan | ? {$_ -match "445"}

And of course you have some great performance counters.

If you run some copy jobs you can see the amazing performance (if your storage is fast enough). Here you can also see a screenshot with Mellanox ConnectX-3 Ethernet adapters which are using RoCE in Windows Server 2012. You can see that there is no TCP traffic in the Task Manager on the RDMA NICs.

As already mentioned in my first post, SMB 3.0 comes with a lot of different supporting features which are increasing the functionality in terms of performance, security, availability and backup. Here are some quick notes about some of the features which make the whole Hyper-V over SMB scenario work, this time SMB Multichannel.

SMB Multichannel was designed to solve two problems: first, to make the path from the Hyper-V host (SMB client) to the File Server or Scale-Out File Server (SMB server) redundant, and second, to get more performance by using multiple network paths. If you are using iSCSI or Fiber Channel, you use MPIO (Multipath I/O) to use multiple available paths to the storage. For normal SMB traffic you may have used NIC Teaming to achieve that. SMB Multichannel is a much easier solution which offers great performance. SMB Multichannel will automatically make use of different network adapters which are configured with different IP subnets.

In my tests and the environments I have built for customers, I have seen great performance with SMB Multichannel. It works "better" than NIC Teaming, because most of the time you just get one active network interface unless you use LACP and the like, which requires configuration on the network switches, and with cheap switches you may lose redundancy. The same goes for MPIO: most of the time MPIO works great, but sometimes you don't get the performance you should get in an active/active configuration. With SMB Multichannel I can simply bundle two or even more NICs together and Multichannel will make use of all of them.

By the way, SMB Multichannel is also a must-have if you are using RDMA NICs, because of the redundancy you only get with SMB Multichannel.

SMB Multichannel and NIC Teaming

SMB Multichannel also works in combination with Windows Server NIC Teaming. But the SMB Direct (RDMA) feature does not allow you to use NIC Teaming or the Hyper-V Virtual Switch, because you would lose the RDMA functionality.

How do you setup SMB Multichannel?

Well, the setup of SMB Multichannel is quite easy because it's enabled by default, but there are some things you should know about SMB Multichannel for designing your environment or just to troubleshoot an installation.

Verify SMB Multichannel

Verify that SMB Multichannel is active on the client

Get-SmbClientConfiguration | Select EnableMultichannel

Verify that SMB Multichannel is active on the server

Get-SmbServerConfiguration | Select EnableMultichannel

You can also disable or enable Multichannel if you need to.

Set-SmbServerConfiguration -EnableMultiChannel $false

Set-SmbClientConfiguration -EnableMultiChannel $false

Set-SmbServerConfiguration -EnableMultiChannel $true

Set-SmbClientConfiguration -EnableMultiChannel $true

To see if all the right connections are used, you can use the following commands on the client to see all the SMB connections and verify that all SMB Multichannel connections are open and using the right protocol version.

Get-SmbConnection

Get-SmbMultichannelConnection

SMB Multichannel Constraint

Another thing you have to know about SMB Multichannel is how to limit the SMB connections to specific network interfaces. For example, you have a Hyper-V host which has a 1 gigabit network adapter for management and two 10 Gbit (or faster) RDMA interfaces which should be used for the connection to the storage; you want to make sure that the Hyper-V host only uses the RDMA network interfaces to connect to the storage. With a simple PowerShell cmdlet, as sketched below, you can limit the Hyper-V host to access the file shares only over the RDMA interfaces.

Fileshare where the Virtual Machines are stored: \\SMB01\VMs01

Name of the 1 GBit network interface for Management and stuff: Management
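A minimal sketch of the constraint, assuming the two RDMA interfaces are named RDMA1 and RDMA2 (the Management interface is excluded simply by not listing it):

# Allow SMB connections to the file server SMB01 only over the RDMA interfaces
New-SmbMultichannelConstraint -ServerName SMB01 -InterfaceAlias "RDMA1", "RDMA2"

# Verify the constraint
Get-SmbMultichannelConstraint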

Make sure you run this on every Hyper-V host, not on the SMB file server.

Well, there is a little bit more behind SMB Multichannel, but this should give you a great jumpstart into this feature. If you want to know more about SMB Multichannel, check out Jose Barreto's (Microsoft Corp.) blog post on the basics of SMB Multichannel.

With the release of Windows Server 2012, Microsoft offers a new way to store Hyper-V Virtual Machines on shared storage. In Windows Server 2008 and Windows Server 2008 R2 Hyper-V, Microsoft only offered block-based shared storage like Fiber Channel or iSCSI. With Windows Server 2012 Hyper-V, Microsoft allows you to use file-based storage to run Hyper-V Virtual Machines from, via the new SMB 3.0 protocol. This means Hyper-V over SMB allows you to store virtual machines on an SMB file share. In the past years I did a lot of Hyper-V implementations working with iSCSI or Fiber Channel storage, and I am really happy with the new possibilities SMB 3.0 offers.

The common problem with block storage is that the Hyper-V host has to handle the storage connection. That means if you use iSCSI or Fiber Channel you have to configure the connection to the storage on the Hyper-V host, for example multipathing, the iSCSI initiator or DSM software. With Hyper-V over SMB you don't have to configure anything special, because SMB 3.0 is built into Windows and supporting features like SMB Multichannel are activated and used by default. Of course you have to do some design considerations, but this is much less complex than an iSCSI or Fiber Channel implementation.
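As a small illustration of how simple this gets, here is a sketch of creating a virtual machine directly on an SMB share (the share \\SMB01\VMs01, the VM name and the sizes are just examples):

# Create a VM whose configuration and virtual hard disk live on the SMB file share
New-VM -Name VM01 -MemoryStartupBytes 4GB -Path "\\SMB01\VMs01" -NewVHDPath "\\SMB01\VMs01\VM01\VM01.vhdx" -NewVHDSizeBytes 60GB

# Optionally make the share the default location for new VMs on this host
Set-VMHost -VirtualMachinePath "\\SMB01\VMs01" -VirtualHardDiskPath "\\SMB01\VMs01"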

How did they make it work

The first important thing was speed. SMB 3.0 offers a huge performance increase over the SMB 2.x protocol, and you totally have to think about it in a different way. There are also a lot of other features like SMB Direct (RDMA), SMB Multichannel, Transparent Failover and many more which help in terms of performance, security and availability, but more on these supporting features in the next post.

Why Hyper-V over SMB?

Well, I already mentioned a lot of reasons why you should use Hyper-V over SMB, but if you think about it there are three main reasons why you should use it.

Costs – Windows Server 2012 Hyper-V allows you to build clusters of up to 64 nodes, and if you build a cluster this size with Fiber Channel storage this will be quite an investment in terms of Fiber Channel hardware such as HBAs, switches and cables. By using Hyper-V over SMB you can reduce infrastructure costs dramatically. Sure, maybe you have already invested in a Fiber Channel storage and a Fiber Channel infrastructure, and you don't have to change that. For example, if you have 100 Hyper-V hosts you may have about 200 HBAs and you also need Fiber Channel switches. What you could do with Hyper-V over SMB is create a Scale-Out File Server cluster with 8 nodes which are attached to the Fiber Channel storage and present the storage to the Hyper-V hosts by using an SMB file share. This would save you a lot of money.

Flexibility – Another point which I already mentioned is flexibility. By using Hyper-V over SMB you remove the storage dependency from the Hyper-V host and add the storage configuration to the Virtual Machine. In this case you don't have to configure zoning or iSCSI initiators, which makes life for virtualization administrators much easier. Here are two examples of how IT teams can reduce complexity by using Hyper-V over SMB. First, in small IT departments you may not have a dedicated storage team, and if you have to add a new Hyper-V host or reconfigure your storage, this can be a lot of difficult work for people who don't have much experience with storage. In an enterprise scenario you may have a dedicated storage team and a dedicated virtualization team, and in most cases they have to work really closely together. For example, if the virtualization team adds another Hyper-V host, the storage team has to configure the storage for that host on the storage side. If the storage team makes changes to the storage, the virtualization team eventually has to make changes to the Hyper-V hosts. These dependencies can be reduced by adding a layer between the storage and the hypervisors, and in this case this could be a Scale-Out File Server.

Technology – The third point on my list is technology. Microsoft does not really mention this point, but since I have worked with different options like iSCSI, Fiber Channel and SMB, I am a huge fan of SMB 3.0. Fiber Channel is a great but expensive technology, and people who have worked with iSCSI know that there can be a lot of issues in terms of performance. SMB 3.0 has some great supporting features which can help you increase performance: RDMA, a technology which can increase networking performance by multiple times, and SMB Multichannel, which allows you to use multiple network adapters for failover and load balancing, work very well and let you make the most out of your hardware. Another part can be security: if you think about encrypting iSCSI networks via IPsec you know that this can be complex, while SMB Encryption offers a very easy solution for that in the SMB scenario.
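To illustrate that last point, a minimal sketch of turning on SMB Encryption (the share name VMs01 is just an example):

# Enable SMB Encryption for a single share
Set-SmbShare -Name VMs01 -EncryptData $true

# Or enable it for all shares on the file server
Set-SmbServerConfiguration -EncryptData $true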

I hope I could give you a quick introduction to Hyper-V over SMB and why it's a good idea to consider it in your deployment plans. In the next post I will quickly summarize the supporting features in SMB 3.0.

Microsoft announced the next version of the Windows Server platform, called Windows Server 2012 R2, at TechEd North America. I already blogged about what's new in Windows Server 2012 R2 Hyper-V. This post will focus on Windows Server 2012 R2 Storage Spaces.

Storage Spaces

First, there are exciting new features in Windows Server 2012 R2 Storage Spaces. Microsoft first released its Software Defined Storage solution called Storage Spaces in Windows Server 2012, and it basically allows you to build your own storage solution based on simple JBOD hardware. Storage Spaces is a really cost-effective storage solution which allows companies to save up to 75% of storage costs compared to traditional SAN storage. I also mention some standard features which are already included in Windows Server 2012.

Pooling of disks – You can pool physical disks together and create multiple virtual disks on the storage pool even with different resiliency options.

Continuous availability – Storage Pools and Disks can be clustered with the Microsoft Failover Cluster so if one server goes down the virtual disks and file shares are still available.

Flexible resiliency options – In Windows Server 2012 you could create a Mirror Space with a two-way or three-way mirror, a Parity Space with single parity, and a Simple Space with no data resiliency. New in R2, parity spaces can now be used in clustered pools and there is also a new dual parity option. (enhanced in 2012 R2)

Storage Tiering – Windows Server 2012 R2 allows you to use different kinds of disks and automatically moves "hot" data from SAS disks to fast SSD storage. (new in 2012 R2)

Write-Back Cache – This feature allows data to be written to SSD first and moved to the slower SAS tier later. (new in 2012 R2)

Data Deduplication – Data Deduplication was already included in Windows Server 2012 but it is enhanced in Windows Server 2012 R2, and allows you to use it together with Cluster Shared Volumes (CSV) and supports VDI virtual machines. (enhanced in 2012 R2)

ReFS – The file system Microsoft first released with Windows Server 2012 is now also supported for clustering, which means it can be used as a CSV (Cluster Shared Volume). Thanks to Didier Van Hoye (MVP) for adding that to the list.
