Updated Links on Windows Server 2012 File Server and SMB 3.0

In this post, I'm providing a reference to the most relevant Windows Server 2012 content related to the File Server, the SMB 3.0 features, and associated scenarios like Hyper-V over SMB and SQL Server over SMB. It's obviously not a complete reference (there are new blog posts every day), but hopefully this is a useful collection of links for Windows Server 2012 users.

This is a very comprehensive and superb post with all the links necessary to understand SMB 3.0. I have a question on NIC Teaming that I hope I can get some answers on. One of the video presentations (linked in the blog post) mentions a NIC team that can be set up as Active/Standby: when the Active NIC goes down, the Standby takes over and becomes Active, and when the original Active NIC recovers, the team automatically fails back to the original Active. Can this automatic failback be prevented? That is, I do not want the team to switch back to the original NIC; I want the Standby to keep running as Active until the Standby itself experiences a problem (something similar to disabling Auto-Failback in Smart Load Balancing (SLB) when teaming NICs with the Broadcom suite on Windows Server 2008 R2).
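For context, the in-box NIC Teaming in Windows Server 2012 does let you designate a standby team member; whether automatic failback can be suppressed is the open question above. A minimal sketch of the Active/Standby setup, assuming the in-box NetLbfo cmdlets (the adapter names "NIC1" and "NIC2" and the team name are placeholders):

```powershell
# Create a switch-independent team from two adapters
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent

# Mark NIC2 as the standby member; it only carries traffic if NIC1 fails
Set-NetLbfoTeamMember -Name "NIC2" -Team "Team1" -AdministrativeMode Standby

# Check which member is currently active
Get-NetLbfoTeamMember -Team "Team1" | Format-Table Name, AdministrativeMode, OperationalStatus
```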

I apologize if this isn't the right place for posting such questions: I have a small business network and a client app written in .NET. Multiple users use this app to insert into and query from an Access database in rapid succession; the database is located on a network drive, and a connection is opened once before the inserts and queries begin. The app then uses the OleDbDataAdapter.Fill and .Update methods with a DataTable, and I'm compiling with VS 2010 Express with updated references, as far as I know.

The client works well with a single user whether he uses SMB 1.0 or 2.0, and it works fine with multiple users if they are all configured to use SMB 1.0. However, in a mixed environment of SMB 1.0 and 2.0 (which so far is unavoidable, since we have XP machines we sometimes use) we often see the following exception thrown by the OleDbDataAdapter:

"Your network access was interrupted. To continue, close the database, and then open it again."

Once in a while, the data adapter will also just return an empty table. Our server is Windows Server 2008 R2 and is using SMB 2.1. Any help is appreciated.

I have a team and a Hyper-V virtual switch (Hyper-V port / LACP) with two Intel i350-T2 (dual-port) server cards. RSS is enabled on both ports in the Windows built-in driver. The maximum number of RSS queues is set to 4.

The switch is a Zyxel 1910-24 with LACP.

The effective speed when copying files from Host A to Host B is only 1 Gb/s at most.

@Dustyny1 – Have you baselined the storage subsystem on the server side? The most common bottleneck I have seen is the back-end storage subsystem. If the storage does not perform well locally, SMB 3.0 will not go beyond that.

You should be able to use NIC Teaming and SMB Multichannel and achieve the aggregated throughput of the team if you're using LACP and the right type of teaming (Address Hash). You might want to try increasing the number of SMB connections per RSS network interface (this also applies to teams) by using:
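The setting referenced above would be configured with the in-box SMB client cmdlets, something like the following (8 is just an illustrative value; to my recollection the default on Windows Server 2012 is 4):

```powershell
# Raise the number of SMB Multichannel connections per RSS-capable interface
Set-SmbClientConfiguration -ConnectionCountPerRssNetworkInterface 8

# Confirm the new value
Get-SmbClientConfiguration | Select-Object ConnectionCountPerRssNetworkInterface
```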

LACP on the team is certainly a good way to go. The only other option is to use the Hyper-V port option: connect the team to the vSwitch and pull four ports off the virtual switch into the parent partition for SMB 3.0 traffic (you typically get only one, but you can manually add three more using PowerShell). This will give SMB Multichannel the opportunity to use the multiple virtual NICs and properly balance the load.

The command for adding additional virtual NICs to the parent is:

Add-VMNetworkAdapter -SwitchName <name> -ManagementOS

You should run this 3 times in order to get 4 total NICs in the parent for SMB traffic. This will give each of them a unique MAC address (as if each were coming from a different VM), and the Hyper-V port option in NIC teaming will properly spread them across the physical NICs in the team.
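Putting the steps above together, a sketch might look like this (the switch name "TeamSwitch" and the vNIC names are placeholders for your environment):

```powershell
# Add three more virtual NICs in the parent partition (one already exists
# by default), for a total of four that SMB Multichannel can use
1..3 | ForEach-Object {
    Add-VMNetworkAdapter -SwitchName "TeamSwitch" -ManagementOS -Name "SMB$_"
}

# Each vNIC gets a unique MAC address, which lets the Hyper-V port
# load-balancing algorithm spread them across the physical team members
Get-VMNetworkAdapter -ManagementOS | Format-Table Name, MacAddress
```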

Try this one out and let us know how it works for you. It should give you both the network fault tolerance and the aggregate bandwidth you are looking for, without requiring LACP or using a single switch.

I have a question regarding SMB 3.0 and NIC Teaming in Windows Server 2012. When creating a NIC team with 2x 10GbE you have to choose a switch mode (independent/dependent) and a load-balancing mode (address hash / Hyper-V port).

Since switch-independent mode can send on both NICs but only receive on one NIC, is it true that I won't get 20GbE both ways in this mode using SMB Multichannel?

If using LACP and switch-dependent mode, I would have to connect both Ethernet cables to the same 10GbE switch to create the LACP team, losing redundancy.

We are about to design an environment with two SMB3 Scale-Out File Servers running as an active/active cluster, connected with FC to SAN storage. These file servers will each be connected with two 10GbE ports to two separate HP 10GbE switches, giving a total of 40GbE in theory. Hyper-V hosts (DL380 G8) will also have a dual-port 10GbE adapter in a NIC team for SMB3 traffic and a quad-port 1GbE adapter for the VM guest network.

To be clear, if your server is using SMB 2.1 (Windows Server 2008 R2) but your client is using SMB 1.0 (Windows XP), the negotiated session will be SMB 1.0.
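One way to confirm which dialect a session actually negotiated is the SMB client cmdlets, assuming you run them from a Windows 8 / Windows Server 2012 machine (they are not available on older clients); a dialect of 1.x would confirm the fallback described above:

```powershell
# Show the negotiated SMB dialect for each active connection
Get-SmbConnection | Format-Table ServerName, ShareName, Dialect
```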

Having said that, you seem to be having a specific issue with shared access to a single file. SMB does allow for this, but the application should not open the file exclusively, and locks are sometimes recommended to avoid common concurrent-access issues. This is no different from having two instances of an application on the same computer accessing a file on a local disk.

I haven't used Access in a long while and I'm not sure what the expected behavior is. I would suggest you post the question to an Access blogger or forum.

Any chance we'll see some articles on what to expect for performance, how to troubleshoot, and optimize?

I'm running into a performance problem when using Hyper-V over SMB 3.0 and I'm not sure how to go about troubleshooting it. I'm using Mellanox ConnectX-2 InfiniBand adapters and I can pass data at 3 GB/s, so I have really fast interlinks. From my tests, SMB is great for large sequential transfers (3 GB/s), where I max out the PCIe bus (x8), but on smaller transfers (4-512 KB) performance and IOPS take a heavy hit and write speed falls off a cliff (20-40 MB/s).

I am interested in the client side of the equation also. In particular, I am looking for more information on the transparent caching (Windows 7/8) features and how they interact on the client side with the new SMB 3.0 services.

Can you point me in the right direction to getting more internal details – possibly a contact who might be familiar with the internal details and performance of the various interactions?

I'm seeing strange behavior. I've set up NIC teaming with LACP to provide channel aggregation for multiple protocols, and I've disabled RSS because not all of the NICs are RSS-capable, but file transfers between two servers stay at 2 Gb/s even though I have teamed four NICs.
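A few diagnostic steps for scenarios like the one above: SMB Multichannel relies on RSS (or RDMA) to open multiple connections per interface, so with RSS disabled, fewer connections than team members is plausible. These in-box Windows Server 2012 cmdlets show what Multichannel actually sees (a sketch, to be run on the client side of the transfer):

```powershell
# Which network paths SMB Multichannel discovered, and their capabilities
Get-SmbMultichannelConnection

# RSS capability and state of each adapter
Get-NetAdapterRss | Format-Table Name, Enabled

# Confirm Multichannel itself is enabled on the SMB client
Get-SmbClientConfiguration | Select-Object EnableMultiChannel
```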