Wednesday, May 8, 2013

Hyper-V, ICS, RRAS NAT and port-forwarding on Windows Server 2012

A few friends and I bought a fairly powerful computer and set up Windows Server 2012 with Hyper-V virtualized machines for our projects. Computing power is much cheaper this way than in notebooks, so we can use lighter notebooks as portable remote terminals. It works out quite well; RDP works well even over 3G.

However we started to face network problems on the server. At first, the computer was located on a company site, so the network configuration looked like this:

This required no special configuration, everything just worked as it should. The host machine and the VMs got an internal IP address from the router. To access VMs from the internet, we added port forwarding on the router.

Internal Switch

Then we decided to move the host machine to a server hosting room with a dedicated IP address. That required us to change the network configuration, as the External Switch would no longer work without a router in front of it.

With the Internal Switch, the virtual machines share a virtual network that is also accessible from the host machine. To have internet access on the VMs, you have to set up NAT.
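The switch itself can also be created and attached from PowerShell instead of Hyper-V Manager. This is a minimal sketch; the switch name "Internal Switch" and VM name "MyVM" are placeholders for your own names:

```shell
# Create an internal virtual switch (shared between host and VMs, no external NIC)
New-VMSwitch -Name "Internal Switch" -SwitchType Internal

# Attach an existing VM's network adapter to it
Connect-VMNetworkAdapter -VMName "MyVM" -SwitchName "Internal Switch"
```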

Internet Connection Sharing

Our very first configuration attempt:

1. The primary NIC has a fixed IP address with a fixed gateway and DNS.

2. The NIC has Internet Connection Sharing turned on:

If the "Home networking connection" combo box doesn't appear, don't worry: when there is only one other eligible connection, the box is hidden.

When you click OK, the Internal Switch should be configured as 192.168.137.1 with netmask 255.255.255.0.

Having done this, and having assigned the internal switch to the VMs, the VMs should now have a working internet connection without any further configuration. They get IP addresses like 192.168.137.x with 192.168.137.1 as the gateway. If they don't, try disabling and re-enabling the internal switch, or configure them (while running) to have no network (Not connected) and then attach the Internal Switch again. These might help.
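The disable/re-enable and detach/reattach dance above can also be done from an elevated PowerShell prompt. A sketch, assuming the host-side adapter is named "vEthernet (Internal Switch)" (the default naming) and the VM is called "MyVM":

```shell
# Toggle the host-side adapter of the internal switch
Disable-NetAdapter -Name "vEthernet (Internal Switch)" -Confirm:$false
Enable-NetAdapter -Name "vEthernet (Internal Switch)"

# Or detach and reattach the VM's network adapter while the VM is running
Disconnect-VMNetworkAdapter -VMName "MyVM"
Connect-VMNetworkAdapter -VMName "MyVM" -SwitchName "Internal Switch"
```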

If you want to open an RDP session to a VM, click Settings on the ICS screen above and add a rule:

Also make sure to enable inbound connections on port 3390 of the host machine. Now you can open an RDP session to hostmachine:3390 and it will be forwarded to the VM. Of course, also make sure to enable remote access on the VM itself.
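Allowing the inbound port can be done from an administrator command line instead of the firewall GUI. The rule name here is just an arbitrary label:

```shell
netsh advfirewall firewall add rule name="RDP to VM" dir=in action=allow protocol=TCP localport=3390
```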

When testing this, make sure to open the connection from the outside internet. As I learned from a random blog post, ICS doesn't apply port forwarding to traffic from the intranet, so if you try hostmachine:3390 from the host machine itself, it probably won't work, but if you try from outside, it will.

This solution can work, but has some drawbacks:

The settings above sometimes just do not work. I haven't figured out what to restart and in what order, but sometimes you just have to reconfigure the entire thing. Disabling and re-enabling the Internal Switch, detaching and reattaching it to the VM, and disabling and re-enabling ICS are all options you can try in any order.

Although network traffic over NAT is just as fast as the host connection, RDP over this port forwarding is unusably slow. I found some forum posts about how to make it faster (NIC interface settings), but they didn't work for me.

In some configurations, when I connect to a VM, it asks for my credentials, but then the screen remains blank, and my concurrent RDP connection to the host hangs as well. After some 20 seconds the VM RDP connection closes and the host connection resumes. If this happens, try reconfiguring everything.

After rebooting the system, I usually ended up losing RDP connectivity to the host machine. Everything else worked: RDP to the VMs, and even HTTP on the host machine (Apache server). To prevent this, I had to explicitly add a port forwarding from hostmachine:3389 to 127.0.0.1:3389. Weird.

Also, at my first try the network connection dropped packets. With ICS enabled, the host RDP session only worked for about 10 seconds, then hung for another 10, then took 10 seconds to reconnect. This was solved by updating the NIC driver with a newer one from the vendor (Intel).

Usually you should be able to connect to the VMs from the host by connecting to e.g. 192.168.137.2:3389. They won't respond to ping, though, so don't worry about that.
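From the host, such a connection can be opened directly with the RDP client; the address here is just the example VM address from above:

```shell
mstsc /v:192.168.137.2:3389
```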

Routing and Remote Access

The same configuration can be achieved with RRAS on Windows Server. This feature cannot be used together with ICS. Once you have added the role in Server Manager, open mmc, add the Routing and Remote Access snap-in, and add the local server.
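Adding the role can also be done from PowerShell. A sketch, assuming you only need the routing/NAT part of the Remote Access role:

```shell
# Install the Routing role service (pulls in RemoteAccess) plus the management tools
Install-WindowsFeature Routing -IncludeManagementTools
```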

You can configure NAT by either selecting NAT directly in the wizard, or by selecting custom configuration, adding bare NAT, and then adding the interfaces yourself later.

To use NAT, you have to have two interfaces added:

- the external NIC added as "public interface connected to the internet"

- the internal switch added as "private interface"

There are IPv4 settings if you right-click the local server. My setting here is DHCP, so no address pool is configured. In the NAT properties, on the Address Assignment tab:

192.168.0.0/255.255.255.0 is configured. The wizard will configure this for you if you configure your Internal Switch beforehand:

This produces the same setup as ICS. To add port forwarding, open the properties window of the public interface and go to the Services and Ports tab. However, this has the same drawback as ICS port forwarding: slow RDP sessions. They must have really messed something up there.

With this configuration you should also have access to the VMs from the host machine directly, using their 192.168.0.x addresses.

I started to look for third-party port-forwarding tools, as it's quite easy to implement. Rinetd worked fine, but it doesn't run as a Windows service. After a bit of googling I found that Windows itself can do port forwarding (of course): http://technet.microsoft.com/en-us/library/cc731068(v=ws.10).aspx . So you can do this from an administrator command line:

netsh interface portproxy add v4tov4 listenport=3397 connectaddress=192.168.0.2 connectport=3389

Run netsh interface portproxy on its own to see the additional options. With this configuration, port redirection works fine and RDP sessions work properly. This command also implicitly creates a firewall exception for the public port.
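Two other portproxy subcommands are handy for managing the rules; the listen port here matches the example above:

```shell
# List the current v4-to-v4 redirections
netsh interface portproxy show v4tov4

# Remove the rule when it's no longer needed
netsh interface portproxy delete v4tov4 listenport=3397
```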

In case stuff stops working, reconfigure NAT from scratch. A reboot may also help.

Summary

Getting all this to work properly took me painful weeks. Other forum posts also state that this part of Windows Server is quite unstable. I feel lucky that I finally got to a point where it works.

While this was not working, I used VirtualBox and VMware virtualization. They have port forwarding and NAT out of the box, and it just works with a few clicks. It's a shame that Hyper-V delegates this to ICS or RRAS, so you end up configuring complicated stuff that doesn't work by default. Also, the tutorials on TechNet just don't go into the important details. They just say "enable NAT" and it will work. But it won't.

Also note that with VMware and VirtualBox, if the host machine was connected to the company VPN, the virtual machines could also access company intranet resources. With RRAS and ICS, the VMs have to connect to the VPN individually. Maybe there are routing settings that could solve this more easily, but after this torture, I'm not going to start configuring them.
