We have been using HAProxy along with heartbeat from the Linux-HA project. We are using two Linux instances to provide failover. Each server has its own public IP, plus a single IP shared between the two using a virtual interface (eth1:1) at IP 69.59.196.211.

The virtual interface (eth1:1) IP, 69.59.196.211, is configured as the gateway for the Windows servers behind them, and we use ip_forwarding to route traffic.
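For reference, a minimal sketch of what that setup looks like on each Linux node. The interface name and VIP are from the post; the netmask is an assumption, and in practice heartbeat (not these commands) manages the alias:

```shell
# Bring up the shared virtual IP on the alias interface.
# (heartbeat does this on whichever node is currently active;
# the /24 netmask is an assumption.)
ifconfig eth1:1 69.59.196.211 netmask 255.255.255.0 up

# Enable IPv4 forwarding so this box can route for the Windows servers behind it.
echo 1 > /proc/sys/net/ipv4/ip_forward
```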

We are experiencing an occasional network outage on one of our Windows servers behind our Linux gateways. HAProxy will detect the server is offline, which we can verify by remoting into the failed server and attempting to ping the gateway.

Why would ARP occasionally set the entry for this failed server as &lt;incomplete&gt;? Should we be defining our ARP entries statically? I've always left ARP alone since it works 99% of the time, but in this one instance it appears to be failing. Are there any additional troubleshooting steps we can take to help resolve this issue?

THINGS WE HAVE TRIED

I added a static ARP entry for testing on one of the Linux gateways, which still didn't help.

Rebooting the Windows web server solves the issue temporarily, with no other changes to the network, but our experience shows it will come back.

Swapping network cards and switches

I noticed the switch port for the failed Windows server was linking at 100Mb instead of 1Gb. I moved the cable to several other open ports and the link indicated 100Mb on each port I tried; swapping the cable gave the same result. I also tried changing the properties of the network card in Windows, but the server locked up and required a hard reset after I clicked Apply. Since this Windows server has two physical network interfaces, I have swapped the cables and network settings between the two interfaces to see if the problem follows the interface: if the public interface goes down again, we will know it is not an issue with the network card.

(We also tried another switch we have on hand, no change)

Changing network hardware driver versions

We've had the same problem with the latest Broadcom driver, as well as the built-in driver that ships in Windows Server 2008 R2.

Replacing network cables

As a last-ditch effort, we remembered another change that occurred: the replacement of all of the patch cords between our servers and switch. We had purchased two sets, green cables (1-3 ft) for the private interfaces and red cables for the public interfaces. We swapped out all of the public-interface patch cables with a different brand and ran our servers without issue for a full week ... aaaaaand then the problem recurred.

Disable checksum offload, remove TProxy

We also tried disabling TCP/IP checksum offload in the driver, no change. We're now pulling out TProxy and moving to a more traditional x-forwarded-for network arrangement without any fancy IP address rewriting. We'll see if that helps.

Switch Virtualization providers

On the off chance this was related to Hyper-V in some way (we do host Linux VMs on it), we switched to VMWare Server. No change.

Switch host model

We've reached the end of our troubleshooting rope and are now formally involving Microsoft support. They recommended changing the host model.

Never trust auto settings on a production environment. Set the speed to what it should be, and put a monitor on it to be sure.
– Daniel C. Sobral, Jan 21 '10 at 11:25


@Daniel Sobral: I have to heartily disagree with you. In 2003 I suppose I could see that. With modern hardware, hard-setting port speed and duplex is a recipe for getting speed / duplex mismatches. Autonegotiation on modern Ethernet gear works fine.
– Evan Anderson, Jan 21 '10 at 13:10

I stand with @Daniel Sobral; too many times I've had network failures caused by bad speed negotiation at the worst moment, so on production systems I go with static settings. When that happens, what does the link state on the switch say? It is managed, right? What does the Windows system say? I would bet on the network failing at the link level, and that is what is causing those ARP incompletes (failed or waiting to receive ARP who-has). Bad hardware or a bad driver could be the cause. Let's see how it goes after the swap.
– Pablo Alsina, Jan 21 '10 at 14:11

9 Answers

If no ARP cache entry exists for a requested destination IP, the kernel will generate mcast_solicit ARP requests until receiving an answer. During this discovery period, the ARP cache entry will be listed in an incomplete state. If the lookup does not succeed after the specified number of ARP requests, the ARP cache entry will be listed in a failed state. If the lookup does succeed, the kernel enters the response into the ARP cache and resets the confirmation and update timers.
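Those states can be watched directly on the gateway. As an illustration, here is a one-liner that pulls incomplete entries out of `arp -an` output; the sample data below stands in for the real command's output, with addresses taken from the post:

```shell
# Sample `arp -an` output as seen on the Linux gateway; on the real box you
# would pipe `arp -an` itself into the awk below.
arp_out='? (69.59.196.220) at <incomplete> on eth1
? (69.59.196.212) at 00:15:5d:aa:bb:cc [ether] on eth1'

# Print the IP of every neighbour stuck in the incomplete state.
printf '%s\n' "$arp_out" | awk '/<incomplete>/ { print $2 }'
```

On a modern kernel, `ip neigh show` reports the same information with the states (INCOMPLETE, REACHABLE, STALE, FAILED) spelled out, and `ip monitor neigh` lets you watch the transitions live.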

It looks like your Windows server is not responding (or responding too slowly) to ARP requests from your gateway box. Does that &lt;incomplete&gt; eventually switch to &lt;failed&gt;? What network hardware do you have between the server and the gateway? Is it possible broadcast ARP requests are being filtered or blocked somewhere between the two hosts?

It means that you pinged the address, the IP has a PTR record (hence the name), but nothing responded from the machine in question. When we see this it's most commonly due to a subnet mask being set incorrectly, or, in the case of IPs meant for a loopback interface, to their being accidentally bound to the eth interface instead.

What is 196.220? What is its relationship to 196.211? I'm assuming that .220 is one of the HAProxy hosts. When you run ifconfig -a and arp -a on it, what does it show?

If it's happening intermittently, though, that tends to make me think that it's not an incorrectly set subnet mask (which, admittedly, is often the cause of machines failing to answer ARP requests).
– Evan Anderson, Jan 20 '10 at 22:23

The post seems fairly clear to me. The .211 IP address is a virtual IP shared by the HAProxy instances. The .220 IP address is assigned to a Windows machine that, periodically, loses its ability to communicate with the .211 IP address (as can be seen in the "Interface:" line of the ARP output quoted in the post).
– Evan Anderson, Jan 20 '10 at 22:43

196.220 is the IP of the failed Windows server; 196.211 is the virtual IP for the HAProxy interfaces.
– Geoff Dalgas♦, Jan 20 '10 at 22:50

As Max Clark says, the <incomplete> just means that 69.59.196.211 has put out an ARP request for 69.59.196.220 and hasn't received a response yet. (In Windows-land you'll see this as an ARP mapping to "00-00-00-00-00-00"... It seems odd to me, BTW, that you're not seeing such an ARP mapping on 69.59.196.220 for 69.59.196.211.)

I tend not to like using static ARP entries because, in my experience, ARP generally just does its job.

If it were me, I'd sniff the appropriate Ethernet interface on the "failing" Windows machine (69.59.196.220) to observe it ARP'ing for 69.59.196.211, and to observe how / if it's responding to ARP requests from 69.59.196.211. I'd also consider sniffing on the gateway machine for ARP only (tcpdump -i interface-name arp) to see what the ARP traffic looks like from the side of the Linux machine.

I know, from the blog, that you've got a back-end network and a front-end network. During these outages, does the "failing" Windows server (69.59.196.220) have any problems communicating with other machines on the front-end network, or is it just having problems talking to its gateway? I'm curious whether you're coming at the failing machine through the front-end or back-end network when you're catching it in the act.

What are you doing to "resolve" the issue when it occurs?

Edit:

I see from your update that you're rebooting the "failing" Windows machine to resolve the issue. Before you do that next time, can you verify that the Windows machine is able to "talk" on its front-end interface at all? Also, grab a copy of the routing table from the Windows machine (route print) during a failure, too. (I'm trying to ascertain if the NIC / driver is going bonkers on the Windows machine, basically.)

When this issue occurs, we can reboot the failed web server (196.220) and it will work; our experience has shown that within 24 hours it will fail again.
– Geoff Dalgas♦, Jan 20 '10 at 22:52


It would be interesting to know if the server was able to talk, at all, on the NIC attached to the segment with the .211 machine (which, I understand from your update, is now swapped with the back-end segment). My gut says "bonkers NIC" is going to be the root cause on this one, but we'll see...
– Evan Anderson, Jan 21 '10 at 13:31


When this happens, the machine definitely cannot talk on the front-end (public) NIC at all; the back-end (private) NIC is unaffected. I have always felt it was the NIC driver going bonkers, but the question is "why"? (Also: this happens with the latest Broadcom driver as well as the default Win2k8 R2 driver.) I'm going to check the event logs after it reboots, which takes 10+ minutes as it has to eventually bluescreen as part of the shutdown first. I cleared them beforehand.
– Jeff Atwood♦, Jan 27 '10 at 21:10

we are now involving Microsoft support as we honestly believe this is an OS-level issue. We've done every bit of troubleshooting we possibly can and ruled out.. well, everything.
– Jeff Atwood♦, Apr 22 '10 at 1:54

The reason the static ARP on the haproxy node doesn't help is that your web server still can't figure out how to get back to the gateway.

Static ARP on the web server breaks the ability of your web servers to switch gateways when one of the haproxy nodes fails -- I'm guessing the virtual interface shares the same MAC address as the haproxy node's eth1, so you'd have to hard-code one of the two gateways into each web server.

Do you have any kind of security software installed on the failing web server? I spent a long night with a Windows 2008 server that had Symantec Endpoint Security on it -- it installs some filtering code in the networking stack that prevented it from seeing the gateway's ARP packets at all. The fix for that (as provided by Microsoft) was to remove the registry entry that loaded the DLL.

The other time this problem occurred, removing the whole network adapter from device manager and reinstalling seemed to help.

In fact, thinking about it, as the problem appears to be with ARP specifically, you could potentially just continuously record all ARP traffic on the HAProxy machine and the Windows machine in question, with a rolling capture file of (for argument's sake) 10MB. That should be large enough that by the time you've detected a failure, the capture file will still contain the ARP traffic from before the failure. (It's worth experimenting by running the capture for an hour or so, to see how much data it generates.)

Example capture syntax for Linux tcpdump (note, I don't have a Linux box handy to test this on; please test the behaviour of -C and -W before using in production!):

tcpdump -C 10 -i eth1 -w /var/tmp/arp.cap -W 1 arp

This should hopefully give you some indication of what precisely is failing. When an ARP entry expires (and according to this article, newer versions of Windows appear to age out 'inactive' entries very aggressively), I would expect the following to happen:

The source host will send an ARP request to the target host. ARP requests are generally broadcast, but in the case where a host is refreshing an existing entry, the ARP may be sent unicast.

The target host will respond with an ARP reply. 99% of the time this will be unicast, but the RFC permits broadcast responses. (See also the RFC regarding IPv4 Address Collision Detection for more detail).

Simple as it sounds, there are a bunch of other things that may interfere with this process:

The original request may not be arriving at the target.

The request may be arriving at target, but the response may not be reaching the source.

Some sort of high availability mechanism may be interfering with the 'normal' behaviour of ARP:

How does failover between the HAProxy nodes work? Does it use a shared MAC address, or does it use gratuitous ARP to fail an IP address over between nodes?

A lot of the MAC addresses in the ARP tables above begin with 00-15-5D, which is apparently registered to Microsoft. Are you using any form of clustering or other HA on the Windows machine in question? Are these 00-15-5D MAC addresses the same ones you see associated with the hardware NICs when you do an 'ipconfig /all' on the Windows server?
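One quick way to act on that OUI observation: strip each ARP entry down to its first three octets and see which vendors are present. A sketch using sample data (on the real gateway the input would be `arp -an` output; the second MAC below is an arbitrary placeholder for a physical NIC):

```shell
# Sample ARP table; 00:15:5d is the Microsoft (Hyper-V) OUI mentioned above,
# 00:1b:21 is an arbitrary stand-in for a hardware NIC's vendor prefix.
arp_out='? (69.59.196.211) at 00:15:5d:01:02:03 [ether] on eth1
? (69.59.196.212) at 00:1b:21:aa:bb:cc [ether] on eth1'

# First three octets (the OUI) of each entry, deduplicated.
printf '%s\n' "$arp_out" | awk '{ print substr($4, 1, 8) }' | sort -u
```

Any 00:15:5d prefix on what should be a physical interface is worth chasing down against `ipconfig /all` on the Windows side.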

Things to check if/when this happens again:

Look at the packet captures of ARP traffic; has any part of the conversation obviously not occurred?

Check the switch's bridging/CAM tables; do all the MAC addresses in question map to the ports you expect them to?

Do other hosts on the subnet have valid ARP entries for the IP addresses of both the Windows and HAProxy hosts?

Do ARP entries for the same target IP on multiple different source machines resolve to the same MAC address? i.e. log on to a couple of other hosts on the subnet and verify that 196.211 resolves to the same MAC address on both.
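That last check can be scripted. A sketch, where `host_a`/`host_b` stand in for `arp -an | grep 69.59.196.211` captured on two different machines on the subnet (the MAC below is a made-up example):

```shell
# Do two hosts on the subnet agree on the MAC address for 196.211?
# These variables stand in for `arp -an | grep 69.59.196.211` run on each host.
host_a='? (69.59.196.211) at 00:15:5d:01:02:03 [ether] on eth0'
host_b='? (69.59.196.211) at 00:15:5d:01:02:03 [ether] on eth0'

mac_a=$(printf '%s\n' "$host_a" | awk '{ print $4 }')
mac_b=$(printf '%s\n' "$host_b" | awk '{ print $4 }')

if [ "$mac_a" = "$mac_b" ]; then
    echo "consistent: $mac_a"
else
    # A mismatch would point at a stale entry or something answering ARP
    # that shouldn't be.
    echo "MISMATCH: $mac_a vs $mac_b"
fi
```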

we are definitely looking at packet captures now
– Jeff Atwood♦, Jan 28 '10 at 20:15

unfortunately the packet captures didn't show us anything obvious, and the machine we captured on has sensitive network traffic, so we can't give it to experts to look at.
– Jeff Atwood♦, Mar 12 '10 at 11:17

@Jeff: could you provide captures showing only the ARP traffic? I'd be interested to see the ARP behaviour if nothing else.
– Murali Suriar, Mar 12 '10 at 15:56

we followed MSFT support's directions on whatever data they wanted captured -- it took a few weeks, but eventually they found a private kernel networking hotfix for us.
– Jeff Atwood♦, Jun 11 '10 at 8:21

We had a similar issue with one of our 2008 R2 terminal servers where all traffic on the NIC would stop, but the link would stay connected and the NIC LEDs would still show activity. This was an ongoing issue that kept cropping up 2-3 times a week, but only after around 12-13 hours of uptime (the server is rebooted nightly).

I found Seriousbit Netbalancer was the cause after I tried (out of curiosity) terminating the NetbalancerService service; traffic then started moving across the interface again. I've since uninstalled Netbalancer.