Overloading Your Server with Multi-Homing

Did you know that Windows NT 3.51 (and nothing suggests this situation will change in 4.0) is not always the easiest OS in the world to troubleshoot? In the Lab, we keep finding this out--the hard way.

In our continuing efforts to bring you meaningful performance test results, we had a neat idea: Set up a heterogeneous network with every known protocol and as many different network types as we have cards for, in a multimaster domain model. Then we can test any software or hardware in any configuration and have transparent access to the Internet, our corporate LAN, and all the systems in the Lab. Pretty nifty, huh?

Well, not really. Experienced network administrators will ask, "Why would you even want to?" or, "You really think you're gonna make that work?" Neither question is far off the mark. How many corporate LANs have 10BaseT, 100BaseTX, 100BaseT4, and 100VG-AnyLAN Ethernet all connected to the same computer? If yours does, we'd like to hear about it.

We wanted this setup to centralize system and network administration on one machine and to route traffic among all the networks we test. So here's what we did: In a Digital Prioris HX 5133DP server (dual 133MHz Pentium, 64MB of RAM, 4GB disk), we installed one card for each of the network types above. Then we gave all the cards IP addresses in the same subnet, creating a multi-homed configuration (for background, see Ed Tittel and Mary Madden, "Multi-Homing on the Web," September 1996), so that we'd have no conflicts with corporate computers or outside addresses. (We stayed within our licensed address range.) We intended this system to operate as the Primary Domain Controller (PDC) for all systems in the Lab.

The plan didn't work so well. First, the PDC slowed to a crawl: It booted and ran sluggishly, network accesses easily confused it, and it crashed at every opportunity. Second, nothing really worked: Because networked systems couldn't see the PDC, it couldn't route traffic, and domain administration didn't function.

This mess happened for several reasons. We learned that although NT lets you do all this, it really isn't meant to--Microsoft just gives you enough rope to hang yourself. All kinds of conflicts occur: The machine can't adequately handle the I/O interrupts for that many network adapters; you can't have more than one NIC on the same subnet, or the system doesn't know which card outbound traffic should go through; and you have to set up routing tables manually to convince the machine that acting as a multiprotocol router is okay--and even then the solution doesn't work cleanly.
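The same-subnet ambiguity is easy to demonstrate. Here's a sketch in modern Python (our illustration, not NT's actual routing code; the addresses and card names are invented): with two NICs on one subnet, the routing table holds two equally specific routes, so the stack has no basis for choosing a card.

```python
import ipaddress

# Two hypothetical NICs, both configured on the same 204.56.55.0/24 subnet.
nic_routes = {
    "10BaseT card":   ipaddress.ip_network("204.56.55.0/24"),
    "100BaseTX card": ipaddress.ip_network("204.56.55.0/24"),
}

# Any destination on that subnet matches both routes equally well.
dest = ipaddress.ip_address("204.56.55.99")
matches = [nic for nic, net in nic_routes.items() if dest in net]
print(matches)  # both cards match: the outbound route is ambiguous
```

With two equal matches and no tie-breaker, whichever card the system happens to pick may not be the one the destination is physically reachable through--exactly the confusion we saw.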

To solve all these problems (well, most of them), we changed each card's IP address to one on a different subnet (204.56.55.XXX on one card, 204.100.100.XXX on another, and so on). The machine now works like a champ (except for some slowdown from interrupt handling). At least it runs. The networks still can't see one another (we attribute this blindness to a Multi-Protocol Routing--MPR--problem), but they can ping.
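The fix works because non-overlapping subnets make every route lookup unambiguous. A minimal sketch, echoing the article's address scheme (the /24 masks and card names are our assumption):

```python
import ipaddress

# One distinct subnet per card, so the routes no longer overlap.
nic_routes = {
    "10BaseT card":   ipaddress.ip_network("204.56.55.0/24"),
    "100BaseTX card": ipaddress.ip_network("204.100.100.0/24"),
}

def pick_card(addr):
    """Return the one card whose subnet contains the destination."""
    matches = [nic for nic, net in nic_routes.items()
               if ipaddress.ip_address(addr) in net]
    assert len(matches) == 1, "subnets must not overlap"
    return matches[0]

print(pick_card("204.100.100.7"))  # -> 100BaseTX card
```

Each destination now maps to exactly one card, which is why the machine stopped thrashing once we renumbered.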

This solution is not the best: We had to invent IP addresses for each network type. (For information about IP addressing and MPR, see Mark Minasi's column, "Gateways Revisited," on page 47.) We can no longer attach these systems to the Internet for fear of trampling someone else's addresses. You can do what you want on an isolated network, but if you throw the outside world into the mix, things get a lot more complicated.
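One hedge against trampling registered addresses on an isolated network is the private address space that RFC 1918 (published in early 1996) sets aside: 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 will never be assigned on the Internet. A quick check, using Python's `ipaddress` module as a stand-in (the sample addresses are invented):

```python
import ipaddress

# Addresses in the RFC 1918 ranges are reserved for private use and
# can never collide with a registered Internet address.
for addr in ["204.56.55.1", "10.1.2.3", "192.168.1.1"]:
    print(addr, ipaddress.ip_address(addr).is_private)
```

Invented addresses like our 204.x.x.x subnets flag as public--someone else may legitimately own them--while the private ranges are safe for an isolated Lab.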

What if you want to increase total throughput by running multiple network segments into a single server? What if you need to route traffic among OSs and network types? What if you need multiple IP addresses to run virtual servers on one Web machine? The world is running out of new IP addresses, so you may need multiple NICs in one server but have only a limited address range to work in. Your only choice is to buy multiple addresses and deal with the administrative hassle of maintaining client systems attached to the same server but on different subnets. NT will let you operate multiple addresses in the same subnet on one machine, but that solution is far from ideal.
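The virtual-server case boils down to binding one listening socket per address the machine owns, so incoming connections are sorted by destination. A minimal sketch (the site names are hypothetical; so this demo runs anywhere, both sockets bind loopback on OS-assigned ephemeral ports--a real multi-homed Web machine would bind each assigned IP address on port 80 instead):

```python
import socket

# One listening socket per (hypothetical) virtual server.
sites = {}
for name in ("site-a", "site-b"):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))  # port 0: let the OS pick an ephemeral port
    s.listen(1)
    sites[name] = s

# Record where each server ended up listening.
ports = {name: s.getsockname()[1] for name, s in sites.items()}
print(ports)

for s in sites.values():
    s.close()
```

Binding to a specific address (rather than the wildcard) is what lets several servers coexist on one box--each socket receives only the traffic addressed to it.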

Perhaps something like Microsoft's new proxy server, Internet Access Server (IAS--formerly code-named Catapult), will make all this hassle moot. With it, you'll be able to mask your corporate IP range from the world and use one IP address to interface to the Internet. According to rumor, Service Pack 4 for NT Server 3.51 solves some of these multi-homing/multiple-NIC problems. (For information about IAS, see Mark Joseph Edwards, "Microsoft's Internet Access Server," September 1996, and "Configuring Microsoft's Internet Access Server," page 153.)

Discuss this Article

Joel Shandelman

on Aug 13, 1999

Your October Lab Guys article, “Overloading Your Server with Multi-Homing,” was entertaining. As a UNIX networking professional with more than 10 years of TCP/IP experience, I’m amused that anyone would expect to multi-home a server/workstation with all the interfaces on the same IP subnet.
Solaris handles multi-homing with ease. My SPARCstation 10, which I use as a network monitoring station, has three 10BaseT interfaces and two Cisco FDDI SBus interfaces. This scenario demonstrates Solaris's robustness in handling all the interfaces. The main difference between what you tried and what I am doing is that I have each interface configured on a separate IP subnet. Also, the 10BaseT cards use a 24-bit subnet mask, and the FDDI cards use a 25-bit subnet mask.
IP routing issues affect the dynamic routing table inside the server. You can’t have multiple routes to the same subnet, each with the same hop count (all 0 in your case). You can typically have one default route (generally, through the nearest router) and other dynamically learned routes, one per destination subnet. How did you ever expect your server to work with the routing constraints that the IP routing protocols (e.g., RIP) impose?
UNIX network administrators will make better NT administrators than those who grew up in Novell and Banyan environments. It’s not just the network operating system (NOS) that’s important, but also the underlying network protocols and their implied complexities. TCP/IP is better left for the UNIX guys and gals.
Keep up the good work. My subscription order is in the mail.
--Joel Shandelman