Why do Duplex Mismatches Occur?

This is a really old story, but it needs telling.

10Base-T was originally designed as a half-duplex medium, where every station shared the 10 Mbps of bandwidth. You would plug a number of computers into a hub and rely on CSMA/CD (Carrier Sense Multiple Access with Collision Detection) to handle any contention for the wire.

Collisions were part of the design, and were an accepted norm.

In 1989, a company called Kalpana looked at this situation and figured that collisions could be dramatically reduced if the “hub” was able to act as an Ethernet bridge instead. They created the first multi-port bridge and decided to name it something completely different: an Ethernet switch.

The name caught on, and many companies decided they wanted to reduce the number of collisions they were having on their networks.

Switches became very popular as a network upgrade because no configuration was required — just unplug the hub and drop in a switch and BAM! Instant upgrade.

Along the way, the Ethernet card and switch manufacturers realized that something interesting could be done with the medium: Both sides could be configured to talk at the same time and collisions could be completely eliminated. This “full duplex” mode of communication sped things up even further.

At that time, the IEEE (the standards body that maintains the Ethernet specifications) had not yet standardized full-duplex operation, so companies that wanted to offer the capability implemented it inconsistently.

Duplex had to be set manually on adapters as well as switches. Making sure every connection was configured correctly was tedious, so vendors responded with “auto-configuration” of duplex on adapters and switch interfaces.
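On Linux, the `ethtool` utility reports a link's current speed and duplex, which makes it possible to audit interfaces for the manual settings described above. A minimal sketch of parsing that report; the sample text mimics typical `ethtool` output, and the interface name and field layout are assumptions, not guaranteed across drivers:

```python
import re

def parse_link_settings(ethtool_output: str) -> dict:
    """Pull the Speed: and Duplex: fields out of ethtool-style text."""
    settings = {}
    speed = re.search(r"Speed:\s*(\S+)", ethtool_output)
    duplex = re.search(r"Duplex:\s*(\S+)", ethtool_output)
    if speed:
        settings["speed"] = speed.group(1)
    if duplex:
        settings["duplex"] = duplex.group(1)
    return settings

# Illustrative output resembling `ethtool eth0` on a 100 Mbps link.
sample = """Settings for eth0:
        Speed: 100Mb/s
        Duplex: Half
        Auto-negotiation: on
"""
print(parse_link_settings(sample))  # {'speed': '100Mb/s', 'duplex': 'Half'}
```

A half-duplex result on a port you believed was full duplex is exactly the situation this article describes.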

The auto-detection of interface speed (10 Mbps, 100 Mbps, 1000 Mbps) was easy to accomplish because the interface could sense the signaling frequency transmitted from the other side.

Auto-detection of the remote side's duplex setting was quite complicated. The remote side would transmit a carrier signal superimposed on top of the 10 Mbps or 100 Mbps Ethernet signal and hope that the receiving side would be able to properly “read” it.

The problem was that many adapters treated the superimposed signal as “line noise” and filtered it out rather than attempting to interpret it.

Since there were no standards defined to guide companies, problems occurred.

The switch port would “auto-configure” for full duplex, and the NIC would default to half duplex after it failed to read the superimposed signal.

This created an ugly situation: one side would transmit blindly, believing it was safe to do so, while the other side saw massive collisions and CRC errors (because it was still trying to respect the CSMA/CD rules).
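The symptom pattern just described (collisions on the half-duplex side, CRC errors on the link) is distinctive enough to flag from interface error counters. A minimal sketch of such a heuristic, assuming you can read late-collision and FCS/CRC counters from your equipment; the counter names and the threshold are illustrative, not taken from any particular switch's API:

```python
def likely_duplex_mismatch(late_collisions: int, fcs_errors: int,
                           total_frames: int) -> bool:
    """Heuristic duplex-mismatch check.

    Late collisions are almost never seen on a healthy link; a duplex
    mismatch produces them alongside FCS/CRC errors that grow with
    traffic volume.
    """
    if total_frames == 0:
        return False
    error_rate = (late_collisions + fcs_errors) / total_frames
    # Any late collisions at all are suspicious; a noticeable combined
    # error rate strengthens the diagnosis.  The 0.1% cutoff is illustrative.
    return late_collisions > 0 and error_rate > 0.001

print(likely_duplex_mismatch(late_collisions=250, fcs_errors=1200,
                             total_frames=100_000))  # True
print(likely_duplex_mismatch(late_collisions=0, fcs_errors=3,
                             total_frames=100_000))  # False
```

The key tell is the late collisions: on a properly negotiated link they simply should not occur, so even a small count warrants checking both ends' duplex settings.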

The industry has not yet solved the problem (their solution is to recommend upgrading to Gigabit switches everywhere — Gigabit is always full-duplex and doesn’t have these problems!)

Gigabit isn’t the answer in many cases, as there’s currently no need for Gigabit to extend to the desktop except in rare cases (CAD design stations, for example).

Another problem is that many offices are deploying VoIP phones that require Power over Ethernet (PoE). Gigabit Ethernet uses all four pairs in the cable to carry data, while the original spare-pair style of PoE delivers power over two unused pairs. Since a standard cable has only four pairs, spare-pair PoE and Gigabit conflict; 10/100 Ethernet needs only two pairs for data, which leaves two free pairs for power, so the two can co-exist in one cable.
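The wire-pair arithmetic above can be written out explicitly. A toy calculation using only the figures from the paragraph above, where “spare-pair” PoE means the scheme that powers the phone over pairs not used for data:

```python
CABLE_PAIRS = 4        # a standard Cat 5/5e run carries four twisted pairs
POE_PAIRS_NEEDED = 2   # spare-pair PoE powers the device over two pairs

def pairs_free_for_spare_pair_poe(data_pairs_used: int) -> int:
    """Pairs left over for power when PoE may only use unused pairs."""
    return CABLE_PAIRS - data_pairs_used

# 10/100 Ethernet: two pairs for data, two left over -- power fits.
print(pairs_free_for_spare_pair_poe(2) >= POE_PAIRS_NEEDED)  # True

# Gigabit: all four pairs carry data, so no spare pairs remain for power.
print(pairs_free_for_spare_pair_poe(4) >= POE_PAIRS_NEEDED)  # False
```

This is why the “phantom power” approach described next is needed: with no spare pairs available, power has to ride on pairs that are already carrying data.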

Switch manufacturers have added support for PoE in Gigabit environments by applying “phantom power” to two of the pairs that also carry data signal. This resolves the conflict between PoE and Gigabit to the desktop, but these switches are currently quite pricey.

This means that 10/100 Ethernet is destined to remain in use for the foreseeable future, and duplex mismatches will remain a problem in networks along with it.