Some of you are likely new to the T&M world and don't know much about GPIB, so I'll give a quick overview. GPIB had its beginnings a long, long time ago (in the late 1960s) as the Hewlett-Packard Interface Bus (HP-IB). In 1975, the IEEE standardized the bus as IEEE 488, the General Purpose Interface Bus (GPIB).

As a young man (left), Joe Keithley made measurements one at a time and wrote them on paper. Years later (right), he recorded measurements through buses such as GPIB. (From the October 1992 cover of Test & Measurement World.)

GPIB connects multiple pieces of test equipment to a computer, allowing remote control and data acquisition. It was, at one time, a great boon to the test world. It eliminated the need for someone to go around, perform individual measurements, and write them down (hopefully neatly), and for someone else to then put them all into some usable format.

GPIB connector pinout. (Source: National Instruments)

Individual instruments are connected by a series of cables. In theory, you could have up to 15 devices (31 if you use an extender) in either a linear or forked arrangement. This was made possible by the stackable connectors.

The bus carries eight bidirectional data lines plus three handshake lines and five bus-management lines (see diagram). The cables are roughly ½ inch in diameter.

It seems pretty good, doesn't it? Even I admit it. And when there was nothing better, it was pretty sweet. But then reality set in, and all the ways things work that ain't the way they're supposed to showed up. Let's take these pitfalls in small steps.

I work for a company that still sells GPIB hardware, for Linux and Windows. When people claim things are flaky, they're almost always using cabling way out of spec. Originally it was 2 meters per load (generally per device), but in 1987 it was cut back to 1 meter per load when the speed was jacked up to 1 MB/sec. After all these years, it's still astonishing how many people get it wrong. I've seen it wrong in HP/Agilent manuals, in NI documentation, and elsewhere. People want the 'high' speed and the longer cabling, and get upset when it doesn't always work. There's actually quite a bit of margin in the definition, but some people always push it until it breaks.
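Those per-load figures make for an easy sanity check. Here's a minimal sketch using the numbers quoted above (2 m per load at standard speed, 1 m per load at the 1 MB/s rate) plus the commonly cited 20 m overall cap; the function name and the example bench are illustrative assumptions, and the real rules live in IEEE 488.1:

```python
def gpib_cable_budget(num_devices, total_cable_m, high_speed=False):
    """Rough in-spec check using the per-load figures quoted above:
    2 m of cable per load at standard speed, 1 m per load at ~1 MB/s.
    The overall bus length is also capped (20 m is the commonly cited limit).
    Illustrative only -- consult IEEE 488.1 for the actual rules."""
    per_load_m = 1.0 if high_speed else 2.0
    return total_cable_m <= min(per_load_m * num_devices, 20.0)

# A hypothetical 8-instrument bench with 14 m of cable is fine at standard speed...
print(gpib_cable_budget(8, 14.0))                    # True
# ...but out of spec if you also want the 1 MB/s rate (budget drops to 8 m).
print(gpib_cable_budget(8, 14.0, high_speed=True))   # False
```

This is exactly the trap the commenter describes: a bench that was in spec at the old 2 m/load rule silently falls out of spec once you turn on the higher speed.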

And yes, the plastic clips on Ethernet cables and the friction fit of USB just aren't appropriate for industrial control applications. Setting DIP switches for addresses can be a pain, but it pales in comparison to the frustration of getting a LAN device working in some environments, particularly when the IT department is halfway around the world.

"GPIB continues to amaze people with its persistence. While so much in the tech world is very fast to change to newer, cheaper, and better technology, GPIB remains surprisingly entrenched in the test and measurement arena."

The problem is that the equipment on the other side also needs thumb screws.

I've played a bit with USB high-retention-force connectors (available for standard size A and size B); they help. Even better, Amphenol makes a locking type A (PDF!) connector that works with any standard type A USB cable -- but neither type is very common, although I have seen some industrial equipment advertised with the high-retention connectors.

For one customer, we had to add a little machined (which means expensive!) bracket next to the connector so they could cable-tie the USB cable to the bracket.

Three decades ago, my home and lab computers required a GPIB interface for the external hard drive and certain other peripherals. As noted, the cable was expensive, inflexible, and short, and the interface card was expensive too. The ability to stack and daisy-chain the (bulky) connectors was an advantage. I'd say that for a consumer, USB has rendered the GPIB interface obsolete and irrelevant. Computer manufacturers seem to agree (indeed, most laptops are thinner than the connector). In a highly technical instrumentation lab where latency rules, I'll concede they may wish to continue connecting their test and measurement devices with GPIB interfaces.

While I share much of the sentiment regarding GPIB and its disadvantages, I have to disagree with the conclusion that it is ready to die.

GPIB's ace is latency. At around 30 times lower than Ethernet and 4 times lower than USB, GPIB still wins when speed is critical and data transfers are small. This is generally the case in production testing. While 1,000 microseconds of latency does not seem like much, a test sequence for a complex wireless device may have up to 20,000 measurement transfers of a few bytes each. At 1,000 µs per transfer, that adds 20 seconds of dead time to the test sequence, reducing throughput and increasing test cost by as much as 20%.

National Instruments has a number of papers on this subject on its website: http://www.ni.com/white-paper/3509/en/#toc2

PXI has the advantage of the very low latency and high bandwidth of PCI Express, which makes it a great choice for speed-critical testing such as production test. For those using discrete instruments who are concerned about test times, GPIB is still the way to go.

Kudos to Hewlett-Packard for developing an interface that has endured for over 40 years. Calls for its demise are somewhat premature.