The Interconnect Conundrum

On January 23rd, I moderated a panel at the biannual Platform Conference in San Jose titled “The Interconnect Conundrum: The Future Beyond PCI”. The panel was attended by around 150 design engineers and technical marketing people who work for companies that build motherboards, chipsets, interface boards, embedded communications and networking products, server I/O products, and so on. I thought it might be a good idea to review what was presented at the panel, especially since I came away wondering about the future of 3GIO, where it fits, and whether we need it at all. But first, I’ll begin by giving you a bit of background about the various high-profile interconnects, much as I delivered it to the audience that attended the panel. Then in a subsequent segment coming in a few days, I’ll provide highlights from the actual Q&A session with the panelists, and add in some of my observations. You can read about the panelists and their affiliations at this link.

You may recall that at ExtremeTech, we provided an in-depth overview of many leading interconnect technologies at this link late last year, and I encourage you to check it out if you haven’t already. The panel covered similar ground, with a more current view of some of the key issues facing the industry and end users. Questions such as how 3GIO compares to HyperTransport and InfiniBand, how HyperTransport is differentiated from RapidIO, where PCI-X and InfiniBand overlap, how well-suited particular interconnects are for optical interfaces, and whether 3GIO has a bright future were among the areas the panel covered.

PCI is a Bottleneck

There’s no question that as processors, memory, networking, storage, and graphics technologies have scaled in performance over the past five or six years, basic 32-bit/33MHz PCI, a technology developed in the early 1990s, has run out of steam for many data-intensive applications.
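To put some rough numbers behind that, here’s a quick back-of-the-envelope sketch comparing classic PCI’s theoretical peak with the nominal rates of a few devices that commonly sit on it. The device mix is a hypothetical example, and all figures are peak line rates that ignore bus arbitration and protocol overhead, so real sustained throughput is noticeably lower.

```python
# Back-of-the-envelope peak bandwidth for classic PCI (nominal figures only;
# arbitration and protocol overhead make real throughput lower).

bus_width_bits = 32      # classic PCI data path
clock_mhz = 33.33        # classic PCI clock

peak_mb_per_s = bus_width_bits / 8 * clock_mhz
print(f"32-bit/33MHz PCI peak: ~{peak_mb_per_s:.0f} MB/s")   # ~133 MB/s

# A few devices that would share that single bus (nominal peak rates):
devices_mb_per_s = {
    "Gigabit Ethernet NIC":  125,   # 1 Gb/s is roughly 125 MB/s
    "Ultra160 SCSI HBA":     160,
    "AGP-class 3D graphics": 266,   # even AGP 1x exceeds the whole PCI bus
}
for name, rate in devices_mb_per_s.items():
    print(f"{name:24s} {rate:4d} MB/s  ({rate / peak_mb_per_s:.1f}x the bus)")
```

Even at theoretical peak, a single Gigabit NIC consumes nearly the entire bus, which is exactly the pressure that pushed graphics onto AGP and servers toward wider, faster PCI variants.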

In fact, system architects are in general agreement that shared, multidrop bus technology has reached its limits for many high-performance applications, and that high-speed, low-pin-count, point-to-point connections are necessary going forward.

Recall in the PC space that PCI couldn’t keep up with graphics performance, and AGP came to the rescue about five years ago to boost bandwidth with its point-to-point interconnect. And a few years ago, Intel replaced its north-to-south bridge PCI interconnect with its proprietary Hublink technology; we now see VIA with VLink, SiS with MuTIOL, and of course Nvidia using HyperTransport in its nForce chipset (and the Xbox). In the server space, we saw multiple PCI buses designed into boxes in the mid-1990s, and then servers moved to one or more 64-bit/33MHz or 64-bit/66MHz PCI buses.

Today we’re starting to see PCI-X in a few high-end servers to handle the increased bandwidth requirements of multiple Gigabit and 10Gbit Ethernet segments, Ultra160 and soon Ultra320 SCSI, plus Fibre Channel and clustering interconnects.
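To see why PCI-X (and multiple bus segments) shows up in those boxes, here’s a hedged sketch that tallies the aggregate demand of a plausible, hypothetical high-end server card mix against shared-bus capacity. Again, these are nominal peak rates that no real system sustains.

```python
# Rough aggregate I/O demand for a hypothetical high-end server card mix,
# compared with shared-bus capacity (all numbers are nominal peak rates).

io_mix_mb_per_s = {
    "2x Gigabit Ethernet": 2 * 125,
    "Ultra160 SCSI":       160,
    "Ultra320 SCSI":       320,
    "2Gb Fibre Channel":   200,
}

bus_peaks_mb_per_s = {
    "PCI 64-bit/33MHz":     266,   # 8 bytes x 33.33 MHz
    "PCI 64-bit/66MHz":     533,
    "PCI-X 64-bit/133MHz": 1066,
}

demand = sum(io_mix_mb_per_s.values())
print(f"Aggregate I/O demand: ~{demand} MB/s")
for bus, peak in bus_peaks_mb_per_s.items():
    verdict = "fits" if demand <= peak else "oversubscribed"
    print(f"{bus:22s} {peak:5d} MB/s -> {verdict}")
```

Even at PCI-X speeds the headroom is slim, which is one reason high-end servers spread these controllers across multiple bus segments rather than hanging everything off a single shared bus.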

Today’s communications and storage devices require high-bandwidth, low-latency interconnects both inside and outside the box, and of course CPU interconnects in multiprocessor systems are especially sensitive to bandwidth and latency.

It’s very clear that with the rapid evolution in processors and I/O technologies, we need faster, more scalable interconnects at many levels. But certainly, PCI in its present form will be with us for many years to come in low-end to midrange computing environments.
