The Challenges of USB 3.0

After a three-year teething period, USB 3.0 is finally mainstream. Microsoft and Apple support USB 3.0 in their latest releases of Windows and Mac OS X, respectively, and Intel and AMD support USB 3.0 in their latest chipsets. That critical mass will cause USB 3.0 peripheral deployment to ramp significantly in the coming year, which in turn will fuel a new round of USB 3.0 adoption in embedded systems. This article explores the issues that developers are likely to encounter when bringing up USB 3.0 devices.

The New Technology in USB 3.0

At this point, most engineers are familiar with the marquee features of USB 3.0. The USB 3.0 specification expands the USB 2.0 specification in a number of ways. It adds SuperSpeed support: a 5 Gbps raw signaling rate (compared to 480 Mbps for high-speed USB) and dual-simplex, bi-directional data flows (compared to half-duplex for previous USB generations). It also adds finer-grained power management, an 80% increase in power for bus-powered devices, and streams support, which allows out-of-order completion of I/O requests.

If you’re developing a host stack, the view is slightly different. The improvements at the physical layer and the link layer are impressive, requiring new cables and test equipment. However, they don’t directly affect system software. The major issues that affect system software are:

The xHCI Host Controller

USB 3.0 Hubs

SuperSpeed throughput considerations

Support for USB 2.0 in devices

In the following sections, we’ll consider these points in more detail.

The xHCI Host Controller

The first challenge of USB 3.0 host implementation is the new host controller. Intel defined a new host controller interface, the eXtensible Host Controller Interface (xHCI). xHCI is not just an upgrade of the older EHCI, UHCI, and OHCI host controllers used for USB 2.0 and earlier - it is a complete re-architecture of the USB host controller interface.

USB is a bus with a strong sense of time. Data transfers for time-critical data flows can be scheduled periodically, with guaranteed service. Data transfers for non-critical data flows are scheduled in the remaining time, using “best effort” policies. The older USB host controllers were relatively simple - they depended on software to set up a large set of data structures in memory, which the host controllers would traverse continuously to keep the bus running. In a sense, the host controllers didn’t know the big picture; they just walked through the data structures and did what they were told; the USB policies emerged from the structure of the work lists.
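The legacy "work list" model can be sketched roughly as follows. This is a hypothetical, highly simplified illustration - the structure and function names are invented for this sketch and are not the actual EHCI data structures - but it captures the division of labor: software builds a linked schedule in memory, and the controller repeatedly walks it and does what it finds.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified sketch of a legacy host controller's
 * schedule.  Software links transfer descriptors into a list; the
 * controller traverses the list continuously, servicing whatever is
 * active.  The controller has no view of overall bus policy -- policy
 * emerges from how software structures the list. */
struct transfer_descriptor {
    struct transfer_descriptor *next;   /* next entry in the schedule */
    void  *buffer;                      /* data to move on the bus    */
    size_t length;
    bool   active;                      /* software sets, HW clears   */
};

/* What the controller conceptually does every (micro)frame: walk the
 * list and service each active descriptor.  In real hardware this
 * traversal is a continuous stream of memory reads -- exactly the
 * overhead that motivated the xHCI redesign. */
static void walk_schedule(struct transfer_descriptor *head)
{
    for (struct transfer_descriptor *td = head; td != NULL; td = td->next) {
        if (td->active) {
            /* issue the transfer on the bus, then retire it */
            td->active = false;
        }
    }
}
```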

When moving to SuperSpeed, this model didn’t work well. First, the host controller had to read memory continually to fetch the next schedule entry; this imposed extra load on the memory subsystem and the PCIe bus, and measurably affected system power consumption, especially on x86 architectures. Second, the schedules are hard to virtualize: if a USB bus is to be shared among several VMs, there’s no good way for the “virtual” bus controller to inform one virtual machine about the requirements of the others. Finally, the schedules are hard to create, which complicates the job of boot firmware unnecessarily and can hurt device compatibility (because different system software builds the schedules differently).