USB 3.0 technology is notable because it allows multiple streams of data transfer, increasing peak signaling bandwidth to 5 Gbps, compared with 480 Mbps for USB 2.0. It is also designed for better power management.

The session is important because many PC manufacturers and peripheral vendors have already added USB 3.0 to their products, such as the Apple MacBook Air and MacBook Pro, the Toshiba TransMemory Pro 128GB USB 3.0 flash drive, and the HGST line of Touro S mobile hard drives.
Here are some of the topics the session promises to cover:

History of USB

Internal operation

Difference from USB 2.0

Best practices for development

I caught up with Colin, an embedded technologist at Mentor Graphics Corp., and asked him what he plans to cover in the session. He noted that the biggest misconception about USB 3.0 is that it is just a faster version of USB 2.0. For embedded engineers, the biggest concern is that USB 3.0 introduces key new features and implementation changes that software engineers need to understand. He said there are no comparable worries about USB 3.1, because it will not change anything particularly pertinent to a software engineer.

This session takes place on Wednesday, April 2 from 3:30 p.m. to 4:30 p.m. in the San Jose Convention Center 210A. For more on USB 3.0 Compliance Testing, see my post from last year.

That is a very interesting idea. The USB stick might also behave as a storage device and contain the host [PC, Mac, Linux ...] application that makes use of the "coprocessor". It would raise some interesting class driver issues, but nothing insurmountable. If you're at EE Live!, I'd welcome a chat on this topic.

Yes, application dependent. Are there applications that don't fit efficiently on a standard CPU but might fit better on dedicated hardware connected as a "stick" via USB 3.0?

One example would be an FPGA co-processor that could be configured for different algorithms, with data streamed to it over the USB connection. If the algorithm needed lots of input data, many parallel processing elements (you could fit hundreds on an FPGA), and only a small amount of local memory, it could be much faster than running on the CPU.

DrFPGA: I am not quite sure what you have in mind. Obviously you can create a USB 3.0 enabled device with any amount of computing power. Whether the protocol overhead would outweigh the advantage of sending data off for processing is entirely application dependent.
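The overhead question here can be framed with a back-of-envelope model: offloading only pays when the time to ship the data over the link plus the remote compute time beats computing locally. Here is a minimal sketch; every number in it (job size, host and device compute times) is an illustrative assumption, not a measurement, and it ignores real-world protocol overhead beyond raw signaling rate:

```python
# Back-of-envelope model: does offloading a workload over USB pay off?
# All workload numbers below are illustrative assumptions, not measurements.

def offload_wins(data_bytes, link_bps, host_secs, device_secs):
    """True if shipping the data over the link and computing on the
    attached device beats computing on the host CPU."""
    transfer_secs = (data_bytes * 8) / link_bps  # payload transfer time
    return transfer_secs + device_secs < host_secs

USB2_BPS = 480e6  # USB 2.0 peak signaling rate
USB3_BPS = 5e9    # USB 3.0 peak signaling rate

# Hypothetical job: 100 MB of input, 1.5 s on the host CPU,
# 0.2 s on a parallel coprocessor stick.
data = 100e6
print(offload_wins(data, USB2_BPS, 1.5, 0.2))  # False: transfer alone takes ~1.67 s
print(offload_wins(data, USB3_BPS, 1.5, 0.2))  # True: transfer takes only ~0.16 s
```

At 480 Mbps the link itself eats more time than the host computation, so USB 2.0 offload loses; at 5 Gbps the same job comes out roughly four times faster, which is why the higher rate opens the door to co-processing scenarios that made no sense before.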

I'm interested in the new applications the higher data transfer rate will create. For example, is this fast enough to allow co-processing across the USB 3.0 connection? Could we see USB 3.0 sticks with specialized processors on the stick that would be faster than the host CPU?