Posted
by
CmdrTaco
on Wednesday September 19, 2007 @08:18AM
from the i'll-believe-it-when-it's-on-my-laptop dept.

psychicsword writes "Intel and others plan to release a new version of the ubiquitous Universal Serial Bus technology in the first half of 2008, a revamp the chipmaker said will make data transfer rates more than 10 times as fast by adding fiber-optic links alongside the traditional copper wires. The current USB 2.0 version has a top data-transfer rate of 480 megabits per second, so a tenfold increase would be 4.8 gigabits per second. This should make USB hard drives easier and faster to use."

1.5 seconds if all of your components were fast enough. The drive won't be.

Exactly. There's still the slowdown associated with the mechanical aspects of the hard drive -- spin rate (RPM), average seek time (ms), etc. On top of that, most hard drive controllers are limited by the technology they use. For instance, a SATA hard drive, even plugged into a USB 2 or 3 port, is limited to 150 MB/s -- but, that's burst speed, not sustained transfer rate.

On top of that, most hard drive controllers are limited by the technology they use. For instance, a SATA hard drive, even plugged into a USB 2 or 3 port, is limited to 150 MB/s -- but, that's burst speed, not sustained transfer rate.

Indeed. And realistically, it's going to be a pretty short burst: most hard drives today only have something like 8–16MB of cache that might be filled by a smart lookahead algorithm, so your best case with current hard drive technology is that you'll get perhaps 1/10 of a second of high-speed data transfer before hitting the physical barriers.
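The burst claim above is easy to sanity-check with back-of-envelope arithmetic. The 16 MB cache size and the 4.8 Gbit/s link rate are just the figures assumed in this thread, not measured values:

```python
# Rough check of the "short burst" claim: how long does it take a
# full-speed USB3 link to drain a typical on-drive cache?
CACHE_BYTES = 16 * 1024 * 1024        # assumed 16 MB drive cache
LINK_BITS_PER_SEC = 4.8e9             # quoted USB3 raw rate
LINK_BYTES_PER_SEC = LINK_BITS_PER_SEC / 8

burst_seconds = CACHE_BYTES / LINK_BYTES_PER_SEC
print(f"cache drains in ~{burst_seconds:.3f} s")
```

That works out to under three hundredths of a second of full-speed transfer before the heads and platters have to keep up, which is in the same ballpark as the "1/10 of a second" estimate above.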

I'm not sure this is directly applicable to this discussion, though, because AFAIK all current USB drives use different storage technology anyway. It's going to be the limits of that technology that tell us whether USB3's theoretical speeds will actually be useful with storage hardware available in the same time frame.

The quoted speed of USB 3 is probably the bus speed, i.e. it's shared by all devices connected to the same host. So one disk won't saturate the bus, but if you plug in a bunch of them the bandwidth won't seem so incredibly massive anymore. Then you have to consider the bandwidth reserved by isochronous devices etc.

1.5 seconds if all of your components were fast enough. The drive won't be.

Exactly. There's still the slowdown associated with the mechanical aspects of the hard drive -- spin rate (RPM), average seek time (ms), etc. On top of that, most hard drive controllers are limited by the technology they use. For instance, a SATA hard drive, even plugged into a USB 2 or 3 port, is limited to 150 MB/s -- but, that's burst speed, not sustained transfer rate.

you people are all missing the point.

It's obvious that the 4.8 Gbps link is faster than the device... but recall that all USB devices on a port share the bandwidth. A faster link will allow a lot more devices to simultaneously transfer data at their maximum-possible speed.

One example: You'll be able to put a multichannel audio I/O device and hard disks on the same bus without worrying about dropouts, etc.

As for "Isn't 4.8 Gbps faster than the computer can handle?" That's true, and it's already the c

No. Flash is slow to write, very fast to read. Hence Windows' use of it for "ReadyBoost" caching. There is extremely low latency, just not enough bandwidth to sustain high levels of I/O.

On the other hand, doesn't introducing fiber into the link take away the greatest part of USB? Being able to just fold up the cable and stuff it in your pocket along with a small hard drive? I know I use it for restoring machines after catastrophic failures (yeah, Windows), and sometimes I don't go right back to my desk with the cable and drive and have to toss it in my pocket. I can't do that with fiber; it would fracture.

Frankly, this is just "USB Fibre Channel". Why not USE Fibre Channel??? Surely in mass "consumer" production we can get the chipset / transceiver / cable cost down... It would be nice to come up with better connector technology that protects the optics better, however, but LC isn't THAT bad, and can be had for around $16 for a 3 m long cable.

I can't find the exact article, but you should read this one about the effective USB 2.0 speed [everythingusb.com]. It states that the effective maximum speed is only about 40MB/sec, and that 60MB/sec can't be achieved due to overhead/software limitations; not sure if this is true now.

My experience with USB is that it has lousy thru-put due to the way the drivers manage data on the host side. We did some heuristic timing tests on USB-to-serial devices and found that most of them actually degraded the thru-put (time to send and receive a packet) of an RS232 connection. We found that devices with Silicon Labs ICs could get the desired thru-put, but it seemed they would take up an entire USB channel to do so.

Exactly... Firewire400 works well for video streaming from a DV cam because it has very little overhead. Even though USB2 supposedly does 480Mbps, it can't do DV because there's too much overhead.
Bottom line is, unless USB3 gets rid of the CPU dependency and overhead issues, I won't like it. Sure, with "ten times" the performance, this won't hinder DV, but that doesn't make it good.
I hope they make it systemically-efficient, instead of just ramping clock rates to reach these speeds.

Bonus points if you hook a 108mb/s wireless lan adapter via USB and throw some large data files over it, watch your data speeds closely, and monitor system performance even closer.

FireWire (1394, iLink, DV port, whatever) really was the shit: not only fast with low overheads, it's a peer-to-peer setup. In a pinch you could daisy-chain PCs with it for an impromptu 400 Mb/s LAN.

Why didn't they just hang USB out to dry, get power into the eSATA spec, and use that? At least then no extra chips would be needed on a mobo, external HDDs would hook up with no loss in performance, and we might finally see thumb drives that work natively with ANY OS as... drives.

That's with today's drives. Keep in mind that this version of USB is not due to arrive on the market for another 1-2 years, and is expected to stay in use for many years after that. Furthermore, RAID enclosures aren't actually that uncommon.

The throughput of a USB connection does not equal its clock frequency, as there is quite a bit of overhead in the protocol, so in reality it would be a fair bit less than the 600 MByte/s approximation. Because it's a bus, the total bandwidth available can be split amongst multiple devices. With several high speed devices on the bus, 480 Mbit per second might not seem so much like overkill. The current version of USB provides connections that can operate in an isochronous mode (see http://www.beyondlogic.org/us [beyondlogic.org]
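A quick sketch of how that shared bandwidth divides up. The efficiency factor here is an assumption, not a spec value (the ~40 MB/s effective figure quoted elsewhere in the thread implies roughly two-thirds efficiency for USB 2.0), and the device count is arbitrary:

```python
# How much bandwidth does each device get on a shared USB 2.0 bus,
# once protocol overhead is subtracted?
RAW_MBIT = 480.0      # USB 2.0 raw signalling rate
EFFICIENCY = 0.67     # assumed overhead factor, not a spec value
devices = 4           # e.g. disk, webcam, audio interface, printer

effective_mbyte = RAW_MBIT * EFFICIENCY / 8   # total usable MB/s
per_device = effective_mbyte / devices        # naive even split
print(f"~{effective_mbyte:.0f} MB/s total, ~{per_device:.1f} MB/s per device")
```

With four busy devices each one sees only about 10 MB/s on this model, which is why "480 Mbit is overkill" stops being true the moment you hang a hub full of hardware off one port.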

Or having more than one mass storage device, video camera and microphone, speakers and so on hanging off the same USB hub you plug into your laptop. One connector for everything, and that without the expensive, model-specific docking stations we've been using so far.

I wonder how adding fiber optic links will affect the size and power requirements of USB3 devices. Granted, small LEDs use minuscule amounts of energy, but wouldn't having to squeeze in power supplies and photodiodes at each end of the cable make it more difficult to squeeze it all into the micro-USB-sized interfaces used on most phones and mp3 players?

Well, for one, the photo-emitters would be in the devices themselves, in the port, not the cable. Same with power supplies. The day a CABLE needs a power supply is the day mankind has royally fucked up with technology.

Currently I'm getting transfer rates of about 16 megabyte per second on hard drives connected via USB. That's roughly 160 megabit per second, whereas USB 2.0 can transfer up to 480 megabit per second. While I'm all for faster and better, the bottleneck seems to be elsewhere in this case.

In the controller, likely. I'm getting 30% higher transfer rates with FW400 than with USB 2.0 on the same external disk.

It's at least as likely the problem is in the protocol. USB is synchronous -- every data packet received must be acknowledged in a return packet before the next data packet can be transmitted. That back-and-forth for each data packet means a lot of wasted time where the channel is essentially idle. Sometimes using a shorter cable can make a noticeable improvement.

FireWire has both synchronous and asynchronous modes. In async mode, a bunch of packets can be transmitted before any acknowledgment back is required. That's bad if the cable is flaky, since it will result in a lot of retransmits, but bad FireWire cables are the exception, not the rule. So async is almost always way more efficient than sync. I'm pretty certain that you are using the async mode for talking to your disk.

As far as I know, the term 'synchronous' refers to using the same clock signal at both ends of the communication link to synchronize the data transfer; in that sense, USB wouldn't be synchronous.

I think the performance edge that FireWire 400 has over USB 2.0 has more to do with USB having a host/peripheral scheme, where the low-level protocol operations rely on the host processor, while FW is more peer-to-peer oriented, so those I/O operations are carried out by the device controller and the host controller.

When I was working on my Master's thesis, I had to splice optical fiber a few times. Believe me, it's not easy.

Glass fiber is very flexible. You can bend it in any way you want, it won't break. You can cut it, but that takes considerable force. If you break the fiber, you'll break the copper wires as well.

Personally, I think the weakest point in such a cable will be the connectors. Getting the light from one fiber to another requires careful alignment. Any deviation might cause loss of signal. Getting dirt into the connector is probably fatal.

Oh good. Now I get to plunk down $20+ per cable for the latest USB standard. I really like that copper USB2 cables are just about down to free from some online retailers. Looks like that will not be the case with USB3.

In an interview after the speech, Gelsinger said there's typically a one- to two-year lag between the release of the specification and the availability of the technology.

In today's news, vendors worldwide urged one another to move quickly and get IPv6 deployed by the year 2025. When asked about the one or two year lag between the release of specs and the availability of the technology, vendors quickly pointed out the timeframe it took to implement Packet Over Bongo [eagle.auc.ca] and IPv6 for Refrigerators [glocom.org]. "It's been a long time in the making (IPv6) but we've finally succeeded in getting console connectivity to the fridge. We can now via a command prompt: finger lettuce" stated the happy refrigerator engineer. We never even knew of the existence of IPv4 for refrigerators. Engineers estimate another 20-80 year wait for IPv6.

While I appreciate USB's capability for backwards compatibility, I would much rather have a plug shaped in such a way that I didn't have to flip it over every time I try to plug it in. I don't know about you guys but this is one of the most annoying aspects of using my computer, and I run Windows!

This would also be a great time to make a universal "other side" of the cable, rather than having a different plug for every single USB device. I have a mini plug for my camera, a big square one for my printer, a 2.5 mm jack to charge my MP3 player, etc. All these cables make a mess. If all my devices could share one cable, I'd be much happier.

While I'm mildly annoyed by having to flip them over, I quickly learned to simply look at the plug before I stick it in. Having it be flippable would mean duplicating wires and/or contacts and would make the cables more susceptible to damage and more expensive. As for the ends... Blame the device manufacturers. There were originally 2 ends, the fat one and the flat one, and there was 1 of each on every cable. All the others with 2.5mm and other proprietary ends that work on nothing else are solely the fault of the OEMs.

All the others with 2.5mm and other proprietary ends that work on nothing else are solely the fault of the OEMs. Nobody asked them to do it, it never made sense to do it, and it's just a huge pain in the ass for everyone.

It makes a whole lot of sense to an OEM who wants to be the only one who can sell their customers a cable despite an open standard. It's the same reason for every standard out there which suppliers have taken it upon themselves to add their own "enhancements" to.

Having it be flippable would mean duplicating wires and/or contacts and would make the cables more susceptible to damage and more expensive.

I don't think the idea is to have the cable flippable, but instead to have some indication in the shape of which way around it goes. FireWire connectors have a rounded end and a squared-off one, for example.

It could be worse, we could all be forced to use SCSI devices instead. That had a ton of port styles and shapes as well as internal and external versions of each. Also, you had to daisy-chain your devices, secure them with a screwdriver/your fingers, configure each with its own ID number and make sure the end of the chain was terminated properly. Then to top it off, cables, adapters, and terminators were insanely expensive. I'd swear that every time I bought a new SCSI device it was like playing a puzzle

While I appreciate USB's capability for backwards compatibility, I would much rather have a plug shaped in such a way that I didn't have to flip it over every time I try to plug it in. I don't know about you guys but this is one of the most annoying aspects of using my computer, and I run Windows!

You know, you should only need to flip it over half the time. If you really need to do it more often than that, either your subconscious is an a*hole that likes messing with yourself, or the world really is out to get _you_, specifically.

Also, did you know that the USB connector is exactly as wide as an RJ45 network connector? No, neither did I, until my mother plugged her mouse into the ethernet port. (:-)

Seriously, the connector should have twice the number of pins it needs, and they should be symmetrical. That would also increase reliability, because if the cable didn't work one way because of a bad pin, you could just flip it over until you could buy a new cable.

Working on computers that put the USB ports 1" from the ground. Or even better, hide them between the network port and all of the other ports. Real easy to access either port location when the computer is shoved wherever it fits and is out of the way. How about moving the ports to the top of the computers and having them lit up all the time so that you can find them instantly? Would that be so freaking hard? You can even put a little cover over the ports with your OEM logo on it or something. Best USB port d

Interesting. That should put the last nail into Firewire's coffin (and FW 800's). I wonder if we'll see USB 3.0 eat into SATA's market with internal USB3 drives on desktop and laptop machines. That could make desktops cheaper - ditching the IDE/SATA controller means one less component.

Last nail? They haven't even started fitting FireWire with a coffin yet. The only thing dying in this equation is USB2, and thankfully so. So it is faster than USB2... joy; my mouse, printer and keyboard all now react at the same speed, regardless of the interface speed boost. What's the point of USB3 again? If it doesn't boost the speed of my peripherals at least as well as my existing FireWire 400/800 ports, then why do I need it? It IS a pity that iPods stopped shipping with FireWire and went to USB2, but that was a business choice to have maximum compatibility. Too bad my newer iPods can't transfer 2 songs per second anymore, like they used to when they were FireWire 400. But like many others have said, USB2 is "good enough", so it is really hard to get excited about USB3, which will be "gooder enough?"

If they design it right, using the wrong type of cable will only degrade performance, your device will still work at USB 2.0 speeds. It's also likely that your computer will negotiate with the device so that if the device says 3.0 but there's no signal on the optical link, the computer will tell you. Any system should be designed to fail gracefully.

Give us a standard that actually delivers enough power that you don't need an additional power cord for just about every other device already...:/

According to Intel, the new USB 3 standard will use fiber optic cable for data as well as power. The data will be modulated on a high-powered laser light signal, enough to deliver the power to spin up a harddisk, or, alternatively, burn through one solid oaken office door as well as the sales guy who was about to open said door.

This article sent me on a reading binge about all these different specs and ways to connect and yeah, it seems like eSATA would be the obvious choice. Maybe SATA is only for hard drives and not for things like flash sticks? I don't know.

I wonder if eventually the speed and latency of USB will reach a point where SATA, for instance, becomes unnecessary. Or how about ethernet? We'll always need REALLY fast links like PCIe or dedicated ones like DVI. But when it comes to busses, perhaps it would be good to settle on one standard. I'm envisioning something along the lines of 100 gigabit ethernet, except all computers would have a whole bunch of dedicated ports, rather than just one network.

drive? It would only make sense that since it's solid state it would be faster than our primitive hard drives with their moving parts... (yes, I know about SSDs and don't have 2500 dollars to spare)

I've got a question that has been nagging at me for quite a while and was hoping someone here could phrase an answer in terms a mere mortal could understand.

Why are there so many serial specifications?

We've got, off the top of my head, SCSI, USB, Ethernet, FireWire, and SATA to name a few. I do understand there are different protocols (all the way up from the physical to the application layers). Different applications of these technologies permit some optimizations that might not be applicable in other situations. But, at some point, the underlying technology is fast enough.

Still, I can't help but think there should be some common denominator that ALL these communications standards can agree on, and through economies of scale, become universal standard(s). It just seems like people keep re-inventing the wheel with an eye toward THEIR favorite.

I thought we were getting close when they released gigabit Ethernet over UTP (unshielded twisted pair).

can handle distances up to 100 meters

fast data rate (1000 Mbps)

supports lower data rates (100/10 Mbps)

development is underway for 10Gbps, too.

So, for the sake of argument, why not have all of our serial devices just support gigabit Ethernet? Sure, you'd need a hub or switch in your PC to talk to all of the devices, but you already need something similar for the other protocols (USB hub, SCSI controller, etc.). It's a well-known technology with many implementations and is widely available. I'd willingly pay a few more bucks for each device if I could ditch all of these incompatible formats and just standardize on one SET of ports and cables for hooking things to (and within) my PC. And in those cases where a different connector is desired (e.g. for small form-factor devices like a digital camera), let me just get an adapter cable/plug that I can plug into my Ethernet port.

Is there any good, technical reason that is keeping us from having truly UNIVERSAL serial communications?

I think the main problem is repeated insertion/removal vs. semi-permanent installation.

USB connectors are designed to be inserted and removed over and over. They're held in by pressure against the connector, so they can be removed without having to push a tab or twist the connector to remove it.

UTP cables are designed to be plugged in, and then generally left alone. The UTP cable in my computer bag is in terrible shape... the RJ45 connector is coming loose, the plastic retaining tab is broken off (so the cable often pops out of the jack on its own), etc.

I have USB devices which I've removed and inserted hundreds of times, and the connectors still work reliably.

A bunch of other people have chimed in with parts of the answer already, and I thought I'd add some more.

Obviously each manufacturer wants you to use their standard and buy their hardware. Different implementations came through at different times, have different amounts of software/hardware overhead, and try to do different things. RS232 has almost no necessary software overhead -- you do any and all the work with code you write. USB has *quite* a bit of software overhead to deal with device identification, and Ethernet has *enormous* amounts of overhead. In most small systems you have to buy an Ethernet software stack separately from the OS you're using. USB tries to provide power. People are trying to glue power into Ethernet, although it hasn't yet taken off. People keep going off in odd wireless directions.

The fundamental problem, I think, is that there are several different connectivity needs, and manufacturers are trying to get you to buy their solution to what they think are the most important needs. What you're asking for is something good for the industry in the long term, and that's not really in the direct interest of any particular company, so nobody's building anything for it.

The military embedded market seems to be moving towards gigabit or 10-gig fiber Ethernet for all their interboard communications, but fiber has its own problems, and I'm not sure it's the right thing for a USB key you're carrying in your pocket all the time.

I believe that the SCSI module in Linux handles FireWire and USB, so from that standpoint it looks like a start towards universal communications, except for Ethernet. (Even though old SCSI is nothing like serial: it's the ultimate expression of parallel communications, with some similarities to the old HP GPIB parallel communication standard that's still used for test communication but used to be a hard drive standard.) I have no idea what Windows does.

Is there any good, technical reason that is keeping us from having truly UNIVERSAL serial communications?

Yes.

Let me explain:

USB, per standard (host side), must be able to source at least 500 mA. Even though there is a PoE (Power over Ethernet) standard, most choose not to implement it by default. Hence, Ethernet generally can't power devices like flash drives and hard drives.

USB uses four wires, Ethernet twice as many. USB is a synchronous bus, meaning that there are (theoretically, at least) no collisions. A properly operating USB device will not stomp on someone else's data packet. Thus, for a given bitrate, a higher portion of the bandwidth is available to applications. Unlike Ethernet, adding devices to the same physical connection will not degrade the overall bandwidth of the network. In practice, I've found 100 Mbit Ethernet devices operating with a maximum throughput of about 35-40 Mbits/second because of collisions. And this was with *two* devices! Getting better throughput requires using switches (which minimize or eliminate collisions).

RS232 is a pretty universal serial communication standard. However, it is also slow.

There are tradeoffs between maximum cable length and the speed of the bus.

There are tradeoffs between the number of signal lines and the cost of the device.

There are tradeoffs between bitrate and the device cost.

So, the reason why we don't have a universal serial standard is because the different interfaces were designed with different goals in mind.

Apple was thinking along these lines when they proposed FireWire. I am not a Mac expert, but from what I have seen so far, FireWire can be used for networking (whether it uses Ethernet framing, I am not sure), video transfer, peripheral connections, and external drive connections. From the start it was fast enough and designed for all these applications, while we still struggle to get USB working everywhere at full speed...

I do not look forward to the replacement of what was starting to be a reliable, ubiquitous standard that "satisficed" with a New! Improved! version that shows no signs of actually achieving significantly higher throughput with current devices. Why does USB have to compete with SATA? Why can't USB just be USB?

I've been seriously disappointed with the number of times I've interconnected USB 1.1 and USB 2.0 devices and had them almost work, only to encounter various strangeness and glitches. I don't know who's to blame... whether it's a fault in the standard or in vendors' faulty implementations... and life's too short to care, because knowing who's to blame wouldn't do much to help solve the problem.

On the whole, I blame the standard, because these days standards are so incredibly huge, bloated, and complex that it is extremely unlikely that anyone actually implements them fully correctly.

With today's sloppy practices of testing to the market ("Let's try it with the most popular devices, or the ones which are most important to our business") instead of testing to the standard, the result is all sorts of opportunities to build devices that comply with the standard but do things just a little differently than the most popular devices... and have them not work even though they "should."

A typical example was an Iomega external CD burner I bought once for a USB 1.1 Mac. (I chose it because it was $30 cheaper than a FireWire model and I wanted both PC and Mac compatibility, present and future; I didn't really care about speed.) The drive actually burned perfect CDs, but it always claimed, erroneously, that an error had occurred. But how could a sane person rely on that? I returned it, bought a different USB 2.0 external CD burner from a different vendor... and encountered exactly the same problem.

I've also seen various glitches and strangeness trying to use USB 1.1 thumb drives in USB 2.0 machines and vice versa.

I can't see any reason it wouldn't be backwards compatible...USB2 just can't utilize the fiber-optic component of the USB3 wire. And surely USB3 would be smart enough to know when a USB2 wire is plugged in, and would be capped at the old transfer rate (just like plugging a USB2 device into a USB1 port)

I think the fiber part will sit inside the extra space in the connector in a way that doesn't interfere with the electrical part of it. Probably when you plug in a cable, the electrical part asks the device if it is USB3 capable, and if the device responds yes, it turns on the fiber transceiver.

> like if you plug a usb 1.X device onto a usb 2.0 bus, then everything slows to usb 1.X. IINM...

This is wrong: if you plug a USB 1.1 device into a USB 2.0 "bus", it does NOT slow everything down. Specifically, there are 2 cases:

1. You plug a USB 1.1 device directly into your computer (i.e. directly into the "host controller"). In this case, the USB 2.0 host controller (technically an EHCI chip) does NOT talk to your device. Instead, the EHCI chip has one or more USB 1.1 host controller chips (technically either an OHCI or UHCI chip, called a "companion" chip when inside an EHCI chip) and your USB 1.1 device is connected electrically to that controller. Your device is not on the USB 2.0 (EHCI) bus.

2. You plug a USB 1.1 device into a USB 2.0 hub. In this case, the USB 2.0 hub creates a complete USB 1.1 environment specifically for your device. On the host-facing side of the USB 2.0 hub, all communication continues to take place at USB 2.0 (i.e. 480Mbps) speeds. When the host wants to talk to your USB 1.1 device, it uses what are called "split transactions" to talk to it. Basically (I'm simplifying), this involves sending a "start" packet to the USB 2.0 hub. Then, the USB 2.0 (EHCI) controller goes on to do other things, while the USB 2.0 hub initiates the transfer to your device at USB 1.1 speeds. And data transferred from the USB 1.1 device is stored temporarily in the USB 2.0 hub. Eventually the USB 2.0 (EHCI) host sends a "finish" packet to the USB 2.0 hub. If the USB 1.1 transaction finished, the USB 2.0 hub responds successfully (either with the incoming data or an "ack" that the outgoing data was sent), which completes the transaction.

(There is also a combination case of those, where the EHCI chip does not contain a "companion" USB 1.1 chip, but instead contains an internal USB 2.0 partial hub - the "transaction translator" part - that handles talking to USB 1.1 devices. For bus usage purposes, this is effectively the same as using an external USB 2.0 hub, since the USB 1.1 devices do not appear on the USB 2.0 bus.)
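The split-transaction flow in case 2 can be sketched as a tiny model. The class and method names here (`Usb2Hub`, `start_split`, `complete_split`) are invented for illustration and are not real driver APIs; this just shows the buffering structure described above:

```python
# Hedged sketch of a USB 2.0 hub handling a split transaction
# on behalf of a slow USB 1.1 device.
class Usb2Hub:
    def __init__(self):
        self.buffer = None
        self.done = False

    def start_split(self, request):
        # Hub takes over: it runs the slow USB 1.1 transfer on its
        # downstream side while the 480 Mbps upstream bus stays free.
        self.buffer = f"data for {request}"   # stand-in for the 1.1 transfer
        self.done = True

    def complete_split(self):
        # Host comes back later and collects the buffered result;
        # a real hub would answer "not yet" (NYET) if still busy.
        return self.buffer if self.done else None

hub = Usb2Hub()
hub.start_split("read sector 0")
# ... host services other USB 2.0 devices in the meantime ...
result = hub.complete_split()
print(result)   # → data for read sector 0
```

The key point the model captures is that the host never idles at 12 Mbps: the slow transfer happens inside the hub, and the host only pays full-speed bus time for the start and finish packets.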

That's not so strange because one of the many ways Firewire is superior to USB is that each device has a hardware controller that negotiates data transfer over the bus independently of the CPU. USB, being a cheap-ass solution, relies on the CPU to do all that work, and is far more limited in a host of technical ways.

But the faster you push data on copper, the more vulnerable to distortion and corruption the data becomes. Wires act like antennas and absorb EM radiation from the computer and other sources. This is why gigabit ethernet can cover a much shorter cable distance than 100 Mbps ethernet.

Light doesn't suffer from this problem and thus can handle faster data.

GigE is limited by the speed of light and the way the ethernet protocol works. A sender has to stop sending if it senses someone else talking on the same line. In order to do this, it has to detect the collision before it finishes sending. If the line is too long, a sender at each end will be able to get an entire packet out before being able to sense the first bits from the other end. Ugly things happen then. Google "CSMA/CD" if you really want to know more about what limits the length of ethernet.
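That collision-detection constraint can be put in numbers. This sketch assumes a signal propagation speed of ~2e8 m/s in copper (a common approximation) and ignores repeater budgets and safety margins, so the results are upper bounds, not the figures in the standard:

```python
# A sender can only detect a collision while it is still transmitting,
# so the round-trip time across the cable must fit inside the time it
# takes to send a minimum-size frame.
PROP = 2e8  # assumed signal propagation speed in copper, m/s

def max_one_way_distance(min_frame_bits, bits_per_sec):
    transmit_time = min_frame_bits / bits_per_sec
    # Round trip (out and back) must fit inside the transmit time.
    return transmit_time * PROP / 2

print(max_one_way_distance(512, 10e6))  # 512-bit minimum frame at 10 Mbit/s
print(max_one_way_distance(512, 1e9))   # same frame at gigabit speeds
```

At 10 Mbit/s the bound is kilometers, but at gigabit speeds the same 512-bit minimum frame only covers about 50 m, which is why half-duplex gigabit ethernet had to extend the slot time, and why in practice everyone just runs full-duplex through switches and avoids collisions entirely.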