Posted by timothy on Thursday September 01, 2011 @03:43PM
from the more-like-a-skirmish dept.

Deathspawner writes "Think that all SATA 3.0 (6Gb/s) controllers are alike? As Techgage explores, that's not the case. While most SATA 3.0 controllers do deliver the performance promised, the most popular offering on the market does not, at least where bandwidth-busting SSDs are concerned. The controller comes from Marvell, and was bundled on all motherboards prior to AMD and Intel launching their own SATA 3.0 solutions. In some cases, Marvell's controller is half as fast as the others, making it no better than a SATA 2.0 controller. For those with motherboards using a Marvell controller, the solutions are few: build a new PC, or invest in a super-expensive add-in card."

It's not always the fault of the controllers; it can also be the way they're connected to the system.

These onboard controllers are connected to the system over PCI Express x1 - it's literally like plugging them into an x1 slot, except they're soldered directly onto the motherboard. The problem is that there are two versions of PCI Express: the older PCI Express 1.0 provides 250 MB/s in each direction per lane, while PCI Express 2.0 provides 500 MB/s in each direction.

AMD motherboards had only PCI Express 2.0 lanes, but Intel platforms had a mix of 2.0 and 1.0 lanes: the most common arrangement was 32 PCIe 2.0 lanes (feeding two x16 slots for graphics cards) and about six PCIe 1.0 lanes coming off the southbridge. So motherboard manufacturers had to either use one lane from the southbridge and accept only 250 MB/s in each direction, or resort to multiplexing chips that take two or more lanes and create an x4 path for the controller. More recently, motherboards detect whether a card is present in the second PCI Express x16 slot, and if nothing is there, they "borrow" a few of those unused lanes to improve the performance of the various controllers integrated on the motherboard.

But the point is that even when PCI Express 2.0 is used, there's only 500 MB/s in each direction per lane. SATA 6 Gb/s has a raw line rate of 750 MB/s (roughly 600 MB/s of usable bandwidth after 8b/10b encoding), and very few motherboards connect these controllers through more than a single x1 lane, so even if the controller could reach that rate, you won't get it.
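The bandwidth ceilings above are easy to check with back-of-the-envelope arithmetic; here's a quick sketch using the per-direction figures quoted in this thread:

```python
# Per-direction usable bandwidth of a single PCIe lane, as quoted above.
PCIE1_X1 = 250   # MB/s, PCI Express 1.0
PCIE2_X1 = 500   # MB/s, PCI Express 2.0

# SATA 6 Gb/s: 6e9 bits/s on the wire, but 8b/10b encoding means only
# 8 of every 10 bits carry data, so ~600 MB/s is the practical ceiling.
SATA3_RAW = 6_000_000_000 / 8 / 1_000_000   # 750 MB/s raw line rate
SATA3_USABLE = SATA3_RAW * 8 / 10           # 600 MB/s after 8b/10b

print(f"SATA 6 Gb/s raw:    {SATA3_RAW:.0f} MB/s")
print(f"SATA 6 Gb/s usable: {SATA3_USABLE:.0f} MB/s")
# The slower of link and device wins:
print(f"behind PCIe 1.0 x1: {min(PCIE1_X1, SATA3_USABLE):.0f} MB/s")
print(f"behind PCIe 2.0 x1: {min(PCIE2_X1, SATA3_USABLE):.0f} MB/s")
```

So a controller on a single PCIe 1.0 lane is capped at 250 MB/s, SATA 2.0 territory, no matter how good its firmware is.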

This is nothing new - remember gigabit network cards on PCI? The entire classic PCI bus can do 133 MB/s, shared among all devices, and a single gigabit link can do about 110 MB/s. Would you sue anyone if you plugged four PCI gigabit cards into your system and couldn't reach a combined throughput higher than 133 MB/s?
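The same min-of-bus-and-devices arithmetic applies to the PCI analogy; a tiny sketch with the rough figures above:

```python
# Aggregate throughput of several gigabit NICs sharing one classic PCI bus,
# using the approximate numbers from the post above.
PCI_BUS_MBPS = 133   # total bandwidth of the shared 32-bit/33 MHz PCI bus
GIGE_MBPS = 110      # realistic payload rate of one gigabit Ethernet link

cards = 4
demand = cards * GIGE_MBPS           # 440 MB/s of offered load
# The bus is shared, so combined throughput is capped at the bus rate.
aggregate = min(demand, PCI_BUS_MBPS)
print(f"{cards} cards offer {demand} MB/s; the bus delivers at most {aggregate} MB/s")
```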

Generally speaking, if you want SATA-III to operate satisfactorily you need to use the AHCI controller built into the CPU's chipset - that is, the one that Intel and AMD bundle. That will get you a reliable 32-tag-per-port controller. You definitely do not want to use an external controller or a third-party chipset controller (e.g. Marvell), at least not if you can help it. You won't have a choice if you want hardware RAID, though: AMD's and Intel's controllers don't do RAID (BIOS-based fakeraid doesn't count).
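On Linux you can see which SATA controllers a board actually has with `lspci`. A sketch - the captured listing below is a hypothetical example (the device IDs are illustrative); on a real machine you'd simply run `lspci -nn | grep -i 'sata'`:

```shell
# On a real system: lspci -nn | grep -i 'sata controller'
# Here the same filter is applied to a hypothetical captured listing, which
# shows both a chipset (Intel) AHCI controller and a Marvell add-on chip.
cat <<'EOF' | grep -i 'sata controller'
00:1f.2 SATA controller [0106]: Intel Corporation 6 Series/C200 Series Chipset Family SATA AHCI Controller [8086:1c02]
03:00.0 SATA controller [0106]: Marvell Technology Group Ltd. 88SE9123 PCIe SATA 6.0 Gb/s controller [1b4b:9123]
05:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168]
EOF
```

If a drive is plugged into a port that belongs to the third-party device rather than the chipset one, moving it to a chipset port is the zero-cost fix.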

All chipsets have bugs, even AMD and Intel chipsets. Intel AHCI controllers have problems probing Intel SSDs (go figure) and require a driver workaround to unbrick the port when the problem occurs during probe. AMD chipsets don't mask PHY errors during initial link training, which creates a flood of superfluous interrupts. Both controllers play fast and loose with the AHCI spec, and the AHCI spec itself is pretty badly designed, with tons of issues (though not as badly designed as the immensely idiotic USB HCIs).

Another big problem is that the firmware running the chipset side of the AHCI is typically responsible for ALL the SATA ports, which means hotplug on one port can actually interfere with operations on another. It pisses me off, but there's no avoiding it.

The external chipsets are even worse. Marvell is a joke. Silicon Image chipsets are full of HARDWARE bugs (not just firmware bugs) that require a lot of workarounds in driver code. For example, you can't reliably abort a soft-reset sequence on a SiI chipset, and you can't access the on-chip shared memory while commands are in progress without corrupting any DMA that happens to be occurring.

The stuff is getting better, slowly. The manufacturers of these chipsets have traditionally not cared much about these sorts of bugs because 99.9% of their users are consumers who don't notice. The remaining 0.1% of professionals who do notice aren't a big enough crowd to make the manufacturers actually fix their firmware.

SATA at least has the AHCI spec; too bad more chip manufacturers don't use it. If you want to talk wireless and Ethernet chipsets, matters are far, far worse.