Vigile writes "New solid state drives are released all the time, and performance improvements have started to stagnate as the limits of SATA 3.0 Gb/s are reached. SATA 6G drives are still coming out, and some newer PCI Express based drives are also available for users with a higher budget. OCZ is taking it another step with a new storage interface called High Speed Data Link (HSDL) that extends the PCI Express bus via mini-SAS cables and removes the bottleneck of SATA-based RAID controllers, increasing theoretical performance and allowing the use of command queueing, which is vital to high IOPS in a RAID configuration. PC Perspective has a full performance review that details the speed and IO improvements. While initial versions will be available at up to 960 GB (and a $2800 price tag), the cost-per-GB is competitive with other high-end SSDs once you get to the 240GB and larger options."

From the website: 'Whatever you do, don't plug an HSDL device into a SAS RAID card (or vice versa)! '

Although I dislike proprietary connectors for generic signals, I dislike interchangeable connectors for different signals even more. Can someone with a bit more knowledge explain why this could ever be a good idea, or how it isn't going to smoke hardware?

I haven't checked the details, but I'm willing to bet that the physical differential signaling levels used for PCIe (LVDS) and SAS/SATA are pretty similar. As long as they at least kept the transmit/receive pairs in the same place, plugging in the wrong type of device will probably just produce error reports from the controller, or at worst severely confuse the device and/or controller, but it won't cause any permanent damage.

From what I gather it was cheaper and quicker for OCZ to co-opt an existing physical standard than roll their own. All the customer needs to do is source good quality SAS cables, which are in plentiful supply.

It's bone-headed is what it is. It's like some manufacturer saying "our notebook is going to start supplying 110 VAC at these connectors that just happen to look like USB host ports. Whatever you do, don't plug USB devices into them!"

I know why they've done it, though: it's expensive in time and labour to design and test new connectors before going to mass production. It saves them money. And it'll bite the customers when they plug the wrong devices in and find out they've blown their warranty along with their hardware.

Wow, what a clueless post. SATA-150 can't sustain more than 150MB/s, and there are many SSDs that go beyond that. The fastest Crucial even goes beyond SATA 3Gb/s on sustained reads. Working for an HDD manufacturer or something?

Looks to me like one of them is breaking 600MB/sec which is faster than even SATA-3 can handle.

None of this is to mention access time/overhead which is another reason to go to PCIe directly. Rather than doing PCIe -> SATA -> drive's controller, cut out the middle man. I'm not saying it is the best idea in all cases, but it seems to work when performance needs to be the absolute highest.

Just remember, 3Gb/s works out to 375MB/s raw, but only about 300MB/s of payload once you account for the 8b/10b encoding, so maxing it out really isn't hard. The current Crucial RealSSD C300 tops out at 350 MB/s. That's an MLC drive; an SLC drive has the potential to be double that speed. By the time you get past the SATA overhead, you're definitely maxing out the bus with that drive on a SATA1 connection.
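
As a quick sanity check (my own back-of-the-envelope math, not from TFA), the headline SATA numbers fall straight out of the 8b/10b encoding:

    # Rough SATA math: the link uses 8b/10b encoding, so only 8 of every
    # 10 bits on the wire are payload; protocol overhead shaves off a bit more.
    def sata_effective_mb_per_s(line_rate_gbps):
        payload_bits = line_rate_gbps * 1e9 * 8 / 10   # strip 8b/10b overhead
        return payload_bits / 8 / 1e6                  # bits -> bytes -> MB
    for rate in (1.5, 3.0, 6.0):
        print("SATA %.1f Gb/s ~ %d MB/s" % (rate, sata_effective_mb_per_s(rate)))
    # prints roughly 150, 300 and 600 MB/s respectively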

But those are not SATA, as far as I know. They are PCIe SSDs, which is essentially what you're building with the IBIS solution. Rather than packaging both together on a board, you're separating the actual storage from the PCIe "controller" and sending the signaling over a cable.

Given the choice between the two, I'll opt for the solution that lets me get a controller with the number of ports I want. This opens up the possibility of doing RAID, as in their 4 ports/4 drives solution. It may seem silly to do that

Yes, great point. With this system you can grow: no stranded first-gen PCIe card that won't work with 'the next version' of the same brand of card in the slot next to it.
With this you just keep adding as many SSDs as you need.

As of April 2010 mechanical hard disk drives can transfer data at up to 157 MB/s, which is beyond the capabilities of the older PATA/133 specification and also exceeds a SATA 1.5 Gbit/s link. High-performance flash drives can transfer data at up to 308 MB/s which exceeds a SATA 3 Gbit/s link.

There is a whole cluster of consumer drives today pushing ~275MB/s against SATA 3Gb/s's 300MB/s limit. That's safely within 'SATA limited' territory, allowing for a small amount of controller overhead.

Though it's highly questionable whether that is any sort of meaningful "limit" in real-world usage.

You sure done fucked that one up, Sparky. SATA 6Gb/s works out to about 600MB/s after encoding overhead, which, while decent, isn't good enough for many different uses. SATA 1.5Gb/s is only about 150MB/s, something easily surpassed these days.

Probably. The point is that it's a whole new drive interconnect. They have another product that is a standalone card which supports 4 drives in a RAID. These drives only come with a card because it's a new interface technology and they are assuming you won't have a port for it yet.
It's an open standard so they are gambling on it eventually becoming the standard for SSDs and having it built into motherboards and such.

The illustrations all seem to show an x8 card, but I think what they're saying is they multiplex a PCIe lane over each pair in the SFF-8087 cable. So, eventually you'll be able to run x16 out of a card to your drive bay, and use that now for a 4x4 config, but perhaps a single x16 config in the future.
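
For a rough sense of scale (my own estimates for first-generation PCIe lanes, not anything OCZ has published), each lane runs at 2.5 GT/s with the same 8b/10b encoding, so about 250 MB/s of payload:

    # Per-lane payload for first-generation PCIe (2.5 GT/s, 8b/10b encoded).
    lane_mb_per_s = 2.5e9 * 8 / 10 / 8 / 1e6   # ~250 MB/s per lane
    for width in (1, 4, 8, 16):
        print("x%d link: ~%d MB/s" % (width, width * lane_mb_per_s))
    # x1 ~250, x4 ~1000 (what one 4-lane mini-SAS cable would carry), x8 ~2000, x16 ~4000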

In short, a slower PCIe extension cord using existing cables (as opposed to the oddball PCIe external cables [xtreview.com]). This will probably put pressure on mobo vendors to add more x16 slots. I regularly build storage servers with 16 and 24 drive bays, and the top end right now looks like the Tyan AMD boards with 4 x16 slots. I'd like to see, for instance, a SuperMicro with 6 PCIe x16 slots and dual Intel sockets (though I'm using AMD 12-core more and more lately). PCIe 3.0 is due out in a couple of months, so it will probably be there - OCZ could also update to the faster coding rate.

I was referring to part number 170-BL-E762-A1; however, it does not have ECC. It claims 4-way SLI, but that's only because most SLI video cards take two slots, eating up the slot between them; this board actually has 7 x16/x8 slots.

That's a pretty sweet rig - I've got some friends doing scientific computing who can't get enough GPU in a system - they'd probably like this.

I have to say the EVGA site is very glitzy but not terribly helpful. I downloaded the 'spec sheet' and it was a 1-page advertisement. Sigh. Newegg's specs say 3 of the slots are x8, but they look like x16 in the picture.

Even the ZFS guys insist on ECC for storage, but for a monster compute farm this looks awesome.

Just as I mentioned, the 250MB/s SLC cache drives. The zpool behind them is pumping 700MB/s out. It would be nice to have a big cache that could exceed the speed of the disks. The PCIe 1500MB/s cache drives do that.

Sorry, I meant ~2400MB/s or ~2.4GB/s (that's 24Gbps divided by roughly 10 to take protocol overhead into account - the rule of thumb in SCSI is that you get at most 400MB/s out of 4Gbps FC or 300MB/s out of 3Gbps SAS).

Not confusing bits and bytes, just a typo.

In any case, this throughput is currently theoretical - the best SAS HBAs are x8 PCIe 2.0 and limited by its bandwidth of 20Gbps (which is also divided by 10 because of 8b/10b encoding).
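
That divide-by-ten rule of thumb is easy to sanity check (rough numbers of my own, nothing vendor-specific):

    # SCSI-world rule of thumb: divide the line rate in Gb/s by ten to get
    # usable MB/s (8b/10b encoding plus a little protocol overhead).
    for name, gbps in (("4 Gb/s FC", 4), ("3 Gb/s SAS", 3), ("4x 6 Gb/s SAS", 24)):
        print("%s: ~%d MB/s usable" % (name, gbps * 100))
    # 4 Gb/s FC ~400, 3 Gb/s SAS ~300, a 24 Gb/s aggregate ~2400 MB/s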

Yup. Compared to other ~250GB SSDs, the $739 price tag does not look so bad. Doing a quick price check, in USD less VAT I'd have to pay $600+ anyway. For that extra $100 you get the connector card and built-in RAID, which is roughly what it'd cost you to get a 4-port RAID controller and four regular SSDs to RAID. On the other hand, there's not much real reason to get this over a RAID setup either, but I'm guessing they're trying to push this connector out there. If they can get it rolling and start building "

Don't forget that another reason to move away from SAS/SATA and towards PCIe is to break away from current restrictions in RAID controllers. This setup looks targeted at enterprise RAID. Enterprise RAID setups, including LSI Logic's MegaRAID (the H700/H800 from Dell), can't support things such as NCQ or SMART, which are really important features on many traditional hard drives, or TRIM for SSDs. Support for NCQ would be required to hit higher transfer speeds in an SSD RAID setup than what we are able to hit today.