Post Your Comment

63 Comments

I don't want to be a PITA, and I've asked this before without getting a reply, so here it goes:

Would it be possible to add some left and right margins to the print page layout? I know it's meant to be printed, but I guess a lot of readers use it to read the whole article at once (me included), and it's slightly inconvenient to read without margins.

That's wrong; CSS has the capability of having a few different stylesheets. Most notably, there is one for "screen" and one for "print", which would apply here. All AT has to do is create a margin for that page in the screen CSS and set that margin to 0 in the print CSS.

No, it is not. It is about the printer-friendly version of the page, which can still have both a screen and a print stylesheet, as vol7ron suggested. Try to have a clue about web development before engaging in a conversation about it.

To the OP: you can use Greasemonkey or some equivalent (or even just make a JavaScript bookmarklet) and "fix" such minor things yourself on any site you want.

The site makes money based on either ad views or ad clicks. Clearly, they'll get less of both if everyone reads the text on a single page that has no ads.

I was going to suggest Page Zipper (a Firefox plugin), but it doesn't work with this site, and even if it did, since they have feedback directly below each page, you'd have to scroll through every single post to get to the next page (rinse/repeat for each page of text).

I think it'd be smarter for Anand to put the feedback after the last page and set up the pages to work with Page Zipper... we'd get a single page with all the text, but we'd also see all of the ads.

Just in case you are using IE8: open the Print view, then from the View menu simply select Style - No Style. You will get some small margins. Then adjust the window size as comfortable for reading.

This is a very important question - nobody is interested in how quickly they can write zeroes to their drive. If these benchmarks are really writing completely random data (which by definition cannot be compressed at all), then where does all this performance come from? It seems to me that we have a serious problem benchmarking this drive.

If the bandwidth of the NAND were the only limiting factor (rather than the SATA interface or the processing power of the controller), then the speed of this drive should be anything from roughly the same as a similar competitor (for completely random data) to maybe 100x faster (for zeroes). So to get any kind of useful number you have to decide exactly what type of data you are going to use (which makes it all a bit subjective).

In fact, there's another consideration. Note that the spare NAND capacity made available by the compression is not available to the user. That means the controller is probably using it to augment the reserved NAND. This means that a drive that has been "dirtied" with lots of nice compressible data will perform as though it has a massive amount of reserved NAND, whereas a drive that has been "dirtied" with lots of random data will perform much worse.

My understanding is that completely random and incompressible are not the same thing. An incompressible data set would need to be small and carefully constructed to avoid repetition. A random data set is by definition random, and therefore almost certain to contain repetitions over a large enough data set.

No; given a random sequence of 0/1 bits with equal probability of each, the expected number of bits needed to encode the stream is one bit per input bit (i.e. on average--you could, through an extremely unlikely outcome, get a compressible random sequence: e.g. a stream of 1 million 0's is highly compressible, but also extremely unlikely, with a 2^(-1,000,000) probability of occurrence).

In other words, a random, equal-probability stream of bits can't be compressed at a rate better than 1 bit per bit.

Of course, this only holds for an infinite, continuous stream; as you shorten the length of the data, the probability of the data being compressible increases, at least slightly--but even 1KB is 8192 bits, so compressibility is *hard*.

Just for example's sake, I generated a few (10 bytes to 10MB) random data files, and compressed using gzip and bzip2: in every case (I repeated several times) the compressed version ended up larger than the original.
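For anyone who wants to repeat that experiment, here's a minimal sketch in Python using zlib (DEFLATE, the same compressor family gzip uses). The 1MB buffer size is an arbitrary choice:

```python
# Compress random bytes and all-zero bytes and compare sizes.
import os
import zlib

size = 1_000_000                   # 1 MB test buffer
random_data = os.urandom(size)     # cryptographically random bytes
zero_data = bytes(size)            # one million zero bytes

random_out = zlib.compress(random_data, 9)
zero_out = zlib.compress(zero_data, 9)

# Random data actually grows slightly (container/stored-block
# overhead); the zeroed buffer collapses to a tiny fraction.
print(len(random_out) > size)       # True: random data didn't compress
print(len(zero_out) < size // 100)  # True: zeros shrink by over 100x
```

The same effect shows up with gzip or bzip2 on the command line; the overhead of the compressed container is why the random file ends up slightly larger than the original.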

I'm also not convinced by the way Anand has arrived at a compression factor of 2:1 based on the power consumption. The specification for the controller and Anand's own measurements show that about 0.57W of power is being used just by the controller. That only leaves 0.68W for writing data to NAND. Compare that with 2.49W for the Intel drive and you end up with a compression factor of more like 4:1.

But actually this calculation is still a long way out, because sequential writes run at 250MB/s on the SandForce and only 100MB/s on the Intel. So we've written 2.5x as much (uncompressed) data using 1/4 as much NAND power consumption, which puts the compression factor at more like 10:1. I think that pretty much proves we're dealing with very highly compressible data.
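The arithmetic in that comment can be checked in a few lines. This is a sketch using the figures as quoted (0.57W controller power, implying 1.25W total while writing, 2.49W of NAND write power on the Intel drive, 250 vs 100 MB/s sequential writes) - these are the commenter's estimates, not independent measurements:

```python
# Implied compression factor from write energy per MB.
sf_total_w = 1.25        # assumed SandForce drive power while writing (W)
sf_controller_w = 0.57   # controller alone, per the quoted spec (W)
sf_nand_w = sf_total_w - sf_controller_w   # ~0.68 W left for NAND

intel_nand_w = 2.49      # Intel drive NAND write power as quoted (W)
sf_mb_s, intel_mb_s = 250, 100             # sequential write speeds

# Energy per MB written is power / throughput; the ratio of the two
# gives the implied effective compression factor.
sf_j_per_mb = sf_nand_w / sf_mb_s
intel_j_per_mb = intel_nand_w / intel_mb_s
print(round(intel_j_per_mb / sf_j_per_mb, 1))   # ~9.2, i.e. roughly 10:1
```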

That should definitely be checked, as this is the first drive where different kinds of data will perform differently. Due to the extremely high aligned random write performance, I suspect that the data written is either compressible or repeated, so the drive manages to either compress or deduplicate to a large degree.

One other point regarding the IOMeter tests: the random reads perform almost identically to the unaligned random writes. Would it be possible to test both unaligned and aligned random reads, in order to find out if the drive is also capable of faster random reads under specific circumstances?

Anand, do you therefore have any explanation for why the SandForce controller is apparently about 10x more efficient than the Intel one, even on random (incompressible) data? Or can you see a mistake in my analysis?

That I'm not sure of. The 2008 Iometer build is supposed to use a fairly real-world-inspired data set (Intel apparently helped develop the random algorithm), and the performance appears to be reflected in our real-world tests (both PCMark Vantage and our Storage Bench).

That being said, SandForce is apparently working on their own build of Iometer that lets you select from all different types of source data to really stress the engine.

Also keep in mind that the technology at work here is most likely more than just compression/data deduplication.

I'm also wondering about the capacity on these SandForce drives. It seems the actual capacity is variable depending on the type of data stored. If the drive has 128 GB of flash, 93.1 usable after spare area, then that must be the amount of compressed/thinned data you can store, so the amount of 'real' data should be much more.. thereby helping the price/GB of the drive.

For example, if the drive is partly used and your OS says it has 80 GB available, then you store 10 GB of compressible data on it, won't it then report that it perhaps still has 75 GB available (rather than 70 GB as on a normal drive)? Anand -- help us with our confusion!

PS - thanks for all the great SSD articles! Could you also continue to speculate on how well a drive will work on a non-TRIM-enabled system, like OS X, or as an ESXi datastore?

In terms of pure area used, Corsair sets aside 27.3% of the available capacity. However, with DuraWrite (i.e. compression) they could actually have even more spare area than 35GiB. You're guaranteed 93GiB of storage capacity, and if the data happens to compress better than average you'll have more spare area left (and more performance) while with data that doesn't compress well (e.g. movies and JPG images) you'll get less spare area remaining.

So even at 0% compression you'd still have at least 35GiB of spare area and 93GiB of storage, but with an easily achievable 25% compression average you would have as much as ~58GiB of spare area (45% of the total capacity would be "spare"). If you get an even better 33% compression, you'd have 66GiB of spare area (51% of total capacity), etc.
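Those spare-area figures follow directly from the quoted capacities. A small sketch, assuming 128GiB of raw flash and 93GiB exposed to the user (so 35GiB is reserved before any compression savings):

```python
# Effective spare area as a function of how well user data compresses.
flash_gib = 128.0    # raw NAND on the drive
exposed_gib = 93.0   # capacity guaranteed to the user

def spare_gib(compression: float) -> float:
    """Spare area if stored data compresses by the given fraction.

    compression=0.25 means the data physically occupies 75% of its
    logical size, freeing the remainder as extra spare area.
    """
    stored = exposed_gib * (1.0 - compression)  # physical NAND used
    return flash_gib - stored

print(round(spare_gib(0.00), 1))   # 35.0 GiB - the guaranteed minimum
print(round(spare_gib(0.25), 1))   # ~58 GiB (~45% of the flash)
print(round(spare_gib(0.33), 1))   # ~66 GiB (~51% of the flash)
```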

I don't see a reason to opt for this over the Crucial C300 drive, which performs better overall and is quite a bit cheaper per GB. Yes, these use less power, but I hardly see that as a determining factor for people running high-end CPUs and video cards anyway.

If they can get the price down to $299 then I may give it a look. But $410 is just way too expensive considering the competition that's out there.

I did test it. If you just create the test file, it compresses to nearly 0 percent of its original size. But if you write sequential or random data to the file, you can't compress it at all. So I think that Iometer uses random data for the tests. Of course this is a critical point when testing such drives, and I am sure that Anand tested it too before running the benchmarks. I hope so at least ;)

I created a 512KB sequential write IOMeter test pattern which writes to a space of 1GB. When you use IOMeter for the first time, it creates that 1GB file to reserve the space. I stopped the test as soon as the 1GB file was written and before the actual test even began. I then used 7-Zip to compress the file, and this is the result:

It's in German, and it says on the right that the uncompressed size is 336MB (I paused at that point) and the compressed file size is 404KB. So the level of compression is nearly 0%.

I aborted then and did the above test again. This time, I let the hard disk write data for about 11 seconds (the HD does about 100MB/s) so the complete 1GB file had been written. I used 7-Zip again, and this is the result:
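That two-stage experiment is easy to reproduce in miniature. A rough sketch, using a 1MB buffer as a stand-in for the 1GB Iometer file and Python's gzip in place of 7-Zip:

```python
# Freshly allocated test file (all zeros) vs. the same file after
# random writes: only the former compresses.
import gzip
import os

size = 1_000_000           # 1 MB stand-in for the 1 GB test file

fresh = bytes(size)        # newly created file: zero-filled
used = os.urandom(size)    # file contents after random writes

fresh_ratio = len(gzip.compress(fresh, 9)) / size
used_ratio = len(gzip.compress(used, 9)) / size

print(fresh_ratio < 0.01)  # True: the zeroed file shrinks below 1%
print(used_ratio > 0.99)   # True: the randomized file barely changes
```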

I've noticed this quite a bit in reviews here. When a benchmark is low and the text of the result doesn't fit in the bar, the text gets squished into the name of the item being tested. Could you please move those results to the outside of the bar, off to the right? Since the bar is so small, you will have plenty of room out there to put the result, and it will be legible. Other than that, thanks for a great review, and thanks for still including a spindle disk or two as well (though I do question the decision to use a 5400RPM drive, unless you were trying to throw that in for laptop users or something).

Any chance of comparing the drives using compressed NTFS? I tend to do this to my SSD anyway, and I probably wouldn't see a difference on a drive that is internally trying to compress data that is already compressed.

Oh, my drive is only 30GB and I needed the space... quad-core CPU, so I figured I wouldn't notice a difference speed-wise.

I'm curious as to whether you know at this point if there are going to be reviews of the OCZ Vertex 2 and Agility 2 as well?

Seeing as these drives are based on the SF-1500 and SF-1200 as well, it'd be interesting to see the performance difference between drives from the same vendor using the different chips. There's the Vertex LE of course, but it seems it's more or less the bastard child in this comparison.

I was curious if it's possible to overclock these SSDs either through the SF1200 or the RAM in some way or another?

I know pencil modding has been dead in recent years. This is due in part to smaller components, but also to the fact that manufacturers have both (1) implemented in-place safeguards to reduce problems from overclocking and (2) started encouraging overclocking by providing more options through the BIOS.

I'm just curious if anyone's figured out how to do it on these SSDs. I know the average user shouldn't - you typically don't want to fiddle with the sole thing that stores your data - but I suspect some tweaking could be done by enthusiasts to really up performance, if wanted. Anyone know a place to look into this?

Thanks for the power consumption charts, Anand! Any chance you can throw in a typical 2.5" 5400RPM HDD that usually comes stock in most laptops, as a reference point for those of us who are thinking of upgrading? Also, could keeping Device Initiated Power Management disabled account for the significant discrepancies between your numbers and the recent article on Tom's HW? (i.e. Tom's got an idle of 0.1W for the Intel drive - a lot better than the competition) http://www.tomshardware.com/reviews/6gb-s-ssd-hdd,...

The Nova seems to do surprisingly well under Anand's heavy workload test compared to other Indilinx-based drives... although its performance is just average (and similar to other Indilinx drives) in most other tests. Isn't the Nova essentially the same thing as the OCZ Solid 2 (and the G.Skill Falcon II)? That drive has been priced VERY competitively from what I've seen; I'm surprised there isn't more buzz around it. Looking forward to Anand's review of the Nova.

I might be buying one soon as a gift for my sister; she really needs more than 80GB for her laptop (the 80GB X25-M is still the best bang for the buck out there imo), so a $300 120GB drive is right up her alley.

"The Mean Time To Failure numbers are absurd. We’re talking about the difference between 228 years and over 1100 years. I’d say any number that outlasts the potential mean time to failure of our current society is pretty worthless."

Was wondering if the charts could be fixed to correctly display the HDD values for random ops? Something like: if the value doesn't fit in the bar, it should be displayed after the bar, to its right... and not to its left as it does now, colliding with the name of the HDD.

BTW, that was actually a factor I'm looking to see in SSD reviews... how good they are at the really useful operations. Not that sequential doesn't matter, just not as much.

The OCZ LE isn't using the SF-1500. Only the Vertex 2 Pro is using the SF-1500, and the Vertex 2 is using the SF-1200. The LE has a chip that's sort of in the middle of the two... look it up on the OCZ forums and you will see.

Needless to say, the author is completely wrong about what an MTBF number means. That number has nothing to do with the infant mortality rate nor with life expectancy. It is the statistical failure rate over a large number of units operating in their prime life period.
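To make that correction concrete: MTBF translates into an annualized failure rate across a fleet of drives, not a per-drive lifespan. A sketch using a hypothetical 2,000,000-hour MTBF (an illustrative figure, not one taken from the review):

```python
# MTBF as a population failure rate: in a large fleet operating in
# its prime life period, failures per drive-hour average 1/MTBF.
HOURS_PER_YEAR = 24 * 365

mtbf_hours = 2_000_000
annual_failure_rate = HOURS_PER_YEAR / mtbf_hours

# Expected failures per year in a hypothetical 10,000-drive fleet:
fleet = 10_000
print(round(annual_failure_rate * 100, 2))   # ~0.44% AFR
print(round(annual_failure_rate * fleet))    # ~44 failures per year
```

So a "228-year MTBF" drive still fails at a very observable rate in bulk; the number says nothing about how long any single drive will last.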

Hey, I COMPLETELY agree with the printing thing... I hate articles that have "pages"; I'd rather view a 10-page-long single document... yes, I'm that frickin' lazy. You shouldn't remove the pages, but why not offer a FULL view option? I use the print option sometimes as well, just to read an article all at once.

"Performance is down, as you'd expect, but not to unbearable levels and it's also pretty consistent."

Why is performance down? Why should we "expect" this? Do I have to read every SSD article you've written previously to understand new articles? Or is there one big article that has all the info that you obliquely allude to in subsequent articles?

Where can I find the current recommendations for SSDs (SandForce vs. Indilinx vs. Intel vs. Micron vs. Samsung, latest firmware updates, etc.)? Is there a central repository of SSD information that is assimilated and arranged categorically (for easy research), or must all this info be followed like a blog?