
Any questions or comments before I begin the test on May 24, 2011 @ 12 noon EST?

Wow, that is a lot of money. Do you have a sponsor?

My only comment is that you are missing an Intel 320. That is an interesting drive: it is potentially the most reliable of all consumer SSDs, since it starts with the already reliable X25-M design and adds XOR parity redundancy on top. I think it is the only non-SandForce SSD that uses redundancy.
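For anyone wondering what XOR parity buys you: if one NAND die fails, its data can be rebuilt from the remaining dies plus the parity. Here is a toy illustration in C with a made-up 4-die stripe and 8-byte pages; the 320's actual internal layout isn't public, so this only shows the general idea:

#include <stdio.h>
#include <string.h>

#define DIES 4   /* hypothetical number of data dies in a stripe */
#define PAGE 8   /* toy page size in bytes */

int main(void)
{
    unsigned char die[DIES][PAGE] = { "AAAAAAA", "BBBBBBB", "CCCCCCC", "DDDDDDD" };
    unsigned char parity[PAGE] = {0};

    /* The parity page is the XOR of all data pages. */
    for (int d = 0; d < DIES; d++)
        for (int i = 0; i < PAGE; i++)
            parity[i] ^= die[d][i];

    /* Simulate losing die 2, then rebuild it from the others plus parity. */
    unsigned char rebuilt[PAGE];
    memcpy(rebuilt, parity, PAGE);
    for (int d = 0; d < DIES; d++)
        if (d != 2)
            for (int i = 0; i < PAGE; i++)
                rebuilt[i] ^= die[d][i];

    printf("rebuilt die 2: %s\n", (char *)rebuilt);  /* prints CCCCCCC */
    return 0;
}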

Good luck. P/E longevity is of course dependent on a wide range of factors: free space, span of writes, speed of writes, transfer sizes, alignment of writes. Tests such as these can only tell you how long NAND will last under a particular set of circumstances. Very useful nonetheless.

Regarding reliability, SSDs haven't been around long enough for long-term statistics, and all aspects of the technology are evolving rapidly at the same time. Add to that the evolving and competitive nature of the industry, which is pushing SSDs onto the market that fail due to a lack of technology maturity and compatibility. The latter is mostly responsible for perceived high failure rates.

On the other hand I believe the most common cause of failure for an HDD is mechanical damage. Here SSDs provide a significantly more robust solution with a significantly lower likelihood of failure.

Overall SSDs are a more robust design and, in theory at least, less likely to fail, but not all SSDs are made the same.

Personally I feel very safe using an SSD, but for long-term data storage I would only trust an HDD. That primarily comes down to the fact that if an HDD fails there is a much better chance of getting data off it compared to an SSD.

No, I do not have a sponsor, and unfortunately I will not have enough funds to get the 320s until next month, which will skew their results.

Originally Posted by Ao1

On the other hand I believe the most common cause of failure for an HDD is mechanical damage. Here SSDs provide a significantly more robust solution with a significantly lower likelihood of failure. [...]

If I had a laptop I would feel much safer with an SSD.

I'd feel much safer with a good backup policy.

Fast computers breed slow, lazy programmers.
The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay.
http://www.lighterra.com/papers/modernmicroprocessors/
Modern RAM makes an old overclocker miss BH-5 and the fun it was.

Hi Anvil, 1 hour in now and I've noticed that a couple of seconds after a new loop starts, the app seems to hang for ~2-3 seconds. The MB/s then drops to ~40MB/s but speeds pick up as the loop runs. I'm seeing variations from 40MB/s to 180MB/s as the loop finishes.

What you are seeing is the app waiting for the OS to finish deleting the files; I'm seeing the same thing here.
From the hIOmon sessions you might remember that the Vertex 2 caused a ~1 second delay while TRIM was purging a single large file.

I've created a new version for you with the option to select a fixed compression level or to randomize compressibility; the latter would be great for the SF drive.
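For anyone curious how a fixed compression level can be approximated: a common trick is to fill a set fraction of each write buffer with incompressible random bytes and zero-fill the rest. The C sketch below (the helper name is made up) shows the general technique only, not necessarily what the app does internally:

#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: make roughly pct_random percent of buf incompressible
   and zero-fill the remainder, so compressing the buffer shrinks it to about
   pct_random percent of its original size. */
static void fill_compressible(unsigned char *buf, size_t len, int pct_random)
{
    size_t rnd = len * (size_t)pct_random / 100;  /* incompressible head */
    for (size_t i = 0; i < rnd; i++)
        buf[i] = (unsigned char)rand();   /* random bytes barely compress */
    memset(buf + rnd, 0, len - rnd);      /* zeros compress to almost nothing */
}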

I tested the random compressibility option last night and ended up with about 16TB/day using a 2R0 Vertex 2 60GB. (That RAID has about 15GB of free space and has been full for some time.)

I'll email you the new version within an hour or so; I've made some adjustments so the "Not Responding" message should be gone.

Hi Anvil, maybe it would be good to put a summary table in the 1st post to make it easy to compare various milestones?

I have to say, so far I am impressed with the Vertex. 3 hours of writes (1.4TB) and although speeds are varying a lot between loops, they seem to be staying within the same boundaries. I would have expected to see evidence of throttling by now.

EDIT:
LOL, to put the writes in perspective: the X25-M I currently use for my C drive only has 1.32TB of host writes, which occurred over 1,263 hours (~53 days running 24/7). I've already written more than that in 3 hours. (1.5TB)
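Quick back-of-the-envelope on how those two rates compare:

#include <stdio.h>

int main(void)
{
    /* Figures from the post above. */
    double desktop_tb = 1.32, desktop_hours = 1263.0;  /* X25-M boot drive */
    double test_tb    = 1.5,  test_hours    = 3.0;     /* endurance test   */

    double ratio = (test_tb / test_hours) / (desktop_tb / desktop_hours);
    printf("the test writes ~%.0fx faster than desktop use\n", ratio);  /* ~478x */
    return 0;
}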

I'll see what I can do about that summary.
PM or email me what you'd like to have in it.

Yeah, I don't think most people actually get how much data is being written.
I've got one Kingston that's been running for more than 10,000 hours and it's still short of 1TB of host writes. (It's a boot drive on a server; not much happening, but still, it's running 24/7.)
I'll check the two other 40GB drives I've got (both Intels); they are also used as boot drives but not running 24/7.

I'm pretty sure that 10-20TB of host writes is all most of these drives will ever see during their normal lifespan (2-3 years), unless they are used in non-standard environments.
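Taking the middle of both ranges, that estimate works out to a fairly modest daily rate:

#include <stdio.h>

int main(void)
{
    /* 15TB of host writes spread over 2.5 years. */
    double gb_per_day = 15.0 * 1024.0 / (2.5 * 365.0);
    printf("~%.0f GB/day of host writes\n", gb_per_day);  /* ~17 GB/day */
    return 0;
}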

You could use 8% (Database), which is still easily compressible; it will make an impact on the TB/day rate, but I'm pretty sure it will be more correct vs the Intels.
If the impact is too high (it really shouldn't be) then you can still select 0-Fill or return to the old app. (There is a settings tab in the new version.)

Maybe you should give hIOmon a shot as well; just a few minutes would be interesting.

edit:

Here are the X25-Vs that I'm currently using in my main rig (the 980X).
A lot of activity like surfing, but no large apps are installed; they all run on the Areca in VMs.
VMware Workstation is of course installed on the X25-V; no pagefile though, as there's plenty of memory.

[Attached: 02M3_980X_hidden_sn_2011_05_21.JPG]

I had to reboot to get hIOmon running on the OCZ drive, so the stats below only cover ~0.5TB. Plenty of TRIM activity, but nothing that shows more than a ~0.4s delay.

I'm a bit taken aback. 3TB of writes in 6 hours and still at 100% life. No sign of a significant slowdown either.

Edit: Anvil, a summary of writes vs wear-out % for each drive in the 1st post would be handy. I've got a feeling this will end up being a very long thread, so having everything in one place will make comparisons a lot easier.

Hmmm, ~500GB and ~900,000 write IOs - that's a ~512KB average write size. Anvil, isn't that a bit too big for randomness?
3TB written - perhaps it was compressed (which, considering the speed of 140MB/s, I would guess it is), so the actual NAND use is much less?
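Checking the math on that, for reference:

#include <stdio.h>

int main(void)
{
    /* ~500GB of host writes spread over ~900,000 write IOs. */
    double avg_kb = 500.0 * 1024.0 * 1024.0 / 900000.0;
    printf("average write size: ~%.0f KB\n", avg_kb);  /* ~583 KB */
    return 0;
}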

A file is written using either 1 IO or 3 IOs, and the max blocksize/transfer size is 4MB. It needs some more explaining, so here we go.

50% of the files are created using one write operation with a max blocksize of 128KB. As for the other half, each file consists of exactly 3 WriteFile operations; each IO can be up to 4MB in size, so it is sort of random (even though the data is written sequentially).

edit:
Forgot the first one:
This test is single threaded, so only 1 file is written at a time.
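In other words, the pattern looks roughly like this. A minimal Win32 C sketch based on the description above; the helper names are made up, error handling is trimmed, and buf is assumed to point at a pre-filled buffer of at least 4MB:

#include <windows.h>
#include <stdlib.h>

/* rand() tops out at 32767 on MSVC, too small for a 4MB range. */
static DWORD rnd(DWORD max)
{
    DWORD r = ((DWORD)rand() << 15) | (DWORD)rand();
    return r % max + 1;  /* 1..max bytes */
}

/* 50% of files: one WriteFile of up to 128KB.
   50% of files: exactly 3 WriteFile calls, each up to 4MB, sequential.
   Single threaded, so one file at a time. */
static void write_one_file(const char *path, const unsigned char *buf)
{
    HANDLE h = CreateFileA(path, GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    DWORD written;
    if (h == INVALID_HANDLE_VALUE)
        return;

    if (rand() % 2 == 0) {
        WriteFile(h, buf, rnd(128 * 1024), &written, NULL);
    } else {
        for (int i = 0; i < 3; i++)
            WriteFile(h, buf, rnd(4 * 1024 * 1024), &written, NULL);
    }
    CloseHandle(h);
}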