'Hey everyone here (everyone but me) I think it would be great if someone here (anyone else but me) could test the endurance of anything I suggest.'

Sorry, hate to be the a$$hole here, but I had to get it off my chest. If I'm out of line, then I apologize and will take the punishment.

Anyways, thanks to all the "Testers" here and everyone else who has contributed and helped out tremendously. And Anvil for his awesome Utility.

It takes a lot of time and effort to do all this, and I say thanks!

Well, if you were (like me) a student in the UK with potentially £50,000 of debt from university and a 20% chance of being unemployed after finishing your degree, you would understand why I cannot test anything myself

The 25nm NAND in the Intel 320 has a minimum P/E cycle rating of 5000. We really don't know what the average/typical/expected lifetime rating is, just the minimum.

Yes, the actual P/E cycles come out as a bell-curve distribution.
When rated at 5000 P/E cycles, that rating covers the vast majority of the devices at a stated ECC level and data recoverability.
The NAND is also rated at 5000 P/E cycles for a given ECC level.
If you use more bit-error correction than spec'd, you get 'higher' P/E cycles.
If you use less bit-error correction than spec'd, you get 'lower' P/E cycles.
Note that the NAND doesn't just quit working; you are constantly increasing the raw bit error rate of the NAND,
increasing the probability of the NAND returning data that is uncorrectable by the controller's ECC/data recovery algorithms.
That being said, it is possible for an SSD to lose data on any P/E cycle prior to its NAND rating; it's just that the probability of that occurring is very low.
Most of the same error-rate probability reasoning applies to HDDs, except their bit error rate progression is more of a linear progression over time.
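To make the "rising error probability" point concrete, here's a toy model (all constants are invented for illustration, not vendor data): if each bit in an n-bit codeword flips independently with probability p (the raw bit error rate), and the ECC can correct up to t bits per codeword, then a read becomes uncorrectable exactly when more than t bits flip.

```python
# Toy model of ECC strength vs. raw bit error rate (RBER).
# Codeword size and correction strength below are placeholders,
# not real controller parameters.
from math import comb

def uncorrectable_prob(rber, codeword_bits, correctable_bits):
    """P(more than t bit errors in an n-bit codeword), X ~ Binomial(n, p)."""
    n, t, p = codeword_bits, correctable_bits, rber
    p_ok = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1))
    return 1.0 - p_ok
```

As RBER climbs with accumulating P/E cycles, the uncorrectable probability climbs with it, and a larger t pushes the curve out -- the "more correction buys higher P/E cycles" effect described above.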

Originally Posted by Christopher

They both use Intel 34nm, but have different product numbers. RyderOCZ was kind enough to tell me which NAND they used:

New
JS29F32G08AAMDB

Old
JS29F64G08CAMDB

I'm not sure what those bolded numbers represent
...

32Gbit/package and 64Gbit/package. Both use 32Gb dies.
The first one is single-die with 1 CE; the second is dual-die with 2 CE.

Originally Posted by bulanula

It would be interesting to bring some SLC drives in here, just so we can compare whether they really do last 10 times as long as the MLC drives, etc.

Originally Posted by sergiu

I totally agree, but... if there is no electronic failure (controller, RAM buffer, SATA interface, etc.) and the recovery time is indeed as in the model reposted a few posts above, I am afraid we would need to leave the test as a legacy to our grandchildren.

Or you could leave the test to the SSD engineers...
If you have low-level access to the drive, you can write your own basic firmware that has no wear leveling and records certain ECC correction information.
Then you can just hit logical flash block 0 with 100,000 P/E cycles (or until the actual data failure point), then block 1, block 2, etc.
You can also pull raw bit error rate data versus P/E cycles.
There are engineering flash testers on the market today that do this for you, but they're not within an enthusiast's budget.
However, we actually end up doing this testing with fully built SSDs, so that we can test the flash in extreme conditions (industrial temperature range -40C to 85C, thermal cycling, thermal shock, voltage margining, EM interference, radiation bombardment...).
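The block-hammering procedure described above can be sketched as a loop. The `dev` object here is hypothetical -- `erase`/`program`/`read` and `page_size` stand in for whatever low-level interface the custom firmware exposes; a real tester would also have wear leveling disabled so logical and physical blocks line up.

```python
def hammer_block(dev, block, max_cycles, ecc_limit):
    """Apply P/E cycles to one block until the raw bit error count exceeds
    what the ECC could correct, or max_cycles is reached. Returns the number
    of cycles survived plus per-cycle raw bit error counts -- i.e. the
    RBER-vs-P/E-cycles curve mentioned above."""
    pattern = bytes([0xA5]) * dev.page_size  # fixed test pattern
    error_history = []
    for cycle in range(1, max_cycles + 1):
        dev.erase(block)                     # one P/E cycle: erase...
        dev.program(block, pattern)          # ...then program
        readback = dev.read(block)
        # count raw bit errors by XOR-ing written vs. read-back bytes
        bad_bits = sum(bin(w ^ r).count("1") for w, r in zip(pattern, readback))
        error_history.append(bad_bits)
        if bad_bits > ecc_limit:
            return cycle, error_history      # block failed on this cycle
    return max_cycles, error_history
```

Run it over block 0, then block 1, block 2, and so on, and you get per-block endurance plus the raw-error curve in one pass.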

What are you studying? You might be able to turn your hobby into a senior project or something..

Nice idea with the RAID array. But maybe just one 20GB Larson Creek will be enough for this test.

I'm surely not suggesting that striping three of those would be an effective test (rather, just fun to play with), but with every passing day I get less and less concerned about effective MLC lifespan. Even Indilinx controllers, which started out with a shaky track record, have become more and more effective with every firmware release. That's why I think it would be years before you could put a dent in a Larson Creek -- unless everything we've been told about SLC is wrong (and it could be wrong the other way -- 2x as many P/E cycles in practice).

No, you're right. It does get old really fast (like, months ago). Considering that an SSD for testing can be purchased new for $100, just about anyone who has internet access should be able to afford one by saving up their spare change for a few months, or skipping eating out or a movie once in a while.

I may start again with either a 64GB Samsung 830 or a 60GB Intel 520 when they come out later this year. They should both be relatively fast to deplete the write cycles. Or, at least, the 830 should. I'm still not clear on the specs for the Intel 520.

yeah, no kidding
I am also incredibly impressed with both the results and the willingness of the participants; it sure has taught me a lot about not needing to baby my SSDs nearly as much as I have been, lol. On a slightly OT note: does anyone know of any good methods of running drive maintenance on RAID 0 X25-Vs... or is stripping them out of the RAID config to run TRIM pretty much the only viable method to restore "factory fresh" running conditions?

I short-stroked mine, but I use the array just for a couple of Steam games like Civ 5, Deus Ex, and New Vegas. The Intel controller is really robust and good at handling life without TRIM. If you have much free space on there at all, it shouldn't really get bad, but one option is to copy some very large files to the drives. The sequential file writes will level everything off. You could copy over some digital videos or perhaps .ISO files, then delete them (but you'd know if your performance was in the toilet, so save this for a rainy day). If you are running your OS on the drive, then it will get more beaten up without TRIM, and the X25-V is disadvantaged due to its size, but the Vs are pretty tough.
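The "copy big files, then delete them" trick is simple enough to script. A rough sketch (the scratch path and fill size are placeholders; leave yourself plenty of free space, and treat this as the rainy-day measure it is):

```python
import os

def sequential_refresh(scratch_path, fill_bytes, chunk_mb=64):
    """Stream throwaway data sequentially onto the volume, sync it to the
    drive, then delete it. The large sequential writes are what 'level
    everything off' on a TRIM-less RAID array."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)  # incompressible filler
    written = 0
    with open(scratch_path, "wb") as f:
        while written < fill_bytes:
            f.write(chunk)
            written += len(chunk)
        f.flush()
        os.fsync(f.fileno())   # make sure the data actually hit the drive
    os.remove(scratch_path)    # the freed space can then be reused
```

Same effect as copying over ISOs by hand, just repeatable.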

No hints needed. It's got to be the M4. After the 009 firmware upgrade, the sequential read speed has gone up quite a bit and write latency has been reduced. The M4 is the best drive out there right now.

Hmmm...

Not quite.

I'm not really interested in sequential read speeds (for testing purposes), but it is kinda cool that Crucial is bringing out new FW.
Also, there is already an M4 in the test. And a C300. I had an M4 64GB too, but I put it in my Mom's laptop the last time I flew home. It was genuinely proper fast. It made me feel dumb for paying more to get a 510.

I'm not really sure I should give a hint... I don't want to jinx it... Once it ships out, I'll clue you in. I have faith in UPS and FedEx.

I would feel supremely idiotic to name the drive, and then the etailer screws up (had this happen recently).

I don't think it will be an issue, but just to be on the safe side I swear this oath: if I don't get this particular drive by Monday (maybe Tuesday), I'm starting the test with my Intel 510 on Monday night (maybe Tuesday?).

The 510 only has 593GB of host writes on it, but it only had 498 before I started playing with AnvilPro's endurance testing yesterday. It's pretty fast, but since it's 120GB, its write-speed-to-capacity ratio is about the same as the M4 64GB's.
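That write-speed-to-capacity ratio is the whole game for this kind of test. Roughly (illustrative numbers only, not these drives' real specs or rated cycles):

```python
def days_to_deplete(capacity_gb, pe_cycles, write_mb_s, write_amp=1.0):
    """Rough time to burn through the NAND's rated P/E cycles writing flat
    out. Host writes available ~= capacity * cycles / write amplification."""
    host_bytes = capacity_gb * 1e9 * pe_cycles / write_amp
    return host_bytes / (write_mb_s * 1e6) / 86400  # seconds -> days

# e.g. a 64GB drive at 100 MB/s and 3000 cycles wears out in about the same
# time a 120GB drive would at ~188 MB/s -- similar ratio, similar duration.
```

Write amplification and steady-state throughput vary wildly per drive, so treat this as a back-of-envelope estimate only.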

Oh, you are planning to participate? Hats off to you buddy! Yeah, it would not be useful to have another M4 in this test.

The drive isn't even on the manufacturer's web site, isn't listed or is listed with a "usually ships in 2-4 weeks" at the few etailers that carry the series -- and I think the etailer that I placed the order with doesn't really have them. So I put in the order anyway, and I'll just have to wait and see. That's why I didn't want to mention it. I'm actually incredibly excited about this particular drive. I probably shouldn't have even mentioned it, but I couldn't help myself.

I think I might be addicted.

I will curl up into the fetal position and cry like a little baby if they don't have this drive sitting in their warehouse in West Nowhere, just waiting to get on the truck.

I have my two drives limited to a 60GB array (so some reserve space on that front), and it's sitting with 10.5GB free (a trimmed-down Win7 install... I need to reinstall one of these days to be able to install SP1, plus the Autodesk programs).
Judging by a recent CDM 2.2 run compared to virtually brand new, I'm looking at:
sequential: -110 MB/s read, -8.37 MB/s write
512K: -33.9 MB/s read, -25.98 MB/s write
4K: 0.12 MB/s read, -9.97 MB/s write
One drive has 9710 power-on hours, the other 9748, with 1.20TB and 1.32TB written respectively, and most of those hours (give or take maybe 100) have been in a RAID array in Win7.
I'm sure there's also been a firmware revision since I installed my drives.

I know the drives are pretty tough little cookies, I'm just now wishing I could get myself a third or fourth, lol