
Just saying hello for now.
I've been lurking and reading the thread the last few weeks.
I finally got the registration button to work on the forum.

I bought my first SSD about the same time the testing started.
Went with the Intel 320 120GB.
The longer the testing goes the more I like it.
Enjoying the thread immensely, and it looks like there's more fun to come.

Unless they continue making 34nm chips and just charge more for them? I mean, the C300s are going for more than M4s of similar size from what I've been seeing.

OK, just ran it on my drive. Wow, shocked at the results.

21 months of use... I would have figured I would have burned through it more!

Wow... I've had my X25-V drives for close to 15 months now. They're reporting 8142 and 8138 hours of power-on time, and 0.99 and 1.09 TB of writes, which is probably about 1/10 of what I thought I had done on them so far. They have an MWI of 98, with available reserved space of 100 and a threshold of 10.

Before we start the Crucial test we need to agree on the config.
- How much of the SSD should be filled with static data? 40 vs 64 GB
- What parameters should we use in Anvil's app?
- How much random vs. sequential?

What outcome are people expecting between the C300 and the M4?
Should I open up my C300 and ensure it's 34nm? I know there's been no controversy, but the 34nm supply has to dry up some time....

I'm expecting them to exceed the 72TBW guarantee that Crucial has put on them; hopefully they'll get up there with the Intels.
As there is more NAND they should match the Intels, but as we know, the controller can make the difference.

I'm pretty sure there is 34nm NAND in it but maybe we should all open the drives that enter the Endurance test?

Originally Posted by Khoral

Only posting to say thanks to all the testers
This thread is a real mine of information

Appreciated

Originally Posted by deathman20

OK, just ran it on my drive. Wow, shocked at the results.
21 months of use... I would have figured I would have burned through it more!

I'm not shocked at all, I've got plenty of SSD's that are low on writes.
As long as it's used for normal tasks it just doesn't write as much as one thinks.

My default setup is without the pagefile; System Restore and hibernation are off as well. These things will make a difference.

Before we start the Crucial test we need to agree on the config.
- How much of the SSD should be filled with static data? 40 vs 64 GB
- What parameters should we use in Anvil's app?
- How much random vs. sequential?

Something I missed?

I'd like to know what johnw is using for his Samsung 470 64GB, that way all 64GB drives can be on the same test parameters.

In the absence of that info, I'd say 20-24GB of static data (for a 64GB drive) and default values in Anvil's app.

I mentioned in my post that I put a ~40GB static file on the SSD. To be precise, it is 41,992,617,078 bytes (I imagine that is a typical amount of static data for a 64GB SSD). Anvil's app and its data are on the SSD as well. All settings in Anvil's app are the defaults, except that I checked the box for keeping running totals of GB written (the option just added yesterday).

For reference, the md5sum of the 42GB file is: 0d1c4ec44d9f4ece86e907ab479da280
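In case anyone wants to reproduce this kind of setup, here's a rough Python sketch (not the exact method I used; the file name, size, and seed are just placeholders) for generating a static fill file of a chosen size and checksumming it in chunks, so the md5 can be re-verified later without reading the whole file into memory at once:

Code:
import hashlib
import random

def make_static_file(path, size_bytes, seed=42, chunk=1 << 20):
    """Write size_bytes of deterministic pseudo-random data and return its md5."""
    rng = random.Random(seed)                 # seeded, so the file is reproducible
    md5 = hashlib.md5()
    written = 0
    with open(path, "wb") as f:
        while written < size_bytes:
            n = min(chunk, size_bytes - written)
            data = rng.randbytes(n)           # incompressible filler (Python 3.9+)
            f.write(data)
            md5.update(data)
            written += n
    return md5.hexdigest()

def md5_of_file(path, chunk=1 << 20):
    """Re-compute the md5 of an existing file in 1MB chunks."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            md5.update(block)
    return md5.hexdigest()

# e.g. ~40GB of static data for a 64GB drive:
# digest = make_static_file("static_fill.bin", 40 * 10**9)
# print(digest == md5_of_file("static_fill.bin"))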

This chart can go into the negative for wear-out, but it can only assume a negative MWI value based on the average writes to date per MWI point.

It's also hard to see all the data when the chart is small; I had to take out the hard data labels as they were too small to read. The Y axis only represents TB for One_Hertz at the moment. With more drives it will get a bit harder, but it would be good if all drives could be on one chart.

Alright, got a file ready for myself, weighing in at 42,022,123,868 bytes. C300 64GB should start tomorrow so long as UPS sticks to their delivery date.

I'm also going to test an SF-1200 drive now that the prospect of no-LTT has emerged (and if anyone wants to test a 25nm vs. my 34nm, let me know... it's easier to arrange testing and setup in pairs!). With a SandForce back on the scene, I wanted to examine the compression settings in Anvil's app and see if any were suited to mimic 'real' data. With the discovery of the 233 SMART value, we can now see NAND writes in addition to host writes, so if we can also write 'real' data we can kill two birds with one stone: see how long a drive lasts with 'real' use and how much the NAND can survive.

So what did I do?

First, I took two of my drives, C: and D:, which are comprised of OS and applications (C:) and documents (D:, .jpg, .png, .dng, .xlsx probably make up 95% of the data on it) and froze them into separate single-file, zero compression .rar documents. I then took those two .rar files (renamed to .r files...WinRAR wasn't too happy RARing a single .rar file) and ran them through 6 different compression algorithms: WinRAR Fastest RAR setting, WinRAR Normal RAR setting, WinRAR Best RAR setting, 7-zip Fastest LZMA setting, 7-zip Normal LZMA setting, and 7-zip Ultra LZMA setting. I then normalized the output file sizes.

Doing this created two 'compression curves' showing how my real data responds to various levels of compression. My thinking was that if any of Anvil's data compressibility settings had a similarly shaped and similarly sized (after normalization) output, it would be a good candidate for mimicking real data and would allow the use of 'real' data in SF testing. Real data != 'real' data; 'real' data is just the best attempt to generate gobs of data that walk, talk, and act like real data. A great candidate would be a generated data set whose compression curve sits between the two real-data curves across the entire range.
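For anyone who wants to play with the same idea without running WinRAR and 7-zip by hand, here's a minimal Python sketch that builds a small compression curve using the built-in zlib and lzma presets as stand-ins for the six settings above (the presets, sample size, and file name are my own placeholders, not what was actually used for the charts):

Code:
import lzma
import zlib

def compression_curve(data):
    """Return {setting_name: compressed_size / original_size} for several presets."""
    curve = {}
    for level in (1, 6, 9):                   # stand-ins for Fastest / Normal / Best
        curve["zlib-%d" % level] = len(zlib.compress(data, level)) / len(data)
    for preset in (0, 6, 9):                  # stand-ins for Fastest / Normal / Ultra LZMA
        curve["lzma-%d" % preset] = len(lzma.compress(data, preset=preset)) / len(data)
    return curve

if __name__ == "__main__":
    # e.g. read a 64MB sample from one of the frozen .r archives to keep runtime sane
    with open("C_drive.r", "rb") as f:
        sample = f.read(64 * 1024 * 1024)
    for name, ratio in sorted(compression_curve(sample).items()):
        print("%-8s %.3f" % (name, ratio))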

The green zone is where the potential candidates should show up. Only one candidate was in that range, however: 67%. Unfortunately, it fell out pretty aggressively with stronger compression algorithms. So I turned off the "Allow Deduplication" setting and generated another 8GB file and compression curve and it was a little better.

While dedicated hardware can be orders of magnitude more efficient than a CPU at an intensive task, I do doubt the SF-1200 controller's ability to out-compress and out-dedup even low-resource LZMA/RAR (R-Fastest and 7-Fastest), so the left-most part of the green zone is shaded a stronger green, as I feel that's the most important section of the curve. Unfortunately, I don't have the ability to get more granular compression curves at the low end (left side) of the curve, so I'll have to make do with overall compression curves and just an emphasis on the low end.
Of all the data I have available, it looks like the 67% compression setting with "Allow Deduplication" unchecked is the best fit for use as a 'real' data setting when I start testing the SF-1200. Hopefully anybody else who plans to test a controller with compression and deduplication will find this useful as well.

I volunteer to chart data...I'll start working through this thread for the data

Looking at my tests on the SF2 controller, it couldn't keep up with the ratio that 7Zip Fast(est) produces; not sure how the SF1 compares to the SF2.

For reference, I 7Zipped one of my VMs earlier today (Windows Server 2008 R2, SQL Server, plus some data) and it ended up at ~50% of the original size using 7Zip Fastest.
Still, it took 40 minutes to produce that file using a W3520 @ 4GHz on an Adaptec 3805 hosting a 3R5 volume. There is no way the SF controller achieves that sort of compression on the fly, as 40GB is written at a rate of ~100MB/s on a 60GB SF1 drive (based on steady state); at that rate the same 40GB takes only about 7 minutes.
I'll do some more tests when I get a few more of the items off my to-do list.

First, I took two of my drives, C: and D:, which are comprised of OS and applications (C:) and documents (D:, .jpg, .png, .dng, .xlsx probably make up 95% of the data on it) and froze them into separate single-file, zero compression .rar documents.

Nice job, and beautiful graphs!

There are two more data points I'd be interested in seeing: the compression your SandForce drive achieves on each of your uncompressed C: and D: archive files.

I guess you could measure it by observing the SMART values on the drive, then copying one file to the drive, then looking at the SMART values again to find the compression (assuming the attribute for actual flash writes is accurate). Maybe you have to delete the file and re-copy it several times to get an accurate measurement?
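Something along these lines, I imagine (a rough Python sketch, assuming smartctl is available, that the drive reports NAND writes in attribute 233, and that the raw value counts GB; all of that varies by vendor and firmware, so treat it as a starting point only):

Code:
import os
import shutil
import subprocess

def smart_raw(device, attr_id):
    """Read the raw value of one SMART attribute using 'smartctl -A'."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0] == str(attr_id):
            return int(fields[-1])            # raw value is the last column on simple attributes
    raise ValueError("attribute %d not found on %s" % (attr_id, device))

def measure_compression(device, src_file, dst_file, attr_id=233):
    """Ratio of NAND writes to host writes for one file copy; < 1.0 means it compressed."""
    before = smart_raw(device, attr_id)
    shutil.copyfile(src_file, dst_file)       # host writes ~= size of src_file
    os.sync()                                  # flush caches so the writes reach the drive (Unix)
    after = smart_raw(device, attr_id)
    host_gb = os.path.getsize(src_file) / 1e9
    nand_gb = after - before                   # assumes the attribute is counted in GB
    return nand_gb / host_gb

# e.g. measure_compression("/dev/sdb", "C_drive.r", "/mnt/ssd/copy.r")

Given how coarse the reporting granularity is, copying and deleting the file several times and averaging, as suggested above, would probably be needed.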

I have zero doubt that hardware designed for compression/dedup could do twice (at least) what our CPUs do within just a 1W power envelope... but that doesn't mean the SF1 and SF2 controllers can do it. It's a safe bet they can't and that their compression levels are weaker than the weakest RAR/7zip setting; too bad there's no way of running their compression levels on our CPUs to see what they can do with more precision than the 64GB (SF1) or 1GB (SF2) resolution the SMART values give.

Almost done with the charts of all the drives so far (minus the V2 40GB... not sure whether to include that, as its testing essentially errored out). I'm including a new chart of normalized writes vs. wear, which is kind of necessary considering drives of different sizes are entering the test; writes will be normalized to the amount of NAND on the drive, not the advertised size.

Working on bar charts with writes from 100-to-0 wear as well as total writes done so far. 100-to-0 wear will be extrapolated until MWI = 0 and then frozen, so while MWI > 0 total writes will be less than the 100-to-0 figure, and after MWI hits 0 total writes will be greater than it. Would "MWI Exhaustion" be a better name for the 100-to-0 bar?
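For the record, this is roughly how the two derived numbers could be computed (a small Python sketch, not the actual spreadsheet formulas; the example figures are made up):

Code:
def normalized_writes(host_writes_tib, nand_gib):
    """Writes per GiB of physical NAND (roughly 'full-drive fills'), not advertised size."""
    return host_writes_tib * 1024 / nand_gib

def writes_100_to_0(host_writes_tb, mwi):
    """Projected writes to take MWI from 100 down to 0, frozen once the drive gets there."""
    if mwi <= 0:
        return host_writes_tb                  # frozen: the drive already reached MWI 0
    used = 100 - mwi                           # MWI points consumed so far
    if used == 0:
        return float("nan")                    # no wear recorded yet, nothing to project
    return host_writes_tb * 100 / used         # straight-line extrapolation

# Made-up example: 12 TB of host writes at MWI 80 projects to 60 TB for the 100-to-0 bar.
print(writes_100_to_0(12, 80))    # -> 60.0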