My first reaction to this article was... what a loss to see a good article topic with no proper tests. Nothing. I mean, what the heck, guys, can it get any easier? Just pick a file from one directory and copy it to another on the same partition, and then to another partition, to show us whether there was any performance change from short stroking and by how much. Who cares what number some application no customer installs or uses daily says about speed?

Run some applications that are disk limited like compression, game loads, encoding, PCMark HDD Suite, etc, show us what we will gain by using this method!

But sheesh, at least give us something that relates to reality. These guys sound just like the marketing teams. I'm sitting here hoping to see some file create, write and read tests of various sizes, and I still have nothing.

That is pretty hilarious: the real performance does not change at all, but of course, if your metric is the average between fastest and slowest and you eliminate all but the very outermost tracks, then your drive will, on average, look faster on paper.

This is the kind of ignorance I battled when I was doing some work for Seagate many years ago, when PATA drives with shorter platters were measured against SATA drives with longer platters and the conclusion drawn was that PATA was noticeably faster than SATA (in that case it was Maximum PC).

Here is a graph that shows essentially what THG is doing, except that they clip a 1 TB drive to 33 GB/platter, which probably gives them the outermost 10% or so of the total number of tracks.

To repeat my statement from above: the drive performance doesn't change at all, only the average of all accesses changes, because 90% of the capacity is thrown away. You can get the same "performance" by changing the seek parameters.

I think what you find in THG's article is extremely common in all facets of online analysis and testing, by forum members and article reviewers alike. They tend to use benchmarks/techniques but come up with a misleading, incorrect analysis and conclusion, and their explanations of what is happening and why are usually flat out wrong. That is one reason I ignore many articles, and the credibility of the authors of such articles is nil in my eyes.

In effect, what they are doing is what we've done for a while: if you have extra space you don't need, you shrink the usable drive size to only the outermost tracks, reducing seek times by a large margin, which means perceived performance is enhanced wherever a seek is involved. The actual drive performance remains largely fixed; the speedup comes from reduced access latencies due to the much smaller head movement involved.
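To make the point concrete, here's a toy Python sketch (a made-up, simplified model of head travel, not real drive geometry): random accesses over the full stroke versus over only the outermost 10% of it. The average head travel drops roughly in proportion to the usable fraction, while nothing about the media itself changes.

```python
import random

def avg_seek_distance(usable_fraction, n_accesses=100_000, seed=42):
    """Average distance between consecutive random accesses, with the
    drive restricted to the outermost `usable_fraction` of its stroke.
    Positions are measured as a fraction 0..1 of the full stroke."""
    rng = random.Random(seed)
    pos = 0.0
    total = 0.0
    for _ in range(n_accesses):
        target = rng.random() * usable_fraction
        total += abs(target - pos)
        pos = target
    return total / n_accesses

full = avg_seek_distance(1.0)    # whole drive: average travel ~1/3 of stroke
short = avg_seek_distance(0.1)   # short stroked to the outer 10%
print(f"full stroke: {full:.3f}, short stroked: {short:.3f}")
```

The expected distance between two uniform random points on a stroke of length L is L/3, so clipping the drive to a tenth of its tracks cuts average head travel by about 10x, which is the entire trick.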

Synthetic apps will in these cases greatly exaggerate the benefits, but they are not apps I rely on, since running them is not my work. Simple heavy read/write daily operations will be enough to show me any benefits I might gain from such a technique.

I'll be running a test on this myself soon. I have a WD Caviar Blue 16MB 250GB and WD Caviar Blue 16MB 320GB (single platter) that I'll be comparing.

BTW, my 2001 Maxtor 40GB 2MB PATA is still alive and kicking.

[Attachment: 6a.png]

I did quite a lot of heavy testing on it just the other day to compare with some newer drives. Out of all the tests, only FC-Test, IOMeter and PCMark were of any reasonable value and realistic validity, though.


I've done tests with a new WD Caviar Blue single platter to see what difference this technique made. I'm not linking the page here, since it's an old, dead page of my 2001 site which I quickly put back up today, and it has advertising (which I can't see) that might be naggy.

Short stroking didn't offer anything in any of the application and real-workload tests, except with IOMeter. IOMeter did show a performance gain, albeit a very small one. Other benchmarks only showed certain block sizes gaining performance from their odd dropouts, but overall there were very few gains in anything. I actually saw performance losses in some benchmarks.

It becomes apparent that THG selected only the two benchmarks that show the most exaggerated and unrealistic gains, and that seems like the exact reason they left the conventional benchmarks out. This is intentionally misleading. There is practically zilch performance difference. Another myth. Failure.


If you run the old versions of HDTach with the "average" sequential speed, then you definitely see a performance increase, but as I said, it is not real performance that goes up; it is only an average from which the slower parts at the ID of the platters have been excluded. Otherwise, in the long run, there will be more damage to the drive than anything else, since drives are architected and configured (in firmware) for optimal performance in the factory default configuration.
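To illustrate what happens to that "average", here's a quick sketch with made-up numbers (real drives aren't exactly linear across the platter, but the shape is similar): the sequential rate falls from the OD toward the ID, and clipping off the inner 90% raises the reported average even though no individual track got any faster.

```python
def transfer_rate(frac):
    """Toy model: sequential rate falls roughly linearly from about
    120 MB/s at the outer edge (frac=0) to about 60 MB/s at the
    inner edge (frac=1). Numbers are invented for illustration."""
    return 120.0 - 60.0 * frac

def average_rate(usable_fraction, samples=10_000):
    """Mean sequential rate over the outermost `usable_fraction`."""
    rates = [transfer_rate(i / samples * usable_fraction)
             for i in range(samples)]
    return sum(rates) / samples

print(f"full drive average: {average_rate(1.0):.1f} MB/s")  # ~90 MB/s
print(f"outer 10% average:  {average_rate(0.1):.1f} MB/s")  # ~117 MB/s
```

Same platter, same heads, same firmware: the "improvement" from roughly 90 to 117 MB/s exists only because the slow inner portion was thrown out of the average.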

Among other things, that also includes the strength of the actuator (there is a trade-off between seek speed and accuracy). The analogy that comes to mind is a weight lifter who, if short stroked, only lifts the bar a couple of inches instead of through the entire range. In the beginning, this will increase the number of repetitions, but in the long run there will be some conditioning and asymmetric wear that will cause adverse effects.

Looking at this short stroking again, it finally dawned on me that there is absolutely no reason to short stroke a drive to begin with. Partitioning has the same effect, only you don't throw away 90% of the drive as a side product. Just another underscoring of the blatant idiocy of this article.

M_S wrote:Looking at this short stroking again, it finally dawned on me that there is absolutely no reason to short stroke a drive to begin with. Partitioning has the same effect, only you don't throw away 90% of the drive as a side product. Just another underscoring of the blatant idiocy of this article.

Exactly

When you create a small OS partition on the outer, faster sectors and a large data partition on the remaining inner sectors, that's the best you can get from an HDD. Access times for the OS partition will be slightly quicker due to its small size on the outer tracks, particularly when much of the partition is full, so you get slightly quicker speeds in that scenario, but it's very little, confined to some tests, and nothing really changes overall.

This (short stroking) is actually an extremely common online "enthusiast" myth. You'll see all sorts of strange, "complex looking" explanations given by purported "computer engineering/physics experts" on fora on this topic.

IOMeter, due to its reliance entirely on the IOps metric, is one of the only benchmarks to show high gains, since it supposedly relies on data access times to generate its results. Honestly, I am really suspicious of this benchmark: it's throwing out very strange and uselessly synthetic MB/s results in many of my tests, with result fluctuations that real-world tests do not corroborate. I've found some major bugs with it too, which inflate all the results by over 100%.
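That IOps sensitivity is easy to see with a back-of-the-envelope model (all numbers below are invented for illustration): for small random I/O, each operation pays mostly access time, so cutting the access time nearly doubles the IOps figure, yet the resulting MB/s is tiny either way and the media transfer rate never changed.

```python
def iops(avg_access_ms, transfer_mb_s, block_kb):
    """IOps for a random workload in a simple model: each I/O pays the
    average access time plus the media transfer time for one block."""
    access_s = avg_access_ms / 1000.0
    transfer_s = (block_kb / 1024.0) / transfer_mb_s
    return 1.0 / (access_s + transfer_s)

# Made-up figures: ~13 ms average access full stroke, ~7 ms short stroked,
# 100 MB/s media rate, 4 KB random blocks.
for label, access in (("full stroke", 13.0), ("short stroked", 7.0)):
    io = iops(access, 100.0, 4)
    print(f"{label}: {io:.0f} IOps = {io * 4 / 1024:.2f} MB/s at 4 KB")
```

A benchmark that reports only the IOps number makes this look like a near-doubling, while in throughput terms you've gone from roughly 0.3 to 0.55 MB/s; real application workloads, which mix sequential and random I/O, barely notice.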

*Makes note of the only selected benches THG used to show supposed gain; to avoid forever...*

There was an interesting episode at last week's TG meeting on defining forward-looking storage standards, where Intel showed that IOMeter had probably the lowest relevance to real-world usage patterns.