Could anyone here explain to me what is implied by this term? (I've seen the same thing mentioned with the 3 terms).

At first, when I read about it, I somehow understood it as a way of splitting the bytes across the platters of the disk, which sounded like a good idea but obviously doesn't make sense: it wouldn't cut the disk size in half (and disks are probably already splitting bytes across platters)...

The best I've come to understand is that instead of creating one partition spanning the whole disk, you create two partitions and use only one of them, either the one at the "center" or the one at the "rim" of the platters. Since one of the two is faster (people didn't seem to agree on which), that supposedly makes everything better.

Am I understanding this correctly?
Has anyone tried this with their drives and had a good outcome?

5 Answers

Short-stroking is basically what you found: you deliberately use only the outermost tracks on each platter of your hard disk. I have heard of this, but haven't looked at it in a while.

Looking at recent articles, as well as going from memory, the details on this are a mixed bag, mostly bad from my perspective:

- Extremely reduced capacity, since short-stroking only shows a benefit with very small "drive" sizes.
- Roughly 40% better random seek times.
- Slightly faster bulk transfer rates.
- Requires specialized software that doesn't seem to be widely available.
- When used in a RAID 0 array (as most articles recommend), it draws a lot of power for a relatively small amount of usable space.

I have previously recommended against ideas like this, as just buying larger, faster disks is cheaper in the long run, unless you don't pay for your electricity. The time savings may help in a database server with very little memory, but I can think of no other situation.

In general, reading from the outside sectors of the platters is faster, as more sectors pass under the heads per second at 7200 RPM (or whatever) than towards the middle. Also, the heads park near the outside of the platters when idle, so making a partition only near the center of the drive could actually give you worse seek times.
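To put rough numbers on "more sectors pass under the heads per second", here is a back-of-envelope sketch. The radii, sector count, and the assumption that sectors per track grow linearly with radius are all mine for illustration, not specs of any real drive:

```python
RPM = 7200
rev_per_sec = RPM / 60.0           # 120 revolutions per second
SECTOR = 512                       # bytes per sector

# Assumed data-zone geometry for a 3.5" platter, in mm.
r_inner_mm = 15.0
r_outer_mm = 32.0

# Assume 500 sectors on the innermost track; with zoned recording,
# sectors per track grow roughly linearly with radius.
sectors_inner = 500
sectors_outer = sectors_inner * (r_outer_mm / r_inner_mm)

mbps_inner = sectors_inner * rev_per_sec * SECTOR / 1e6
mbps_outer = sectors_outer * rev_per_sec * SECTOR / 1e6
print(f"inner track: {mbps_inner:.1f} MB/s, outer track: {mbps_outer:.1f} MB/s")
print(f"outer/inner ratio: {mbps_outer / mbps_inner:.2f}x")
```

With these assumed numbers the outermost track is a bit over twice as fast as the innermost one, which matches the roughly 2:1 sequential-speed difference benchmarks tend to show between the start and end of a spinning disk.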

Today it is hardly a sensible approach. It was very important some years ago, but today... just get an SSD. If you short-stroke, you do it because you need IOPS more than sequential speed, and SSDs are totally killing hard disks there. Doubling your IOPS by short-stroking is nice, 450 -> 900, but an SSD does 50,000+ ;) Ouch.
– TomTom Mar 29 '12 at 6:03

@notfed You must be delusional. Today, if you need to buy IOPS, SSDs are anything but expensive. I am just getting a 960GB Samsung 843T for a price that is not a lot higher than the same capacity in 2 x 400GB SAS drives, and the IOPS side is so brutally faster that anyone buying a hard disk when he needs speed deserves one thing: being fired.
– TomTom Jul 21 '14 at 8:10

@TomTom: In 2014 it's easy to find a 1TB HDD for $100. Please tell me where you are finding a 960GB SSD for "not a lot higher".
– Jay Sullivan Jul 21 '14 at 20:12

I can get two 15k SAS spinning drives for less than half what the 960GB Samsung 843T costs. I have a hard time believing the SSD is faster than four short-stroked 15k drives, at a higher capacity.
– Joshua Nurczyk Jul 22 '14 at 16:27

In the beginning, each track of a disk platter had the same number of 512-byte sectors, meaning the density was highest towards the center. As I understand it, this was improved fairly early on by giving each track a variable number of sectors, which increases efficiency and keeps the density roughly constant across the entire platter (zoned bit recording, ZBR).

Hence, the further out on the platter the data lies, the faster it can be read and written, as the raw throughput is higher.

So yes, partitioning only the outside half of a disk would definitely increase overall performance.
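To sketch what that buys you: assuming ZBR throughput scales roughly linearly with radius, and picking some plausible platter radii (my assumptions, not datasheet values), partitioning off the outer half of the radius span raises the speed of the slowest track you ever touch, while keeping well over half the capacity:

```python
# Assumed data-zone radii for a 3.5" platter, in mm.
r_in, r_out = 15.0, 32.0
r_mid = (r_in + r_out) / 2.0        # innermost radius of an "outer half" partition

# Throughput scales ~linearly with radius (ZBR), relative to the outer edge.
slowest_full = r_in / r_out         # slowest track on a full-disk partition
slowest_half = r_mid / r_out        # slowest track on the outer-half partition

# Track capacity also scales with radius, so capacity ~ annulus area.
kept = (r_out**2 - r_mid**2) / (r_out**2 - r_in**2)

print(f"slowest track: {slowest_full:.0%} of max -> {slowest_half:.0%} of max")
print(f"capacity kept: {kept:.0%}")
```

Under these assumptions the worst-case track goes from roughly half the maximum speed to about three quarters of it, and you still keep close to 60% of the capacity, because the outer tracks hold more data.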

Is it worth it? No idea. It's usually employed for high-end 15krpm drives in critical environments. Today I'd say the modern drive controllers in these situations can handle this intelligently enough without specifically "short-stroking" drives.

I'd be curious to know if this is also done at the factory, e.g. producing smaller high-speed drives, perhaps with more than one platter, that internally use only the outermost tracks to gain a performance advantage.

Short stroking and partitioning are two very different things.
When you partition a 1 TB drive into, say, two 500 GB partitions, every track on every platter can still be written to by the operating system, albeit in two different "buckets".

Short stroking means using only roughly the outer third of each platter in the drive, in total. That's it. So on a 1 TB drive you would theoretically have 300 GB or so of usable capacity.
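As an aside, with ZBR the capacity hit is usually a bit milder than straight division suggests, because the outer tracks hold more sectors. A quick sketch with assumed geometry (the radii below are illustrative, not from any real drive):

```python
r_in, r_out = 15.0, 32.0                 # assumed data-zone radii, mm
r_cut = r_out - (r_out - r_in) / 3.0     # keep only the outer third of the radius span

# Under ZBR, track capacity ~ radius, so capacity ~ annulus area.
kept = (r_out**2 - r_cut**2) / (r_out**2 - r_in**2)
print(f"outer third of tracks holds about {kept:.0%} of total capacity")
```

With these assumed numbers the outer third of the tracks holds a bit over 40% of the data, so a 1 TB drive would keep roughly 400 GB rather than 300 GB, though the exact figure depends entirely on the real geometry.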

Short stroking a single drive in a PC serves no real purpose other than using only a portion of what you paid for.

The reason to short stroke is performance. Every drive, whether SATA, FC, or SAS, can perform a limited number of operations, measured in IOPS (Input/Output Operations Per Second). IOPS are like a highway: on a one-lane road I can expect x cars per hour before we see traffic (higher response times), whereas on a three-lane highway I can handle more work with less traffic. Notice I did not say 3x the work, because the workload does not scale linearly. To complete the analogy: if the number of lanes is the IOPS, then the number of cars I can put through those lanes is the throughput.

From a 7200 RPM drive I can expect ~75 IOPS; from a 15k RPM SAS drive, ~175 IOPS. Those are just the "average" IOPS expected per disk. Block size, sequential vs. random access, and response time all come into play as well, but for the sake of simplicity let's limit the conversation to IOPS.

If I create a RAID group of three 15k disks, they act in unison and, again keeping it simple, I now have a disk "unit" capable of performing 525 IOPS (175 x 3).
Now, if I also write to only one third of those disks' platters, because my goal was higher IOPS capability, I have achieved that, but at the cost of usable capacity.
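The arithmetic above can be written out directly. The per-drive figures are the rough averages quoted in this answer, and the model deliberately ignores controller overhead, caching, and RAID-level write penalties:

```python
# Rough per-drive random-IOPS averages from the discussion above.
IOPS_7200_SATA = 75
IOPS_15K_SAS = 175

def group_iops(per_drive, n_drives):
    """Idealized aggregate IOPS of n identical drives acting in unison."""
    return per_drive * n_drives

print(group_iops(IOPS_15K_SAS, 3))    # three 15k SAS disks -> 525
print(group_iops(IOPS_7200_SATA, 3))  # three 7200 RPM SATA disks -> 225
```

The same simple model shows why short-stroked spindles were stacked into wide groups: to buy IOPS, you add drives, and capacity comes along whether you want it or not.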

So, extrapolating that out to high-end storage arrays from NetApp, EMC, IBM, and others: short stroking is a performance-enhancement technique that allows higher IOPS and lower response times (because I am writing only to the fastest part of the internal disks), so the storage array can read and write data very fast.