Posted
by
timothy
on Thursday June 30, 2011 @07:42PM
from the spit-it-out dept.

Blittzed writes "We were reminiscing about the good old days of overclocking CPUs and memory, and the subject of hard drive overclocking came up. The discussion / argument we were having in the research lab ended up in a bet which now has to be settled. So, we are putting our money where our mouth is, and putting up $10,000 to anyone who can read a 500GB drive in under an hour. We will also consider other attempts with a smaller amount of money in the event that the one hour is not possible. There are a few rules (e.g. the drive still needs to work afterwards), but otherwise nothing is ruled out. Specific details can be found on the URL. Go let the white smoke out!"

I'm failing to grok this. My two-year-old VelociRaptor can sustain something close to 138 MB/s transfer with no tweaking (roughly the speed needed to read 500GB in an hour).

Is there really no enterprise-level drive that can manage this...?
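The "speed needed to read 500GB in an hour" figure is easy to sanity-check. A minimal calculation, assuming "500 GB" means 500 × 10^9 bytes (decimal, as drive vendors count):

```python
# Sustained rate needed to read a 500 GB drive in one hour.
# Assumes decimal gigabytes (500 * 10**9 bytes), the vendor convention.
drive_bytes = 500 * 10**9
seconds = 3600
rate_mb_s = drive_bytes / seconds / 10**6
print(f"{rate_mb_s:.1f} MB/s")  # -> 138.9 MB/s
```

So the VelociRaptor's quoted ~138 MB/s is right on the edge of what the bet requires, before any seeks.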

I'm hazarding a guess here, but I suspect that sustained transfer rate is for contiguous data - and even then it sounds a little high. As soon as you have to move the head to read the data (because large contiguous reads are really rare), you can expect to see the sustained rate plummet like a suicidal lemming.

Because some of us work with multi-TB scratch data on our workstations, and it would be really nice not to have to use disk arrays to approach even one gigabyte per second, especially when even low-end memory and CPU buses can handle data at several times that rate.

But then you should be dealing with SSDs or Raptors. In the end it doesn't change the fact that, unlike CPUs and GPUs and RAM where you are just pushing electrons through silicon, with a HDD you are dealing with a mechanical device that has been built and tested for a certain rotational speed with a certain MTBF. By pushing that you are in essence redlining your car and hoping the engine doesn't blow.

So it doesn't make any sense. With GPU, CPU and RAM you can use better cooling to drop the temp and allow

That is only if you use MLC, which frankly, as I linked to above, is kinda shit ATM. Whereas if you use enterprise-quality SLC (more expensive, but if you are really running machines pumping so many IOPS that they need multi-TB scratch then you should have the $$$), it has an MTBF, even writing full blast, of 5-7 years, with some rated 10+. And as another pointed out, there are 15k SAS drives out there already as well.

So in the end this simply makes NO sense at all. Everything you OC is electrons through silicon, whic

I have to say, I have an utterly different experience with SSDs; perhaps he should stop buying MLC and stick with SLC? SLC has 10x the life expectancy, 10x the speed, but 1/10 the storage (and 10x the price...)

An hour!? I have a 500GB drive on my desk and I can read it in under a minute! The first line says: "Seagate Barracua 7200.11 500 Gbytes" The entire label has only a few dozen words and serial numbers.

I have 4 18GB 10krpm Seagate Cheetah U160 SCSI drives I bought in 2000, and which have been run 24x7 virtually ever since (other than brief down times for maintenance, etc)..

That's roughly 87000 hours of run time on all 4 of them, with no failures. I retired them this past March along with the RAID controller they've been married to all this time. I retired them not due to failure (though the bearings in one sounded like the end was drawing near), but because I needed more storage in the server.

I've got an old Seagate 2.1Gb SCSI Barracuda that's been running since the 1950's.

Now, that's impressive. Presumably a secret project that IBM stole for their first model, which was introduced in 1956. But IBM's had only fifty 24-inch platters, with a total capacity of 5MB, and it needed 3-phase power and a forklift to move it. Yours is a lot bigger. But is it faster than IBM's (whose access time was close to 1000 ms)?

The problem is not the old disks. Actually, the older, the more reliable. It's the newest disks that are the worst. When you boast "My disk has been running fine for 5 years already" you're talking about a disk from 5 years ago. And it's the disks from 2 years ago that keep dying on us. Tolerances get

Not really - during the life of the disk it will remap failing sectors to spare unused blocks that are kept specially for that purpose. Once it runs low on spare blocks it will generate a SMART warning, and when it runs out you are screwed. The more hours the drive has on it, the fewer spare blocks it is likely to have left.
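You can watch that remapping happen via SMART attribute 5 (Reallocated_Sector_Ct). A minimal parsing sketch - the sample text below is hypothetical `smartctl -A` output, and the column layout is an assumption that varies by drive and smartmontools version:

```python
# Sketch: extract the reallocated-sector count from smartctl-style output.
# SAMPLE is hypothetical; real `smartctl -A` columns can differ per drive.
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   099   099   036    Pre-fail  Always       -       12
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       12
"""

def reallocated_sectors(smart_text):
    """Return the raw value of SMART attribute 5, or None if absent."""
    for line in smart_text.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[0] == "5":
            return int(fields[-1])  # RAW_VALUE is the last column
    return None

print(reallocated_sectors(SAMPLE))  # -> 12
```

A rising raw value here is exactly the "spare blocks being consumed" the comment describes.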

The surface, the bad blocks - that is not really the problem here. Sure, it degrades and starts slowing down, and eventually bad blocks may happen. But far sooner the disk motor bearing will die from constant vibrations, the head mechanism bearing will fail, the seals will leak moisture inside (and the desiccant bag will reach its capacity), the lubricant supply for the bearings will run dry, the "emergency parking" mechanism of the head will get stuck, capacitors will die on the PCB, and so on...

Anyway, tolerances get smaller, meaning less room for error, and smaller amounts of wear cause faults. A disk that had a 2-year warranty used to be built so that it could work for 18 years +- 15 years. Now a disk with a 2-year warranty will work 3 years +- 6 months...

We've got hundreds of 500GB & 750GB Barracudas online. Annual failure rate is about 4%, but it did peak at 7.38% at one point.
Larger Seagates are the worst drives ever, starting from 1TB. Their sustained maximum contiguous read speed is only about 22MB/s, if you are lucky! (We've tried the ES.2 and Constellation only, if I recall right, both high-end drives; the Constellation meant for enterprise only, and the ES.2, if I recall right, was basically a 'cuda meant for RAID-required environments like video editing, for which their sustai

You won't be able to push any more than 18 gigabytes in a minute through SATA-II, and that's in theory. So theoretically one could read a 500 GB drive in ~28 minutes, but the drives just aren't anywhere near as fast. Then again, maybe your Barracua is many fold faster than Barracudas. I know my Sonny cassette player was faster than the one from Sony.
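The 18 GB/minute ceiling falls out of the link arithmetic: SATA-II runs a 3 Gbit/s line rate, and 8b/10b encoding leaves 80% of that for payload. A quick check:

```python
# SATA-II arithmetic behind the "18 GB per minute" ceiling.
# 3 Gbit/s line rate; 8b/10b encoding means 8 payload bits per 10 line bits.
line_rate_bits = 3 * 10**9
payload_bytes_s = line_rate_bits * 8 // 10 // 8   # 300,000,000 bytes/s
per_minute_gb = payload_bytes_s * 60 / 10**9      # 18.0 GB/min
minutes_for_500gb = 500 / per_minute_gb
print(per_minute_gb, round(minutes_for_500gb, 1))  # -> 18.0 27.8
```

So ~28 minutes is the hard interface floor for 500 GB over SATA-II, before the mechanics even enter the picture.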

You should try Sany. Way faster than even Sonny. The only problem I had with it is it would only read the cassette once and then you need a new player... and a new cassette.

The SAS 6G and SATA 3 (6Gbps) models of SSD go up to over 500GB now. Reading that in a few minutes is no big deal. Even the SATA II Intel 320 series [intel.com] does 600GB and sequential reads at 270 MB/s, which would be 600GB in 600,000/270 ≈ 2,222 seconds, or just over 37 minutes. My laptop has a better data rate, but I use off-brand components :-). This is no problem at all.

A spinning rust platter isn't ever going to dish that, but if this is a job you need done and you're willing to spend ten grand, I'll

That's rather limiting. There are PCIe attached solutions that consistently read/write at more than 6GB/s rather than 6Gb/s - like for example the ioDrive Octal [fusionio.com]. It can have far more storage than your limit - ten times as much on one card. That thing has a serious 48Gbps serial read bandwidth, sustained, and you can configure many PCs with eight or sixteen of them. This is only one of many. There are actually some applications that strain against the limitation of this bandwidth.

You have to use Western Digital Caviar Black 3.5" SATA 500GB hard drive (WD5002AALX).

Oh, but did the rules state the data read needs to be the same as the data written? Error free? Just grab the data from the cache without worrying whether the cache was filled correctly, and enjoy superior read speed!

I don't get it. 500GB in an hour would be about 140MB per second (yes, I am rounding up). Most of the enterprise level 15K drives are right in that range without any overclocking, with a couple well above that. Do I win ten grand for buying a Seagate Cheetah 15K.7 for $450 and bringing it in to show that it works?

It's about 132 MiB/s actually - remember, drive capacities are quoted in multiples of 1000, not 1024, so the ~139 MB/s decimal figure works out to ~132 MiB/s in binary units - and then some space is used by the file system.
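The 140-vs-132 discrepancy is purely a units question - same rate, stated in decimal megabytes versus binary mebibytes:

```python
# Same required rate for 500 GB (decimal) in one hour, in two unit systems.
drive_bytes = 500 * 10**9
rate = drive_bytes / 3600                     # bytes per second
print(f"{rate / 10**6:.1f} MB/s (decimal)")   # -> 138.9 MB/s
print(f"{rate / 2**20:.1f} MiB/s (binary)")   # -> 132.4 MiB/s
```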

Anyway, it's not clear what they want just from the description here on Slashdot. Read the labels of the drive? But seriously, one could get a 2 TB drive, or whatever drive has the most density these days, and make it show up as a 500GB drive... I believe it's called short stroking: http://www.tomshardware.com/reviews/short-stroking-hdd,2157.html [tomshardware.com]
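Short-stroking here would mean clipping the drive's visible LBA count down to the 500 GB target, so only the fast outer tracks are in play. A sketch of the arithmetic - on Linux the clipping itself can reportedly be done via the ATA Host Protected Area feature (`hdparm -N`), but treat that as an assumption to verify for your drive:

```python
# Short-stroking arithmetic: how many 512-byte sectors a larger drive
# must expose to present exactly the contest's 500 GB capacity.
target_bytes = 500_107_862_016   # 976,773,168 sectors x 512 bytes (spec sheet)
sector = 512
sectors_visible = target_bytes // sector
print(sectors_visible)  # -> 976773168
```

That sector count is what you would hand to the HPA-setting tool, leaving the rest of the platter invisible to the host.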

1. As used for storage capacity, one megabyte (MB) = one million bytes, one gigabyte (GB) = one billion bytes, and one terabyte (TB) = one trillion bytes.

So 500,107 MB = 500.107 GB

"Formatted capacity" has nothing to do with file system formatting; it refers to the host-accessible storage capacity of the drive, which is 976,773,168 sectors (also from that same document). The contest is to read all those sectors in under an hour. Sectors are 512 bytes each, so you need to read 500,10

Is that 15k RPM drive a "Western Digital Caviar Black 3.5" SATA 500GB hard drive (WD5002AALX)"? It's stated pretty clearly in the rules that it needs to be that model. I don't think they're going for a speed test here, because there are plenty of SSDs that blow that speed away. They're trying to take a "normal" drive and super-speed it, for forensic purposes.

So, upping the RPM obviously, but there must be various actuator settings in firmware that could be tweaked - safeguards, gain settings? What are the possibilities? I've never seen this done or even talked about, most people are "afraid" of hard drives, amazed they even work at all.

Well, overclockers take a "cheap" low-end processor (especially back in the C2D days - take a "Pentium Dual Core" and crank it up) and get 99% of the performance of a C2D EE chip costing many times as much. Why not figure out how to take a "cheap" 5400 RPM drive and crank it up?

Chip makers are known to sandbag their chips especially after a design is mature. They detune chips and sell them as low performance pieces to fill the market, and the performance is there for the taking with no real risk to t

Western Digital no longer publishes the internal organization for their drives, but 126 MB/s over 500 GB yields about 1 hour and 6 minutes to read the entire drive in the best case. It is proportionally longer, of course, for larger drives, since only one head can be read at a time and head switches require at least the same amount of time as an adjacent track seek.

Without physically raising the spindle speed, I do not believe it will be possible to lower the time to read the entire drive significantly. The sp
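The linear scaling in that "1 hour 6 minutes" figure can be checked directly - best-case full-surface read time is just capacity over sustained rate, since only one head reads at a time:

```python
# Best-case full-surface read time at a fixed 126 MB/s sustained rate,
# for a few capacities; read time grows linearly with capacity.
rate = 126 * 10**6  # bytes/s
for gb in (500, 1000, 2000):
    secs = gb * 10**9 / rate
    print(f"{gb} GB -> {secs / 60:.0f} min")  # 500 GB -> 66 min
```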

They specified a brand and model that had to be used (e.g. the net result would be an X% increase in speed). I think they also limited the hardware modifications you can do.
So this is a test to 'overclock' a hard drive, and I believe they mean it in the sense we do for CPU and memory - software/voltage/etc. More cooling would be okay, but not disassembly of the hard drive.

You could replace the drive firmware with a hacked one that changes error detection behavior,
changes the way the buffer/cache is used to optimize the drive for the contest's access pattern,
or kills any power saving features.

The other thing would be changing the characteristics of the drive's mounting to provide
insanely good vibration damping for maximal mechanical performance.

In all modern IDE/SATA drives, the firmware is stored on the platter, not in an EEPROM. And for most manufacturers, it's not field accessible. Plus there's zero documentation for the firmware / internal processor(s) outside of the manufacturer's labs (and maybe the company making the chips). Hacking the firmware is beyond the reach of anyone who would be wowed by a $10k prize.

It's not that mysterious. People mod DVD-ROM firmwares every day, and a HDD is just a DVD-ROM with magnetic media :) Plus, firmware IS "field accessible" - every HDD on the market can have its firmware updated by the end user.

Does it have to be a spinning platter drive? If not, some of the PCI-E SSDs can get over 1GB/s sequential reads which would easily put a 500GB read at under 10 minutes. Of course, you'd likely have to spend at least half of the $10k prize on the drive itself.

This is an attempt by a forensic company to crowd-source the development of a product on the cheap. If you can do this, you can make a fortune selling to the different LEAs around the world. But please don't do it; we do not need more efficient spooks.

This is another hidden benefit of Apple hardware that people don't readily consider.

Apple hardware is very hard to get in and out quickly, covertly, and without a few red flags being noticeable.

A couple of years ago (4+ now), when I sat in on Apple's instructor-led hardware certification labs, there was a small team of high-tech crime investigators from the Australian Federal Police and the Australian Attorney General's department attending.

They weren't interested in passing the test, they had absolutely *no*

Do your reading in a negative-gravity environment, so time has negative dilation and the data can be read at what seems to be a higher speed to an outside observer. Achieving a faster time frame is left as an exercise for the overclocker.

Since I can get to it (after a long delay), perhaps I would just post TFA itself:

Overclocking Competition

CPU overclocking is old school, and GPU overclocking isn't much newer. Memory overclocking? Been there done that. For all of you hardware modders looking for something else to let the white smoke out of, have we got a challenge for you! Hard drive overclocking! Why do you want to do this? Because you can! And, in these days of really big hard drives, getting data off the things can take a long ti

Exactly -- I always wondered why this was not done -- is it a limitation of the form factor? Why not have two arms? We already use multiple heads and multiple platters. It seems like you could double the performance, or at least allow minimal-cost error checking (single-disk-level mirroring?) with such a solution.

Dunno what you're talking about. Disks used to have multiple r/w arms. They also used to be the size of your desk. Putting another arm in the housing would only work if it was on the opposite side from the one that's there, but now your housing is 4 cm longer, and you've got extra wire causing latency and skew problems.

Head preamps are usually somewhere on the arm assembly, and they drive controlled impedance differential pairs, so an extra inch or two shouldn't be that big of a deal. Latency is not an issue at all, each arm would be controlled separately and they don't need to be synchronous at all.

If you look at it the right way (translation: I'm about to break a rule) it's done all the time. It's called RAID0.

But seriously, that tells you why it's not done: because if you really care about performance that much, you can get more performance than a multi-head-set drive and spend less money by using commodity parts. If you make a drive that works this way, no one will buy it. (Except for money-laundering purposes. ;-)

From the practical aspect: My U-Verse DVR had a not-very-special 2.5" drive in it, and was able to record four things at once while replaying a fifth in my not-special configuration at home. (I believe it can actually do more than that with multiple receivers networked to it, but I just had the single DVR box.)

Say you have a video recording application where you're writing a video stream to disk, and that (perhaps uncompressed?) stream is of such ungodly bandwidth as to take a significant chunk of your drive's throughput. One head's fine if your disk isn't ridiculously fragmented (which it won't be); you have RAM to buffer it while the drive seeks occasionally (e.g. past a file fragment to the next unallocated space), then it'll catch up. But now suppose you want to playback a timeshifted stream of this same bandwidth

That said, in the interest of pedantry: Unlike a striped RAID 0, a RAID 1 array of n+1 disks could conceivably perform as a single disk with multiple heads, since a RAID 1 of n+1 has n+1 worth of independent head stacks, all reading identical data. (Also in the interest of pedantry, n is 1 or greater, since otherwise it is perfectly possible to create a RAID 1 consisting of a single disk with none of this potential, even though it is neither redundant nor an array.)

Except that that's not true in the case the grandparent post described, where you have a stream being written at close to the maximum physical transfer rate of the disk, and then you want to read that same stream from the beginning while you continue to write to it.

With two heads on the same disk this should be possible: one head is busy writing, the other head is busy reading, with little seeking going on for either head, and the full bandwidth of each head is available for each of the concurrent streams.
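A toy model shows why time-sharing one head is so much worse than half bandwidth per stream - the seeks between the write position and the read position are pure overhead. All the numbers below are assumptions chosen for illustration, not measurements of any real drive:

```python
# Toy model (all parameters assumed): one head time-shared between a write
# stream and a read stream at different disk positions must seek back and
# forth; two independent heads each give a stream the full sustained rate.
sustained = 130.0   # MB/s per head (assumed)
seek = 0.015        # seconds per long seek between stream positions (assumed)
chunk = 8.0         # MB transferred per visit before seeking away (assumed)

# Single head: each chunk costs its transfer time plus one seek,
# and the two streams split the head's time evenly.
t_chunk = chunk / sustained + seek
single_head_per_stream = chunk / (2 * t_chunk)   # ~52 MB/s per stream

two_heads_per_stream = sustained                 # no contention: 130 MB/s each

print(f"single head: {single_head_per_stream:.1f} MB/s per stream")
print(f"two heads:   {two_heads_per_stream:.1f} MB/s per stream")
```

Under these assumptions each stream gets well under half the head's sustained rate, which is the gap the second head would close.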

I'm afraid the limitation is cost; few people are willing to pay twice the price for the same capacity... on obsolete technology. There were a few CD-ROM drives that used multiple lasers; then DVD came in and the projects didn't return their cost. So far the bus was never fast enough to guarantee doubling the speed of the fastest drives. If you want a faster HDD, get an SSD.

And as for home mods: 1) the precision involved is out of reach of any non-professional, 2) just think about writing the firmware to run

I have a better idea: one track, no more random access, like an LP record. Drop the head at the beginning and let it read a spiral inward. Have two 'side tracks' read by sub-heads on the same head chip as formatting to keep the main head in the groove. 100% ECC blocks; a 2TB drive becomes a 500GB drive at the same density. Now spin the drive at 50,000 rpm, drop the needle, and get back your 500GB in about five minutes.
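The five-minute claim roughly checks out under the comment's own assumptions, if you grant that transfer rate scales linearly with spindle speed. The 7200 rpm baseline rate below is an assumed round number, not a spec:

```python
# Rough check of the spiral-track idea. Assumptions: transfer rate scales
# linearly with spindle speed, and the 2TB-density platters sustain
# ~240 MB/s at 7200 rpm (assumed baseline, not a measured figure).
base_rpm, base_rate = 7200, 240 * 10**6
rpm = 50_000
rate = base_rate * rpm / base_rpm            # ~1.67 GB/s
minutes = 500 * 10**9 / rate / 60
print(f"{rate / 10**9:.2f} GB/s, {minutes:.1f} minutes")
```

Of course, sustaining 50,000 rpm is where the white smoke comes in.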

You can almost certainly overvolt an electric motor, unless it's already at its peak RPM rating. Especially true with brushless.

I would bet the motors used in HDDs would run fine for years even at 36V, assuming they are 12V - thus tripling the RPM. Just avoid stopping & starting the platters often (the highest peak of power use). The question is whether the platters can take it. HDDs are meant to work for years upon years, so they should be working at the rather low end of their potential capability, in terms of wear