Seagate hits 1 terabit per square inch, 60TB hard drives on their way

Seagate has demonstrated the first terabit-per-square-inch hard drive, almost doubling the areal density found in modern hard drives. Initially this will result in 6TB 3.5-inch desktop drives and 2TB 2.5-inch laptop drives, but eventually Seagate is promising up to 60TB and 20TB respectively.

To achieve such a huge leap in density, Seagate had to use a technology called heat-assisted magnetic recording (HAMR). Basically, the main issue that governs hard drive density is the size of each magnetic “bit.” These can only be made so small before the magnetism of nearby bits affects them. With HAMR, “high density” magnetic compounds that can withstand further miniaturization are used. The only problem is that these materials, such as iron platinum alloy or a sprinkling of table salt (really), are more stubborn when it comes to changing their magnetism (i.e. writing data) — but if you heat them first, that problem goes away.

HAMR, which was originally demonstrated by Fujitsu in 2006, adds a laser to the hard drive head. The head seeks as normal, but whenever it wants to write data the laser turns on (pictured below). Reading data is done in the conventional way. Just so you understand how small the magnetic bits are in a HAMR drive, one terabit per square inch equates to two million bits per linear inch; in other words, each site is just 12.7 nanometers long — or about a dozen atoms.
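A quick back-of-envelope check of those figures (a minimal Python sketch; the 2-million-bits-per-linear-inch value is the one quoted above, and the reason it exceeds the square root of the areal density is presumably that the bit cells aren't square, with tracks spaced wider than the bits are long):

```python
# Rough sanity check of the quoted bit length (illustrative only).
NM_PER_INCH = 25.4e6          # 1 inch = 25.4 million nanometres
bits_per_linear_inch = 2e6    # linear density quoted for 1 terabit per square inch

bit_length_nm = NM_PER_INCH / bits_per_linear_inch
print(f"Each bit is roughly {bit_length_nm:.1f} nm long")  # ~12.7 nm
```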

In theory, HAMR should allow for areal densities up to 10 terabits per square inch (magnetic bits just 1nm long!), and thus desktop hard drives in the 60TB range. Meanwhile, conventional perpendicular recording is expected to hit one terabit in the next few years, but the roadmap to greater densities isn’t very clear. There is no word on the cost of HAMR drives, or whether the addition of a laser will significantly impact power consumption or I/O performance.

The biggest winner from larger hard drives, of course, is cloud storage and computing — but then again, the other angle is that you’ll have so much local storage that the cloud seems a bit pointless, at least until we all have 100Mbps internet connections. Then again, with the unstoppable surge of smartphones, tablets, and flash memory, do mechanical hard drives really have a future in consumer electronics?

As capacities increase, so does the frequency of disk errors. One way to overcome this issue is with file systems like ZFS.

Anonymous

I am not sure about that. Do you remember the Deskstars that failed en masse around the millennium?

And Seagate was notoriously unreliable a few years ago too, but you don’t read much about that anymore.

Also, I read that some of the early MFM drives were very unreliable, like the HDD that was included in the IBM XT.

andrewi

In the most literal sense, 5-10% of all consumer hard drives have firmware errors that would cause them to mis-read or mis-write maybe 0.01% of the time. Now, NTFS and HFS+ etc. have metadata that helps them repair their file systems, but that becomes a bit ineffective when you have a 60TB HDD and it’s going to take you 3 weeks… much more importantly, it doesn’t fix anything that your metadata (info about your data, also stored on disk) missed.

ZFS is a completely different way of storing information, but one of the really interesting things it will bring forward is that ZFS can store parity on the same disk, basically allowing your HDD to RAID with itself by dividing your capacity in two, or three, or four, etc.

With 60TB on tap, a single drive could literally be impossible to corrupt.
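To put rough numbers on that “3 weeks” point above, here is a minimal sketch; the throughput figures are assumptions for illustration, not measurements:

```python
# How long does a single full pass over a drive take? (assumed throughputs)
def full_pass_hours(capacity_tb, throughput_mb_per_s):
    bytes_total = capacity_tb * 1e12
    seconds = bytes_total / (throughput_mb_per_s * 1e6)
    return seconds / 3600

print(full_pass_hours(60, 200))  # ~83 hours at a generous 200 MB/s sequential
print(full_pass_hours(60, 35))   # ~476 hours (about 3 weeks) at a metadata-heavy 35 MB/s
```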

wsxedc

Actually, when I wrote “I am not sure about that” I was referring to the belief that the frequency of disk errors increases with capacity.

http://michaelahale.com/ mikehale

I should not have used a generic term like “errors” when I was thinking about a mis-read/write. It is possible for the drive to continue to operate but silently corrupt small portions of data. It is, after all, only a physical device :)

wsxedc

In recent years the amount of data on HDDs has increased, but the bit error rate has also decreased, so the average number of read/write errors per HDD per year has stayed about the same. Fatal disk errors (i.e. errors which make the whole disk unreadable) have decreased, imho.

The frequency of errors related to physics does increase with capacity, you are right. But double-bit errors, or errors which simultaneously corrupt any one file AND its parity data… well, that’s damn near impossible.

Let me clarify. Parity data is like information about information. It allows your drive to repair itself, as if it had a backup, in the event of data corruption, even the ‘silent corruption’ or ‘bit rot’ that nearly all file systems today won’t pick up.
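A toy illustration of that principle in Python (this is not ZFS’s actual on-disk format, just the idea of keeping a checksum plus a redundant copy and using the checksum to decide which copy to trust):

```python
import hashlib

def store(block: bytes):
    """Keep two copies of a block plus a checksum: 'information about information'."""
    return {"copy_a": block, "copy_b": block,
            "checksum": hashlib.sha256(block).hexdigest()}

def read(record):
    """Return whichever copy still matches the checksum, healing silent corruption."""
    for key in ("copy_a", "copy_b"):
        if hashlib.sha256(record[key]).hexdigest() == record["checksum"]:
            good = record[key]
            record["copy_a"] = record["copy_b"] = good  # repair the bad copy in place
            return good
    raise IOError("both copies corrupt; unrecoverable without a backup")

rec = store(b"important data")
rec["copy_a"] = b"important dat\x00"   # simulate bit rot in one copy
assert read(rec) == b"important data"  # still readable, and silently repaired
```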

wsxedc

It is true that the bit-error rate is reduced dramatically if you use ECC; according to the English Wikipedia it is reduced by a factor of about 10^7 for ECC DRAM compared with regular DRAM. I don’t know how much impact it has on HDDs (every HDD has ECC as well). However, the number of bit errors is still significant for some applications.

And of course there is the possibility that the disk fails entirely.

andrewi

ECC is for transmission, not storage. DRAM benefits because it is self-refreshing (and thus is always transmitting to itself), but once the data is on an HDD, without parity data there is no error checking that can detect if a neutron hits a bit and flips it. ECC checksums/metadata are not (and never will be) stored; it is simply not the purpose of the tech.

So when data is read off the disk, it will get to the rest of the PC without being corrupted thanks to ECC, but if it is already corrupted on disk, then you need ZFS or parity data via RAID or a backup comparison to rectify that.

wsxedc

HDDs also have some kind of parity data. It is sometimes called ECC; I don’t know if that is the correct term, but I am sure that they have parity data stored on the disk.

andrewi

Parity data is stored in file systems, and differs from file system to file system. FAT (16 and 32) and exFAT have no parity data. NTFS and ext have a lot; HFS has very little (mostly just parity of the file update history, aka journalling, which is weak on its own). Unfortunately, none of these have anywhere near enough for modern HDD capacities. NTFS, for example, was last majorly updated in its 3.0 variant, in 1999, when the best HDD money could buy was only 20GB. Seagate just released a 10TB monster. 500x the space means that the sizes of files and file systems NTFS was built to recover are now hopelessly small compared to current partitions, and drives are orders of magnitude more prone to bit rot and, imo, less reliable. Parity data actually on the HDD would come in the form of RAID data and maybe cache. That’s about it.

ZFS, however, is simple in its effectiveness. It has parity on every single level, to a 1:1 recovery standard up to root, so effectively a copy of everything exists on the drive. You pay for it with a loss in capacity, but a 60TB HDD fixes that wonderfully.

wsxedc

Actually, all modern HDDs create parity data on their own, independently of the file system. This parity data is not visible to the user or the operating system, because it is created at firmware level. This is the reason why HDD manufacturers use 4 KB sectors (usually with 512-byte emulation) even on HDDs with less than 2 TB: with 4 KB sectors it is possible to achieve the same amount of parity protection with less overhead. It is also the reason why the S.M.A.R.T. attribute 187 is called “Reported Uncorrectable Errors”: it only counts the errors that are not correctable by ECC. Some HDDs also report attribute 195, “Hardware ECC Recovered”.
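To illustrate what that on-disk ECC buys you, here is a toy single-error-correcting Hamming(7,4) code in Python. Real drives use far stronger codes over whole sectors (typically Reed–Solomon or, in newer drives, LDPC), so this shows only the principle, not the actual scheme:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword (parity bits at positions 1, 2, 4)."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p4 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p4, d[1], d[2], d[3]]

def hamming74_correct(c):
    """Locate and fix a single flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4   # 0 = clean, otherwise the 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                                    # flip one stored bit
assert hamming74_correct(word) == [1, 0, 1, 1]  # corrected transparently
```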

David Lean

“Impossible to corrupt”. Yeah, right. Our data center has a lot of drives, and we work closely with people who excel in recovering data in a clean room. A head crash is a very common way for a drive to fail, and when it happens, little fragments of disk bounce all over the platter, obliterating every track.

1. You should plan to lose the entire disk. Total destruction (“corruption”) is very possible, regardless of the file system you use.
2. Twice in the past 12 months, I’ve seen both disks in a RAID mirrored pair die within 3 weeks of each other. I’m not clear whether it’s because they were both purchased at the same time and were likely from the same manufacturing batch, or because some vibration or movement jarred them both. Unlike 2.5″ drives, 3.5″ drives usually do not protect themselves against movement while they are running. (Some SAN-optimised drives do have some protection against vibration.)

andrewi

Okay, more specifically: impossible to corrupt due to bit rot. We are obviously discounting physical issues here; no software will fix those now, and likely never will.

60 Tb Internal is pretty big. Mind you I said that about 1.44 Mb floppies.

http://www.mrseb.co.uk Sebastian Anthony

TB and MB! Small b denotes ‘bits’.

(One of my pet peeves. Don’t even get me started on people who use MBps…)

Anonymous

I’d prefer it if they were counted in nibbles – but alas!

Anonymous

By convention, capital letters for units are reserved for those named after people: V for volt, Hz for hertz. Small letters are used for units not named after people, such as m for metre. B is used for the bel, after Alexander Graham Bell, and designates the logarithm of a power or voltage ratio in electronics. A small b should be used for bytes, and as for bits, spell them out; it is simpler.

http://www.mrseb.co.uk Sebastian Anthony

That sounds very olde worlde. In modern high technology, B is the convention for bytes and b is the convention for bits.

Didn’t know about ‘Bel’ though — thanks for commenting!

Anonymous

Not sure what you mean by modern high-tech. Electrical engineers use decibels (dB) every day, especially in radio-frequency engineering and circuit design. I see B used as an abbreviation for byte all the time, but usually as kB, MB, GB, rarely alone. In this context, confusing MB with dB is likely not an issue.

http://www.mrseb.co.uk Sebastian Anthony

Ahhh! deciBEL — ha, I never realised. Cool.

Sander Kamp

Megabel and decibytes :P

Anonymous

This might be true for SI units, but bit and byte are non-SI units. SI covers only physical units, not information units.

brunnegd

Units are units, regardless of usage.

wsxedc

Yes, but bits and bytes are not SI units, so they don’t have to follow SI conventions and can also use the same letters. There is also no scenario I am aware of where bytes and bels could be confused.

Saut Daniel Goeltom

Drives that big will need matching adaptations elsewhere in the system: bigger caches, a faster front-side bus, faster motherboard, memory, CPU, etc.!

Well, it’s not really about the size of atoms — but more about the bonds between them. It’s about 0.5nm between atoms of silicon, for example.

(Unless I have it completely wrong — in which case, please let me know.)

Madis Lõhmus

Well, monocrystalline silicon has a diamond cubic structure, and yes, the lattice spacing is ~0.5 nm, but I don’t think this means the distance between atoms is 0.5 nm. The 0.5 nm is measured between the two outermost atoms in the unit cell, and the cell contains more than just 2 atoms.
Also, I don’t know the structure of the iron platinum alloy (if this is the material used in the hard drive), but a similar train of thought should apply here.
Anyway, maybe I’m too used to the notation that 0.1 nm is comparable to the distance between atoms (and maybe wrongly so), but it just seemed wrong that 12 atoms put together would span 12 nm. The true value is probably somewhere between 0.1 and 1 nm.

http://www.mrseb.co.uk Sebastian Anthony

Cool — thanks for the input :)

Anonymous

Both 0.1 nm and 1 nm are unrealistic for solid or liquid matter; solid materials usually have an average atomic distance of about 0.2 nm to 0.3 nm. It is actually not that difficult to calculate if you know the molar volume: ((molar volume in m³/mol)/(6.022E23 × number of atoms per molecule))^(1/3), where 6.022E23 is the Avogadro constant. Silicon has a molar volume of 12.06E-6 m³/mol, and the average distance between its atoms is about 0.27 nm. If you don’t know the molar volume of some material, you can get it from molar mass/density; e.g. water has a molar mass of 0.0180153 kg/mol (which is normally written as 18.0153 g/mol).

I could continue this forever, but from what I know all solid materials have an average atomic distance between 0.2 nm and 0.3 nm, so the correct number of atoms would be somewhere between 42 and 64.
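The same estimate as a short Python sketch, using the molar-volume formula from the comment above (the silicon and water figures are the ones quoted there; the water line is just for comparison):

```python
AVOGADRO = 6.022e23  # atoms (or molecules) per mole

def mean_atomic_spacing_nm(molar_volume_m3, atoms_per_molecule=1):
    """Cube root of the volume per atom, i.e. the average distance between atoms."""
    volume_per_atom = molar_volume_m3 / (AVOGADRO * atoms_per_molecule)
    return volume_per_atom ** (1 / 3) * 1e9   # metres -> nanometres

print(mean_atomic_spacing_nm(12.06e-6))             # silicon: ~0.27 nm
print(mean_atomic_spacing_nm(0.0180153 / 1000, 3))  # water (18 g/mol, 1000 kg/m^3), 3 atoms: ~0.22 nm
```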

http://www.mrseb.co.uk Sebastian Anthony

Well, awesome — thanks for that :)

I should’ve actually known about the calculation from molar volume — I did chemistry at high school…

http://www.facebook.com/people/Boggle-Smith/1321171114 Boggle Smith

Also, shouldn’t 1 terabit per square inch equate to 1 million bits per linear inch, or am I making some sort of schoolboy error here?

andrewi

Approximately. For a decimal terabit (10^12 bits) and square bit cells it would be exactly 1,000,000; if you take a terabit as 2^40 bits, it would be 1,048,576.

http://screenlight.tv/ Marc Tremblay

I wonder if one of the cloud storage providers will eventually buy one of the storage suppliers. Not sure how much of a consumer market will remain as consumer data is increasingly stored in the cloud.

rickcain2320

What the home user really needs is a way to back that data up to tape. Have you priced high-density tape backup systems?!?

http://twitter.com/patricklaughner Patrick Laughner

Awesome! Only one problem. ISPs are systematically killing cloud computing.

Anonymous

Yay! Any chance we’ll see the platters spin faster than 7200 rpm or 10k? It currently takes about 18 hours to run ddrescue on a 1 TB drive with no errors, granted there are several re-reads per write. Simple scans like antivirus/malware scans are all-day affairs on nearly full 500 GB 5400 rpm drives. Hard drives don’t need to get bigger; Seagate really needs to dedicate its R&D to commercialization of holographic storage.

Perhaps the solution is to get more heads on the platters. It might even make sense, since the record of a file’s structure/location is separate from the file itself anyway, so at present the head has to keep swapping back and forth between at least those two areas just to read or write any single file.

Anonymous

Personally, I’d love a few of these to intelligently archive and version my solid-state devices, as well as serve as a central location for media that I’ll pull down over these 100Mbit pipes.

Anonymous

That’s pretty sick dude, I can hardly wait. I mean like wow.

Anonymous

The cloud has always seemed pointless. A security breach waiting to happen.

Robert Birnie

At current drive speeds it’d take, what, a week to write to it? We don’t need any more capacity; we need faster I/O. For servers it’d be a million times more valuable to have 60 1TB drives than one 60TB drive. Even if you are running an SSD front end, it’d take so long to sync to a 60TB disk that it wouldn’t be worth it.

http://nicoburns.com/ Nico Burns

Higher data density in hard drives tends to mean faster read/write speeds, as well as more capacity :) (because if it spins at the same speed and the data is closer together, then it can read/write to it quicker)
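A rough sketch of that scaling, assuming the spindle speed and platter count stay the same and the extra density is spread evenly along and across tracks (so bits-per-track, and hence sequential throughput, grows with the square root of the areal-density gain); the 150 MB/s baseline is an assumption for illustration:

```python
def sequential_speedup(areal_density_ratio):
    """At fixed RPM, sequential throughput scales with linear density,
    which grows roughly as the square root of the areal-density gain."""
    return areal_density_ratio ** 0.5

base_mb_per_s = 150  # assumed throughput of a current-generation drive
for ratio in (2, 10, 60):
    print(f"{ratio}x density -> ~{base_mb_per_s * sequential_speedup(ratio):.0f} MB/s")
# 2x -> ~212 MB/s, 10x -> ~474 MB/s, 60x -> ~1162 MB/s
```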

andrewi

Theoretically, if we can get 50-60MB/s on a 1TB drive today, then a 60TB drive with no bottlenecks (in the other hardware, like controllers etc.) could pull about 3.2GB/s sequential read. Even if the laser makes it half as fast at writing, it would run rings around SSDs easily. Random reads, however, will be largely unaffected, and that may mean SSDs become a very specialist niche.

Imran Comot

It’s going to be a problem at spiral write right…

Robert Burnham

My Mini-HTPC is ready and waiting for a 2TB laptop-sized HDD. Seagate, shut up and take my money!

Anonymous

About flash memory: I doubt it will replace HDDs. It seems like flash has reached a standstill; I haven’t observed any noticeable changes in density or price for SSDs and flash memory in general for at least a year. The problem with flash, imho, is that you need special floating-gate transistors to store the bits, and these transistors were shrinking much faster than Moore’s Law until they reached something like 19 nm about a year ago, and since then nothing has changed; maybe flash has already reached a physical limit. See http://en.wikipedia.org/wiki/Flash_memory#Flash_scalability

http://pulse.yahoo.com/_PJW34OIGTAA6X6V7LSAQSJQU3I LMF

“or whether the addition of a laser will significantly impact power consumption”

I find it odd that people are concerned with the power consumption of a hard drive (which is in the range of 2W for laptops and 10W for desktops), when the CPU, graphics card and display consume 30 to 130W each (depending on how high-end they are). Bottom line: the power consumption of the hard disk is too insignificant to make a noticeable difference.

For instance, my laptop has a 50Wh battery and runs for 3 hours (that’s 16.66W average). Turn off the 2-watt hard disk (to lower consumption to 14.66W) and battery life is extended to just 50/14.66 = 3.41 hours, or an extra 0.41 hours (24.6 minutes). Not much.

So insignificant, in fact, that earlier this year one of the major HDD companies (not sure if it was Seagate or WD) announced that it was no longer making “green” (low-power-consumption) drives because it just wasn’t worth it.
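Re-running that arithmetic as a tiny sketch (all figures are the ones assumed above):

```python
battery_wh, runtime_h, hdd_w = 50, 3.0, 2.0
avg_draw_w = battery_wh / runtime_h                  # ~16.7 W average draw
runtime_without_hdd_h = battery_wh / (avg_draw_w - hdd_w)
extra_minutes = (runtime_without_hdd_h - runtime_h) * 60
print(round(extra_minutes, 1), "extra minutes")      # ~24.6
```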

Anonymous

Seagate referred only to the desktop market in this announcement. Laptop HDDs use less power in general, and the difference between 204.6 minutes and 180 minutes is a 13.7% increase, which is significant. And Seagate still makes 5400 rpm laptop drives, see http://geizhals.at/eu/?cat=hd2s7&xf=957_Seagate

But I don’t think the laser will use enough power to make a significant impact, because the area it has to heat is quite small, and as far as I know it isn’t heated as much as the surface of an optical disc during burning, so no powerful laser is needed. I guess a few mW would be enough, so even considering that the efficiency of a diode laser is about 25%-50%, I don’t think it will be significant.

http://www.sfbayarealowcostdatarecovery.com/ Data Recovery

While the capacity is a great thing, it’s also a little scary coming from the data recovery industry, because newer, higher-capacity drives tend to be a lot less forgiving and more difficult to recover when they suffer a physical failure. Backing up and keeping the data in two different locations will be even more important than ever.

Jeremy Rosenblad

Pfft. Pulls out a Quantum Bigfoot from his pocket, blows the dust off, and blows the 3.5-inch out of the water in storage capacity. Just think of what a MODERN 5.25 could do with today’s tech… Between ~’91 and ’99 the QBF went from 1.2 gigabytes to a whopping 19.2. Yeah, it looks like it’s not a lot, but at the time it was. I am still trying to find the time to run the numbers and calc just how much you could hold on a “new” one today…

valentyn0

The future technology for storage (the end of HDDs and SSDs) will be crystal-based storage! CPUs already profit from it (in universities only)!

William Libbrecht

You work in the department of redundancy department, don’t you?

http://www.facebook.com/profile.php?id=100000541294048 Paul Proctor

If only they could do this in 3d… 125000tb data cubes anyone? :)

Harry_Wild

It’s about time! Smartphones need this badly. 60TB is just the perfect size for my new smartphone. With this size, I don’t have to worry about having any microSD card at all! It’s the perfect size for a smartphone.

Lewis Mason-Hathaway Powers

Really hope you’re joking. We’re still faffing around at 128GB at best.

Taeil

I guess there’s never enough room for porn.

abcdefgqwerty

It’s nice, but hard drives seem to be pretty unreliable as it is.

kiss peter

Oh really?
The cheap ones have a 5-year lifespan under heavy use, I can confirm that.
I read that SSDs are generally less reliable than HDDs.
MicroSD has low capacity and low speeds.
So tell me, on what would you like to store your data? On high-density magnetic tapes? But these are not commercialized, and unavailable to 99.999% of consumers.
And on what else can you store (large amounts of data)? Pubes? With heat-assisted pube recording??

Matt

Who would actually need a 60TB hard drive? 60TB is around 60,000GB; that’s a lot to use.

http://www.data-medics.com Data Recovery Tec

I think the first picture of this article is the most telling thing about where Seagate is headed. The drive’s read/write head is bent sideways and there is a deep gouge on the platter. LOL.

That’s the true reality of Seagate quality, alright.

Fabrice

Given that SSDs have now reached 6TB for 2.5-inch drives and are expected to double their capacity every 12 to 18 months, isn’t it pointless to spend all that research on laser HDDs that will soon be outpaced anyway? Not to mention that SSDs are much faster, more reliable, more durable and more energy efficient.
