Researchers add a dash of salt to hard drives for capacities up to 18TB

Running out of disk space for your movies and music? There's good news from Singapore. Researchers at the Institute of Materials Research and Engineering have found a way to increase the density of hard disk storage by six times over current drives, all thanks to salt.

While he was a graduate student at MIT, IMRE's Dr. Joel Yang developed a new electron-beam lithography process that uses sodium chloride to enhance the developer solution. He and his research team at IMRE, in collaboration with researchers from the National University of Singapore and the Agency for Science, Technology, and Research's Data Storage Institute, have refined the process and have been able to fabricate magnetic storage media with a density of 3.3 terabits per square inch.

Yang's approach is based on bit-patterned recording (BPR), which uses a disk surface divided into discrete magnetic clusters, or "islands," to keep data written to one bit of storage from bleeding into another through superparamagnetic effects. The increased density isn't because the process generates smaller magnetic grains on the disk surface. Instead, the sodium chloride allows for more efficient distribution of them through “nanopatterning,” packing grains together in 10-nanometer clusters that form each bit. “What we have shown is that bits can be patterned more densely together by reducing the number of processing steps,” Dr. Yang said in a statement published by IMRE.

The new method also eliminates some of the usual manufacturing processes associated with creating disk platters. In the abstract of the paper Yang and his team published on the results, he wrote, “By avoiding pattern transfer processes such as etching and liftoff that inherently reduce pattern fidelity, the resolution of the final pattern was kept close to that of the lithographic step.”

Perhaps the biggest advantage of Yang's approach is that it uses the same sort of equipment and technology currently used to create disk media. Other efforts to improve magnetic storage density, such as thermally-assisted magnetic recording (also known as heat-assisted magnetic recording, or HAMR) and nano-contact magnetic resistance, can in theory achieve much higher disk densities, but they require new manufacturing equipment and are consequently much more expensive to produce.
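To put the 3.3 terabits per square inch figure in context, a back-of-the-envelope estimate shows how it translates to a drive in the 18TB class. The platter geometry below is purely illustrative (it is not from the paper): a 3.5-inch drive's usable recording surface is assumed to be an annulus between roughly 0.6 and 1.7 inches in radius.

```python
from math import pi

AREAL_DENSITY_TBPSI = 3.3          # terabits per square inch (IMRE figure)
OUTER_R_IN, INNER_R_IN = 1.7, 0.6  # assumed usable annulus on a 3.5" platter

def drive_capacity_tb(platters: int = 3, surfaces_per_platter: int = 2) -> float:
    """Rough drive capacity estimate in terabytes."""
    area_sq_in = pi * (OUTER_R_IN**2 - INNER_R_IN**2)  # recording area per surface
    terabits = AREAL_DENSITY_TBPSI * area_sq_in * surfaces_per_platter * platters
    return terabits / 8                                # terabits -> terabytes

print(round(drive_capacity_tb(), 1))
```

With three platters and two recorded surfaces each, the estimate lands in the high-teens of terabytes, consistent with the headline figure; real capacities depend on formatting overhead and actual platter dimensions.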

76 Reader Comments

so, not only are they going to get bigger, they're going to get even cheaper. more of the same (which is good!)

i always wonder if these researchers have drives in their personal pc's with this kind of density. does Yang have an 18TB drive held together with duct-tape and dreams installed in his opensolaris nas that he watches his blu-ray rips from in the lab? i sure hope so.

At some point you need to ask if there is any consumer use for a drive such as this.

Even if there isn't, I'm sure data centers would snap them up. SAS never really caught on in the "consumer" space either. Imagine being able to cut the number of big storage boxes in your farm to a fifth, while still having enormous redundancy.

Man, if salt alone can do that imagine what will happen when we have the technology to integrate bacon into our hard drives...

Ding ding! We have a winner!

Back to serious now. I love these stories about seemingly common things turning into great scientific discoveries. Who would have ever thought common table salt could be the catalyst for a sixfold increase in storage space?

Great, since NTFS can only handle partitions of up to 2TB I get to have 9 partitions on one drive.

That's entirely incorrect.

It's a limitation of traditional BIOS and Master Boot Record volumes. The 2TB limit is only for a bootable volume using MBR. NTFS supports GPT volumes up to something like 256TB, but you can only boot Windows from a GPT volume on a machine with EFI. If it's not a bootable volume, you don't have to have an EFI-based machine to use it.
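A quick sketch of where that 2TB MBR ceiling comes from, assuming the classic 512-byte logical sector (drives with 4KB logical sectors raise the limit proportionally):

```python
SECTOR_BYTES = 512        # classic logical sector size assumed here
MBR_MAX_SECTORS = 2**32   # MBR stores sector addresses as 32-bit LBAs

mbr_limit_bytes = MBR_MAX_SECTORS * SECTOR_BYTES
print(mbr_limit_bytes / 2**40, "TiB")  # 2.0 TiB
```

A 32-bit sector count times 512 bytes works out to exactly 2 TiB, which is why the boundary bites precisely where it does.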

Great, since NTFS can only handle partitions of up to 2TB I get to have 9 partitions on one drive.

You're confusing an MBR partition table limitation (solved by using GUID partition tables, or "GPT") with a file system limitation. The file systems in OS X, BSD, Linux, and the like are subject to the exact same limitation.

The NTFS file system (excuse the redundancy) is limited to a volume size of about 256TB (see http://en.wikipedia.org/wiki/NTFS#Limitations), at least with respect to Windows XP and Windows Server Standard 2008. If you peruse some of the relevant forums at Ars, HardOCP, AVSforum, and the like, you will routinely see RAID-hosted NTFS volumes of 16TB or more, where you must manually specify a >4KB cluster size in order to enable the larger volume size limit.
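The cluster-size dependence mentioned above falls out of NTFS addressing clusters with 32-bit numbers, so the volume ceiling scales with cluster size. A rough sketch (actual per-Windows-version caps may differ):

```python
MAX_CLUSTERS = 2**32 - 1   # NTFS cluster numbers are effectively 32-bit

def ntfs_max_volume_tib(cluster_bytes: int) -> float:
    """Approximate NTFS volume ceiling for a given cluster size."""
    return MAX_CLUSTERS * cluster_bytes / 2**40

for kb in (4, 8, 16, 32, 64):
    print(f"{kb:2d}KB clusters -> ~{ntfs_max_volume_tib(kb * 1024):.0f} TiB")
```

At the default 4KB cluster size the ceiling is about 16 TiB, which is exactly why the big RAID volumes in those forum threads need a larger cluster size; 64KB clusters push the limit to roughly 256 TiB.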

I hope that, in addition to OMGHUEG drives, this will help make 15k and 10k rpm drives in the 1-5TB range possible - that would make a huge difference in being able to get higher performance out of storage (arrays) without having to go to a hybrid or full SSD approach.

My personal wish is that they'd stop trying to focus on making hard drives bigger, and make them FASTER.

SSDs are great and all, but they have a huge cost-per-capacity problem if you need to buy anything bigger than a few hundred gigabytes - for example, for some ESXi servers.

jdietz wrote: "At some point you need to ask if there is any consumer use for a drive such as this. That said, I would like to get a 4TB drive when they are out in mid 2012."

We have been saying this for long enough to know we never have too much space. Once we start saving Full HD 3D movies along with games (Starcraft 2 for example takes about 10 GB and it is not the most demanding game out there) the space starts to run out pretty fast. Add the multi-megapixel 3D images that will come to consumer cameras and software occupying ever more space. Then, after that, factor in the next move, which is 4K video, then factor in 4K 3D video. Trust humanity on this one, if we have space, we'll figure out how to fill it.

Is NTFS getting checksums (ZFS-esque) soon? Otherwise, the silent corruption is going to be huge! (I keep thinking that around 6TB is where it becomes a problem for unRAIDed devices, which RAID 5 sees earlier for some reason. Can't find verification for that, so it's probably me misremembering someone talking out their arse.)

Sean Gallagher / Sean is Ars Technica's IT Editor. A former Navy officer, systems administrator, and network systems integrator with 20 years of IT journalism experience, he lives and works in Baltimore, Maryland.