Intel’s 3D NAND has 32 layers and 256Gb per die

During an investor webcast this afternoon, Intel revealed that it will offer SSDs based on 3D NAND in the second half of next year. The three-dimensional tech is the product of the firm's joint flash venture with Micron. It stacks 32 planar layers to deliver 256Gb (32GB) of storage in a single MLC die. The 3D NAND can also be packed with three bits per cell to hit 384Gb (48GB) per die.
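The TLC figure follows directly from the bits-per-cell arithmetic. A quick sanity check, assuming a 32-layer array with roughly four billion cells per layer (taken here as exactly 2^32, which is an assumption consistent with the announced figures):

```python
# Per-die capacity scales with bits per cell: the same physical array of
# cells stores 2 bits/cell (MLC) or 3 bits/cell (TLC).
cells = 32 * 4 * 2**30          # 32 layers x ~4 billion cells per layer

mlc_gbit = cells * 2 // 2**30   # MLC: 2 bits per cell
tlc_gbit = cells * 3 // 2**30   # TLC: 3 bits per cell

print(mlc_gbit, mlc_gbit // 8)  # 256 Gb -> 32 GB
print(tlc_gbit, tlc_gbit // 8)  # 384 Gb -> 48 GB
```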

According to Rob Crooke, senior VP and general manager of Intel's non-volatile memory group, the 3D NAND will enable 10TB SSDs "within the next couple of years." The technology can also squeeze 1TB of storage into a mobile-friendly form factor just two millimeters thick.

Crooke characterized the 3D tech as having a "breakthrough cost" but didn't provide more specifics. He did, however, suggest that Intel may not manufacture the 3D NAND itself. "We have the ability," Crooke said, but Intel will only bring production in-house "if it makes sense." Benefiting from the cost breakthrough presumably requires a substantial capital investment up front.

The 3D NAND's planar layers are built with a coarser fabrication process than the latest and greatest 2D flash. Those layers are pierced by four billion "pillars" that run vertically through the die, but Intel isn't ready to disclose specifics about the underlying process geometry.

Crooke effectively demoed working silicon by running his presentation off a prototype drive. Intel hasn't decided which market segment will get the first taste, though. Datacenters, corporate clients, and PC enthusiasts top the list.

Samsung's 32-layer V-NAND has 86Gb per die in MLC mode and 128Gb in a TLC config, giving Intel and Micron a substantial density advantage—at least per die. That said, a new generation of V-NAND should be ready by the second half of 2015. It will be interesting to see how those chips stack up against the 256Gb monsters Intel has cooked up with Micron.


Intel’s just stacking thinned die with TSV, right? That means there are no cost savings – it's probably *more* expensive, because handling is more delicate and testing would have to be quite stringent.

It would be very interesting if someone found a way to build chip-on-chip using epitaxy in some way. Reliability would be a problem, but probably workable (via ECC/spares)

kamikaziechameleon

5 years ago

Cost per gigabyte decline is quite exciting 🙂

internetsandman

5 years ago

Waiting for quantum storage at 1TB per die

Pwnstar

5 years ago

Can you wait 15 years?

BIF

5 years ago

Happily.

Jason181

5 years ago

[quote

jjj

5 years ago

You just repeat the marketing BS with no filter.

“stacks 32 planar layers” – this suggests that they just stack normal layers of 2D NAND, and while there is almost no info on what Micron will do, that’s not what everybody else does, is it?

“substantial density advantage” – based on what? Do you even know what process the Intel chip is on? Then you compare a shipping product with a next-gen product; compare apples to apples and go ask Samsung what their 2015 product is like.

The 1TB in 2mm and 10TB SSD claims are magic statements (for fools). You can do that now easily. The problem is the cost.
In their presentation Intel said 2x bits per die, and that would suggest the density gains are similar to a full shrink, not more. If you factor in that they might be comparing it with their own 16nm dies, and that those dies are not very efficient, then 2x is not that great at all.

godforsaken

5 years ago

“substantial density advantage”… based on what? Really? So it can’t be based on the fact that a single chip holds more than three times the amount of flash memory as Samsung’s layered chip?

Hattig

5 years ago

The area of the die isn’t given.

Samsung’s V-NAND dies are small, so whilst they “only” have 86Gbit per 32-layer die, they do it on a small die, on a cheap process.

Intel aren’t really giving out this information, so for now, we have to treat their claims carefully.

The underlying principles are the same I imagine, a single die going through multiple exposure cycles with polishing to level the top of the die after each cycle, so they can get 32-layers exposed.

Samsung are aiming for many many more layers in the future. Samsung are using a cell technology that is very reliable and long-lived, even in a TLC configuration. Samsung detailed it all to the media and lots of articles were written at the time. Intel’s tech is marketing so far.

MadManOriginal

5 years ago

There’s nothing stopping Samsung from putting out press releases. Or if they had the industry clout of Intel, they could host things like IDF.

Pwnstar

5 years ago

[quote

Jason181

5 years ago

Perhaps controllers will increase the number of channels when they come up against an interface they can’t already saturate. Most of the size increase will of course be due to increasing NAND density. I very much hope these 3D NAND dies make SSDs more affordable (I have one, but it’s small compared to storage drives).

just brew it!

5 years ago

Based on available info it sounds like it is identical (or at least very similar) to what Samsung does for their V-NAND.

The “substantial density advantage” is per chip package. Since the layers are microscopically thin, the more layers you can stack the denser you are per chip package since you only increase the size of the chip package marginally.

Hitting 1TB with 128Gb devices requires 64 chips. Doesn’t sound all that practical to me for a 2mm mobile device.
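The chip counts fall straight out of the arithmetic; a quick sketch comparing today's 128Gb dies against the announced 384Gb TLC dies:

```python
# How many dies does 1TB of flash take? 1 TB = 1024 GB = 8192 Gbit.
target_gbit = 1 * 1024 * 8

chips_128 = target_gbit // 128        # with 128Gb TLC V-NAND dies
chips_384 = -(-target_gbit // 384)    # with 384Gb TLC 3D NAND dies (round up)

print(chips_128, chips_384)  # 64 vs 22
```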

Ushio01

5 years ago

With the problems all the semiconductor fabs seem to be having with any sub-14nm process, will the diversion of development money to 3D NAND and DRAM, which allow higher capacities on older process nodes, slow the CPU/GPU side even further?

Since even if the fabs themselves are different, the companies that manufacture all the sub-components of the fabs will have fewer potential customers?

Also, can this 3D layering of transistors be used on GPUs and CPUs? Or is it only viable with simpler NAND and DRAM?

MEATLOAF2

5 years ago

I’m pretty sure they can stack CPU transistors in layers, similar to what they are doing with the NAND (after some R&D of course), but you would probably run into some serious heat problems. (I vaguely remember reading up on this, and heat appeared to be the limiting factor).

Assuming that heat is the main concern, I’d be interested to see something like taking the parts of the die that don’t produce much heat, and layering them on top of, or below, the cores themselves. That could leave room for larger cores, or reduce cache latency if the cache is directly above the core that is accessing it. It could lead to some interesting chip design.

UnfriendlyFire

5 years ago

OR, Intel could implement IBM’s concept of having micro-pipes running between the stacks for water cooling. Aka, directly water cooling the silicon.

They could also mount the silicon stacks vertically and place copper strips or ultra-thin heatpipes between the stacks.

MadManOriginal

5 years ago

We might see dies with heatsinking on both sides. Intel is already part way there with the Broadwell Y series packaging.

MEATLOAF2

5 years ago

I like the vertical stacking idea. I already read about the micro pipes, and there were even diagrams. I would love to know how far along they are on that now; it looked promising.

For me, the future of CPUs depends on being able to stack them. We are already approaching the limit in terms of how small we can shrink them. I’d love to see not just “we stacked 2 CPUs on top of each other,” but a new way of designing CPUs that takes advantage of all the extra space, both vertical and horizontal, that stacking enables.

They could for example have all the I/O stuff on one layer, cache on another, cores on another, and placing them strategically within their layers to reduce latency, or increase I/O bandwidth with the extra space available.

eofpi

5 years ago

Caches do indeed draw little power for their die area. It would be great if they could be stacked under the power-hungry parts.

It’ll take some engineering work to deal with the higher power density, and probably the return of soldered heatspreaders, but the higher chip count per wafer should still make it cheaper to produce.

bwcbiz

5 years ago

The big issue with doing this for CPUs is heat dissipation. CPUs get plenty hot already with a single layer of circuitry. Put multiple layers of circuits on a CPU and you have that much more heat to dissipate with no increase in surface area. Think of the size of the heat sinks you’ll need for that!

NAND doesn’t dissipate anywhere near as much heat as the circuits in a CPU because a smaller portion of them are being used at any given time, and (I believe) there are technical differences in the way transistors are physically fabbed in NAND vs. CPU as well.

I could see the 3D technology being used for non-volatile stuff in a CPU like the firmware though.

MadManOriginal

5 years ago

Also, Intel raised 2015 guidance and dividend, leading to a solid pop in the stock today. Clearly a failing company : p

Pwnstar

5 years ago

Clearly.

dragontamer5788

5 years ago

[quote

chuckula

5 years ago

Beware: When you stare into the Typo, the Typo stares back into you.

LostCat

5 years ago

Well that sounds unpleasant. People’s guts probably look gross.

Grigory

5 years ago

Not the ones of your enemies.

willmore

5 years ago

Interesting that 32 layers * 4 billion (2^32) is 128 billion, which, at two bits per cell, is exactly the 256Gb they mention. That sounds a lot like V-NAND.

As long as we get better endurance than planar <20nm flash, I’m all for the market competition heating up.
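The arithmetic above checks out if the "four billion" pillar count is taken as exactly 2^32 (an assumption, since Intel hasn't disclosed the real figure):

```python
layers = 32
pillars = 2**32     # "four billion pillars", assumed to be exactly 2^32
bits_per_cell = 2   # MLC

total_gbit = layers * pillars * bits_per_cell // 2**30
print(total_gbit)   # 256 -- matching the announced 256Gb MLC die
```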

chuckula

5 years ago

[quote

sschaem

5 years ago

Nice announcement of what’s to come.

The HDD era is coming to an end, even in the server space. I give them another 4 years.

I wonder how many TB one can store using 3D TLC NAND in a 3.5″ form factor,
if 10TB is possible in 2.5″.

UnfriendlyFire

5 years ago

And then a HDD manufacturer comes out with 10-40 TB HDDs at the fraction of the price of 10 TB SSDs.

the

5 years ago

Unfortunately HDD manufacturers are well into the exotic range to further increase capacity. They’re doing things like filling drives with helium to inch further in density. A 10 TB hard drive seems feasible, but it wouldn’t be cheap even down the line. Of course, it’d still be cheaper than the SSD alternative initially.

Waco

5 years ago

It doesn’t really matter. Flash has an extremely long way to go before it’s suitable for bulk storage. For consumers? Sure. For large-scale storage? I’d get murdered if I even mentioned enterprise-grade drives as a viable option. I’m even looking to [i

the

5 years ago

I believe the HGST 10 TB drive was only announced, with actual shipments happening in 2015. The 8 TB model is shipping, but for nearly $1000. Considering the current pricing trends, flash is about 4 times as expensive as the He8 per TB. I’d expect that delta to halve over the course of the next year.

As far as reliability goes, flash isn’t quite suitable for backup/archival storage due to its tendency to discharge over extended periods of time (manufacturers claim ~10 years of data retention). Sadly, that doesn’t beat tape. The write endurance problem is also an issue for backup NAS/SAN usage, as large amounts of data could be written daily. A bit more over-provisioning helps here, but the drives will likely need replacing in a few years (arguably sooner than a spinning disk array).

Though for primary local storage, SSD all the way even in the enterprise market. The only down side is the insane prices vendors charge for it.

Waco

5 years ago

They’re not shipping to general consumers but they’ve been shipping to select partners for the better part of a year.

The discharge problem isn’t so much of a problem when the firmware on the drives is supposed to keep the data fresh (assuming they’re powered on).

Tape just doesn’t scale in bandwidth nearly quickly enough for an even slightly active archive with large (PB+) files. 🙁

Philldoe

5 years ago

Would you trust a mechanical drive to store 10TB of data?

continuum

5 years ago

Forever? No.

But then, I wouldn’t trust an SSD forever either. 😉

Pwnstar

5 years ago

You should always back up data you want to keep. This even applies to SSDs.

Klimax

5 years ago

Bit hard to back up video files, each taking up about 10-15GB of space…

Kurotetsu

5 years ago

Not hard, just expensive. I would ask why your files are such an absurd size, but I imagine the answer will be worse than not knowing.

Klimax

5 years ago

Archival of videos taken with video camera with zero or at best near zero loss of quality. That means either lossless or maximum quality H264. (Lossless is too big for archival so I use it only as intermediary codec with H264 high bitrate profile used for archival)

And also intermediary files while working on said video can account for hundreds of GBs. (Especially before removal of noise)

Did you think that 1080p video is small before being encoded for distribution?

Waco

5 years ago

Not really. Drives are cheap. Buy a ton of 3 TB drives, build up a redundant box for them, and keep a cold copy as well.

If you trust *any* storage mechanism on its own you will end up losing data.

dragontamer5788

5 years ago

The $20 per 1.5TB [url=http://www.tapeandmedia.com/quantum_lto_5_tape_ultrium_5_tapes.asp?source=Froogle&REFERER=Froogle&gclid=CJWL_ZeGjMICFZAF7AodWjAAGA

Waco

5 years ago

LTO6 drives aren’t cheap, nor are LTO5.

Tapes fail all the time. Sure, they’re “reliable”, but without software to do some sort of protection on them you WILL lose data. I have three six-frame T950s to deal with…and believe me, I’d rather it be disk.

dragontamer5788

5 years ago

Sure, tapes fail all the time, but you can always clone a tape if you really need reliability from a long-term archival perspective. I mean, yeah, you can clone hard drives too, but hard drives are a bit more expensive (~$100 each for a consumer 3TB HDD, more for an enterprise-class one).

Also, disk is more convenient than tape. But SSDs are more convenient than disk. At 100TB and above, tape simply becomes cheaper than disk, so you might as well use tape.

That said, most people don’t need 100TB of backup data, so disk backup / archival is sufficient for most.

Waco

5 years ago

Replication as a backup strategy is great when you’re talking about small quantities of data.

Shambles

5 years ago

HDDs aren’t cheap. In fact they’re still more expensive than they were 5 years ago if you look at models that weren’t cutting edge at the time. Sales routinely offered 2TB drives for $60. The only time I’ve seen them hit $30/TB since then has been when a retailer screwed something up. The HDD industry is a duopoly that is stagnating. The SSD market has plenty of competition; its R&D is roaring and will easily pass spinning drives in the next decade. Magnetic drives will look to my children the way VHS tapes do to me now.

And good luck re-silvering an array of 10TB drives. You’ll have to dedicate half your drives to parity, since per-drive failure rates will become so ridiculously high and re-silvering time will start to be counted in days, not hours.

(Canadia prices)

Waco

5 years ago

Erasure coding makes it a lot easier to deal with massive quantities of drives. RAID won’t cut it. ZFS works for “small” arrays of under 100 TB.

Klimax

5 years ago

Yes. At least you can have experts try to recover data. You can’t with SSDs.

Vaughn

5 years ago

One should have a proper backup system in place instead of relying on spending thousands for a data recovery place to pull data from the platters. Yes you can’t recover the SSD but I don’t see that as a great reason to stick to hard drives.

Klimax

5 years ago

Size. Currently I use a 6TB WD Green, a 4TB WD Black, 2x 1TB WD Green, and a 512GB VelociRaptor.
Capacity is the main reason. And since there is no way yet to back it all up (the cost of an extra SAN would be quite big), being able to have it recovered is not bad either.

The next storage upgrade would be some kind of PCIe flash in the 1TB range (maybe the DC series by Intel).

BTW: The speed of HDDs is not that low either… And then I tend to run into a CPU bottleneck anyway…

sschaem

5 years ago

Capacity growth has slowed down in the past 6 years.
(We had 2TB drives in Jan 2009…)

To better illustrate what’s happening:

1996 to 2001 : 2GB to 40GB : 20x growth over 5 years
2001 to 2006 : 40GB to 500GB : 12x growth over 5 years
2006 to 2011 : 500GB to 3TB : 6x growth over 5 years
2011 to 2014 : 3TB to 6TB : 2x growth over 3 years
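Those growth multiples are easy to recompute from the figures listed:

```python
# (year, representative top consumer HDD capacity in GB), as listed above
caps = [(1996, 2), (2001, 40), (2006, 500), (2011, 3000), (2014, 6000)]

for (y0, c0), (y1, c1) in zip(caps, caps[1:]):
    print(f"{y0}-{y1}: {c1 / c0:.0f}x over {y1 - y0} years")
```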

I don’t see the industry changing the trend.

In contrast SSD capacity is moving very quickly and price dropping.

HDD prices have also pretty much stagnated.
I believe I got my 2TB drive many years ago for $79.

So I based my prediction on all those factors:
HDD prices stagnating, capacity growth decelerating to maybe 3x over 5 years,
SSD capacity skyrocketing, prices plummeting each year.

Those seem to be the trends.

MadManOriginal

5 years ago

4 years is too little. HDD will retain a price/GB advantage for longer than that.

sschaem

5 years ago

HDD prices have stagnated; we are still at around ~$40 per terabyte.
SSDs are at about ~$450 per TB, so a 10x delta.

If HDD prices continue to stagnate and SSDs drop by about 2x every 18 months,
after 4.5 years SSD prices would have dropped to ~$56 per terabyte, and capacities will also be close, because we also see HDD capacity growth decelerating.

Also in most cases, if you have a 2.5″ 10TB SSD for $600 or a 3.5″ 10TB HDD for $400, the price difference would be worth considering for data centers.

SSD could also face stagnation. But from this announcement, and the progress in fabrication the trend seem strong for the next 4 years.
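A quick check of that projection, assuming the $450/TB starting price and a halving every 18 months (both rough figures from the thread):

```python
ssd_per_tb = 450.0       # rough SSD price per TB today
halving_months = 18      # assumed: price halves every 18 months
months_out = 4.5 * 12    # 4.5 years = 54 months = 3 halvings

projected = ssd_per_tb / 2 ** (months_out / halving_months)
print(projected)  # 56.25 -- roughly the $56/TB figure above
```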

the

5 years ago

Price/TB has gone up due to the exotic nature of the 6 TB (~$650) and 8 TB (~$1000) drives using helium instead of regular air. A 4 TB drive using air, on the other hand, can be had for ~$170, and Seagate has a 5 TB model for around ~$300.

You are correct that spinning disks hold a price advantage but at the rate flash prices are going down and new technologies have been announced for continual density improvements, it is only a matter of time before they eventually become cheaper. Hard drive prices will also come down but it is at a significantly lower rate and I question just how fast the helium based drives will decrease in price due to their exotic nature.

jihadjoe

5 years ago

Hard drive capacity has some ways to go yet. I remember reading an old article about how “salting” the platters can get a 6x density increase, plus the fact that each bit is still stored using [s

the

5 years ago

They can do something like this today if they wanted to. The problem isn’t on the NAND end but with controllers and cost. There has been a trend to produce controllers with fewer NAND channels while using higher capacity NAND. This is why some new low capacity drives are slower than the previous generation alternative due to the previous model using more NAND channels.

Even if there were a controller that utilized >10 channels, the main benefit would be raw capacity. SATAe would be necessary for any performance gains here, as 6 Gbit SATA has been saturable in bursts for some time now.
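The SATA ceiling is easy to quantify; a quick sketch, assuming the standard 8b/10b line coding used by SATA:

```python
# SATA 3 signals at 6 Gbit/s but uses 8b/10b line coding,
# so only 8 of every 10 transmitted bits are payload.
line_rate_bps = 6e9
payload_bytes_per_s = line_rate_bps * 8 / 10 / 8  # strip coding, bits -> bytes

print(payload_bytes_per_s / 1e6)  # 600.0 MB/s
```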

Lastly there is the cost factor. With prices of consumer drives floating around $0.50 USD per GB, a 4 TB drive would set a person back $2000. There isn’t much of a market for consumer storage at those prices. Those willing to spend that kind of money are in the enterprise market, where SSDs hit the trifecta of better speed, lower latencies, and better reliability than hard drives, so prices there really haven’t come down. Talking to an enterprise storage vendor is like walking down a dark alley at night wearing a glowing shirt that says ‘mug me’. Don’t expect to leave with your wallet and/or kidneys intact.

entropy13

5 years ago

[quote

the

5 years ago

Yeah, you get to keep the shirt so they know who to mug the next time you need more storage space. They’re forward-thinking like that.

Krogoth

5 years ago

It is too premature to say that until we have a better idea of the economies of scale associated with 3D NAND.

It could easily end-up being only viable for enterprise customers.

blastdoor

5 years ago

I wonder if the HDD manufacturers are just stuck in a rut, kind of the way display manufacturers were for many years. We didn’t languish at 1080p TN displays for a decade because nothing better was possible — we languished because the industry was stuck in a rut, put there by OEMs with razor thin margins who were afraid to do anything different than what everyone else was doing, because they had no margin for error in trying new things.

Could be the same for HDDs.

The thing with HDDs, though, is that there just might not be very much demand for higher capacities outside of server rooms and data centers. Of course, server rooms and data centers are important markets. Are those markets big enough and profitable enough to fund the R&D and capital investment necessary to move HDDs forward in a big way? I have no idea…

bwcbiz

5 years ago

At least until we start loading 64k holographic 3d “video” up to Yutoob.

just brew it!

5 years ago

I think it will take a little longer than that to completely supplant HDDs in the data center, since cost/GB for SSDs will still be higher for the foreseeable future. But yeah, HDDs will probably be all but gone from new consumer systems by then.

WaltC

5 years ago

Silliness profound..;) Whenever a company announces a new, as-of-yet-unproduced-product, along with its theoretical capabilities, someone always comes along and says, “Well, that’s it for this or that mainstream technology! It’s doomed! Hang on for dear life!” It’s not quite as egregious as Intel employees spouting “We’ll be hitting 10GHz before you know it!” in a few ancient Pentium 4 interviews (repeated by John Carmack more than once, too, as I recall), but it is still pretty funny…;)

As with any technology it will always boil down to economies of scale–if volume-equivalent HD production weighs in at 1/4 to 1/10 the price of the new tech then there’s no reason for HD production to cease at all. It could well increase, actually, as competition stirs new designs and capabilities, etc.

Every time we hear that this or that technology has “hit a wall,” the wall comes crashing down in near-record speed, such that I now believe those articles are just PR material designed to draw attention to new and upcoming product iterations and designs.

What obsoletes technologies is always interesting to ponder. For instance, what obsoleted the CRT monitor was not the per-pixel superiority of the LCD–it was the incredible difference in sizes and weights (within the same screen sizes) that made the *shipping and handling* of LCDs cost just a *fraction* of what it costs to ship and handle CRTs. 20″ CRTs weighing in at ~60 pounds and shipped in boxes 14″-18″ deep were no contest next to shipping 20″ LCDs–practically overnight the CRT simply disappeared (as we all know.)

If the HDD product line does indeed vanish I can pretty much guarantee it will be because of a much more interesting, and probably not very obvious, story–a story much more substantial than a pre-shipping product announcement from Intel…;)