SanDisk’s colossal 4TB SSD: Does this mean SSDs will soon provide more storage than hard drives?


Huge, cheap SSDs aren’t arriving any time soon

Unfortunately, all current evidence suggests that huge, cheap SSDs that can keep pace with hard drives in areal density aren’t going to happen any time soon. Today, the vast majority of NAND flash is built in a planar (two-dimensional) structure. The same problems that have kept next-generation processors from scaling to ever-smaller sizes are keeping NAND flash stuck as well. NAND cell sizes aren’t falling the way they used to, which means the amount of NAND manufacturers can physically pack into the same area isn’t growing very much.

We’re at 20nm now, and cell sizes aren’t scheduled to shrink much further.

The alternative to shrinking physical cell sizes is to hold more charge levels (and thus more bits) per cell. We’ve seen some moves to TLC NAND in the consumer space, with Samsung’s triple-level-cell memory, but such memory is slower and can sustain fewer write cycles before failing. Better TLC NAND would be a huge boost for SSD densities, but no one is even talking about QLC (quad-level cell) NAND — there’s no known way to store charge at that fine-grained a level while retaining enough write cycles to deploy the tech effectively. (Read: How long do hard drives actually live for?)
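To put rough numbers on that charge-level problem, here’s a back-of-the-envelope sketch (our illustration, using a normalized sensing window rather than real device voltages): each extra bit per cell doubles the number of charge levels the controller must distinguish, so the margin between adjacent levels collapses exponentially.

```python
# Illustrative only: why each extra bit per cell is harder to store.
# An n-bit cell must distinguish 2**n charge levels within roughly the
# same sensing window, so the spacing between adjacent levels (and the
# tolerance for charge leakage and read noise) shrinks exponentially.
VOLTAGE_WINDOW = 1.0  # normalized window, assumed for illustration

for bits, name in [(1, "SLC"), (2, "MLC"), (3, "TLC"), (4, "QLC")]:
    levels = 2 ** bits
    margin = VOLTAGE_WINDOW / (levels - 1)  # gap between adjacent levels
    print(f"{name}: {levels:2d} levels, margin {margin:.3f}, {bits}x density")
```

QLC’s 16 levels leave roughly a fifteenth of the window between states, which is why write endurance falls off a cliff as bits per cell rise.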

If we can’t make NAND cells smaller and we can’t store more data per cell, can 3D NAND — NAND built on its vertical edge — save the day? Once upon a time, it looked like the answer was “Yes” — Samsung was enthusiastic about its own plans for this market segment. Unfortunately, real-world data coming back suggests that the benefits of 3D NAND may have been overstated — at least, relative to the early-ramp difficulties. Samsung was forced to fall back to a 40nm production node for its 3D NAND, and has announced that it intends to hold steady at that node for the next five technology generations. It’s also had to use very large NAND cell sizes to overcome certain manufacturing problems related to material deposition and it’s not clear if the company’s manufacturing strategy with V-NAND can scale down to smaller nodes effectively.

Right now, Samsung’s 40nm 3D NAND cell is 5x the size of a conventional 40nm planar NAND cell and 30x the size of a cutting-edge 16nm planar cell. That means current 3D NAND, even at 24 layers, isn’t competitive with cutting-edge planar NAND on density.

That scaling is critical to any long-term cost equalization between NAND and HDDs, because stacking multiple layers on top of each other is precisely how 3D NAND was supposed to keep improving its own cost structure. Analysts and industry experts have begun quietly pushing out their timelines for dramatic SSD cost reductions, as in the recent graph we showed in which HDDs retain a significant cost advantage for the foreseeable future.

Towards NAND as “big enough”

I suspect that the majority of end consumers won’t be much bothered by the slowing rate of progress. NAND prices and capacities will still improve over the next few years, just not necessarily at the meteoric rates of the last half decade. There’s enough headroom left in existing technology that we should see TLC-based 512GB drives pushing below the $250 mark in the next few years. That’s likely a “good enough” point for the overwhelming majority of users — even if you regularly install games that require 30-50GB of storage, 512GB gives you some elbow room.

Current market trends suggest that we’ll see a shift towards high-end PCI Express-based NAND for the enthusiast market and larger, slower NAND pools for the consumer and enterprise spaces. Increasing densities will give companies like SanDisk room to bring down the price on commercial drives like this one, but with 4TB hard drives selling for ~$200 online, it’ll be a very long time before we see cost parity between SSDs and HDDs.


Comments

Ivor O’Connor

Maybe somebody here could answer a question that has always bothered me. We keep seeing SSD storage double, and since it is entirely digital, why don’t the streaming speeds double? On a 4TB drive like this I would hope to see streaming data at a rate of 2GB per second or greater. Putting 32 128GB SSDs in a RAID 0 configuration might cost the same as this one 4TB drive, but the performance would be at the limits of what the motherboard could take. I’d like this type of performance as one scales up the size of SSDs, but it is not happening for some reason.

JD Rahman

The first limit is the SATA interface, which maxes out at about 600MB/s. So it doesn’t matter how fast the SSD is; the interface is the bottleneck.

PCI Express-based SSDs are alternatives, such as the FusionIO (1.4GB/s read, 1GB/s write) and the HP Z Turbo Drive (over 1GB/s read/write). These PCI Express solutions can then be set up in RAID 0, and the performance does double.

The second limit is the controller and firmware on the SSD itself. So far, the controller/firmware has been optimized to meet the SATA interface capacity. Making it faster would just be wasted. I suppose this will change as the interface improves.

Thanks Rahman, I did not realize the individual SATA interfaces were so slow.

So basically then it comes down to getting a good PCI Express RAID controller and stringing SSDs off it. Something that works on Linux Mint and BSD.

Joel Hruska

Not exactly.

Most motherboards can supply PCIe 2.0 x4 slots. That’s about 1.6GB/s of bandwidth. So you could drop two or three of those cards into a single configuration, but that’s going to be the limit. You need PCIe cards, not a PCIe RAID controller — the RAID controller will still connect to the drives via SATA.

Ivor O’Connor

But the data getting to the RAID controller will be, say, a multiple of the number of SATA SSDs attached to it — or so I thought. Are you saying that the SSDs are all limited to a max transfer of 6Gb/s?

They’re not the same. You don’t use a RAID controller on PCIe for this. You use NAND flash mounted on PCIe cards and then you create a RAID in software if you want to use them. The card may *mount* a RAID-capable controller, but you don’t wire this up in the same fashion.

I think I’m making this more confusing than it needs to be and need to take a step back by asking a higher level question. How would one get 3GB/s+ transfer rates on a high end consumer desktop by putting multiple SSD drives together?

Probably all of us consumers want performance better than a single SSD, and so we look for a way of throwing a bunch of SSDs together until we are either happy with the performance or have no more money…

Oh yes. Thank you for all your information Joel. You always do a great job.

Joel Hruska

” How would one get 3GB/s+ transfer rates on a high end consumer desktop by putting multiple SSD drives together?”

You don’t. Not yet. That’s more bandwidth than any consumer I/O is capable of reaching. Sustaining more than 3GB/s would require PCIe 3.0 x8 or PCIe 2.0 x16. (PCIe 3.0 x4 would get you to roughly 3.9GB/s of theoretical bandwidth, but you’d never push that in real-world scenarios.)
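For reference, here’s a quick sketch of the spec-sheet arithmetic behind those link speeds (theoretical rates only; real-world throughput, like the ~400MB/s-per-lane PCIe 2.0 figures quoted elsewhere in this thread, lands well below them):

```python
# Theoretical PCIe link bandwidth: transfer rate x encoding efficiency
# x lane count, converted from gigabits to gigabytes per second.
GT_PER_LANE = {"2.0": 5.0, "3.0": 8.0}           # gigatransfers/s per lane
ENCODING    = {"2.0": 8 / 10, "3.0": 128 / 130}  # line-coding overhead

def link_gb_per_s(gen: str, lanes: int) -> float:
    """Raw usable GB/s of a PCIe link, before protocol overhead."""
    return GT_PER_LANE[gen] * ENCODING[gen] * lanes / 8

for gen, lanes in [("2.0", 4), ("2.0", 16), ("3.0", 4), ("3.0", 8)]:
    print(f"PCIe {gen} x{lanes}: {link_gb_per_s(gen, lanes):.1f} GB/s")
# PCIe 2.0 x4: 2.0 GB/s   PCIe 2.0 x16: 8.0 GB/s
# PCIe 3.0 x4: 3.9 GB/s   PCIe 3.0 x8:  7.9 GB/s
```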

I’m not sure if any SSD on the planet is fast enough to push that much bandwidth. Keep in mind that at 3GB/s, you’re running NAND at low-end DRAM memory bandwidth levels; my AMD A4-5000 whitebook only has about 3GB/s of main memory bandwidth when running in battery mode.

You’d be looking at a five-digit cost of hardware, easily.

I suspect this is a scenario where it’s far cheaper to flip the workload around and use DRAM instead of NAND. Instead of spending $20,000+ on an SSD to push 3GB/s, why not spend that money on a server with several hundred GB of RAM running at 60-90GB/s and cache the data?

Ivor O’Connor

I’m not sure if I came across correctly. I keep thinking you don’t buy one single SSD, as is the subject of this article. So I want to avoid buying something like this colossal 4TB drive and instead buy many smaller ones at a much better price per GB, then attach these to hardware RAID so the performance would be N x 500MB/s. In the end I would have much better performance than this one SSD, a much better price, and room to expand. This must be possible for us hobbyists at the consumer level? I’m hoping for something like this, but with normal SSDs. https://www.youtube.com/watch?v=27GmBzQWwP0

Joel Hruska

Ahah. Well, you can’t get faster SATA than 600MB/s, so that’s largely wasted. You have to go to PCIe for that.

Let me cut to the chase. You could build a RAID 0 array of two PCIe NAND cards, each good for around 1GB/s of theoretical performance, using off-the-shelf hardware today — far better performance than this 4TB SSD, at much better cost/GB.

You’ll still pay $1000 – $2000 for the privilege depending on which PCIe solutions you choose, but that’s what you can currently do easily.

Ivor O’Connor

So it doesn’t work the way I thought. I was so hoping you could put four SATA III SSDs, each delivering 500MB/s, onto a RAID controller hooked to a PCI Express gen 2/3 slot to get close to 2GB/s, then possibly add a second one and do software RAID for a total of 4GB/s. I’m gathering the interface from the RAID card to the SSDs does not combine the speeds of each SATA device — no asynchronous calls across all the SSDs attached to the RAID device, but serial synchronous calls instead, limiting the RAID card to just 600MB/s. :(

Joel Hruska

No consumer hardware even approaches 2GB/s on PCIe to start with, so you’re cut off at the pass there.

Now, there may be some specialty products that can interleave at the SATA port level — you hook two SATA ports to a single drive and the system uses both ports simultaneously — but I think that everyone shrugged and said: “Just use PCIe” at that point.

Remember, NAND itself isn’t really fast enough to push that kind of bandwidth. The very fastest PCIe SSDs can break 700MB/s in ideal tests, which means you could maybe hit 1.4GB/s in RAID 0. There may be SLC SSDs which are faster, but nothing in the consumer space.

I do PC Magazine’s SSD reviews, and I think it’s fair to note that you really don’t see benefits from this kind of ludicrous speed increase.

The benefit of an SSD over an HDD is far greater than the benefit of moving from a 2011 SSD to a 2014 top-line PCIe SSD, even though you’re still doubling or tripling throughput. The problem is, on a desktop, most workloads are latency sensitive — so getting rid of disk spinup is hugely noticeable.

But the only time you’ll really notice the difference between two SSDs is if you’re transferring hundreds of GB of data.

Is the problem then getting a PCIe 3.0 x8 RAID card? There are about five of them at Newegg that on paper might work, at an average price of about $400. That price still seems consumer-level for the average hobbyist.

With some combination of the above and maybe eight SSDs attached to the card in RAID 0, what would you get? Not that such speeds would be needed, or even noticed, beyond a regular SSD. But if possible, it might offer much better pricing, speed, and flexibility than buying a colossal 4TB drive.

Joel Hruska

Sorry, I missed this.

The problem isn’t that MOTHERBOARDS don’t support PCIe x8. It’s that the consumer-facing SSD cards don’t use more than PCIe 2.0 x4. These offer about 1.6GB/s of theoretical bandwidth and top out at about 1GB/s in real-world tests: the PCIe cards starting at $1000 and below are all PCI-Express 2.0, with x4 or x2 links. No one has launched a PCIe 3.0 x8 card for consumers, even if we take $1000 to be a “consumer” price.

Apparently you can dynamically grow your RAID sets. There are probably qualifiers in the small print. I’m going to look for the manual and read up on it. I’m not in the market at the moment, but a solution like this might work for life. Maybe next year I’ll buy one (or later this year). I’ll keep my eye open for deals.

Joel Hruska

I’m linking you to PCIe NAND flash drives, and you’re responding with links to RAID controllers. It makes me wonder if you understand the difference.

Here’s a review of the card that actually includes IOMeter test results (which are incidentally fantastic and break 6.6GB/s).

Nonetheless, they hooked up 24 SSDs in RAID 0 and managed to sustain 6.6GB/s of bandwidth. Just a few problems with that:

1). The 24 SSDs they tested are $800 EACH. So for the low-low price of $19,200 (plus the controller card), you can buy this performance.

2). If every SSD has a 1% chance of failure, 24 SSDs in RAID 0 have roughly a 21% chance of a drive failure taking out the array (1 − 0.99^24 ≈ 21.4%).

Again — buying this level of performance is simply and utterly impractical.
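(The compounding math behind that failure figure, as a quick sketch — the 1% per-drive probability is an illustrative number from the comment above, not a measured rate:)

```python
# RAID 0 has no redundancy: one failed drive kills the whole stripe
# set, so per-drive failure probabilities compound across the array.
def raid0_loss_probability(p_drive: float, n_drives: int) -> float:
    """Chance that at least one of n drives fails in a given period."""
    return 1 - (1 - p_drive) ** n_drives

print(f"{raid0_loss_probability(0.01, 24):.1%}")  # -> 21.4%
```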

2). Good thing about RAID is that it is RAID and you can choose anything from RAID 0 to RAID 60 with hot spares.

I started by asking if a RAID configuration would be cheaper, faster, and more redundant than a 4TB drive. You were extrapolating from “800GB SAS SSD retails for $6600,” which looks depressing. A RAID 60 built with the ET drives and 4 hot spares would be half the price, twice the size, possibly 10x the speed, and essentially secured against disk failures. (See https://www.memset.com/tools/raid-calculator/)

You sure you want to say “this level of performance is simply and utterly impractical”?

Joel Hruska

The vast majority of our conversation on this topic has concerned very high-end storage performance and (mostly) very limited RAID configurations. I kept the conversation consumer-focused because you’ve discussed consumer hardware.

For example: The controller you linked in your previous response is a 24-port SAS controller — so I discussed the cost and expense of SAS drives. Now you’re switching to SATA SSDs and asking me about *their* performance and cost curves.

First, you need a 24-port SATA card, not a 24-port SAS card.

Next, there’s the question of whether the drives in question are designed / optimized to work in RAID arrays. I can tell you in total honesty, I would not trust a 24-SSD array in a RAID built with low-end consumer hardware.

When we were discussing this as a consumer solution, I assumed that you wanted to keep price tags to consumer levels (meaning under $1000). Discussed as an enterprise solution with appropriately fault-tolerant drives, you’re talking about $20,000 and up.

But I will grant that *if* you can find a SATA card with 24-ports on it (almost everything in this space is SAS), and if you’re willing to risk using low-end drives not validated in RAID configurations, and if your concept of “practical” stretches into the $4000 range, then yes — you could have a really nice SSD solution.

Or alternately, you could just buy a few hundred GB of RAM and use a RAM drive. ;)

Ivor O’Connor

“For example: The controller you linked in your previous response is a 24-port SAS controller — so I discussed the cost and expense of SAS drives. Now you’re switching to SATA SSDs and asking me about *their* performance and cost curves.”

If you did a RAM drive, you’d lose that big investment when you upgraded your motherboard. If you stick the RAM drive off PCIe, then you have the same gen 2 vs. gen 3 bottleneck and not much of a speed gain over a RAID SSD system.

Even if the drives were crappy, would it matter in a RAID 60 config with four hot spares? There are numerous RAID calculators, and they seem to give data-loss rates of once in a million years regardless of how bad the SSDs are (assuming you are in a RAID 60 config with four hot spares).
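(The reason those calculators come back so rosy: a RAID 6 group only loses data when a third drive dies before the first two are rebuilt. A minimal sketch, assuming an illustrative per-drive failure probability during one rebuild window — it ignores correlated failures and unrecoverable read errors during rebuild, which is what actually bites real arrays:)

```python
from math import comb

def p_group_loss(p: float, n: int) -> float:
    """Binomial P(3 or more of n drives fail in the same window)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(3, n + 1))

# 12-drive RAID 6 group, 0.1% chance a given drive dies during one
# rebuild window (assumed number, purely for illustration):
print(f"{p_group_loss(0.001, 12):.2e}")  # ~2.2e-07 per window
```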

Joel Hruska

So let me preface this by saying I’ve never tried to build a RAID at the 16-24 disk level, much less RAID 60. And yes — it’s entirely possible that “enterprise” SSDs are just called that for marketing purposes — but there are some good reasons why I doubt this is true (even though there’s conflicting data about how useful enterprise HDDs are compared to desktop flavors).

NAND storage controllers are a great deal more complex than a typical HDD’s. A NAND controller is essentially a small CPU — it makes decisions about where data is stored (in cache DRAM or on-die), when to aggregate writes, when to perform wear leveling, and how to manage the health of the onboard flash. Hard drive controllers don’t really *do* most of that, or not to the same degree. A hard drive doesn’t have to write to specific sectors to avoid wearing out the platters, but NAND flash controllers *do* do that.
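(A toy illustration of that wear-leveling decision — not any vendor’s actual algorithm, just the core idea of steering writes to the least-worn block:)

```python
# Toy wear leveling: send each write to the erase block with the
# lowest erase count so no block wears out far ahead of the rest.
# Real controllers also separate hot/cold data, over-provision, and
# garbage-collect; this sketch shows only the balancing step.
erase_counts = [0] * 8  # eight hypothetical erase blocks

def pick_block() -> int:
    return min(range(len(erase_counts)), key=lambda b: erase_counts[b])

for _ in range(20):  # 20 writes spread evenly across the blocks
    erase_counts[pick_block()] += 1

print(erase_counts)  # -> [3, 3, 3, 3, 2, 2, 2, 2]: wear stays level
```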

It’s therefore plausible that enterprise NAND is tuned differently (at the firmware and controller level) to ensure that wear leveling is handled properly, that TRIM is still carried out (it’s typically non-functional on consumer SSDs in RAID 0, though hardware RAID cards may have solutions for this), that garbage collection is done at the proper time, and that drives don’t wear out too quickly.

It’s also possible that this is only an issue with some SSDs but not others. But the question of SSD RAID *is* more complex than the HDD equivalent when it comes to the health and longevity of the NAND. Intel’s enterprise SSDs are binned from the best of the NAND the company manufactures, and then the firmware is tweaked over and above that point.

Take a look at this article from Anandtech on testing NAND I/O consistency for a good look at one variable. You don’t really have to *worry* about this for HDDs.

All good points. How these RAID cards handle SSD TRIM should be very prominently displayed in all their literature. Yet I don’t see that and it makes me very worried. I’ll spend some time looking into it.

Relative NAND endurance between drives. eMLC means enterprise MLC. Again, the question is how much writing you’d be doing to the drives and how well the controllers would distribute the load across the entire unified set of SSDs.

Since the parity drives would have the heaviest write workloads, it might be possible to use high-end drives *just* for parity — but then you’d probably have to make sure all the drives had the same controller, even if they used different NAND.

Ivor O’Connor

Thanks. That graph is now burned into my mind.

I usually spread parity evenly across the drives when I’ve used RAID in the past.

So I wrote back asking how to avoid premature SSD burnout due to their RAID implementation not handling TRIM. (You’d think they would proactively offer this information.) I’d imagine they would want to copy over to a hot spare and then take the original offline, then do a full ATA Secure Erase as described here: http://www.champsbi.com/secure-erase-solid-state-drive-ssd-important-information/. Once done, move it back in. I don’t know if this can be done with their CLI via a cron job, or whether they can do it safely without lowering fault tolerance.

Joel Hruska

Not TRIMming a drive doesn’t cause burnout; it causes garbage data not to be cleared properly, leading to degraded performance. Desktop RAID controllers only recently (within the past 18 months or so) began supporting TRIM in RAID 0, and I have no idea if the big players do or not.

The question of NAND degradation has more to do with whether or not there are firmware-level optimizations that make a drive more robust or more likely to fail in a complex RAID configuration like RAID 60. I want to make it clear — when I say that there could be a difference between how enterprise and consumer-level drives handle complex RAID arrays, I *am* speculating.

Your second paragraph though, about firmware-level optimizations so the free space on the disk is used evenly, is equally important.

For some reason I assume the second point is handled just fine because the RAID system lets the SSD controller do its thing unhampered. Yet the first point requires the OS to send a special signal to an actual SSD drive the OS has no knowledge of…

Joel Hruska

I can’t find any information on the question of whether TRIM helps reliability or not. In theory it could be at least a little helpful.

I mean, as with all things, it *will* depend on how many read/write loads you do. My primary concern is with the parity drives — since the parity drives keep a map of all the other data written to the array, I expect they’d be written to more aggressively. But if you intended to move a static set of data to the array and then just wanted lightning access, it still might not matter.

Ivor O’Connor

Me too. I can’t find information on the question of TRIM…

Hardware RAID support for SSDs is, euphemistically, a new industry reminiscent of the wild west.

After reading the LSI manual and checking LSI configuration prices, I’ve decided Adaptec is better — better because there is no price creep. When you buy the LSI 9361-8i for close to $600, you must then buy an expander card for $300. Then the software to control the card has upgrade options too. So with LSI you get a questionable performance increase with no end to additional purchases. The price you pay for an Adaptec 72405 card, on the other hand, is the complete price.

Ivor O’Connor

Got a reply back from Adaptec. They rely on the SSD’s garbage collection to keep the drive clean, since there is no TRIM support; newer drives should have better garbage collection, alleviating the need for TRIM. I suppose there are some utilities to test this? Perhaps this needs to be a metric included in the review notes on all new SSDs — TRIM replaced by garbage collection.

Matt Menezes

I’m just brainstorming here, but I think this would work…

If you want TRIM, you could hook up a bunch of SSDs to a controller, use AHCI not RAID/IDE, and go with software-based RAID at the OS level or something. That should still allow you to TRIM drives and get some of the speedups from striping, etc.

If you have a decent CPU, the overhead from running software RAID shouldn’t matter – then again if you have like 10+ drives, maybe it will start to build up… :-P

Ivor O’Connor

That is sort of what I had to do on my last build. I used Windows 7 with six SSDs in a RAID 0 config, just as you described. This maxed out my PCI bus, and since I do backups I’m not worried about failures. Then, with lots of disk space thanks to the six drives, I put in VMware and installed Mint and Manjaro. I normally live in the Mint world, but Manjaro is quite nice too. I had to toss the extra GPU card and third monitor, though, because support for the very latest hardware is always very poor.



Joel Hruska

JD Rahman is right, but the other problem (I assume) is the number of channels you have to stack to support this much NAND in a single interface. I assume these drives are slow partly because they’ve got to be stacking controllers or using slower paths to ensure all the NAND has equal bandwidth to the single controller. The 800GB drives are far, far faster than this.

Realistically, two PCIe cards in RAID 0 would deliver much better performance than 32 drives in RAID 0. SATA would bottleneck and you’d be choked off.

Ivor O’Connor

To max out disk throughput, you’d need several — three or four — PCI-Express 2+ RAID cards put together to form one volume? It couldn’t be done with a single PCI-Express card?

Joel Hruska

All of the PCIe cards I’ve seen have relied on PCIe 2.0 x4 at most. That’s about 400MB/s per lane, or 1.6GB/s of bandwidth.

Most of the consumer controllers aren’t quite that fast, so you need a RAID 0 to hit it. The enterprise / industrial chips might be faster, but I’m assuming you want to talk about consumer hardware not enterprise solutions at $10K each. ;)

vers

Uhh, LSI 9271? 9361? PCIe x8 is the standard on most enterprise-grade controllers, and lately it has been shifting to PCIe 3.0. These are the same controllers that OEMs just solder onto their motherboards, with the same x8 capability.

As for the drives, I’m very excited. My company’s VM cluster runs on top of 96 1TB Samsung 840 EVO drives in a Ceph cluster; I would love it if these came down to $1000 so I could have even more raw storage. All of our stuff that needs dedicated IOPS is on its own Gluster + ZFS cluster with 40Gbps to it, but all of our bulk raw storage is on much slower 4TB SAS drives with LSI Nytro cards, and even then the Nytro cards can only do so much to cover up the slowness of traditional spinning media.

Joel Hruska

Again, which markets? I don’t see any PCIe 3.0 in the consumer market. All of the drives that are remotely consumer-facing are still PCIe 2.0 x4.


1-800-Hisoka

Size doesn’t determine speed. Capacity only scales throughput if you use the drives in a RAID configuration, which splits data across multiple paths, one per drive. A single drive has only one path, so its transfer speed maxes out at the speed of the SATA controller. Even if SATA were faster, the drive would not get faster unless manufacturers built SSDs whose native transfer speeds were higher, along with a newer SATA revision — 4, I guess, is next — which, if it follows the 2x-per-generation pattern, will run at 12Gb/s, i.e. 1.5GB/s.

What about 12Gbps SAS? Surely these SSDs will perform better with 12Gbps SAS instead of slow, old 6Gbps SAS.

SuperTech

The question should not be about size. The question should be about price: will a comparable SSD of equal size ever be cheaper than its hard drive rival? And if so, how many years will it take? 10? 20? Will we even see it in our lifetime?

$650 for 1.2TB is absolutely unreasonable.

Joel Hruska

SSDs are not expected to ever surpass HDDs in cost-per-GB. The question is when consumers stop caring.

For example:

If a 1TB HDD is $40 and a 1TB SSD is $120, and the 1TB SSD is 5-10x faster than the HDD, at what point do you *not* care that the HDD is $40? Does cloud storage impact this equation? If cloud storage was fast enough and connections reliable enough, maybe everybody is satisfied with streaming data + 256GB super-fast SSDs?
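(That hypothetical, reduced to arithmetic — the prices here are the illustrative numbers from the comment above, not market data:)

```python
# When does the SSD premium stop mattering? Compare $/GB and the
# absolute dollar gap for the hypothetical 1TB drives above.
hdd_price, ssd_price, capacity_gb = 40.0, 120.0, 1000

print(f"HDD: ${hdd_price / capacity_gb:.3f}/GB, "
      f"SSD: ${ssd_price / capacity_gb:.3f}/GB, "
      f"{ssd_price / hdd_price:.0f}x premium, "
      f"${ssd_price - hdd_price:.0f} absolute gap")
# -> HDD: $0.040/GB, SSD: $0.120/GB, 3x premium, $80 absolute gap
```

The argument: once the absolute gap is small next to the cost of the whole PC, a 5-10x performance win can outweigh a 3x per-GB premium.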

I’m not saying that’ll happen, but it’s part of the big question.

massau

Cloud storage makes me scared — where is my data, and who can see it? I don’t know. And no connection means no data. But then again, if you are only putting pictures and movies on the cloud drive and can watch them on all your devices, it’s not so scary, because that data is non-critical. So I guess a 1TB SSD would be enough, especially if games reach 40GB in size (without pirate-class audio compression).

Joel Hruska

That’s an enterprise drive on a SAS interface. WD will sell you 1.5TB of 2.5-inch storage for $150. If you want a consumer 3.5-inch drive, you can buy 4TB for $150.

That’s not very honest of the author: a “server class” HD is not really doing anything that a regular hard drive array couldn’t do, while the SLC SSD is definitely doing something that a regular hard drive array could not.

Joel Hruska

Except,

1). That’s an MLC drive, not an SLC drive.
2). Seagate sets its own prices, not me.
3). The point of the price comparison is to illustrate that if an 800GB SAS SSD retails for $6600, a 4TB SAS SSD ain’t gonna be cheap.

camera-ny

SSDs should replace HDDs in the near future.

Abdulhadi Alajmi

I hope we can see big hard drives — 60TB+.

I hate when companies send the media press releases about breakthroughs their R&D teams have made

without giving a delivery date, the price, or the speed the new tech can achieve.

Here are some of them:

Seagate with HAMR: up to 60TB hard drives

HP with the memristor: up to 100TB

IBM with atomic-scale magnetic memory: up to 400TB

Sony tape at 185TB

5D disc at 360TB/disc, with an unlimited storage lifetime

RRAM, 3D NAND, and racetrack memory will help make larger and faster drives.

I hope I didn’t forget any new tech! If you know of one not listed above, please let us know.

I will be happy to see the new tech released to the market.

Let’s see if SanDisk can reduce the cost of SSDs and double the capacity every year or two!

Joel Hruska

None of those technologies are near-term practical.


BtotheT

You seem to be on the feed like I am; they also spoke of a theoretical petabyte disc being worked on by Swinburne University. Anything 4-6TB+ is more than enough for me. Give me a 14-10nm processor, a 4TB NAND drive, and a solid-state/porous/next-gen battery in a phone, and I will give you a trash can of laptops and desktops.

Stephen Dail

Perhaps with that much storage, spell check will work.
collosal/colossal…

Bulat Ziganshin

500GB SSDs already start at $220, even with MLC (not TLC).

Matt Menezes

Yeah, I saw that too. I got my 256GB Crucial MX100 for $100 when it released, and the 512GB model was $200. I’d suspect these will drop to $150 in a couple of years…

Matt Menezes

I think TLC will definitely make it down there. Crucial’s MX100 512GB drives were selling for $200 when they launched. I think they might be 5-10% more now, but still, Samsung sometimes has its 256GB EVOs going for $120. I’d say it’s much more likely that 512GB TLC drives will drop to $150 over the next couple of years…

James Mauro

First off, yes, SSDs will eventually overtake rotational drives completely; the question is when. Second, the comparison of Seagate’s 10K 1.2TB rotational drive with its own 800GB SSD is flawed: during that same period Samsung was offering a 1TB SSD with roughly 520MB/s transfer speeds, which surpassed Seagate significantly — to say nothing of the enormous speed difference between SATA 3-and-above SSDs and any rotational drive on any connection. The 1TB Samsung 840 EVO cost less than $500 through newegg.com at the time the article was written.

But wait, there’s more. Seagate has been buying up rotational-drive plants, names, and companies over the past decade, yet its drive quality and longevity have declined so much that no consumer in their right mind would purchase them. About 5-6 years back I personally had three Sony laptops, each of which shipped with Seagate drives; all of the drives died within six months, and the replacements Sony sent (also Seagate) died again six months later. As for other companies that use Seagate products or are owned by Seagate: LaCie was once a great company offering great external storage solutions and wouldn’t put Seagate drives in them — let’s face it, they don’t perform as well and have short life spans. But Seagate bought the company a few years back, and now they put Seagate drives in them and they break all the time.

Furthermore, SSDs will not only surpass rotational drives in transfer speed but also undercut them in cost. An SSD has literally no moving parts, while a rotational drive has to have a spindle that takes up space and a movable arm to read the data, along with controllers and so forth; those materials increase the price of a rotational drive. The chips in SSDs today are 128Gb or smaller, and together they make up the total drive capacity, but each chip costs the company 28 cents or less. Even with a tremendous profit margin and “other costs” raising that to $2, it’s still cheaper. Yes, SSDs are new, and companies currently charge a premium for something that is inexpensive to manufacture — but it is cheaper to manufacture. And the only reason Seagate charges what it does for rotational drives is price fixing; it is trying to buy out the competition while still producing bad products, all of which will leave the company behind. It truly is a bad investment.

