Whether this is true is up for debate but, in a piece for ZDNet, Harris presents a forward-thinking argument for why SSDs are becoming a thing of the past. With new, "non-volatile memory technology [like] ... flash today, plus RRAM tomorrow, [...] it is time to build systems that use flash directly instead of through ... antique storage stacks."

Harris explains that SSDs "were built because there [were and] are billions of SATA and SAS disk ports available." With the ports already there, companies saw a profit to be made from making use of them.

But now, even as SSDs are starting to become the norm, Harris believes that it's too little, too late. In particular, "SSDs rely on a Flash Translation Layer (FTL) that makes flash ... look like a disk drive." There's latency, there are wasted CPU cycles, and there's extra complexity. For Harris, the only way forward is to innovate. If we could get "rid of FTL, SSDs would be faster, lower cost, and more reliable." And Harris' last point: SSDs are architecturally obsolete.
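
For anyone who hasn't dug into what an FTL actually does, here's a minimal Python sketch of the core idea: flash pages can't be overwritten in place, so the drive keeps a logical-to-physical map that lets the host pretend it can. This is hypothetical and heavily simplified (real FTLs add wear leveling, garbage collection, and power-loss protection), and every name in it is invented for illustration:

```python
# Minimal, hypothetical sketch of a Flash Translation Layer (FTL).
# Real FTLs add wear leveling, garbage collection, and power-loss
# recovery; this only shows the remapping trick.

class TinyFTL:
    def __init__(self, num_pages):
        self.mapping = {}                        # logical block -> physical page
        self.free_pages = list(range(num_pages))
        self.flash = {}                          # physical page -> data

    def write(self, lba, data):
        # Flash can't overwrite in place: every host write lands on a
        # fresh page, and the old page becomes stale.
        new_page = self.free_pages.pop(0)
        old_page = self.mapping.get(lba)
        self.flash[new_page] = data
        self.mapping[lba] = new_page
        if old_page is not None:
            # In real hardware the stale page can only be reclaimed by
            # erasing its whole block; that is garbage collection's job.
            del self.flash[old_page]
            self.free_pages.append(old_page)

    def read(self, lba):
        return self.flash[self.mapping[lba]]
```

Every lookup, remap, and reclaim in that sketch is the indirection Harris wants to cut out.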

Based on the love and appreciation for SSD drives in this community, what do you think?

I would buy a computer with built-in flash storage if it were available, upgradable, and removable. Any machine that is locked down as to memory and storage is obsolete within 3 years. I would love to see DAS flash drives that don't use a SCSI, SAS, or SATA bus. Check out the latest 128 GB flash drives that are the same size as the 32 MB drives of the not-so-distant past. But isn't that the way our smartphones and tablets work now, just without the upgradability and removability?

What about M.2? I thought that was the next step: putting the SSD on a couple of PCIe lanes. I'm about to drop a new Samsung M.2 drive into my Z97-P motherboard; it should be an improvement over my Samsung EVO dinosaur.

Abstraction allows vendors to compete using different methodologies and technologies while maintaining system compatibility by conforming to interface and communication standards. Removing that layer of abstraction may increase speed, but at the cost of flexibility.

There's lots of excitement around the future of memristor, MRAM, and RRAM technologies. The same layer of indirection that Robin Harris rails against in the article is what will enable storage based on newer technologies to work with legacy systems. I agree that interfaces and software stacks should be examined as potential bottlenecks, but that doesn't necessarily mean abstraction should be removed.

This is an age-old problem in technology. Writing directly to hardware is fast, but it limits development to OS or application developers. Hardware vendors don't necessarily want to develop commodity hardware, nor be dependent on OS or application developers for hardware support. Users just want it to work, and they like the flexibility to choose solutions that best fit their needs. With all these competing objectives, abstraction lets every party use the best tool for the job while still meeting standards. Focus on improving the standards (interface, communication, etc.) instead of removing abstraction; the sketch below illustrates the point.
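
To make that concrete, here's a hedged Python sketch; the interface and class names are invented for illustration, not taken from any real storage stack. Any backend that honors the same block contract can be swapped in without touching the code above it:

```python
# Hypothetical sketch: a stable block-device contract lets backends
# (spinning disk, SSD, future RRAM) be swapped without changing callers.

from abc import ABC, abstractmethod

class BlockDevice(ABC):
    @abstractmethod
    def read_block(self, lba: int) -> bytes: ...

    @abstractmethod
    def write_block(self, lba: int, data: bytes) -> None: ...

class RamBackedDevice(BlockDevice):
    """Stand-in backend; a real one might wrap NVMe, SATA, or RRAM."""
    def __init__(self):
        self.blocks = {}

    def read_block(self, lba):
        return self.blocks.get(lba, b"\x00" * 512)

    def write_block(self, lba, data):
        self.blocks[lba] = data

def copy_block(dev: BlockDevice, src: int, dst: int):
    # Works against any conforming backend, which is the whole point.
    dev.write_block(dst, dev.read_block(src))
```

Improve the contract (queue depth, command sets, latency semantics) and every conforming device benefits; remove it and every device needs bespoke support.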

It's a bit like saying we should be thinking about electric cars: there are just a few obstacles, such as battery technology, to solve first. Until Microsoft and/or Linux and file-system programmers architect this into the OS, you'll still have some software layers there.

But I think the word "obsolete" is a little overused in this case, as I think you all sense. "Obsolete" implies that there is new technology available to actually replace the old technology. In this case, there are certainly new concepts out there that are sure to gain wide acceptance... but not yet.

In the near future, you'll see the progression from SAS/SATA to PCIe to some other advanced architecture, but no, that shiny new SSD you just bought is not "obsolete" today. But hang tight! The future is coming... fast!

We are definitely throttling the performance of SSDs by running them over legacy SATA/SAS interfaces. As another astute member says, once we break away from HDD connectivity for SSDs, they will take over the world.
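
Some rough numbers back this up. SATA III is capped at 6 Gb/s (about 600 MB/s usable after encoding overhead), while four lanes of PCIe 3.0 deliver roughly 4 GB/s. A quick back-of-the-envelope comparison, treating both throughput figures as the stated approximations:

```python
# Approximate interface ceilings (both figures are rough usable-throughput
# numbers, not measurements).
sata3_mb_s = 600              # SATA III: 6 Gb/s minus 8b/10b encoding
pcie3_lane_mb_s = 985         # PCIe 3.0: ~985 MB/s usable per lane
pcie3_x4_mb_s = 4 * pcie3_lane_mb_s

print(f"SATA III ceiling:    {sata3_mb_s} MB/s")
print(f"PCIe 3.0 x4 ceiling: {pcie3_x4_mb_s} MB/s "
      f"(~{pcie3_x4_mb_s / sata3_mb_s:.1f}x SATA)")
```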

My vision for arrays of the future is PCBs of flash inserted into a bus-based chassis (think blades). At current spinning-disk capacities, RAID becomes painfully inefficient during rebuilds; erasure coding is the way forward for spinning disk and could be its saviour, in my opinion.

One SSD manufacturer has announced 16 TB SSDs for 2016, which dwarfs the spinning-disk capacities predicted for the same year. Who would want to sit there watching their array rebuild RAID 5 with a 16 TB spinning disk? Way too risky.
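
As a rough illustration of that risk, here's the arithmetic, assuming a sustained rebuild rate of 150 MB/s (a plausible guess for a busy array, not a measured figure):

```python
# Rough rebuild-time estimate for a single 16 TB drive.
# The 150 MB/s sustained rate is an assumption, not a measurement.
capacity_tb = 16
rate_mb_s = 150

seconds = capacity_tb * 1_000_000 / rate_mb_s   # 1 TB = 1,000,000 MB
print(f"~{seconds / 3600:.0f} hours exposed to a second failure")
```

Roughly 30 hours in which a second drive failure, or a single unrecoverable read error, loses the whole RAID 5 set.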

Bus-based architectures have been around for over 15 years (CompactPCI, for starters), offering pin-and-socket variations of the PCI bus. An array built around such an architecture essentially lets us harness the performance of PCI rather than SAS/SATA.

Cost does play an important role; however, with larger capacities, faster access times, greater IOPS, and lower power and cooling demands, I would still happily pay a premium for a better product. People need to stop judging storage devices on capacity alone.
