Solid-state drives (SSDs) are fast, really fast. Better still, they aren't necessarily a break-the-bank decision. Sure, some of the larger disks will open some eyes on new system provisioning (especially for notebooks and desktops), but some low-end products can now be purchased for less than USD $100! That's incredible! They are not the largest-capacity solid-state devices, but they will tremendously improve the client experience at only a subtle premium over normal rotational technologies.

If I am making a recommendation to shop by price, which in some cases is the right thing to do, then the reliability of these bottom-end parts comes into play. That is where the cloud comes in. Whether it is a private cloud (a.k.a. the internal IT datacenter) or a public cloud solution, we can manage client data to mitigate the risk of device failure. Surely we're already doing something to that effect for client protection, aren't we?

The other side of making a blanket, widespread decision to adopt a technology like SSD for client computing is that it is another example of the empowered IT user having more capability than the IT offering. This argument most frequently surfaces as "I have more storage at home" or "I have more Internet bandwidth at home." By giving users super-fast SSD storage, we may enable the argument, "My laptop is faster than those expensive servers." Unless we are equipping modern SAN infrastructure with flash or solid-state storage technology, we may be heading for an insatiable user base.

Storage decisions will always be at the center of our IT prowess, and SSDs versus hybrid disk drives are just another example at this point in time. What do you think about SSDs and hybrid drives? Share your comments below.

About Rick Vanover

Rick Vanover is a software strategy specialist for Veeam Software, based in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.


I would agree with Rick that hybrid drives are a fad, and I'm sure these drives will disappear once SSD technologies become cheaper.
I'm a little surprised by some of the comments regarding the reliability of SSDs. Why so paranoid about SSD critical failure? Storage media fails. It's a horrible fact of life, but that's what backups were invented for. On a practical level, the improvement in usability I've seen from using an SSD in my laptop and a number of other desktops I've bought over the past year far, far outweighs any purported 'life span' issues that may exist. On a theoretical level, SSDs generally have MTBF figures in the millions of hours, far greater than HDDs'. If I've still got my laptop in three or four years' time I'll be very surprised; I'll have bought new by then, so if my new SSD lasts that long it has served its purpose. As an aside, having dealt with a great many drive failures in my career, a large number of HDD failures had nothing to do with disks wearing out. More often than not the drive has taken a knock and suffered a catastrophic head-crash as a result. SSDs are not indestructible, but they're undoubtedly more physically robust than HDDs.
I wouldn't even consider using hybrids, but the only reason I'm not using SSDs in a corporate environment, outside of the occasional high-powered workstation I buy, is purely down to cost. As soon as I can start getting good-quality, high-capacity SSDs at a cost comparable with HDDs, I'll be using them. And I'll continue to do what I currently do with my other storage arrays: I'll keep a spare disk. If a drive fails after 3 years, I'm not that bothered.

Hybrid hard drives set to heat up the market--for Vista
http://www.techrepublic.com/article/hybrid-hard-drives-set-to-heat-up-the-market-for-vista/6129456
The interim jump from conventional drives to hybrids really didn't make sense or catch on all that much. SSDs were coming down the pipe very quickly, getting faster and cheaper.
I still use some conventional hard drives, especially for the 3.5 TB of media storage I have, but I'm slowly converting as many systems as I can from HDDs to SSDs. They make acceptable backup targets, but otherwise HDDs are on their way out. In another five years the market for mechanical drives will be small indeed, since SSD sizes will continue to grow and prices will keep dropping.

My supervisor was just telling me about an article he read online that detailed a long-term test of 10 different SSD drives from a variety of top manufacturers and how they ALL suffered complete failure within 2 years of being installed. The article went on to say that the current technology in flash memory doesn't appear to stand up to the continuous I/O activity that would be encountered in the average server. I asked, but he could not remember exactly where he saw the article.
However, after doing a bit of Googling, I did find several reviews/articles/reports that seem to indicate that SSDs, at the least, are not any more reliable than traditional hard drives. In fact, some numbers seem to show that SSDs actually have a marginally *higher* failure rate than HDDs. Personally, whether I had found those articles or not, I'm not convinced that SSD technology is advanced enough to justify the extremely high dollar/GB ratio.

Intel IT uses solid-state drives in all newly purchased laptops and retrofitted PCs. I know that once I got mine, I noticed a big improvement in speed and reliability which made me quite happy. We recently did a study that looked at 45,000 SSDs in our environment and found an 87% reduction in the annualized failure rate, as compared to hard disk drives. You can read more in our paper here http://intel.ly/qumkS2
Janet G, IT@IntelSME

Ideas on which users/workloads suit SSDs:
Any RAID-10 or RAID-1 (mirroring) drives
Servers:
Database - yes, but with multiple drives for redundancy. Index files first, followed by temporary sort space, and then data.
OS drive - yes
.EXE, .DLL and XML files - yes
Backup drives - eh?
Web servers? - possibly
Power users - GIS, database, application developers, scanning stations for LaserFiche, tech writers - oddly, yes
Home - good to have for us techies, but for those who only use Yahoo and email? Maybe not. Hard drives last longer.
Thin clients that only use a web browser?
Should I defrag an SSD? I think so. It keeps down "split I/O": it keeps NTFS from thinking the file is fragmented across multiple clusters, so it issues just one I/O request.
What do you think?
RAID-5 SSD?

I purchased a 500 GB hybrid drive for my laptop for $89. A 240 GB SSD is still around $400. When an SSD costs more than a decent laptop, there will still be a market for lower cost, although slightly slower, traditional or hybrid hard drives. My hybrid drive was only about $15 more than a traditional hard drive of the same size. That extra $15 is well worth the price. I cannot yet justify the $400 price tag to get an SSD. If it was $200, then my hybrid drive may get squeezed out.
A hybrid drive is a compromise, and when the cost difference between traditional hard drives and SSDs gets compressed, more people will move to SSDs. For now, that time has not yet come.
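For comparison's sake, the quoted prices work out to roughly the following cost per GB (a quick sketch using only the figures in the comment above):

```python
# Cost-per-GB comparison using the prices quoted above.
drives = {
    "hybrid 500 GB": (89.00, 500),   # (price USD, capacity GB)
    "SSD 240 GB":    (400.00, 240),
}

for name, (price, gb) in drives.items():
    print(f"{name}: ${price / gb:.2f}/GB")
```

At these prices the SSD costs roughly nine times as much per gigabyte, which is the gap the commenter is weighing against the speed benefit.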

Windows did its usual bloated updates. I went from a less-than-11-second cold boot to 30 seconds in 8 months. Note: I have a Crucial 120 GB flash drive and a Hitachi 4/500 flash/rotational combo drive.
That said... opening apps is still blazingly fast, with the exception of VMware, which stalls when opened; after it wins its resource fight, the VMs load up just fine and work almost as fast as the host.

If we have to deal with limited write-life hardware, then OSes will have to be organized so that the unchanging parts can be distilled to the SSD, while write-intensive parts can be kept on traditional platters.
Isn't it like this already?

I have a doctor who wants a faster computer; well, a notebook with an i7 CPU and 8 GB of RAM that blows most systems into the weeds. He's complaining that it's too slow, and the possibility of fitting more than one [i]internal[/i] HDD doesn't exist.
The sub-$100.00 guess for 2.5-inch drives isn't really on, as I've just looked at the price of these things. An A-Ram 40 GB drive is AU$99.00 + 10% tax at my [i]trade[/i] buying price, and it's not my first option; something like the 60 GB Corsair at $105.00+ is more like it, but even then it's only 60 GB, hardly the ideal option for a single-disc system.
A Corsair 115 GB is $178.20+ and may be just about big enough, but if you really want to get realistic, a 300 GB Intel SSD is $2,178.00+, which I personally cannot see finding many takers, and even then I would be removing a 600 GB mechanical drive which isn't exactly slow to begin with.
OK, so maybe you could go with a domestic-class 300 GB SSD from Intel, which is only $628.90+. That's still more than the cost of the RAM I fitted to this notebook, and while 2 x 4 GB Corsair SO-DIMMs may not be the cheapest available, they are very reliable; for only a few hundred dollars more you can have a domestic-class SSD with an unknown reliability rate.
While the price may be good, I personally would be thinking that the enterprise-class drive at over $2K would have to have a better reliability rate, but frankly SSDs just are not there yet. They are cheaper than they used to be and are coming down in price, but they still have a long way to go to compare favorably to mechanical drives. :0
While I may consider one of these drives for a desktop, loading the OS to it where size isn't important and moving the data and swap file to mechanical drives where bad sectors forming will not be an issue, that's still going to slow the thing down quite a bit. OK, with 8+ GB of RAM hopefully the swap file will not get used that much, but with Windows systems I simply do not trust M$ not to bloat the system into heavy swap-file use within a few years, which would be anathema to all SSDs.
OK, so the Intel drives claim 1.2 million hours between failures for the cheap one, the Corsair drives come with a 3-year warranty, and the A-Ram drives come with a 2-year warranty. On paper they sound good, but who could trust them when you have a complaining, demanding client who you just know is going to end up losing all of their data? :D
Col

It will be much harder to predict the death of an SSD. It depends on how heavily the user writes to it.
So far, anyway, mechanical drives tend to last a number of years (often 5 or more), however lightly or hard they are used (with exceptions, of course).
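One common back-of-the-envelope way to make that prediction is from the drive's terabytes-written (TBW) endurance rating, dividing it by the expected daily write volume. A rough sketch; the rating and write rate below are hypothetical, not from any particular datasheet:

```python
# Rough SSD lifetime estimate from a terabytes-written (TBW) endurance
# rating. Both numbers are hypothetical illustration values.
tbw_rating_tb = 72      # endurance rating: total TB the drive may absorb
daily_writes_gb = 20    # assumed average host writes per day

days = (tbw_rating_tb * 1024) / daily_writes_gb
print(f"Estimated write-endurance lifetime: {days / 365:.1f} years")
```

With numbers like these the write budget outlasts the typical replacement cycle, which matches the commenter's point that usage pattern, not calendar age, is what matters.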

But that doesn't mean SSDs can replace HDDs.
Will SSDs have a (growing) place in computing at all scales? Sure, that seems likely.
Will HDDs ever be phased out in favor of SSDs? That seems unlikely (the technology to come after SSDs might be a different matter).
Maybe we'll someday get a file system which can allocate data to storage according to use patterns, so that things that run well on HDDs go there (long continuous-read files), whereas smaller, more erratically read files go to SSD.
And why not have swapping and hiberfiles and pagefiles etc. go to easily replaceable storage (a securely recyclable storage, made to be fast, external and inexpensive).
Of course, before that happens we could have quantum-mechanical storage, making the whole thing moot.

You might be referring to this blog post:
http://www.codinghorror.com/blog/2011/05/the-hot-crazy-solid-state-drive-scale.html
I know we humans are prone to overvalue anecdotal evidence because we can relate to stories more than statistics, but no long-term statistical analysis I've ever seen has shown that this has any correlation to the representative SSD experience.
I should also point out that the author of the post says, "Solid state hard drives are so freaking amazing performance wise, and the experience you will have with them is so transformative, that I don't even care if they fail every 12 months on average! I can't imagine using a computer without a SSD any more; it'd be like going back to dial-up internet or 13" CRTs or single button mice. Over my dead body, man!"

After actually looking at the material more closely, I have to call the results, umm... difficult to apply. Basically, it says that SSDs that had been deployed for less than 6 months were much more reliable than mechanical drives that had been deployed for more than 6 months. That's barely relevant. I guess it's good that you've ruled out that new SSDs are less reliable than old mechanical drives, and that's a good first step.
For this to have any relevance it needs to compare 6 months of SSD deployment to 6 months of mechanical deployment; perhaps those new desktops that are being retrofitted with SSDs have reliability information available?

Thanks for the link!
Of course, this is deployed in a mobile fleet where the nature of SSDs should lead to significantly higher reliability just as a side effect of mechanical damage to conventional drives.
Do you have any data on the failure rate compared to drives which aren't moved/jostled/jarred on a regular basis?

Files on SSDs are intentionally fragmented.
Think about it: if you have 64 memory chips in the device, would it be faster to read/write simultaneously across 64 chips or just 1?
As for application.. the only area where SSDs aren't massively faster than mechanical drive arrays is in sustained sequential read/writes. The primary advantage of SSDs in scaled arrays is in the latency, not the sustained speeds.
EDIT: Changed "mechanical drives" to "mechanical drive arrays." Added some more stuff.
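The striping idea above can be sketched as a toy model: pages of a write are dealt round-robin across the flash channels so the chips can work in parallel. The channel count and page size here are arbitrary illustration values, not any controller's real geometry:

```python
# Toy illustration of channel striping: a write is split into pages and
# spread round-robin across flash channels so they can work in parallel.
NUM_CHANNELS = 8
PAGE_SIZE = 4  # bytes per page -- tiny, for illustration only

def stripe(data: bytes, channels: int = NUM_CHANNELS):
    """Assign each page of `data` to a channel, round-robin."""
    pages = [data[i:i + PAGE_SIZE] for i in range(0, len(data), PAGE_SIZE)]
    layout = {ch: [] for ch in range(channels)}
    for i, page in enumerate(pages):
        layout[i % channels].append(page)
    return layout

layout = stripe(b"0123456789abcdefghijklmnopqrstuv")  # 32 bytes -> 8 pages
for ch, pages in layout.items():
    print(f"channel {ch}: {pages}")
```

A file laid out this way is "fragmented" by design, which is why defragmenting an SSD buys nothing at the device level.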

Funny that you guys are complaining about drives that cost less than $200. My first 100 GB HDD from WD cost $270. Get real, the price of SSD storage is dropping like a rock in a well. In six months they'll be darn near giving them away.

And you can easily make one from a discarded desktop, as long as it has 3 half-height 5.25-inch bays accessible from the front. A hard-drive cage with a built-in SATA backplane will cost a little over US$100, depending on where you buy. But there are also 4- and 3-disk cages which cost less. The 4-disk cage still requires 3 of the 5.25-inch bays, but the 3-disk cage will fit into 2 such bays.
Use Linux for the server OS because you can customize it to your friend's needs.
When he gets home, all he needs to do is plug his NB into the home network via the cable and let the server handle things. If he ever loses the NB or it is stolen, all he's lost is the day's work; all the rest is on his server, properly encrypted so he alone can have access. And you can even set it up so his server calls a more secure server in a place where it won't get stolen or burned on a regular basis for even more security. And remember the files are still encrypted, his backups are done automatically, as long as he remembers to plug into his home network.
The total cost of materials, if you can pick up a functioning desktop is only the drives, which you can add one at a time, (2TB

There's no need to look at enterprise SSDs for this application. Those drives are provisioned at sizes MUCH smaller than the actual memory so that there is "spare" memory to use as cells fail over time. Those MTBF numbers are also geared for their intended application; for enterprise SSDs that's 24/7 use delivering the kind of IOPS normally delivered by a 4-10 drive SAS array.
As far as price ranges, 240GB is about the size limit for high-end consumer grade SSDs (which are more than robust enough for very high stress workstation usage). With some shopping you can get 240GB for under $300. 160GB is available in high-end SSDs for under $200.
Keep in mind of course that all those drives have significant space already provisioned for write-balancing and failure (a 240GB drive is really a 256GB drive).
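The 240 GB / 256 GB example above corresponds to the following spare-area fraction, a quick sketch of the arithmetic:

```python
# Over-provisioning: spare NAND reserved beyond the advertised capacity,
# using the 240 GB advertised / 256 GB raw example from the comment above.
raw_gb = 256
advertised_gb = 240

spare_fraction = (raw_gb - advertised_gb) / raw_gb
print(f"Spare area: {spare_fraction:.1%} of raw NAND")
```

Enterprise drives simply push this fraction much higher, trading visible capacity for endurance headroom.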
I have no idea why people are worried about the swap file being overused. That file is being rewritten to different chips all the time and isn't (and doesn't need to be) stored as some sort of contiguous file block.
EDIT: Also, because of the nature of SSDs, if you intend to use full disk encryption on an SSD, consider anything on the drive prior to encryption vulnerable even if it was "secure erased" as those technologies are designed to operate with mechanical drives. There's an IBM white paper on the subject somewhere. Anyway, just encrypt it before you put any PHI on it.

In this case, the doctor can complain all he wants. If he wants the extra speed, he has to PAY for it. He's bought a Ferrari, and is complaining it won't go 300mph. Well, dump a bunch more money into it, and it will! BTW, what would he think about the Raptor X? Half the space he's got now, but it'd have the extra speed...

"So far anyways, mechanical drives tend to last a number of years (often 5 or more years), despite how light or hard they are used (with exceptions of course) "
My son uses a pair of SATA2 RAID0ed SSDs as "working drives" for his video rendering (it's over three times faster than mechanical drives) and in about three years of very heavy use hasn't seen a malfunction.
Not to say he never will, but I think modern SSDs are being unfairly stereotyped. They're really pretty good products (and they make my 900 MHz Celeron-powered Eee PC almost bearable :)

and I found that if I set my power settings too far toward the saving side, my drives keep shutting off. I'm assuming today's applications use RAM more than they need to write any permanent data to the hard drive, so my drives keep falling asleep from the lack of write activity. I'm not being very scientific, but let's just say my emotional side says that SSDs wouldn't wear out any sooner than good RAM would. I haven't had to replace RAM for failure in anyone's computer for some time. They usually buy a new one before that happens.

Sorry all, didn't mean to come across as SPAM. Figured including a link would be the easiest way to share info on this topic.
GreatZen:
To answer your first question, we are 80% mobile so our SSD deployment has been focused on notebooks. We don't have reliability data for desktops.
Re: your second question, you are correct that it is not a perfect comparison, but it is the best data that we have to date. We are using this first-year data to help validate our prior assumptions and guide our decision making. As we get better data with time, we will continue to refine our analysis. To restate the numbers in the paper, we ended our first year of deployment with 45k SSDs which had been deployed between 1 and 52 weeks, with an average of 24.4 weeks; the HDD ARR/AFR was based on 2007 data of 80k HDDs which had been deployed between 0 and 4 years.
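For readers wondering how an ARR/AFR is derived from figures like those, here is a rough sketch. The fleet size and average deployment come from the numbers above; the failure count is purely hypothetical, just to show the arithmetic:

```python
# Annualized failure rate (AFR) from fleet exposure. Fleet size and
# average deployment are from the comment above; the failure count
# is a hypothetical illustration value.
drives = 45_000
avg_weeks_deployed = 24.4
failures = 130                  # hypothetical

drive_years = drives * avg_weeks_deployed / 52
afr = failures / drive_years
print(f"Exposure: {drive_years:.0f} drive-years, AFR: {afr:.2%}")
```

Expressing both fleets in drive-years is what makes the SSD and HDD populations comparable despite their very different deployment windows.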

Is it borderline? Yes, because Intel is recommending drives that Intel makes. Is it spam? No, mainly because 1) it is on-topic and 2) the link leads to a white paper rather than a sales site - still borderline, but more indirect than spam.

For more-or-less static data. For Windows, I'd move nearly everything but the OS to another drive.
I still don't understand why these drives are referred to as "solid-state". I don't know of any drives that used plasma (vacuum tubes/valves) for anything.

The mobo has been drinking
my harddrive is asleep
and the combo went back to New York
the router has to take a leak
and the carpet needs a haircut
and the cubicle looks like a prison break
cause the printer's out of cigarettes
and the keyboard's on the make
and the mobo has been drinking
the mobo has been drinking...
and the menus are all freezing
and the sysop's blind in one eye
and he can't see out of the other
and the HR-consult's got a hearing aid
and he showed up with his mother
and the mobo has been drinking
the mobo has been drinking
cause the CEO's a Sumo wrestler
cream puff casper milk toast
and the V.P. is a mental midget
with the I.Q. of a fencepost
cause the mobo has been drinking
the mobo has been drinking...
and you can't find your manager
with a Geiger counter
And she hates you and your coworkers
and you just can't get reimbursed
without her
and the box-office is drooling
and the spool server's on fire
and the manuals were fooling
and the sys-trays have retired
the mobo has been drinking
the mobo has been drinking
The mobo has been drinking
not me, not me, not me, not me, not me
http://www.youtube.com/watch?v=BPPtrqvHGEg

That's what hybrid drives are for. 4 GB for the OS and most used programs, and the rest goes onto the magnetic platters.
The only company making them that I have seen is Seagate and their biggest drive is a 500GB/4GB 2.5-inch laptop drive, though there is nothing preventing one from using it in a desktop.

You mean to say you entrust your (very personal) data to a hard drive when you know neither where it is nor who has custody of it?
You are leading an interesting life in the Chinese curse sense of it.
I would entrust my personal data to no one, and especially NOT the gov't, other than what they absolutely MUST have, since it is their data after all, like SINs and other such stuff. Beyond that, I only trust myself.
So you can understand, after all the data leaks which have happened all over the world, why I don't trust anyone with my personal data. And when it comes to my company's data, I am three times as paranoid.

I've tried to get a straight answer on this several times. Everyone goes on about moving the OS to an SSD, but I thought the swapfile would be a problem as it can be written a LOT of times.
Though, with 8 GB, I find my PC is no longer touching it.

Oh, right. They mean a swivel display that folds over.
It would be mightily confusing if solid-state physics adopted this strange use of the term. Or if "convertible" automobiles twisted in half and folded themselves up.

I think "solid-state" has been casually extended to mean "no moving parts," the concept being that the design is a fixed (solid) configuration rather than plastic or dynamic in configuration.
As far as etymology, I think it was actually intended to contrast these types of devices using non-volatile memory from other RAM drive devices/software that used volatile memory. I guess "solid-state" was meant to equal "persistent-state."
As a person who has been annoyed for years about "convertible laptops/tablets" or laplets referred to as tablets, I understand your hesitation to use solid-state.