Posted by Unknown Lameron on Thursday January 10, 2013 @01:34PM
from the please-send-three dept.

crookedvulture writes "SSD prices are falling as drive makers start using next-generation NAND built on smaller fabrication processes. Micron and Crucial have announced a new M500 drive that's particularly aggressive on that front, promising 960GB for just $600, or about $0.63 per gigabyte. SSDs in the terabyte range currently cost $1,000 and up, so the new model represents substantial savings; you can thank the move to 20-nm MLC NAND for the price reduction. Although the 960GB version will be limited to a 2.5" form factor, there will be mSATA and NGFF-based variants with 120-480GB of storage. The M500 is rated for peak read and write speeds of 500 and 400MB/s, respectively, and it can crunch 80k random 4KB IOps. Crucial covers the drive with a three-year warranty and rates it for 72TB of total bytes written. Expect the M500 to be available this quarter as both a standalone drive and inside pre-built systems."

These drives are typically used for the OS and whatever apps you want the fastest performance from: fast boot times, quick load times, quick response within the application, etc.

But even with 500GB, some people have so many apps and games that 500 is pushing it... so they have to decide which applications they want fast performance for and which they can just throw on their large HDD.

Some people I know don't want one because they can't fit their 3TB movie collection on them. That's not what they're really for at the moment since the sizes aren't that high. And besides, the average person doesn't really need the performance of a SSD just to watch a movie. To edit/scratch/whatever perhaps, but not to watch movies or listen to mp3s. A slower HDD is fine for that.

And besides, the average person doesn't really need the performance of a SSD just to watch a movie. To edit/scratch/whatever perhaps, but not to watch movies or listen to mp3s. A slower HDD is fine for that.

Um, the average person doesn't need SSD performance just to watch a movie??? I don't think anyone needs SSD speeds to watch a movie. Even uncompressed 1080p24 video at 16:9 aspect ratio is only 149.3 MB/s. And nobody watches a movie as uncompressed video, unless you're in the editing room, in which case

The average person just watches a movie: a 5,400 RPM drive is more than fine.

Beyond-average users might watch a movie that they're ALSO going to be editing... in which case I imagine a lower random seek time would be nicer if they're cutting/pasting/adding effects/etc. Home movies, stuff for professional work, etc.
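For the curious, the 149.3 MB/s figure above checks out, assuming 24-bit color (3 bytes per pixel):

```python
# Uncompressed 1080p24 video bandwidth, assuming 3 bytes (24-bit color) per pixel.
width, height = 1920, 1080   # 16:9 at 1080p
bytes_per_pixel = 3          # 24-bit color
fps = 24                     # cinema frame rate

bytes_per_second = width * height * bytes_per_pixel * fps
mb_per_second = bytes_per_second / 1_000_000  # decimal megabytes

print(f"{mb_per_second:.1f} MB/s")  # → 149.3 MB/s
```

Even that worst case is well within what a modern hard drive streams sequentially, which is the point being made.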

Spot on. What most people don't get is that regular hard drives provide good streaming throughput on large unfragmented files. You want movies and other large, mostly-read files on spinning disk because it's cheap. Where SSD rules is random IO and IOPS. Small-file reads and writes, logging, and databases just excel on SSD. People bitching that the SSD isn't big enough for their media are the same people who would bitch that their race car doesn't hold as much as a semi (lorry).

256 was enough for me: the OS, Office, a couple of AAA games, and my development stuff.

After a while, I needed to install some other stuff too... apps that I needed to run fast for development. I got a little closer to the 256GB limit than I felt comfortable with, so I upgraded the PC to a 512GB SSD and moved the 256GB SSD to a cheap laptop.

256GB probably would've been fine in the long run, but I wanted a little wiggle room.

Meanwhile, I have about 1.5TB of iTunes videos and such on a second hard drive.

I have a 750 gig Seagate hybrid drive on my gaming computer. Only thing on it is the OS, games, and a few apps. No movies, no music, no "junk drawer". I'm currently using 562 gigs. That's with all but the most recent restore point deleted, and a recent disk cleanup. I don't even have productivity software installed.

So a 960 gig SSD is of interest to me. What would be of more interest is a 2tb or larger hybrid drive with a moderately sized SSD. Something like the 3tb fusion drive Apple has would be excellent. I've been quite happy with the performance of my hybrid drive and I'd rather pay $200 or so for a 2tb hybrid than $600 for a 960gb SSD.

A bunch of my friends have a large gaming library installed and like their SSD because it reduces load times and such.

However, said friends also do NOT delete the older games they no longer play... in case they want to return to them in x months/years. This way they don't have to go to the hassle of re-installing AND they get to keep their save-games + progress. As games are now measured in the multi-GB range, a 250GB drive fills up (relatively) quickly.

I've made friends with mklink in newer versions of Windows... install to C:, then when you aren't using it a lot, move it to D:\!c_maps\Program...\game\ and link it back to C:... My first SSD was 60GB and was not nearly enough... got used to it. About to drop in a 240GB drive as my main drive; my ESXi server is using a pair of 480GB drives... works great.

It's sized for people who don't want to think about what to store on the drive and can pay 12x the price of a regular drive to do that, or for people who need to work on large files at high speed. In the same way, the market for 64GB-of-RAM machines is small but high value. People who want that performance will pay for it.

I agree with a couple of other people though; I think the way this makes the most sense to go is mainstream commercial drives with a fast SSD cache and a slower magnetic drive, and the

I think it's plenty for any desktop - I use an SSD for both applications and data (productivity, web browsing and gaming), and average less than 1GB writes a day. Even if you're downloading a new game from Steam every day, say 10GB of writes a day, that's still 20 years of usage. If you're writing lots of tiny files and the disk is mostly full (pretty much the worst case scenario for SSDs) so the write amplification is, say, 5x, that's still 4 years of usage.
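The endurance arithmetic in that comment, as a quick sketch (the function name and the 5x write-amplification figure are just illustrative assumptions):

```python
# Years of life from an endurance rating at a given daily write volume,
# optionally inflated by a write-amplification factor.
def years_of_life(endurance_tb, gb_per_day, write_amplification=1.0):
    effective_gb_per_day = gb_per_day * write_amplification
    days = endurance_tb * 1000 / effective_gb_per_day
    return days / 365

print(f"{years_of_life(72, 10):.1f}")     # → 19.7 (a 10GB/day Steam habit)
print(f"{years_of_life(72, 10, 5):.1f}")  # → 3.9  (same, with 5x amplification)
```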

72TB seems odd. Each cell can only be rewritten ~75 times (72TB ÷ 960GB, assuming good wear levelling)? Surely you should be able to rewrite cells 1000s of times, and have some spare ones to replace any that fail early.
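Working that reading of the spec backwards (my arithmetic for the 960GB model, not a figure from Crucial):

```python
# 72TB of rated writes spread over a 960GB drive is only ~75 full-drive
# write cycles -- far below the thousands of program/erase cycles NAND
# cells are usually rated for, which suggests the 72TB number is a
# conservative warranty figure rather than a physical cell limit.
endurance_tb = 72
capacity_gb = 960

full_drive_writes = endurance_tb * 1000 / capacity_gb
print(full_drive_writes)  # → 75.0
```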

I'd say you would need a very *write heavy* workload to burn through this. I've shot a film on a DSLR and at most generated 20-30GB in a day. So if I loaded that onto the drive each day, and then threw it away (or copied it somewhere else) at some point so I could keep filming more, that would be about 10 years.

I think there's definitely a market for these drives. Think of an HTPC, for example. I leave mine on all of the time. I have terabytes of video, BUT that video is only written once. It rarely ever gets anything but reads. Yes, I could and do use an HDD for this, but they aren't silent or low power. This SSD could be just what the doctor ordered for this type of use.

Crucial wouldn't confirm the write-erase limit of the m4's flash chips, but it does publish endurance specifications for the drive as a whole. According to the company, the m4 can write 72 terabytes of data over its lifetime. Amortize that over a five-year span, and you're looking at 40GB per day.

Just noticed that Crucial made the same claim on their m4 drives... only 72TB seems like a lot more when you're dealing with a 128/256GB drive.

What are the maximum write cycles for today's SSDs? I'm sure they are similar.

Typical figures:
SLC: 100,000
MLC: 10,000
TLC: 5,000

You get more storage for the price with MLC and TLC, which is why they're popular. But I'd much rather have a 128GB SLC drive than a 1TB MLC drive, for the same price. What's sad is that it's almost impossible to find SLC drives now, due to consumerism.

Of course SLC is still around, for people like you who fail to realize that a disk with almost 10 times the capacity but 1/10th the per block endurance is just as reliable if the bigger capacity isn't used, thanks to wear leveling, and probably much faster due to parallel access to more chips.

Of course SLC is still around, for people like you who fail to realize that a disk with almost 10 times the capacity but 1/10th the per block endurance is just as reliable if the bigger capacity isn't used, thanks to wear leveling

You, on the other hand, fail to realise that this is only true if every write is a rewrite, so wear levelling applies. It isn't, so it doesn't.

A proper wear leveling algorithm takes care of that by periodically remapping content that doesn't change often so that low-use blocks can be reclaimed into the free pool. So in effect, yes, it is true unless your wear leveling algorithm sucks.
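A toy sketch of that kind of remapping, with made-up class and field names (real controllers do this in firmware, per flash block, inside the flash translation layer):

```python
# Toy wear leveler: logical blocks map to physical cells, each with an
# erase count. A write to a hot block is steered onto the least-worn
# cell, and the cold data living there is migrated into the old cell.
class WearLeveler:
    def __init__(self, n_cells):
        self.erase_count = [0] * n_cells
        self.mapping = {lb: lb for lb in range(n_cells)}  # logical -> physical

    def write(self, logical):
        current = self.mapping[logical]
        coldest = min(range(len(self.erase_count)),
                      key=self.erase_count.__getitem__)
        if self.erase_count[coldest] < self.erase_count[current]:
            # Swap: migrate the cold block's data into our worn cell,
            # and take over the lightly-used cell for this write.
            cold_logical = next(lb for lb, pc in self.mapping.items()
                                if pc == coldest)
            self.mapping[cold_logical] = current
            self.mapping[logical] = coldest
            current = coldest
        self.erase_count[current] += 1

wl = WearLeveler(4)
for _ in range(8):       # hammer a single logical block
    wl.write(0)
print(wl.erase_count)    # wear spreads evenly: [2, 2, 2, 2]
```

Without the swap step, all eight writes would have landed on one cell; with it, even a write-one-block-forever workload wears the whole pool evenly.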

What would be really nice is if you could buy a single drive and then decide to operate it in SLC or MLC mode depending on your particular needs. I wouldn't think it would be that difficult, or expensive, to modify the read/write circuitry to handle either mode, and it would even let the hardware degrade semi-gracefully: you could operate your snazzy new SSD in 1TB mode for a year or two until reliability became an issue, and then switch it to the far more reliable 512GB mode and reformat to get another d

You can't operate in "SLC/MLC" mode; whether a cell stores a simple '0' or '1' or uses multiple voltage levels to encode multiple bits is baked into the chips' physical layout. Each choice has implications for the physical cell design and the densities that can be achieved.

It is not a software compression option that you can just turn on or off.

I know that - but basically, given one storage location you could store 3 bits (TLC) with 8 voltage levels, 2 bits (MLC) with 4 levels, or 1 bit (SLC) with 2 levels; which is done is a matter of the control circuitry. As flash cells degrade they're no longer capable of maintaining voltage levels precisely enough to represent the desired number of bits. You could however "downgrade" your controller so that rather than trying to store two or three bits in each cell you only store one. The easiest way would lik
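The level-to-bits relationship described above is just a base-2 logarithm; a trivial sketch:

```python
import math

# Bits stored per flash cell as a function of distinguishable voltage levels:
# n levels can encode log2(n) bits.
def bits_per_cell(voltage_levels):
    return int(math.log2(voltage_levels))

print(bits_per_cell(2))  # SLC: 1 bit
print(bits_per_cell(4))  # MLC: 2 bits
print(bits_per_cell(8))  # TLC: 3 bits
```

Going the other way shows why TLC is hardest on worn cells: each extra bit doubles the number of voltage levels the cell must hold apart.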

"What's sad is that it's almost impossible to find SLC drives now, due to consumerism."

Most people don't need SLC. You only need SLC if you're writing a lot to a drive (things like enterprise apps). The thing killing SLC is cost and the fact that hard drives still have a huge lead because they are dirt cheap. Even a 512GB SSD right now is roughly $369; you can get 9TB of hard drive space (3x3TB drives) for that kind of money. So you can get roughly ~17x the space of a 512GB SSD for the sa

Even 5,000 rewrite cycles is a lot. It's been 72 days since I last rebooted my laptop, and in that time I've written 600GB to its 256GB SSD. I rebuild LLVM every few days, so I probably have a higher than average write rate. I also ripped a few DVDs to watch on the train when I visited my mother over Christmas, so that adds a little spike, but let's assume that's a pretty average burn rate. That's 0.0326 rewrite cycles per day, assuming perfect wear levelling. In the magic happy land of perfect wear l

I said in another post that I've written 600GB to my laptop's SSD in the last 72 days. At that rate, writing 72TB would take about 23 years. I do have a 23-year-old laptop in an attic somewhere, but its 60MB hard disk is long since dead.
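That 23-year figure, reproduced (assuming the write rate stays constant and wear levelling is perfect):

```python
# Extrapolate the 72TB endurance rating from observed usage.
written_gb, elapsed_days = 600, 72            # measured on my laptop's SSD
rate_gb_per_day = written_gb / elapsed_days   # ~8.3GB/day

years_to_72tb = 72 * 1000 / rate_gb_per_day / 365
print(f"{years_to_72tb:.1f}")  # → 23.7 years
```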

It's easy to jump to knee-jerk conclusions about some arbitrary number which "feels small". Once you do the math, as the parent just did, it becomes clear that it is perfectly fine. In fact, writing 60GB/day is a HUGE margin for most home users, so the drive should be able to last much longer than three years.

That's because the endurance rating isn't the most optimistic figure set by the manufacturer (unlike laptop battery life, for example). Instead they follow standard usage patterns established by committee (such as JESD) so that they provide somewhat realistic figures.
So, if all you were doing with this SSD was continuously writing large files, then you might expect it to last for 5PB of writes, as 5,000 write cycles on the NAND would allow.

How often do you really rewrite most of the contents of a drive, though? With a 60 or 120 gig drive the vast majority of your space is going to be taken up by relatively static programs (the OS and productivity programs themselves), give or take patch cycles that change some of the stuff. With a 960 gig drive you're now looking at a significant amount of user data, so that might not survive as long, but most drives can also detect a failed sector and turn it off, so it might be that you start to see the eff

First, proper wear leveling considers the entire disk (or at least a particular slice of it) as candidates for replacement, not just the block you're writing. So when you need to write to block 345, rather than rewriting cell 345 (which has been rewritten six times already), it might elect to read block 976 at cell 976 (which has been written only once), migrate that data to cell 345, and write block 345 into cell 976.

First, proper wear leveling considers the entire disk (or at least a particular slice of it) as candidates for replacement

yes, but not really relevant if you're only rarely ever writing anything to the disk at all. Yes, when you do write 1GB of changes it pushes data around to the most suitable sector for it, but you have to have data you are trying to write at all for that to be relevant.

Your capacity should never shrink. With an SSD, by the time you start getting write errors, it's time to copy all the data off the drive and scrap it.

All of my SSD's (and I've got 6 around here right now) started with about 0.1% of sectors not working, and have gradually crept up since then to somewhere around 0.4%. That seems to be pretty normal.

It was only a couple years ago that $500 was the entry price for an SSD of useful size and performance, and they were still quite popular in high-end gaming PCs, and I could well imagine high-end "cost is no object! Give me the biggest numbers!" laptops using them. Still probably more of an enterprise-oriented device, but there are lots of applications there that are fairly write-light but still benefit from SSDs' incredible seek times - web servers, for example.

Which it does on every boot. Doesn't need to be reinstalled. Most retail SSDs come with software which can do the transfer; otherwise, your choice of bootable Linux USB sticks will do the magic (with a Win 7 recovery disk to rewrite the MBR).

You are better off re-installing to a freshly created partition, not cloning an existing disk. There are disk-block alignment issues that may occur otherwise (where a logical block occupies parts of two memory blocks), increasing the wear rate and thus lowering the life of the SSD. The performance won't noticeably suffer at first (it'll still be damn fast compared to a spinning disk), but it's better for the health of the drive in the long term.
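A quick way to sanity-check that alignment concern, assuming 512-byte logical sectors and 4KiB flash pages (sector 2048 is the Vista/Win7 installer default; sector 63 is the old XP-era CHS default):

```python
# A partition is page-aligned when its starting byte offset is a
# multiple of the flash page size.
SECTOR_BYTES = 512   # logical sector size (assumed)
PAGE_BYTES = 4096    # NAND page size (assumed)

def is_aligned(start_sector):
    return (start_sector * SECTOR_BYTES) % PAGE_BYTES == 0

print(is_aligned(2048))  # True  (modern 1MiB alignment)
print(is_aligned(63))    # False (legacy offset; every write straddles pages)
```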

The Kingston drives (and probably some others) come with Acronis TrueImage HD on a CD. Takes about 20 minutes to copy to the new drive by booting from the CD. Disconnect the old drive and reboot. Super easy. And this was a Windows 2008 R2 Server. When I recently upgraded from the 64GB Kingston to a 180 GB Intel, it took 5 minutes and extended the partition to 180 GB automatically.

I've been thinking about it...I wonder how I could transfer an existing Win7 install like that, in Linux it would just be a few lines in fstab...

Crucial sells a kit that lets you transfer the entire contents of a drive to a new one. Includes the hardware and software needed to hook up both drives. I did this with a Win7 laptop when I went to using a SSD. Worked great and did the whole job in about an hour.

I'm not worried about the actual data copying, I'd probably use Robocopy for that, I'm wondering how I'd change from a single-disk filesystem to one where certain apps are on a different disk. How can I delete the original directories and "mount" the ones on another disk while the computer's not running?

I'm wondering how I'd change from a single-disk filesystem to one where certain apps are on a different disk.

With Windows the easiest thing is unfortunately to reinstall the apps. You can move everything over if you keep it on one volume but if you split volumes the only option I'm aware of is to uninstall the apps and reinstall them in the new drive configuration. There may be a better way but I certainly have never seen it.

Actually I just got an idea. If the linking can be done from the CLI (and most things can be on Vista and later), it should be possible from the recovery console. Configure disks, copy data, shut down computer, delete and link moved directories. Looks like the mklink command is what I need.
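The move-then-link flow can be sketched in Python for illustration (the paths here are made up; on Windows the real thing is the mklink command mentioned above, but the idea is identical):

```python
import os
import shutil
import tempfile

# Demonstrate the move-then-link trick: relocate a directory to another
# volume, then leave a link at the old path so apps still find their files.
root = tempfile.mkdtemp()
old_path = os.path.join(root, "c_drive", "Games", "BigGame")
new_path = os.path.join(root, "d_drive", "BigGame")

os.makedirs(old_path)
with open(os.path.join(old_path, "save.dat"), "w") as f:
    f.write("progress")

os.makedirs(os.path.dirname(new_path), exist_ok=True)
shutil.move(old_path, new_path)  # move the data to the big/slow disk
os.symlink(new_path, old_path)   # link the old location back to it

# The app still sees its files at the original path:
with open(os.path.join(old_path, "save.dat")) as f:
    print(f.read())  # → progress
```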

Only if you buy a 1TB hard drive. If you buy a 3TB drive, it's about 12x the cost per GB.

Hard drives have a minimum price they can't go below, because of all the hardware required to get those platters spinning and to read and write to them. Additional capacity doesn't add much cost once you're above that level.

While it's nice to see SSD capacities increasing, the real metric is the cost per gigabyte, which is still nowhere near conventional hard drives. A good number of us have massive multimedia collections; it's still cost-prohibitive to store all of it on SSDs. And at least for the short term, a primary drive over 200GB isn't really something most users need. A select few, perhaps, but not many. This may be something more useful in the enterprise, but then... looking at the specs, it seems it wouldn't survive v

No, it decreases reliability by half. If any one of the drives fails, you cannot recover data off the other.

It's more than that. If p is the probability that one of the drives will fail in a given timespan, the chance of your array staying up is (1-p)^2 = 1-(2p - p^2). The problem is that you need to consider the possibility of either drive failing, so the probability of the array going down is 2p - p^2, which means things are worse than just having two independent drives.

You also have to consider the likelihood of a drive failing at all (there's the possibility that you will cease needing them before they fail). Add to that the likelihood of failure during a given time frame - drives don't have an even failure rate every month of life.

Also, there's the consideration of the failure rate of RAID-0 against 1) a single independent drive that does the work of those n drives in RAID-0, and 2) having the same n drives do that work independently.

Finally, lifespan, which is what I was thinking of - if one drive has a life expectancy of an average of 5 years, given the failure distribution over time, does a second drive in the array take it down to 2.5 years, 4 years, or something else?
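Going back to the two-drive probability arithmetic: with independent failures the array survives only if both drives do, which is easy to check numerically (p = 0.05 is an arbitrary example value):

```python
# Probability that a two-drive RAID 0 array dies within some window,
# given each drive independently fails with probability p.
p = 0.05

p_both_survive = (1 - p) ** 2
p_array_down = 1 - p_both_survive  # algebraically equal to 2*p - p**2

print(f"{p_array_down:.4f}")  # → 0.0975, nearly double the single-drive risk
```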

However, when it comes to a good backup plan, you need to make them lysine-deficient, so that if the velociraptors aren't completely supplied with the amino acid lysine by you, they slip into a coma and die.

I think you would be more likely to want something like this for HD/4K video editing. Problem with that is that the drive can only be completely rewritten about 72 times (72TB of total writes) before it dies, according to the spec.

That's why a lot of these newer SSDs are so cheap. If you can only write 72TB to a 1TB drive that suggests that each cell probably only has a lifetime in the low thousands of rewrites (accounting for write amplification and re-writes due to partial block updates).

Hey, I can put in a second 1TB+ magnetic drive, no problem. I'm WAY more concerned about the number of writes that a flash chip in an SSD can perform before it fails (2000-9000 usually). My estimation says I'd likely completely destroy a 256GB medium quality SSD in about a year. That's a problem. The way I understand it, it's per-chip, not per bit, and there are only like 16 chips in an average SSD. It'd take me a while to write 256GB 9000 times but if a 100MB write hits 3 different chips, that's a pro

I thought that I read somewhere that if one block fails, the entire chip it's on fails as well but I don't know if that's true for SSDs. RAM won't do that but flash drives will do that so it's a tough call.

Oh, I'd have about 150GB sitting there static though from game installs and stuff. Vertex 4 drives will shift the data very rarely when one chip has an 8000 write count and the other has 1 write count from the initial image but that counts as another write. Or it will attempt to. That's over 50% capacity of the drive so there's not enough buffer room to move all of the data. So that results in more frequent shifts and more write operations as it does it fractionally over time. But moving 150GB of data

Assuming you don't have enough, buy some more RAM and make a ram disk. Point your browsers caches and any temp files to your ram disk, along with your pagefile. Have your netflix movies download to it, too (I will usually download all files to the RAM disk, but I know I never download more than 2 GB unattended, if I do need to download a huge file I can manually point it to my spinning rust). My 24 GB RAM system has an 8 GB RAM disk which typically runs about 1/2 full with occasional spikes up to the ful

If you used it in a situation where you maintain static data for reads, it may well last a long amount of time indeed. I agree that most databases could cause the lifetime of this to be less than you'd want, but if the database was mostly used for reads, and updates were relatively infrequent, which you might get in some archival systems, it would be very usable.

At home, I have a relatively small data set, and for things like movies, I've never seen the need to store them for long periods of time. I think

Just as an FYI, we've switched most of our databases to running RAID10 arrays of m4 512GB drives. We now use arrays of 8 drives, and sustain read rates of 1.7GB/sec. We switched over from fiber channel and SCSI arrays of 14 drives each w/ 73 or 146 GB drives in them and a top sustained read speed of 160MB/s. We will probably never go back to that. Whether it's because of the speed increases, the power usage, the noise, or shelf space required. We still use rotational media for slow access things like vi

Why do you want your media collection on an SSD? So long as the disk is fast enough to stream what you need it to stream you should be fine, at least until SSDs are cheaper than spinning disks. Personally, I have a (very) small SSD in my laptop and a spinning drive plugged into my wireless router that anyone on the network can use. Data goes on the NAS, software goes on the SSD. Even an 80gb SSD is large enough (though only just barely) as long as you don't play games like WoW with 16gb installs.

Depends what your time is worth. I consider my time to be worth something, so every minute counts. SSDs improve productivity significantly. I estimate my daily time saving at around 20 minutes since I upgraded to an SSD. So say I cost the company $20 an hour; at the end of the year the company saves roughly $1,700 worth of my salary. Well worth the $150 to $400 investment.

SSD are also a great solution for improving database speeds. Most data in a database doesn't change so the rewrite issue is not really an issue.

There isn't much point in putting video on a SSD anyway. Hard drives are more than capable of sequential streaming at typical video bitrates.

I see two main usecases for this.

1: Laptop users who like to keep a lot of stuff on their laptop. This lets them have the speed benefits of SSD without sacrificing too much on total storage or sacrificing the optical drive.
2: Gamers who like to keep a LOT of large games installed at once.

I really don't care about extra capacity for SSDs. I just set up a new laptop with a 256GB SSD for the OS and two 750GB drives in a RAID 1 for safe storage. So long as the SSD is big enough for the OS and a few apps installed for speed, I'm getting my money's worth. Now, if the SSD craps out fairly quickly, warranty or not, then I have a problem.

I just put my old SSD in my old netbook and it really breathed life into that thing. It's actually usable now. Definitely makes a great upgrade for old laptops (where the spec'd drive is sometimes only 4200 rpm(!?!) and never more than 5400).

Anyways, maybe this year will be the year of SSD, just like the last 30 years.

Wasn't that a few years ago? I can't imagine using a machine with a non-SSD boot drive these days, and I can't understand anyone who knows anything about computers not having one. You don't typically need a big one; I recently got a really nice 160GB Intel for my mother-in-law's laptop for under $100 - made a 2008-vintage Vista machine feel brand new. Without a shadow of a doubt the most significant performance boost you can add t

Following the floods in Bangkok, hard drive production got going again, but prices for those devices have not continued to fall as before. However, as this article points out, that is not the case with SSDs. The result? I suspect that sooner rather than later, significant numbers of people will start buying SSDs instead, realize the advantages over those old spinning platters of rust, and then never look back. When the very few HD manufacturers that are now left finally realize the consequences of their greed an

That crossing point is still a LONG way off; you can get a 3TB drive for about $150, 5% of the cost/GB of this drive. In fact, without something like the in-place re-anneal tech talked about here on Slashdot recently it'll never happen; this drive is already rated by the manufacturer for only 72 full-drive write cycles (72TB), so what do you think the life will be after a few more process shrinks?

It seems to address an issue that just isn't there for most people, in my view. For everyday computing needs 960GB SSD is just way overkill. I'm not a gamer so in my case my needs are even less. I've got a 120GB SSD as my primary boot drive (OS and Apps) with the rest of the data on a separate conventional 500GB drive. On my MacBook Pro it's using about 40GB on the SSD. My Windows laptop is using about the same amount of space.

Now if you're a gamer or doing video production or CAD or running a database ser

For any useful application, I'll have to replace these like toner cartridges, probably even more often.

About every 3 years, if you write 65GB of data to them every single day. What the hell kind of application are you working with that requires you to write 65GB of data a day? I probably write closer to a couple of gigs a day, which means these will last for more like a hundred years.
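Checking both of those figures against the 72TB rating:

```python
# How long 72TB of rated writes lasts at two very different daily rates.
endurance_gb = 72 * 1000

print(f"{endurance_gb / 65 / 365:.1f}")  # → 3.0 (years at 65GB/day, the warranty-style case)
print(f"{endurance_gb / 2 / 365:.1f}")   # → 98.6 (years at a couple of gigs a day)
```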