Posted
by
CmdrTaco on Monday November 24, 2008 @06:31PM
from the i'm-sure-this-will-be-a-bargain dept.

Lucas123 writes "Samsung said it's now mass producing a 256GB solid state disk that it says has sequential read/write rates of 220MB/sec and 200MB/sec, respectively. Samsung said it focused on narrowing the disparity of read/write rates on its SSD drive with this model by interleaving NAND flash chips using eight channels, the same way Intel boosts its X25 SSD. The drive doubles the performance of Samsung's previous 64GB and 128GB SSDs. 'The 256GB SSD launches applications 10 times faster than the fastest 7200rpm notebook HDD,' Samsung said in a statement."

Yeah, it'll be 1 to 3 organs, depending on demand. Demand for soft organs is red-hot right now, so it's pretty well sure to be a triple-donation. Probably the usual combo: eyeball, kidney, testicle. If they want bone, they'll take an arm or a leg, but we haven't done a limb-cut in days.

Good god, man, only suckers with corporate accounts have to shop at CDW. $1,900 for an MLC drive? Try $300 for the same speed specs [newegg.com]. If you want SLC you'll have to double that, but still, CDW is way overpriced these days.

Actually, spinning media will continue to be used in servers that need huge capacities of storage. But for cheaper devices, the speed, energy efficiency, durability, and price of solid state drives will effectively make using spinning media obsolete in the next few years.

To the cousin poster - would you propose that we do monthly-rotating daily backups of our small business server to spinning hard drives? Keep in mind that they need to be easily archivable and the cost of the media is more important than cost of the hardware.

Are you doing off site? How many gigs are you backing up? You might find a RAID NAS array, possibly off site, to be cheaper than you think - and fully capable of keeping all your backups. Or price a dozen or so external HDs.

And hard drives will rule the world for a while when it comes to storage that's on-line and random access but doesn't require especially low latency.

Still, I find it interesting. Right now, on Newegg, the largest HD you can get is 1.5TB for $140. For $278 you can get a 128GB SSD. Call it $2/GB.

It wasn't that long ago that the HD was ~$300, and the SSD, $3k for an 80GB one. Matter of fact, Newegg still lists a 64GB model for $825. Back in 2004, a 250GB HD cost $250 [alts.net]. Anyways - we're looking now at SSDs being available that are 'only' 1/12 the size of the largest consumer HD available at this time, for about double the price. Go back around a year, and you're looking at 10X the price for 1/12th the capacity. We went from a 120X disadvantage to a 24X disadvantage. That's a massive catchup, relatively speaking.

Keeping with Newegg - you can get a 120GB 2.5" drive for ~$50. So it's 5X as expensive to get the SSD - but the SSD is shockproof in comparison, and is demonstrably faster. Go large on the HD? 500GB for $110. Around a 12X disadvantage. At this rate I'll predict that SSDs will replace hard drives in laptops around 2010-2012. Leaning towards 2010. Shortly after that it'll take the server market, at least for systems that lean towards reads. 2016 or so for standard desktops.

SSDs are still over 25 times as expensive for 1 TB of storage. Fixed that for you.

64 GB SSD today = $150. 80 GB hard disk = $40. If you need only 64 GB of storage, as most handhelds, laptops, and desktops do, SSDs are only about four times more expensive today. You can expect SSDs to become cheaper than hard disks in about two years, at least for the smaller capacity drives.

Well, at 80GB, hard drive prices don't scale as well because hardly anyone makes 80GB hard drives anymore. Your comparison is a bit unfairly slanted, because while 64GB might have been fine 3 or 4 years ago, these days Windows takes up 11GB by itself, and in the age of HD movie downloading and double-digit GB games 64GB doesn't quite cut it. However, if you're going for more of an ultraportable type device you can probably get by with 64GB easily I suppose, and add on with an external HDD.

One thing to remember is people like my parents and grandparents. I use over a TB of HD space. My parents haven't even used over 50 yet, and my grandparents even less.

Due to the mechanical components of hard drives, they aren't going to get much cheaper, even/especially in bulk - some of the bottom end we see are the manufacturers putting their old drives on fire sale.

Once the manufacturers can get a 40-80GB SSD for LESS than they can get a HD*, I predict they'll start switching. They're already on the poor side for GB per $.

The cheapest laptop HD on newegg is $50 for 80GB(.625). Lots of 320GB($70,.22), 500GB($110,.22). The cheapest 3.5" HD is $36 for 80GB(.45). $42 for 160GB(.26). A 1TB one runs $95(.095), so per GB it's a much better deal, delivering 4 times the GB per dollar over the 'cheap' 80GB. The cheapest SSD is $20 for 4GB(5). Not very efficient, not even very fast. Going up - $145 for 64GB(2.27). $278 for 128GB(2.17) Not much economy of scale gained, but that's to be expected. 32GB seems to be where the SSDs start flattening out($84,32GB,2.63) at the moment.
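The dollars-per-GB figures in parentheses above can be recomputed in a few lines. These are the late-2008 Newegg prices quoted in the post, so treat them as a snapshot, not current data:

```python
# $/GB from the Newegg prices quoted above (late-2008 snapshot).
drives = {
    "80GB laptop HDD":  (50, 80),
    "320GB laptop HDD": (70, 320),
    "500GB laptop HDD": (110, 500),
    "80GB 3.5in HDD":   (36, 80),
    "1TB 3.5in HDD":    (95, 1000),
    "64GB SSD":         (145, 64),
    "128GB SSD":        (278, 128),
}

# Sort from best deal to worst by price per gigabyte
for name, (price, gb) in sorted(drives.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name:16s} ${price:3d}  {price / gb:5.3f} $/GB")
```

The 1TB drive comes out on top at $0.095/GB, with the SSDs clustered around $2.2/GB, matching the figures in the post.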

Going by this - I figure 3 years before you start seriously seeing SSDs replacing hard drives in laptops - and it'll start on the low ends for cost savings, and the high end for performance.

It'll be another 5-10 before they start doing the same to desktops. Still, I figure upgrading will become common again - fast flash for OS and programs, cheap big HD for most multimedia.

*As long as it can be expected to last long enough that they don't have to do warranty work, and performs at least as well.

Hmm, nice calculations. I agree it's going to take a while for flash to overtake spinning disks, and hopefully eventually we'll get dual/hybrid drives with maybe 32GB or 64GB of fast flash for OS and key programs and maybe 1TB for media and mass storage, which would be awesome. That is something I would love to have in a laptop, and hopefully in most cases the HDD wouldn't even need to spin up unless you're playing a movie or something, and you can even have it use the flash as a giant cache and the OS could copy the m

I agree that SSDs will find a niche soon, and that niche will continue to grow. My point is simply that spinning drives are nowhere near being a "dead" technology as the original post stated.

You can expect SSDs to become cheaper than hard disks in about two years, at least for the smaller capacity drives.

Is this just speculation? In any case, it seems likely. Every hard drive requires a calibrated motor and many other specialized parts. The cheapest hard drive can only be so cheap, and so at smaller sizes,

Samsung would not release the price of the drive because, it said, the product is targeted at the reseller market and those vendors would have to determine the mark up for the drive in their desktops and laptops.

Damn -- How can I bitch about how expensive it is when they won't even tell me!

Putting a 128 GB Samsung SSD into a Lenovo laptop (instead of the stock 80 GB 5400 RPM hard disk drive) costs $499 today. You could expect to pay at least $1000 to get a 256 GB Samsung SSD right away. Of course, the price will drop to reasonable levels in a year or two.

So it launches applications 10 times faster [sic] (should say in 1/10 the amount of time), but the article only claims speed improvements of about 3.5 to 1. People need to seriously examine how they quote or accept statistics.

Jim Elliott, vice president of memory marketing at Samsung, said the new 256GB drive can store 25 high-definition movies taking up 10GB of space each in just 21 minutes, which he said is a significant advancement over a 7200rpm hard disk drive, which takes about 70 minutes.

Ah yes, but you don't have the seek times of the 7200rpm drive which are at best ~7ms. And since opening an application involves opening lots of different files (in different physical locations on the drive), this is where launching an app can be 10x faster.

So for straight writing a single, large, contiguous piece of data, it's only 3.5 times faster. For loading 200 random, tiny files, it's ten times faster.

This is something that a lot of people tend to overlook, either because they don't understand how a hard drive works, or because they don't stop and think about it. Loading programmes, especially ones which rely on libraries, translation files, multimedia, etc. at other locations on a disk, would greatly slow down a HDD in comparison to an SSD.

Contrast this with SSDs, which are pretty much random access devices. To read each of those files from an HDD, there are basically 3 time factors to consider.

1. Seek time. The time it takes to move a reader head to a specific track (ring of data on a platter). Assuming that there is only this read taking place, you can pretty much assume that the reader head moves from its current location to the correct spot on the disk right away. Things are not always this pretty, though.

2. Rotation time. On average, you will have to wait half a rotation for the correct spot on the disk to spin around to the reader heads. There may be algorithms that mitigate this by reading even while waiting: if the read is large enough to span a significant portion of the track, the drive can buffer whatever passes under the head and splice it in later, but I don't know whether this is actually done.

3. Read time. This is the amount of time required to read the data off of a single track, and can take up to 1 rotation of the platter to complete.

So while the GP has a point in that people need to be careful about what kinds of statistics they believe, he/she glosses over the fact that reading a single piece of data with an HDD is hardly a random access, constant time operation (or linear time for n pieces of data).
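To put rough numbers on the seek-plus-rotation argument above: at 7200rpm, half a rotation is about 4.2ms, and with the ~7ms seek mentioned earlier, each random access costs ~11ms before any data moves. A quick sketch (the 0.1ms SSD access time is an assumed ballpark, not a measured spec):

```python
def hdd_avg_access_ms(seek_ms=7.0, rpm=7200):
    """Average positioning cost per random access: seek + half a rotation."""
    half_rotation_ms = (60_000 / rpm) / 2   # ~4.17 ms at 7200 rpm
    return seek_ms + half_rotation_ms

# An app launch touching 200 scattered small files (factor 3, the per-track
# read time, is ignored here since it's small for tiny files):
n_files = 200
hdd_ms = n_files * hdd_avg_access_ms()   # ~2233 ms spent just positioning
ssd_ms = n_files * 0.1                   # assumed ~0.1 ms per flash access
print(f"HDD ~{hdd_ms:.0f} ms, SSD ~{ssd_ms:.0f} ms")
```

A two-second pause versus a near-instant one, from positioning alone, which is where a 10x app-launch claim can coexist with a 3.5x sequential-transfer figure.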

Disk I/O is the one area I still have an easy time slamming modern computers on. Most others, it isn't too expensive for me to simply get enough power to handle what I want in realtime without slowdown. Multiple VMs? No problem, quad cores are cheap. Big audio projects? Hell, I can get 4GB of RAM for less than a month's Internet access... However, when those projects start hitting the disk, I start having problems, even with a RAID array. The sequential stuff isn't it, it's the random access that kills it.

Audio only takes 172Kbytes per second per track (for 32-bit floating point). So you figure that doing something with, say, 64 tracks isn't a big deal, right? Only about 11Mbytes/sec, way under what a single disk can take. However, you can find that it'll choke. The reason is that the audio isn't all nice and sequential. It's written to disk as 32 separate stereo audio files. Also, maybe you have some of them reading, some of them writing, and so on. The disk gets overloaded trying to seek to the information in time.
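The per-track figure checks out, assuming a CD-rate (44.1kHz) project:

```python
SAMPLE_RATE = 44_100        # Hz; assuming a CD-rate session
BYTES_PER_SAMPLE = 4        # 32-bit float

per_track = SAMPLE_RATE * BYTES_PER_SAMPLE     # 176,400 bytes/sec
print(per_track / 1024)                        # ~172 KiB/s, as quoted
print(64 * per_track / 1024**2)                # ~10.8 MiB/s for 64 tracks
```

So the aggregate rate really is trivial for any modern disk; it's the seeking between 32 files that hurts.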

VMs are the same thing. Two VMs running computations at the same time on a system work at full speed. They each use a core of the CPU, there's no problem. They do contend for memory bandwidth, but that is plenty high enough. Likewise, one VM doing disk access happens at near native speeds. There's not a lot of overhead to read and write to the disk. However, get two VMs doing disk access and, man, things grind to a halt. Your drive is dancing all over trying to service the simultaneous requests from different areas, so throughput collapses.

An SSD would just be amazing for apps like this. Not because it has so much more bandwidth, but because its bandwidth stays much higher under intense random access. Where a harddrive might obtain 50MB/sec in sequential read, the same drive might struggle to pull even 5MB/sec in random reads. For the SSD it might be more along the lines of 200MB/sec for sequential and 180MB/sec for random. Even though it isn't full speed, it's close enough as makes no odds. With that, the VM and audio work would have no throughput problems.

I've found that for VMs it's best to short stroke the drive. Partition it so that your VMs are in the middle of the drive, all together, in a somewhat narrow section of the disk. That way, even while doing high IO/sec, you're at most half a disk seek from anything else on the disk. Also, always pre-allocate your VM disks; the performance difference is huge. If you're running a *nix distro, it pays to put your swap on one side of the VM partition and /var on the other; this way you shouldn't have to stray out of the center of the disk too much. The anticipatory IO scheduler in Linux helps a good deal here, you'll get much more throughput for a sub millisecond latency cost. With NCQ/NTQ, and only using 1/3 of the disk surface, you can feel the difference... especially as compared to a single partition that's running low on space and doing a full stroke to get from the VM to your OS.

Both claims could be correct. It is entirely possible that by having a really low seek time and high read speed the drive could launch programs, specifically larger applications that involve many smaller files and plug-ins, an order of magnitude faster than current drives. At the same time, it could have a write speed that is only a couple times faster than normal drives. Personally, I take all of this with a grain of salt until independent benchmarks come out but the claims themselves are not impossible or

That the job time differs by a factor of 3.5 does not mean that data transfer speeds aren't improved tenfold. There are other factors involved, you know. It'd have been a cleaner comparison if they had transferred a single 250GB file from one HDD to another HDD, then a copy of that same file from one SSD to another SSD.

All the same, once capacities reach 750GB or better and the price point is below $200 or so, I'll be buying them. Hell, I'd probably consider buying a 256GB drive just to improve boot times. (When Linux decides it's time to fsck, boot times are slow.)

Question: That they could transfer 25 10GB files to the SSD leads me to think it's 256 gigabytes rather than gibibytes? Are these SSDs rated using actual gigabytes, or gibibytes with the gigabyte label? I think SSD technology is a great breaking point where manufacturers could/should agree to abandon the misleading gibibyte ratings.

On an unrelated note: Maybe a spyware-infested Windows box will boot in under two minutes now;)

I am thrilled, as a home user I think 250 gigs is the sweet spot for my laptop. While I certainly could fill more than that, I think that mark represents a reasonable amount of space for the average mobile user looking to ditch the problems associated with a spinning platter. I also expect the price to fall quickly, making these drives much more affordable in the near future. SSD is finally getting close to the masses.

The ComputerWorld article says "and are available for resellers today". The Samsung press release [businesswire.com] says, "announced today that it has begun mass producing". I couldn't find them in any of the usual places.

The Samsung website is particularly un-useful and hard to navigate, though I suppose it's appropriate that they require you to use Flash for this one.

I wonder how many of today's /.'ers remember doing this. To the best of my hazy recollection, I never had a "single sided" disk fail to format both sides.

When I first heard about it, I used a second disc to mark the location and an X-acto knife to cut the slot. I recall it being several months before tools to cut the slots started showing up in computer stores.

I also recall discussions about whether spinning the disk "backwards" would dislodge dirt trapped in the liner and cause premature failure of the disk. In hindsight it sure didn't seem to.

I do. I never had a single sided disk fail to format, though a couple of times I accidentally hit the media while punching the notch. I always just used a regular paper puncher to punch my notches, so it wasn't quite as precise as one of the square punching tools would have been.

That didn't last all that long though. I went from cassettes in a TI 99/4A in 1984 to 5.25" diskettes for the Apple ][ machines at school from 1986-1988. By the time I got to college the 3.5" disks were starting to come out and th

I do. I also remember telling my friends that it couldn't possibly work. I'm not sure if I was dumbly missing out on getting extra storage on the cheap or if I was being smart by realizing the value of my data (which I still have to a large extent). But I was in middle school at the time, so I'm opting for 'dumb'.

I know the impetus is to produce big and fast SS drives, but I'm more interested in cheap and fast ones. My desktop machine has 11 GB of system and apps and <1 GB of user files. I would be perfectly happy with a 16 GB SSD that had great performance, was cheap, and was reliable. Reliability is a big issue. Although theoretically a device with no moving parts should always be more reliable than one with moving parts, in reality SSD technology isn't as mature as HD technology, so the failure rate may actually be higher [slashdot.org], and there may be no way to recover from a failure.

You can get 16GB SDHC cards for about 30 USD. Those are class 6, which means you get anything from about 8MB/s to 20MB/s depending on the brand. Of course, if you want more speed, you can always use RAID0.

In fact, given how cheap they are, a RAID5 system would probably make sense. You get a speed boost, and the ability to hot swap a single card if it goes bad. ZFS would also work really well, but I don't know if you'd get a speed boost that way. Also, all these approaches would allow you to very easily extend your system by buying another card (and reader) and adding it to the pool. (You may want to check up on whether you can remove it again later, though.)

Hmm. Thanks for prompting me to go and look at this stuff. I might actually do this for my next lightweight server.

Computer math doesn't work like regular math, like for example SATA2 which is 3Gbps. Now if I showed you a cargo ship with a capacity of 3000 tons, you'd think you could actually load 3000 tons right? And not that 600 tons of the cargo hold would have to be fixed support beams. But with computers it's somehow okay that 600Mbps is just parity bits and that you can't actually transfer more than 2400Mbps of data. And computers have been fucked with 1000/1024 at least as far back as the 1.44MB = 1.44*1000*1024 floppy which can't be right in any system and probably longer. Ignore it, honestly whoever started this has wasted more time for computer users than whoever dropped the century digits leading to the y2k problem.

Computer math doesn't work like regular math, like for example SATA2 which is 3Gbps. Now if I showed you a cargo ship with a capacity of 3000 tons, you'd think you could actually load 3000 tons right?

No, you wouldn't necessarily expect to be able to load 3000 tons. Firstly, what type of tonnage are you talking about? In shipping, there are several different types of tonnage, or in other words, different values for the same thing, with at best slightly different names. For example, Gross Register Tonnage, Net Tonnage, Gross Tonnage, Thames tonnage, Panama Canal tonnage, Net Register Tonnage, and who knows what else.

Secondly, suppose a ship has a "capacity of 3000 tons". Could you fit more pillows or gold bars into the ship? Which one will fill the hold first? Can you fit 3000 tons of pillows into a ship with a capacity of 3000 tons? Can you fit 3000 tons of helium in? 3000 tons of depleted uranium? What if the ship is to be sailed from a salt water port into a freshwater lake? Does that affect anything?

Why would you pick tonnage of shipping as an example of regular math? Shipping measurements are all over the place. For example, how long is a ship? Well it depends on the shape of the ship, and where you measure it. Length at the waterline or length overall? With the ship loaded or empty? Heeling or sitting level? Salt water or fresh?

Just about everything to do with computers is simpler and more regular than just about anything related to boating.

Giga is an SI prefix. It is defined as 10^9 and abbreviated as a capital G. So to say you have 200G of something implies you have 200,000,000,000 of them.

Computers do it wrong. When computers say Giga they mean 2^30, not 10^9. That's wrong; for that you should use the IEC prefix gibi, abbreviated as Gi.

The reason is that back in the day, computers had little memory. Thousands of bytes was all. So when talking about thousands of bytes, programmers started calling them "kilobytes". After all, it is close. 2^10 is close to 10^3, only 2.4% error. Well memory kept growing, and the incorrect prefix usage kept going on and they kept using bigger ones.

However this has two problems:

1) The error grows. At the giga level it is about 7% off. The larger the amount you are talking about, the bigger the difference between the base-10 prefix and its "closest" base-2 amount.

2) You get confusion between levels. For example suppose your computer shows you something in megabytes. It says you have a file that is 2000 megabytes. Well that's 2 gigabytes right? Wrong, 2 gigabytes is 2048 megabytes. So it is rather unintuitive to humans. We work in base 10, the numbers displayed are base 10, but the prefixes are used wrong.

Really, the harddrive makers are right. Computers should display amounts according to the base 10 prefixes. Computers have no problems with base conversions, they should be doing that for people.
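The ~7% gap at the giga level, and the 2000-vs-2048 confusion from the parent post, are easy to see in a couple of lines:

```python
def as_gb_and_gib(n_bytes):
    """A byte count under decimal (SI, GB) and binary (IEC, GiB) prefixes."""
    return n_bytes / 10**9, n_bytes / 2**30

gb, gib = as_gb_and_gib(256 * 10**9)   # an advertised "256GB" drive
print(f"{gb:.0f} GB = {gib:.1f} GiB")  # 256 GB = 238.4 GiB, ~7% less

# The megabyte/gigabyte confusion: 2 GiB shown in binary megabytes
print(2 * 2**30 // 2**20)              # 2048, not 2000
```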

Sugar-coat it however you want, but hard disk manufacturers did it for selfish marketing reasons only. An ST-225 20 megabyte drive is about 21,000,000 bytes. A 360k floppy disk is 362,496 bytes formatted. 256MB of RAM is 262,144 kbytes.

It was only when somebody couldn't quite make a 1GB hard disk that 1,000,000,000 bytes became "good enough."

The IEC prefixes were created in 1999 to deal with the discrepancy introduced by hard drive manufacturers. (kilo|giga|mega)bytes had standard definitions in powers of two decades before the hard drive manufacturers started ignoring them, and yet an additional decade before the IEC prefixes were created. Worse, hard drive manufacturers used the correct definitions right up until 1995, when it was switched so they could advertise their drives as having a 1GB capacity.

The missing space could be caused by several different things depending on your take:

- NSA backdoor

I don't think you have to worry about that. With the wear-leveling alone, you already cannot be sure whether or not something was truly erased: your rewrites may not have gone to the same parts of memory where the file was stored in the first place.

Perfect for systems that need to be written to once, then read lots, available with minimal delay (no spin-up) and maximum reliability. ie pr0nz server. Immense sales for this market sector alone should bring prices down.

BLOCKSIZE on a typical NAND device can be 2k, or 4k, or some other significantly smaller value than 128k

The caveat is that it is necessary to erase the whole block to write a byte when a bit needs to be changed from its erase state. e.g.:

If the erase state is 0 and a bit needs to be cleared (it holds a 1 in our scenario, and we want it to be zero) then it is necessary to erase the whole block. This (obviously) means copying the block contents to RAM, zeroing the FLASH page in which the byte resides, and then writing the page back to FLASH. It sounds worse than it is, and ultimately the overhead doesn't put a dent in the difference between using spinning media and FLASH. For example, what is the overhead to change a single byte on a hard disk?
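The copy-erase-program cycle can be sketched like this. Sizes are illustrative only; real NAND distinguishes pages from (much larger) erase blocks, and the erase state is typically all-ones rather than all-zeros:

```python
BLOCK_SIZE = 4096
ERASED = 0xFF   # assume an all-ones erase state, typical for NAND

def modify_byte(flash, block_no, offset, value):
    """Change one byte by rewriting its whole erase block."""
    start = block_no * BLOCK_SIZE
    buf = bytearray(flash[start:start + BLOCK_SIZE])    # 1. copy block to RAM
    buf[offset] = value                                 # 2. patch the byte
    flash[start:start + BLOCK_SIZE] = bytes([ERASED]) * BLOCK_SIZE  # 3. erase
    flash[start:start + BLOCK_SIZE] = buf               # 4. program it back

flash = bytearray([ERASED] * (BLOCK_SIZE * 4))   # four blank blocks
modify_byte(flash, 1, 10, 0x00)                  # one-byte write, 4KB rewritten
```

The write amplification is the point: a one-byte change costs a whole-block erase and reprogram.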

Spin up the platter if needed

Seek to the filesystem metadata and find out where the file resides.

Read in that data, then use it to determine where on the disk it is

Seek to the location where the file resides

Read the sector that needs to change

Change the data in RAM

Write the sector to the platter

Update the metadata (itemization of steps not included; you get the idea.)

SSDs do not allow you to directly read/write/erase flash memory. The firmware includes a flash translation layer that lets the host read/write 512 byte sectors just like any other drive. Sectors do *not* have a fixed location on the disk. Writing a sector simply appends it to the current erase block, and updates the translation table (also an append). When it runs out of blank blocks, it picks one to erase based on its wear leveling algorithm and garbage collection, and copies any live sectors to a fresh erase block. Just like a HDD, there are plenty of spare erase blocks, which are needed for the copying garbage collection and for when erase blocks go bad.

While the basic function of FTL is open, the wear leveling and garbage collection algorithms are fiercely proprietary. (The best ones actually count how many times a block has been erased and keep the counts even - and do this at high data rates.) This is OK for now because there is also fierce competition, and the code runs only in firmware on the device - not on the host. (Same as the controller code on a HDD.) Should the SSD market ever shake out into a monopoly, the basic FTL ideas are available.
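A minimal sketch of the append-and-remap idea described above (wear leveling and garbage collection omitted; the class and names are illustrative, not any vendor's FTL):

```python
class ToyFTL:
    """Logical sectors are appended to a log; a table maps each logical
    sector to the position of its latest copy. Nothing is erased in place."""
    def __init__(self):
        self.log = []       # the physical medium: an append-only record list
        self.table = {}     # logical sector number -> index into self.log

    def write(self, sector, data):
        self.table[sector] = len(self.log)   # old copy (if any) becomes garbage
        self.log.append((sector, data))

    def read(self, sector):
        return self.log[self.table[sector]][1]

ftl = ToyFTL()
ftl.write(0, b"old")
ftl.write(0, b"new")        # an overwrite just appends and remaps
print(ftl.read(0))          # b'new'; the stale b'old' record awaits GC
```

Garbage collection in a real drive would periodically copy live records out of mostly-stale erase blocks and reclaim them, which is exactly where the proprietary algorithms live.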

Me neither. We spent weeks (which translates to tens of thousands of dollars) benchmarking and optimizing a database app. The thought of accelerating it by a factor of 5-10x with a simple hardware upgrade is stunning.

I could be wrong, but you sound like you're being sarcastic, which is a pretty stupid attitude to have here.

Let's say you have a crappy unoptimized database. You can spend tens of thousands of dollars' worth of programmer time to fix it up and optimize it so that it runs fast on your current hardware. Or you can spend perhaps one tenth of the money to upgrade to a super-fast disk, achieving the same end result. Which one is the smarter move?

Wonder how many hours this drive would last if used for swap or a database container until the flash cells wear out and start returning errors.

10k*256GB / 200MB/s write speed = 151 days at full write 24/7. And you'll probably get some nice warnings without data loss since the typical failure mode is that they can be read but no longer written. Of course if you're using swap even nearly that much, you're doing it wrong. I'd be very surprised if my swap use exceeded 10GB/day, in which case it'll take me some 700 years to hit the write limit. And if you're running a heavy database there are drives for you, just not this one. So who do you work for? Western Digital? I think they're the only ones that haven't realized the boat is leaving and they're lost in the mountains.
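The arithmetic above, spelled out. The 10,000 program/erase cycles is the parent's assumed MLC figure; the small gap between the parent's 151 days and the ~148 here is just rounding and GB-vs-GiB slop:

```python
CYCLES = 10_000          # assumed MLC program/erase endurance per cell
CAPACITY_GB = 256
WRITE_MB_S = 200

lifetime_gb = CYCLES * CAPACITY_GB                 # 2,560,000 GB of writes
days = lifetime_gb * 1000 / WRITE_MB_S / 86_400
print(f"{days:.0f} days of nonstop full-speed writes")   # ~148 days

years = lifetime_gb / 10 / 365                     # at 10GB of swap per day
print(f"{years:.0f} years at 10GB/day")            # ~701 years
```

This ignores write amplification, which would shave the real figure down, but the orders of magnitude stand.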

And of course every time someone levels this criticism it completely ignores the complementary question: how many hours will a mechanical spinning-platter HD last doing all out full-speed sustained reads and writes 24/7?

At least flash drives have a predictable failure timeline, whereas HDs simply have a vague MTBF and could easily fail much sooner (or much later!) than that.

Actually there is a row access time which can be quite high, as high as 0.5ms for MLC, which compares with 2.5ms for 15k rpm drives. Add to that the relatively low IOPS for MLC (less than 100 according to this [tomshardware.com] review using the database server IOMeter profile, which is 70/30 read/write if I remember correctly) and for a server load they lose bigtime to drives, considering they get worse performance, have significantly lower MTBF, and have way less GB/$. SLC is a bit harder to quantify as the best units have MUCH

I haven't read tfa (of course), but the summary only talks about sequential read/write speeds and says nothing about the latency, which is the more important figure for random access.

To illustrate the point: an oil tanker full of DVDs can probably transport hundreds of gigabytes per second, but that does not mean that you can access random data quickly that way. Just that you can (on average!) transport a lot of data per unit of time.