Posted by Soulskill on Monday March 28, 2011 @01:42PM
from the more-cheaper-please dept.

Lucas123 writes "Intel today launched a line of consumer solid state drives that replaces the industry's best selling X25-M line. The new 320 series SSD doubles the top capacity over the X25-M drives to 600GB, doubles sequential write speeds, and drops the price as much as 30% or $100 on some models. Intel also revealed its consumer SSDs have been outselling its enterprise-class SSDs in data centers, so it plans to drop its series of single-level cell NAND flash SSDs and create a new series of SSDs based on multi-level cell NAND for servers and storage arrays. Unlike its last SSD launch, which saw Intel use Marvell's controller, the company said it stuck with its own processing technology with this series."

The 320 series isn't quite as impressive an upgrade over the X25-M G2 series as I had originally hoped, so it will likely be quite some time before I bother replacing my current one (and move that into the laptop instead).
Still, an update has been overdue for a long time now; the X25-M G2 is ancient in SSD terms. I just hope the new controller is as reliable as the Intel one found in the old drives.

The Marvell controller is only being used in the higher-end 510 series SSD that was announced last month. That SSD is being aimed at gamers, workstations and such. This is being marketed to laptop and desktop users, even though it's winding up in data centers.

Seriously. Any sort of enterprise-level operation should be swearing off these things as a storage medium, then. Well, maybe for a boot drive. But anything with massive amounts of writes should be kept as far away from an MLC drive as possible.

Why? If the MLC cells are both fast and reliable, why does that matter? If I understand this correctly, an MLC drive's erase blocks would be the equivalent of clusters on an HDD. If any bit of the data within a cluster needs to be changed, its entire contents must be read and re-written back to another cluster. The same process occurs on an MLC drive.
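That read-modify-write behavior can be sketched with toy numbers (4KB pages and 128-page erase blocks are typical, but the exact figures vary by drive; this is an illustration, not any specific controller's behavior):

```python
# Sketch of NAND read-modify-write amplification: changing even one
# page can force the whole erase block to be rewritten elsewhere.
PAGE_SIZE = 4096          # bytes per page (typical)
PAGES_PER_BLOCK = 128     # pages per erase block (typical)

def bytes_written_to_flash(user_bytes_changed):
    """Worst case with no write coalescing: every changed page costs
    a full erase-block rewrite."""
    pages_touched = -(-user_bytes_changed // PAGE_SIZE)  # ceiling division
    return pages_touched * PAGES_PER_BLOCK * PAGE_SIZE

changed = 512                                 # user changes half a kilobyte
physical = bytes_written_to_flash(changed)    # 128 * 4096 = 524288 bytes
print(physical // changed)                    # amplification factor: 1024
```

Real controllers coalesce writes and remap pages, so the practical factor is far lower, but the worst case shows why small random writes are the painful case.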

Also, each SLC cell write is much faster (because it can be "sloppier", with only two states per cell), which greatly affects random write speeds even if the speeds are the same for sequential writes. And random writes are often a bottleneck in master databases.

I hear this come up every time even though existing SSDs, both MLC and SLC, already run circles around hard drives for both random read and random write performance.

I have an old SSD in my laptop that can outperform a very expensive SAN array for database workloads -- I've tested it with the same database and the same query side-by-side with the 16-core production database server with a 48-spindle LUN behind it, and my laptop won every time.

I hear this come up every time even though existing SSDs, both MLC and SLC, already run circles around hard drives for both random read and random write performance.

That's rather irrelevant when you compare two SSDs, though? Who said anything about HDDs?

(And besides, it's not true. The average speed is far higher for SSDs, but a short-stroked HDD has a much better worst-case random write speed than any SSD currently on the market. Yes, really. And for some uses, that's what matters. But again, that's not relevant, because we're talking about SLC vs MLC drives here.)

One thing to consider is the price per capacity and how that affects performance. You can get Intel MLC-based SSDs for about $2.15/GB and SLC-based SSDs for about $11.70/GB. That can translate into roughly five times the number of drives for the same money, which is five times the controllers working together in a storage system. Then you can front-end that with large write caches and you _MIGHT_ end up coming out ahead in performance.
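Working through those per-gigabyte prices (the $10,000 budget is a hypothetical example, not from the comment):

```python
# Price-per-capacity comparison using the $/GB figures quoted above.
mlc_per_gb = 2.15
slc_per_gb = 11.70
budget = 10_000  # hypothetical storage budget in dollars

ratio = slc_per_gb / mlc_per_gb
print(round(ratio, 1))  # ~5.4x more MLC capacity per dollar

mlc_gb = budget / mlc_per_gb
slc_gb = budget / slc_per_gb
print(int(mlc_gb), int(slc_gb))  # 4651 GB of MLC vs 854 GB of SLC
```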

Doubling lifespan that way requires that you only use half the disk capacity.

I have burned out a Major Name Brand SLC SSD with a high traffic OLTP DB in eight months. I have heard the same from Large Internet Companies which tested these for internal use. There are ongoing independent reliability expert studies in FAST, HOTDEP, other conferences which are uniformly highly skeptical of vendors' claims on SSD lifetime.

If you have not actually tested the drive out to six years of service, run an accelerated pi

I have burned out a Major Name Brand SLC SSD with a high traffic OLTP DB in eight months.

Why tip-toe around this? Are you talking about Intel or not? If not, it's not really relevant here, because this is about Intel, and I think most people agree that Intel is generally a bit more respected for being a better-tested product with a bit more truth behind their numbers. If you ARE talking about Intel, then I think that's pretty important to know.

georgewilliamherbert does not have any good points as far as I can see. His latter comments are off-topic, complaining about non-Intel flash-based drives in general, when this thread is about Intel changing from SLC to MLC. (Ignoring that, I find his lack of references telling.)

To address his former point, the Intel drives use static wear leveling (as far as I know) so even if the drive is full, the flash is fairly evenly worn. That means you can get more space for the same price which mitigates the shorter

Well, they DO mention that they tripled the sequential write speed, so it could be that the MLC is now competitive, speed wise, with SLC.
High-transaction databases are the devil's bane of storage devices as it is, you're probably best going with a high amount of RAM cache - both read and write, if that's what you're doing. Enough cache and the right database system and you can turn random writes into what are effectively sequential writes, improving performance that way.
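A minimal sketch of that coalescing idea - absorb random writes in a RAM cache, then flush them in one ordered, effectively sequential pass (CoalescingCache is a hypothetical illustration, not a real library):

```python
# Write-back cache that coalesces random writes into one sequential flush.
class CoalescingCache:
    def __init__(self):
        self.dirty = {}           # block_number -> data

    def write(self, block, data):
        self.dirty[block] = data  # absorb random writes in RAM

    def flush(self, device):
        # One sorted pass turns the random pattern into a sequential one,
        # and repeated writes to the same block collapse into one write.
        for block in sorted(self.dirty):
            device.append((block, self.dirty[block]))
        self.dirty.clear()

log = []
cache = CoalescingCache()
for block in [7, 2, 9, 2, 5]:  # random-order writes, one block hit twice
    cache.write(block, b"x")
cache.flush(log)
print([b for b, _ in log])     # blocks reach the device in order: [2, 5, 7, 9]
```

Real database engines do something similar with their write-ahead logs and checkpointing, which is what makes "enough cache and the right database system" pay off.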

I know that random writes is a problem - which is why I mentioned things like 'enough cache' and 'right database system' to turn those random writes into sequential ones. It's expensive in terms of storage capacity(your DB will have to be bigger), but if MLC is 'enough' cheaper than SLC, you just buy the additional storage. Plus, with modern MLC and wear-leveling you're looking at years at the drive's maximum write speed to start wearing out the cells. If MLC is around an order of magnitude cheaper than

Huh, with write amplification you can wear out an MLC drive in a matter of months at a small fraction of its write speed. This is why I dished out the money for FusionIO's SLC-based cards; estimated life based on data from our existing SAN is ~5 years, which means we should be good for our planned replacement time of 3.5-4 years (our current servers, which are to be retired in a few days, are 4.5 years old but have been in production use for just over 4).

35MB/s write speed, 40GB capacity. 1,143 seconds to write completely; 19 minutes. Research gives a number of write cycles of ~10k. 132 days, about 4 months.
I'll note that this is pretty much writing 100% of the time, which should make up for 'write amplification'.
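That back-of-the-envelope math, reproduced step by step:

```python
# Wear-out estimate from the figures above (decimal units:
# 40GB = 40,000MB, as the original arithmetic assumes).
capacity_mb = 40 * 1000
write_speed_mb_s = 35
write_cycles = 10_000        # quoted P/E cycle count

seconds_per_full_write = capacity_mb / write_speed_mb_s
lifetime_days = seconds_per_full_write * write_cycles / 86_400

print(round(seconds_per_full_write))  # ~1143 seconds per full pass
print(round(lifetime_days))           # ~132 days, about 4 months
```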

Get the 160GB model and you get 100MB/s over 160GB. 1,600 seconds for an overwrite.

I guess it depends on the duty cycle. A 32GB SLC runs about the same price as a 160GB MLC. Around 5 times the price per gig. If y

35MB/s write speed, 40GB capacity. 1,143 seconds to write completely; 19 minutes. Research gives a number of write cycles of ~10k. 132 days, about 4 months. I'll note that this is pretty much writing 100% of the time, which should make up for 'write amplification'.

Except that that is only true if every write gets deleted so garbage collection can work at full efficiency. In reality, you fill large parts of the disk, and then you have a smaller empty area to work in. As it gets worn, the disk has to use wear leveling to spread writes over already-used sectors instead, and that means at least three writes for every block, which both slows the drive down and shortens the overall lifespan, at the benefit of increasing the local block lifespan. For a consumer drive, it
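A common simplified model of that garbage-collection overhead (a rule-of-thumb approximation, not the comment's exact "three writes per block" figure): at utilization u, each logical write costs roughly 1/(1-u) physical writes, so amplification climbs steeply as the free area shrinks.

```python
# Simplified greedy-GC write amplification model: with the drive
# u fraction full, garbage collection must relocate valid pages,
# costing roughly 1/(1-u) physical writes per logical write.
def write_amplification(utilization):
    return 1.0 / (1.0 - utilization)

for u in (0.5, 0.8, 0.9):
    print(f"{int(u * 100)}% full -> {write_amplification(u):.1f}x writes")
```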

Consumers are having to decide between flash and a hard drive, where the HD runs about 1/20th the cost per gig of even an MLC SSD. Yes, it's tough.

As for garbage collection and deletions, did you miss my point about using some of the savings to buy a bigger drive? That's where I get the extra performance from. Not to mention that in the GP I essentially mentioned using a 'smart' database that knows what it's using to help reduce the number of write cycles and wear leveling needed.

I'm not going to run out and replace my $100 2TB external backup with one of these any time soon. However, I've been tempted to snag a small 40 gig model and use that as my OS drive, and use my existing internal 1TB HDD for the actual data. I think the article is right, in that the price per gig needs to hit $1 before you start seeing acceptance for mass storage solutions from consumers. 95% of users can't tell the difference between a 5600 RPM HDD and a 10,000 RPM one, so they won't care about SSD speeds that much either.

Maybe the reason users can't tell the difference between 5600 (5400??) RPM and 10,000 RPM is that for the most part what is slowing things down is the seek latency. In those two drives, the seek latency is going to be about 12 ms and 7 ms respectively. Which, you're right, the user probably won't notice. But a solid state drive will give you a seek time of about 0.1 ms, which will make a huge difference in many situations. Most users will probably notice a change like this, because seek time is probably what is slowing down the computer most of the time.
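Putting those seek figures into a quick calculation for a batch of small random reads (illustrative only):

```python
# Total time spent just seeking during 1000 small random reads,
# using the latency figures from the comment above.
def seek_seconds(ops, seek_ms):
    return ops * seek_ms / 1000

for name, ms in [("5400 RPM HDD", 12.0), ("10k RPM HDD", 7.0), ("SSD", 0.1)]:
    print(name, seek_seconds(1000, ms), "seconds")
```

Going from 12 s to 7 s of waiting is barely perceptible spread across a workload; going to a tenth of a second is a different experience entirely.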

In my experience users can see the difference from just 4500 to 5400 rpm laptop drives. And that's not much of a jump at all.

From 5400 to 7200 RPM, a Windows PC is quite substantially more responsive. If you don't feel the difference with a 15k RPM enterprise drive your problem may simply be that the drive was already fast enough for the workload you're giving it.

Well, and I'd argue that on modern Windows the extra expense really isn't worth it for a lot of regular users. RAM is cheap, Windows 7 Superfetch will quickly learn what programs you launch and when, and let's face it, no SSD beats RAM.

I maxed my board out at 8GB, and with Superfetch frankly everything I normally use launches as fast as I click it, since with my predictable behavior Windows 7 simply loads it into RAM at the appropriate time. Considering that maxing out most boards costs less than $100, and hybrid sleep makes shutdowns kinda pointless, unless you have a program that requires serious I/O there simply isn't a point for the average user in going SSD, not when 2TB drives can be had for $80.

Too bad SSDs didn't come out 10 years ago, as they would have been most welcome when everyone was stuck on IDE with tiny caches and lousy memory management. But for the "Average Joe" with plenty of RAM, big caches on the HDDs, and Superfetch preloading programs into RAM based on time and usage patterns? Kinda pointless IMHO, especially at the prices per GB.

The only ones I've sold have been to my ePeen "Must have the highest benchmarks!" gamer customers and playing with their PCs other than bootup I really couldn't feel a difference. That is why I've been telling my regular customers and those wanting new builds to max out on RAM first and then if they still have money to blow after getting the rest of their wish list get an SSD for an OS drive, because frankly if their choice is RAM or SSD I'd always advise the most RAM as it'll get more use.

I've been preaching the max ram option to people who are planning on new systems since 2003 when I was able to see the difference it made using Gentoo Linux. The testing method was a bit simplistic but as it involved bootstrapping the system, the difference in time required with 512 compared to 1GB was impressive and convinced me at that time to install the most memory I could afford.

What I find funny now is people are spending their money on High Performance Gaming RAM when actual benchmarks show no improv

I know, I actually have some tight-timing gaming RAM in my PC that did come with heatsinks. I was talking about the UBER-L33T FATAL1TY DOMINATOR SKULLBRINGER RAMPANT ASSASSINATION MODULES with big colorful sci-fi looking heatsinks with fancy graphics on the sides that cost way more because of the e-peen points.

Good thing to know I'm not the only one preaching max RAM. I became a convert when 512MB of PC100 fell into my lap like manna from heaven (a corporate client with more money than brains bought two 256MB sticks, not knowing they wouldn't fit in his machine, and rather than send them back just handed them to me, sweet!) at a time when 128MB was the norm. The difference in Win2K Pro (old faithful, I miss old Win2K) was like night and day. That machine is still going BTW, upgraded it to WinXP 5 years ago and sold

I turn off my gaming rig when I'm not using it - it's a bit of a power hog even when idle. An SSD reduced the power-switch-to-usable delay to 1/3 of what it was with a fast HDD, especially reducing the time spent grinding from logon to all services started and actually ready.

Storage performance in general isn't very noticeable on a running gaming system, as other delays tend to dominate (especially on DRM-infested games that need to phone home), but the boot-up time reduction was worth it for me. I only

This is why I went with a RAID0 array of 10krpm drives for my gaming machine in early/mid 2009. I get most of the speed and way more capacity (600GB) for a much lower price.

Early last year I bought a laptop, which as usual came with a hard drive that was too slow, so I was going to get a replacement hard drive at the same time. I figured I needed at least 64GB of storage. An SSD was still way too expensive, so I went with a 160GB 7200RPM drive that only cost me $100 and doesn't make me wait too long for anything

I used to have two regular HDs in RAID 0; now I have two SSDs in RAID 0. There is simply no comparison. SSDs absolutely blow away traditional hard drives. It's not about the MB/sec... it's about the I/Os per second, and in this sense SSDs are about 70-100 times faster than traditional disks. Photoshop, Office, Firefox, everything opens instantly. I can even open 5 programs at once and they still all open instantly. This can all be done with 4GB of RAM too, no need to buy more memory to make up for

Checking one of my favorite suppliers (there are probably cheaper ones out there):

RAM is about £10 per gig, and if you want to go beyond 16GB it's time to bend over and pay a lot more for your CPU/MB to get support for that extra RAM.
SSD is about £1.50 per gig.
HDD is about 10p per gig.

Superfetch sounds great if you have a regular schedule, switching between programs at known times every day; for those whose usage patterns aren't so consistent it doesn't seem so useful (still mostly on XP myself, won

It isn't just a regular schedule thing! Let me give an example, while I have probably 30 games installed on my game drive at any one time I'll usually only be focused on one or two at a time, the rest are deals like Steam's midweek madness where I couldn't beat the price and I'll get around to them when I can.

Windows 7 knows this about me and has the essential .DLLs loaded for the games I'm using, so that when I'm watching the developer screens (that sadly are more and more becoming unskippable) it quickly

I already have 8GB on my home server and that makes very little difference from 4GB since it sits idle at ~4GB most of the time. But the SSD made a world of difference. A 2-3 minute boot became 25 seconds. A 1+ minute shutdown became about 5 seconds. I don't worry about reboots anymore, because it's around 30 seconds total (instead of 5 minutes)! Game cutscenes are almost instantly skippable (within 2-3 seconds), if they allow it. EVERY program loads instantly. Installs take mere seconds (even OpenOffice or Office 2007).

BTW, my RAM maxes out at 24 GB on this board, but if you told me the 24GB would help more than the 64GB SSD (about $90), you would be doing me a horrific disservice.

You're not reading the parent post correctly. He's suggesting that you don't power off the machine anymore. That's when you get big gains with Windows 7 and a big wad of RAM. Obviously, when you reboot, his solution doesn't work anymore.

SSDs affect other things besides just speed. I put one in my netbook and battery life went from six hours to eight - and it boots in fifteen seconds and starts programs almost instantly. The difference in power consumption matters less in a bigger laptop, but it would still help.
I also don't see why you're talking about an SSD and a 2TB drive as a binary choice. The "average user" doesn't need 2TB; they already have enough space with the ~500GB that came with their Dell. They could get an SSD, keep the hard drive they already have, get someone to move the Windows install, and have the best of both worlds.

Hybrid sleep is kinda like the way Macs do it, in that it writes the RAM to HDD. But unlike the Mac, which only writes to the HDD if the battery is getting low (IIRC), the hybrid sleep on Win 7 dumps the contents of RAM to a file like hibernate while keeping just the used pages alive in RAM and powering everything else down.

What this does is it gives you the instant on of sleep while making power failures a non issue because you can literally yank the cord or pull the battery and it STILL gives you an advanta

Caching solutions are always poor. No system is smart enough to cache everything and there's a cost to caching - misses, reading the first time, etc that produce lag and the characteristic disk churn of mechanical drives.

I find in everyday usage, most users are disk-bound. CPU and RAM are just sitting around waiting for the disk. I've only put in 3 SSDs and the difference is night and day. The low seek times and transfer speeds make the computer feel completely different. Once Joe Average gets to see one o

That is why I've been telling my regular customers and those wanting new builds to max out on RAM first and then if they still have money to blow after getting the rest of their wish list get an SSD for an OS drive, because frankly if their choice is RAM or SSD I'd always advise the most RAM as it'll get more use.
How does 24 GB of ram help if your system uses 2 GB of it? Windows is not going to cache all of your commonly used files on that RAM. Maybe it does if you use Superfetch. I've not tried it.
I woul

I have maximized my RAM for many years now. Since I got an SSD a few years back, I stopped buying the max number of gigs for any new computer; I buy probably about half of it and put the savings toward the SSD.

It's much faster. The biggest improvement is not waiting for the disk to spin up to speed as on a normal computer; in a laptop, this is multiplied by putting it to sleep much more often (closing the lid). Superfetch won't do anything there. Startups are MUCH faster too. (But then I don't start up as much as I used to.)

I'd say small high-speed drives are the 2-seater sports cars of storage. No major hauling capacity but you can still fit a decent bit of stuff inside and go plenty fast enough. SSDs are the sportbikes of storage. Costly, finicky and kind of unsafe but ZOMG SO MUCH SPEED!

And Intel SSDs are the Italian sportbikes of storage - more expensive than the competition because of the name:P

I'm not going to run out and replace my $100 2TB external backup with one of these any time soon.

I am not going to run out and replace my minivan that I use to ferry my four kids and wife with a two seater sports car any time soon!

The analogy falls apart when your two seater sports car can make 80 round trips in the time the minivan makes one. Once you realize the true speed differences, you will start thinking of that two seater sports car as a minivan with seating capacity for 81, or one that can shuffle a "mere" four people around in seconds instead of minutes.

in that the price per gig needs to hit $1 before you start seeing acceptance for mass storage solutions from consumers.

Hmm... Hard to say, hard to say. Personally, I'm thinking more like $.10 per gig. As you mention, HDs are currently around $.05 per gig.
I bought a 60gig SSD a while back; it's just not big enough - it constantly forces me to shift stuff to the HD (I LOVE symbolic links!). I can keep the OS, a few applications, and maybe a couple of games on it. Performance improvements, at this point, are almost unnoticeable.
Personally, I think that a hybrid SSD/HD [storagemojo.com] solution is currently the best idea, at least for the c

95% of users can't tell the difference between a 5600 RPM HDD and a 10,000 RPM one, so they won't care about SSD speeds that much either.

The difference between a 10,000 RPM hard drive and an SSD is much bigger than the difference between a 5600 RPM HDD and a 10,000 RPM HDD. They will notice.

Like many others I went with an SSD for my boot drive when I built my last system (Crucial 64GB RealSSD), in combination with a cheap 7200 RPM HDD for data. The SSD makes a HUGE difference - no waiting for applications to start, very quick startup (the longest part by far is the various BIOS checks that it feels the need to go through), no need to defrag

Right. Focusing on read and write speed is misleading. The reason for this is that the perceived speed of SSDs comes from seek times, not R/W speed.

Think of it like this: ever play a game on a server in Korea with a one-second ping? Even if your connection is 100Mb/s, that feels horrible. This is analogous to a mechanical hard drive. Compare it to the LAN game where the server is 10ms away - even on a 10Mb/s pipe it's far better. That's what an SSD feels like.
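A rough model puts numbers on that analogy: total load time is per-request latency plus transfer time. The figures below are illustrative (both drives get the same 100 MB/s so only latency differs):

```python
# Total load time = per-request latency + raw transfer time.
def load_time(files, file_kb, latency_ms, mb_per_s):
    transfer = files * file_kb / 1024 / mb_per_s   # seconds moving data
    waiting = files * latency_ms / 1000            # seconds of seek/latency
    return transfer + waiting

hdd = load_time(500, 64, latency_ms=12, mb_per_s=100)
ssd = load_time(500, 64, latency_ms=0.1, mb_per_s=100)
print(round(hdd, 4), round(ssd, 4))  # 6.3125 vs 0.3625 seconds
```

With 500 small files, the HDD spends 6 of its 6.3 seconds just seeking; the SSD's total is dominated by the transfer itself.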

With the availability of a relatively cheap 40 GB option I can see the start of widespread adoption in the corporate world. In my experience 40 GB is plenty for OS and applications for the vast majority of office drones (myself included), with pretty much all data staying on the server these days.

I see several adoption points - and the biggest one isn't performance related, but when the cheapest SSD that 'works' is cheaper than the equivalent cheapest HD.

What does this mean? When the cheapest available HD costs the manufacturer $20 and the equivalent SSD costs $19. It might be a 500GB HD for $20 versus 40GB of SSD for $19, but it'll be cheaper. HDs offer 'enough' performance today, and their vastly cheaper cost per GB still outweighs the fact that SSDs scale 'down' better than HDs. The cheapest HD at the moment

You're right about the $1/gig price point. My money will remain in my wallet until then. Besides price, I'm disappointed at the lack of a 60GB model. 40GB is too little; 80GB is too much. I guess I have to wait until the next release cycle for my mythical $60 60GB Intel SSD.

For laptops, the performance increase is incredible and obvious. I've replaced spinning drives in two MacBooks and a MacPro. The latter was pretty damned fast anyway and going from a 7500 RPM drive to the SSD did make a difference, but not the absolute stunning level of performance increase that I've noticed in the laptops. That might be because, as Macs, they're on the slow end of high performance (they're both circa 2007) and came with pretty sluggish hard drives to boot.

There's no comparison between the 5,600-10,000 RPM gap and the HDD-SSD gap.

I took the plunge last year and installed X25-M drives in my desktop and laptop as OS drives, with secondary drives for user data. The difference is the single greatest performance jump I've ever experienced in 30 years of upgrading, going back even to the days of replacing clock generators on mainboards to overclock 8-bit CPUs by 50 percent.

There is literally a several-orders-of-magnitude difference in the overall speed of the system. If you haven't experienced it, a description of the difference doesn't sound credible, but a multi-drive RAID-0 array of 10k drives doesn't come close to a single SSD in terms of throughput.

I can't go back to non-SSD OS installs now. Systems without an SSD literally seem to crawl, as if stuck in a time warp of some kind. Non-SSD systems seem, frankly, absurdly slow.

There is literally a several-orders-of-magnitude difference in the overall speed of the system.

LITERALLY?

I call BS. Assuming several is equal to at least 3, several orders of magnitude implies an increase of a factor of at least 1000 of your overall system performance.

Since most performance comparisons between an ordinary hard drive and a SATA SSD show at most a factor of 2 difference in tasks like booting Windows, you are WAY off. Even drive-specific tasks like sustained reads are typically no more than a

In my case, from a three-digit (100 second+) boot process to a one-digit (8-9 second) boot process, comparing a 1TB WD Scorpio Blue drive to an Intel X25-M drive storing the OS. It was a MASSIVE difference, a ridiculous difference.
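For what it's worth, that boot-time jump works out to roughly one order of magnitude, not several:

```python
# Checking the "orders of magnitude" claim against the boot times above.
import math

hdd_boot, ssd_boot = 100, 8.5            # seconds, from the comment
speedup = hdd_boot / ssd_boot
print(round(speedup, 1))                 # ~11.8x faster
print(round(math.log10(speedup), 1))     # ~1.1 orders of magnitude
```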

Sure, I can get 3TB for $100, but for $170 I can get a very high performance SSD that is large enough (90GB) for my needs.

Why do all my computers need terabytes of storage? That's right... they don't. I only need large storage on shared network media. My computers need high-performance storage, not stupid amounts of extra GBs.

I know of a large company that is starting the switchover. They calculated that removing the loss in productivity caused by long OS startups more than easily pays for the cost of switching to SSDs. The math that you might use on your home computer doesn't always apply in the business world.

If you're revamping your data center to SSDs because of boot times, you're doing something really wrong. There is no way you should be able to reap any kind of cost benefit from that amount of boot time. Even workstations normally get left on most of the year anyway, and IT should only be patching and rebooting during off hours anyway. Not sure where you expect to gain any kind of benefit by cutting your boot times; maybe you could let me know?

Starting windows up from hibernation takes 30 seconds or less on a modern laptop, mainly depending on the RAM size.

Aforementioned corporation would have saved a lot of money if it just tossed around "this is how you make your laptop start up and shut down much faster AND you can resume your work right where you left off" memo.

The 'aforementioned company' is a highly scaled, recognizable technology provider with an extremely advanced IT organization. I guarantee that if 'Gee gosh-golly, maybe this here hibernate option will take care of things!' was an option, they would have gone down that road years ago...

It's interesting you mentioned shared network storage. After I ditched my desktop 4 years ago and went laptop-only, I decided to buy a NAS for mass storage. At the time, my motivation was mostly due to the fact that I was living in a small Manhattan apartment, and laptop disks were relatively slow and expensive. I was ahead of the curve then, and at the time it was unheard of even in geek circles - I am glad it is getting mentioned on Slashdot and is thus getting a bit more prominence. I can definitely see th

Considering flash was about $7.50/GB in 2007, $3.80 in 2009, and is now down to about $1.71/GB, all the while capacities are increasing, I think pricing will be "competitive" in a year or two. Also, we are just beginning the release cycle of the next generation - OCZ and Crucial are set to release their products this month, so price/GB could drop further in the very immediate future. Speeds are still increasing by leaps and bounds with each generation - the new Vertex 3s, in real actual use, have seen sustained transfer rates over 400 MegaBYTES per second.
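Those price points imply a fairly steady decline rate (simple compound-rate arithmetic on the figures quoted above):

```python
# Annualized price decline implied by the quoted $/GB figures.
p2007, p2011 = 7.50, 1.71      # $/GB in 2007 and 2011
years = 4

annual_ratio = (p2011 / p2007) ** (1 / years)
print(f"{(1 - annual_ratio) * 100:.0f}% cheaper per year")  # ~31%
```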

Adding an SSD is the best upgrade you can do to increase performance. If you look at the videos on YouTube, they show that loading even the largest, slowest apps like Photoshop, CAD, WoW, etc. is more than 2x as fast as on a hard disk - most app loads are instantaneous - and boot times are halved. SSDs use a fraction of the energy, which means cooler laptops with longer battery lives, and quieter desktops that also require less cooling. You are right that SSDs aren't suitable for mass storage; I think for at least 5 years we will see hybrid setups, and then gradually we will see a move towards SSD-only systems.

No, they didn't; read the white paper [intel.com] about it. You can even see all the capacitors involved in the AnandTech review [anandtech.com]. In theory, this has finally fixed the problem that made Intel's drives unusable for high-performance databases: that the write cache was useless in that context because it lied about writes.

Thanks, my mistake. I looked over the datasheet and product brochure and it made no mention of this. Since they were touting it prior to launch, it seems strange that it is no longer a marketing point. Hopefully the feature won't disappear, as has happened with certain other products after launch.

Oh maybe even sooner than that. A company like Intel can scale this thing right up into mass market efficiency/pricing any time it wants. It has that kind of capital (much like Apple scaled iPad right up to mass market pricing on day one).

How do you make a PC sexy again? How do you help it compete with tablets? The most important thing, IMO, is to get that spinning disk out of there and make SSDs so fast that you finally get near instant on/off.

It took them 3 years or so to go down 30% in price, maybe. It'll probably take them 2 more years to drop another 30%, and after that 1 more year to drop another 30%. At which point they'll most likely hit a wall and only drop around 30% every year, year after year.

I speculate 5-10 years to beat the price/performance of conventional hard drives. That's the point at which your average consumer does not find any value at all in owning a conventional hard drive. Already, many enthusiasts are will

Very bad idea. That's random writes, where SSDs are typically far slower than normal hard disks. In particular, many users with SSDs complained about freezes of several seconds at a time when using Firefox, and it turned out that all the updates to the history and cache indexes were the root of the problem. Move them to an HDD, and the problem was gone. No, this was not a bug in Firefox, but a side effect of how SSDs operate. IIRC, the Firefox developers still added a workaround, which to some deg

That's a problem with some controllers (Marvell's Da Vinci controller and whatever the hell WD uses in theirs, for example) that don't handle garbage collection well. After being in use for some time, the write speeds and IOPS go all over the place. It's not a problem with better controllers, like SandForce's stuff.

The SSD market still has a bunch of garbage in the mix, and you need to research stuff to avoid getting burned.