151 Comments

A 5 year warranty is a pretty solid commitment on the part of a manufacturer. I don't think they would have done that if they didn't trust the stability of the hardware, so they really put their money where their mouth is.

Other thing: is the Indilinx co-processor 'Argon' or 'Aragon'? The pic differs from your text description.

Well that, but I'm glad to see OCZ committing more to their drives... on my local price check there's Agility, Colossus, Enyo, Ibis, Lightfoot, Octane, Onyx, Petrol, RevoDrive, Synapse, Vertex and Z-Drive, not counting numbering or variations like Vertex EX, Vertex Limited Edition and Vertex Turbo, and using a zillion different controllers and stuff. The warranty is also an indication this is the technology they'll continue working on and fixing bugs for, which is good because their attention span has been too short and spread too thin. It's better to have a few models that kick ass than dozens of models that are all shoddy.

alacard may be right, OCZ is sliding closer to the cliff as we speak. There's so much competition in the SSD market, someone's got to go sooner or later, and it will probably be the less diversified companies that will go first. I recently bought a Vertex 4 128 for my boot drive, and it lasted only 15 days before it disappeared and refused to be recognized in BIOS. The Crucial M4 128 that replaced it has the problem of disappearing every time the power is shut off suddenly (or with the power button after Windows hangs), but comes back after a couple of reboots and a resetting of your boot priorities. And it's regarded as one of the most reliable drives out there. So in order for OCZ to remain solvent, the Vector must be super reliable and stable, and absolutely must stay visible in BIOS at all times. If it's plagued by the same problems as the Vertex 4, it's time to cash out and disappear before the bankruptcy court has its way.

Actually that connection is indeed a physically identically sized/compatible m-SATA connection. The problem is its inability to actually plug in due to the SSD's general size, or whether it's able to communicate with the typical m-SATA ports on mobos. http://www.pclaunches.com/entry_images/1210/22/tra... should give a decent example.

Might be a sign of something else in the works from OCZ, like an mSATA cable to plug into it, or maybe something even more awesome like doubling the bandwidth by connecting it to an OCZ PCI break-off board. I guess we will see.

If you've got a motherboard with SATA 6Gb/s you would probably notice a difference. Whether it's worth it is up to you - do you do a lot of disk-intensive work to the point where you wish it were faster? While I'm sure the difference would be noticeable, it might not be huge or worth spending $200+ on.

It's going to take more than a nice typewritten letter to resolve the many product and service issues at OCZ - if they stay in business over the next six to 12 months.

FYI- A five year warranty ain't worth the paper it's written on if the company no longer exists. In addition a five year warranty does not mean that a particular product is any better than a product with a one year warranty. For each extended year of warranty, the product price increases. So you're paying for something you may or may not ever use.

In addition it's useful to read the fine print on warranties. Most state that you will receive a refurbished or reconditioned replacement if your product develops a defect. If you've ever seen some of the "reconditioned" or "refurbished" mobos from Asus or similar products from other companies, you'd never install them in your PC.

People reach many untrue conclusions about product quality based on the warranty.

So, a longer warranty is only good if you use it? Otherwise you're paying for something you don't need?

And, you're paying extra for a 5-year warranty here? What, so all these top end SSDs, whose prices are lower than ever, are in fact over-priced with fake expensive warranties, so should come out with 1-year warranties and lower prices?

a refurbished SSD? I'm not even sure what that means. That's like going to McDonald's and getting a refurbished McFlurry. It doesn't even make sense.

This isn't a laptop, where worn parts can be replaced. This is a limited lifespan, consumable product, where replacing any parts is equivalent to throwing the old one away and pulling out a brand new one. If the warranty actually says this, then please, point me to it, but otherwise, I'm gonna have to call this bluff and say it's not practical.

The point that some of you seem to not understand is that the 5 year warranty does NOT mean that an SSD or other product is any better quality than a product with a one year warranty. And yes, you are paying for the extended warranty no matter what the current price. SSD prices are dropping as the cost to produce them is dropping. This particular OCZ model is not a high-end model by any stretch, it's just the SSD-of-the-week to be superseded by a new model in a month or two.

Refurbished can mean hand soldered chip replacement or other poorly executed repairs that would not be acceptable to most technically knowledgeable consumers. Reconditioned can mean it's been lying in the warehouse collecting dust for six months and nothing was actually done to repair it when it was returned defective. You would not believe some of the crap that ships as replacement warranty products.

^^^ I'm with Beenthere. A 5 year warranty means a 5 year warranty; nothing more, nothing less. The notion that '5 year warranty = great product!' is asinine.

I think if you want to assume anything based off a 5 year warranty in this case, it's because the product is new, the controller is relatively new, and it's an OCZ SSD product.

I'm not likely to buy an OCZ SSD anytime soon, but I'd definitely rather buy one with a 5 year warranty than a 1 or 3 year warranty....if I have to buy an OCZ branded SSD because every other brand is sold out.

I owned a 30GB Vertex. For 9 months, it was great. Then it turned into a big POS. Constant chkdsk errors. I did a sanitary erase/firmware flash and sold it for what I could get for it.

I certainly would not want a refurbished SSD. It would NOT mean new NAND chips, which are the parts most likely to be a problem. Or a new controller. I would never buy a refurbished HDD either. These devices do have lifetimes. Since you have no idea how these drives have been used, or abused, you are taking a very big chance for the dubious opportunity of saving a few bucks.

I can't help but wonder how many replacement SSDs it will take to get to the end of that 5 year warranty. If you go by the track record of the Vertex 3 & 4, you can expect a failure about every 90 days, so that's 20 drives, less shipping time to and from, so call it 15 drives with a total downtime of 1.25 years. Wow! Where can I get one? My Vertex 4 lasted 15 days, but I'm sure that was just a fluke...

I basically agree. From anecdotal reports, OCZ is one of the least reliable vendors, with their drives less reliable than the average HDD. And since, so far, average SSD reliability has been about the same as average HDD reliability, despite people's expectations, this isn't good.

Most people don't need the really high speeds a few of these drives support; higher reliability would be a much better spec to have. Unfortunately, these reviews can't indicate how reliable these drives will be in the longer term.

While I see that OCZ seems to be thought of as failing, this is the first I've heard of it. Have their sales collapsed of late? I was surprised to find that their long time CEO whom Anand had communicated so often with in the past is gone.

"FYI- A five year warranty ain't worth the paper it's written on if the company no longer exists." <- Depends on how you purchase it. Credit card companies will often honour warranties on products purchased from defunct companies. YMMV.

"Most state that you will receive a refurbished or reconditioned replacement if your product develops a defect." <- Happily now everyone in the thread after you has used this conjecture to knock OCZ warranties. That's not really your fault, but I don't think anyone here has read the terms of OCZ's warranty on this product yet?

The point being made here is that OCZ would not offer a 5 year warranty on the product if they thought the cost of honouring that warranty would eclipse their income from sales. This is why 1-year warranties are a red flag. So *something* can be inferred from it; just only about the manufacturer's confidence in their product. You can read into that whatever you want, but I don't generally find that companies plan to be out of business within their warranty period.

Your comment about it increasing the price of the product is odd, because this product is the same price and specification as models with shorter warranties. So either a) you're wrong, or b) you're trivially correct.

Hear, hear. I second that. I am so tired of getting worn refurbished parts for things I just bought BRAND NEW. CoolerMaster just did this for a higher end power supply I bought. Why would I want to spend a hundred dollars for a used PSU? Seriously. Now all the components aren't new in it. Once the warranty expires it'll die right away. Where is the support behind products these days?

It used to be that buying American meant you got quality and customer service. Gone are those days I guess, since all the corporations out there are about to start actually paying taxes.

The GiB/GB bug in Windows accounts for almost all of the difference. It is not worth mentioning that partitioning usually leaves 1MiB of space at the beginning of the drive. 256GB = 238.4186GiB. If you subtract 1MiB from that, it is 238.4176GiB. So why bother to split hairs?

When people see their 1TB-labelled drive displays only 931GB in Windows, they assume it's because formatting a drive with NTFS magically causes it to lose 8% of space, which is totally false. Here's a short explanation for newbie readers. A gigabyte (GB) as displayed in Windows is actually a gibibyte (GiB).

SSDs and HDDs are labelled differently in terms of space. Let's say they made a spinning hard disk with exactly 256GB (238GiB) of space. It would appear as 238GB in Windows, even after formatting. You didn't lose anything, because the other 18 gigs was never there in the first place.

Now, according to Anandtech, a 256GB-labelled SSD actually *HAS* the full 256GiB (275GB) of flash memory. But you lose 8% of flash for provisioning, so you end up with around 238GiB (255GB) anyway. It displays as 238GB in Windows.

If the SSDs really had 256GB (238GiB) of space as labelled, you'd subtract your 8% and get 235GB (219GiB) which displays as 219GB in Windows.
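The decimal-versus-binary arithmetic in the comments above can be sketched in a few lines. This is just an illustration of the conversion being discussed; the helper name is made up:

```python
# Sketch of the GB-vs-GiB arithmetic discussed above.
# Drive makers label capacity in decimal gigabytes (10**9 bytes);
# Windows displays gibibytes (2**30 bytes) but labels them "GB".

def labeled_gb_to_windows_gb(labeled_gb: float) -> float:
    """Capacity from the drive label, shown in Windows' binary 'GB' (really GiB)."""
    return labeled_gb * 10**9 / 2**30

# A 256 GB label shows up as ~238.4 "GB" in Windows:
print(round(labeled_gb_to_windows_gb(256), 1))   # 238.4

# A 1 TB (1000 GB) drive shows as ~931 "GB":
print(round(labeled_gb_to_windows_gb(1000)))     # 931
```

Nothing is "lost" in formatting; the two numbers are the same quantity of bytes expressed in different units.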

Tbh imho using base 10 units in a binary environment is just asking for a facepalm. Everything underneath runs on 2^n anyway, and this new "GB" vs "GiB" thing is just commercial bullshit so storage devices can be sold with flashier stickers. Your average RAID controller BIOS will show a 1TB drive as 931GB as well (at least the few ICHxR and the one server Adaptec I have access to right now all do).

What does that mean; usable space? Every OS leaves a different amount after formatting, so whether the drive is rated by GB or GiB, the end result would be different. Normally, SSDs are rated by the amount seen by the OS, not by that plus the amount overprovisioned. So it isn't really a problem.

Actually, the differences we're talking about aren't all that much, and are more a geeky thing to concern oneself with than anything else. Drives are big enough, even SSDs, so that a few GBs more or less isn't such a big deal.

An SSD can't operate without any over-provisioning. If you filled the whole drive, you would end up in a situation where the controller couldn't do garbage collection or any other internal tasks because every block would be full.

Drive manufacturers are not the issue here, Microsoft is (in my opinion). They are using GB while they should be using GiB, which causes this whole confusion. Or just make GB what it really is, a billion bytes.

Sorry to say so, but I am afraid you look at this from the wrong perspective. Unless you are an IT specialist, you go buy a drive that says 256GB and expect it to have 256GB capacity. You don't care how much additional space is there for replacement of bad blocks or how much is there for internal drive usage... so you will get pretty annoyed by the fact that your 256GB drive has, let's say, 180GB of usable capacity.

And now this GB vs GiB nonsense. From one point of view it's obvious that the k,M,G,T prefixes are by default *10^3,10^6,10^9,10^12... But in computer capacity units they used to be based on 2^10, 2^20 etc. to allow some reasonable recalculation between capacity, sectors and clusters of the drive. No matter what way you prefer, the fact is that Windows as well as many IDE/SATA/SAS/SCSI controllers count a GB as equal to 2^30 Bytes.

Also, if you say Windows measurement is wrong, why is RAM capacity shown in 'GB' but your 16GB shown in EVERY BIOS in the world is in fact 16384MiB?

Tbh there is a big mess in these units, and pointing out one thing to blame is a very hasty decision.

Also, up to some point the HDD drive capacity used to be in 2^10k prefixes a long time ago as well... I've still got an old 40MB Seagate that is actually 40MiB and a 205MB WD that is actually 205MiB. CD-Rs claiming 650/700MB are in fact 650/700MiB usable capacity. But then something changed, and your 4.7GB DVD-R is in fact 4.37GiB usable capacity. And same with hard discs...

Try explaining to angry customers in your computer shop that the 1TB drive you sold them shows as 931GB unformatted, both by the controller and by Windows.

Imho nobody would care the slightest bit that k,M,G in computers are base 2 as long as some marketing twat didn't figure out that his drive could be a bit "bigger" than competition by sneaking in different meaning for the prefixes.

It is absurd to claim that "some marketing twat didn't figure out that his drive could be a bit "bigger" than competition by sneaking in different meaning for the prefixes".

The S.I. system of units prefixes for K, M, G, etc. has been in use since before computers were invented. They have always been powers of 10. In fact, those same prefixes were used as powers of ten for about 200 years, starting with the introduction of the metric system.

So those "marketing twats" you refer to are actually using the correct meaning of the units, with a 200 year historical precedent behind them.

It is the johnny-come-latelys that began misusing the K, M, G, ... unit prefixes.

Fortunately, careful people have come up with a solution for the people incorrectly using the metric prefixes -- it is the Ki, Mi, Gi prefixes.

Unfortunately, Microsoft persists in misusing the metric prefixes, rather than correctly using the Ki, Mi, Gi prefixes. That is clearly a bug in Microsoft Windows. Kristian is absolutely correct about that.

No, he is right. Everything was fine until HDD guys decided they could start screwing customers for bigger profits. Microsoft and everyone else uses GB as they should with computers. It was HDD manufacturers that caused this whole GB/GiB confusion regarding capacity.

Well, 2^10k prefixes marked with 'i' were introduced by the IEC in 1998 and by the IEEE in 2005; alas, history shows frequent usage of both the 10^3k and 2^10k meanings. Even with the IEEE standard passed in 2005, it took another 4 years for Apple (who were the first with an OS running with 2^10k) to turn to the 'i' units, and a year later for Ubuntu with the 10.10 version.

For me it will always make more sense to use 2^10k since I can easily tell size in kiB, MiB, GiB etc. just by bitmasking (size & 11111111110000000000[2]) >> 10 (for kiB). And I am way too used to k,M,G with byte being counted for 2^10k.

"Now, according to Anandtech, a 256GB-labelled SSD actually *HAS* the full 256GiB (275GB) of flash memory. But you lose 8% of flash for provisioning, so you end up with around 238GiB (255GB) anyway. It displays as 238GB in Windows.

If the SSDs really had 256GB (238GiB) of space as labelled, you'd subtract your 8% and get 235GB (219GiB) which displays as 219GB in Windows. "

I'm pretty sure he's referring to the amount of NAND on the drive minus the 6.8% set aside as spare area, not the old mechanical meaning where you "lost" disk space when a drive was formatted because of base 10 to base 2 conversion.

How long does the heavy test take? The longest recorded busy time was 967 seconds from the Crucial M4. This is only 16 minutes of activity. Does the trace replay in real time, or does it run compressed? 16 minutes surely doesn't seem to be that much of a long test.

Yes, I took note of that :). That is the reason for the question, though: if there were an idea of how long the idle periods were, we could take into account the amount of time the GC for each drive functions, and how well.

Wouldn't this compress the QD during the test period? If the SSD's recorded activity is QD2 for an hour, then replaying the trace quickly creates a high-QD situation. QD2 for an hour compressed to 5 minutes is going to play back at a much higher QD.

I would love to have seen results using the 1.5 firmware for the 256GB Vertex 4. Going from 1.4 to 1.5 is non-destructive. The inconsistency of graphs in other SSD reviews that included the 512GB Vertex 4 drive with 1.5 firmware and the 256GB Vertex 4 drive with 1.4 firmware drove me nuts.

When I saw the Barefoot 3 press release on Yahoo Finance, I immediately went to your site hoping to see the review. I was happy to see the article up, but when I saw your review sample was 256GB I feared you would not have updated the firmware on the Vertex 4 yet. Unfortunately, my fears were confirmed. I love your site, that's why I'm sharing my $.02 as a loyal reader.

Some of the results are actually using the 1.5 firmware (IO consistency, steady state 4KB random write performance). We didn't notice a big performance difference between 1.4 and 1.5 which is why I didn't rerun on 1.5 for everything.

Isn't this similar? SandForce comes in and reaches top speed on SATA 6Gbps, then the other controllers, Marvell and Barefoot, manage to catch up. That is exactly what happened before with the SATA 3Gbps port. So in 2013 we will have controllers and SSDs all offering similar performance, bottlenecked by the port speed.

When are we going to see SATA Express that gives us 20Gbps? We need it ASAP.

SATA Express (on PCIe 3.0) will top out at 16 Gbps until PCIe 4.0 is out. This is the same bandwidth as single-channel DDR3-2133, by the way, so 16 Gbps should be plenty of performance for the next several years.

It is good to see anandtech including results of performance consistency tests under a heavy write workload. However, there is a small addition you should make for these results to be much more useful.

You fill the SSDs up to 100% with sequential writes and I assume (I did not see a specification in your article) do 100% full-span 4KQD32 random writes. I agree that will give a good idea of worst-case performance, but unfortunately it does not give a good idea of how someone with that heavy a write load would use these consumer SSDs.

Note that the consumer SSDs only have about 7% spare area reserved. However, if you overprovision them, some (all?) of them may make good use of the extra reserved space. The Intel S3700 only makes available 200GB / 264GiB of flash, which comes to 70.6% available, or 29.4% of the on-board flash is reserved as spare area.

What happens if you overprovision the Vector a similar amount? Or to take a round number, only use 80% of the available capacity of 256GB, which comes to just under 205GB.
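The overprovisioning fractions quoted in this comment work out as follows (a quick sketch of the arithmetic, using the figures from the posts above):

```python
# Spare-area arithmetic from the discussion above.
# Intel S3700: 200 GB usable out of 264 GiB of on-board flash.
flash_bytes = 264 * 2**30       # 264 GiB of raw NAND
usable_bytes = 200 * 10**9      # 200 GB exposed to the host

available = usable_bytes / flash_bytes
print(f"S3700 available: {available:.1%}")      # ~70.6%
print(f"S3700 reserved:  {1 - available:.1%}")  # ~29.4%

# Using only 80% of a 256 GB consumer drive, as proposed above:
print(f"80% of 256 GB = {0.8 * 256:.1f} GB")    # 204.8 GB, "just under 205"
```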

I don't know how well the Vector uses the extra reserved space, but I do know that it makes a HUGE improvement on the 256GB Samsung 840 Pro. Below are some graphs of my own tests on the 840 Pro. I included graphs of throughput vs. GB written, as well as latency vs. time. On the 80% graphs, I first wrote to all the sectors up to the 80% mark, then I did an 80% span 4KQD32 random write. On the 100% graphs, I did basically the same as anandtech did, filling up 100% of the LBAs then doing a 100% full-span 4KQD32 random write.

Note that when the 840 Pro is only used up to 80%, it improves by a factor of about 4 in throughput, and about 15 in average latency (more than a 100 times improvement in max latency). It is approaching the performance of the Intel S3700. If I used 70% instead of 80% (to match the S3700), perhaps it would be even better.

Excellent testing, very relevant, and thanks for sharing. How do you feel that the lack of TRIM in this type of testing affects the results? Do you feel that testing without a partition and TRIM would not provide an accurate depiction of real world performance?

I just re-read your comment, and I thought perhaps you were asking about sequence of events instead of what I just answered you. The sequence is pretty much irrelevant since I did a secure erase before starting to write to the SSD.

You are correct, I ran a 100% span of the 4KB/QD32 random write test. The right way to do this test is actually to gather all IO latency data until you hit steady state, which you can usually do on most consumer drives after just a couple of hours of testing. The problem is the resulting dataset ends up being a pain to process and present.

There is definitely a correlation between spare area and IO consistency, particularly on drives that delay their defragmentation routines quite a bit. If you look at the Intel SSD 710 results you'll notice that despite having much more spare area than the S3700, consistency is clearly worse.

As your results show though, for an emptier drive IO consistency isn't as big of a problem (although if you continued to write to it you'd eventually see the same issues as all of that spare area would get used up). I think there's definitely value in looking at exactly what you're presenting here. The interesting aspect to me is this tells us quite a bit about how well drives make use of empty LBA ranges.

I tend to focus on the worst case here simply because that ends up being what people notice the most. Given that consumers are often forced into a smaller capacity drive than they'd like, I'd love to encourage manufacturers to pursue architectures that can deliver consistent IO even with limited spare area available.

Anand wrote: "As your results show though, for an emptier drive IO consistency isn't as big of a problem (although if you continued to write to it you'd eventually see the same issues as all of that spare area would get used up)."

Actually, all of my tests did use up all the spare area, and had reached steady state during the graph shown. Perhaps you have misunderstood how I did my tests. I just overprovisioned it so that it had almost as much spare area as the Intel S3700. Otherwise, I was doing the same thing as you did in your tests.

The conclusion to be drawn is that the Intel S3700 is not all that special. You can approach the same performance as the S3700 with a consumer SSD, at least with a Samsung 840 Pro, just by overprovisioning enough.

It reaches steady state somewhere between 80 and 120GB. The spare area is used up at about 62GB and the speed drops precipitously, but then there is a span where the speed actually increases slightly, and then levels out somewhere around 80-120GB.

Note that steady state is about 110MB/sec. That is about 28K IOPS. Not as good as the Intel S3700, but certainly approaching it.

Hey J, thanks for taking the time to reply to me in the other comment. I think my question is even more noobish than you have assumed.

"I just overprovisioned it so that it had almost as much spare area as the Intel S3700. Otherwise, I was doing the same thing as you did in your tests."

I am confused because I thought the only way to "over-provision" was to create a partition that didn't use all the available space??? If you are merely writing raw data up to the 80% full level, what exactly does over provisioning mean? Does the term "over provisioning" just mean you didn't fill the entire drive, or did you do something to the drive?

No, overprovisioning generally just means that you avoid writing to a certain range of LBAs (aka sectors) on the SSD. Certainly one way to do that is to create a partition smaller than the capacity of the SSD. But that is completely equivalent to writing to the raw device but NOT writing to a certain range of LBAs. The key is that if you don't write to certain LBAs, however that is accomplished, then the SSD's flash translation layer (FTL) will not have any mapping for those LBAs, and some or all SSDs will be smart enough to use those unmapped LBAs as spare area to improve performance and wear-leveling.

So no, I did not "do something to the drive". All I did was make sure that fio did not write to any LBAs past the 80% mark.
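The "never write past a cutoff" idea described above can be sketched numerically. The sector size and drive capacity here are illustrative assumptions, not values stated in the thread:

```python
# Overprovisioning by simply never writing past a cutoff LBA.
# Assumes 512-byte logical sectors, a hypothetical 256 GB drive,
# and an 80% writable span as in the tests described above.

SECTOR = 512
drive_bytes = 256 * 10**9
writable_fraction = 0.80

total_lbas = drive_bytes // SECTOR
max_writable_lba = int(total_lbas * writable_fraction)

print(total_lbas)        # 500000000 sectors on the device
print(max_writable_lba)  # 400000000 -- LBAs at or above this stay unmapped
```

Whether the cutoff comes from a small partition or from a bounded raw-device test run, the FTL sees the same thing: 20% of the LBA space never gets mapped.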

"The conclusion to be drawn is that the Intel S3700 is not all that special. You can approach the same performance as the S3700 with a consumer SSD, at least with a Samsung 840 Pro, just by overprovisioning enough."

WOW - this is an interesting discussion which concludes that by simply over-provisioning a consumer SSD by 20-30% those units can approach the vetted S3700! I had to re-read those posts 2x to be sure I read that correctly.

It seems some later posts state that if the workload is not sustained (drive can recover) and the drive is not full, that the OP has little to no benefit.

So is the best bang really to just not fill the drives past 75% of the available area and call it a day?

The conclusion I draw from the data is that if you have a Samsung 840 Pro (or a similar SSD; I believe several consumer SSDs behave similarly with respect to OP), and the big one -- IF you have a very heavy, continuous write workload, then you can achieve large improvements in throughput and huge improvements in maximum latency if you overprovision at 80% (i.e., leave 20% unwritten or unpartitioned).

Note that such OP is not needed for most desktop users, for two reasons. First, most desktop users will not fill the drive 100% and as long as they have TRIM working, and if the drive is only filled to 80% (even if the filesystem covers all 100%), then it should behave as if it were actually overprovisioned at 80%. Second, most desktop users do not continuously write tens of Gigabytes of data without pause.

By the way, I am not sure why you say the data sets are "a pain to process and present". I have written some test scripts to take the data automatically and to produce the graphs automatically. I just hot-swap the SSD in, run the script, and then come back when it is done to look at the graphs.

Also, the best way to present latency data is in a cumulative distribution function (CDF) plot with a normal probability scale on the y-axis, like this:

One other tip is that it does not take hours to reach steady state if you use a random map. This means that you do a random write to all the LBAs, but instead of sampling with replacement, you keep a map of the LBAs you have already written to and don't randomly select the same ones again. In other words, write each 4K-aligned LBA on a tile, put all the tiles in a bag, and randomly draw the tiles out but do not put the drawn tile back in before you select the next tile. I use the 'fio' program to do this. With an SSD like the Samsung 840 Pro (or any SSD that can do 300+ MB/s 4KQD32 random writes), you only have to write a little more than the capacity of the SSD (e.g., 256GB + 7% of 256GB) to reach steady state. This can be done in 10 or 20 minutes on fast SSDs.
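The "tiles in a bag" random map described above is just sampling without replacement, which a shuffle gives you directly. A minimal sketch (the block count is made up; fio's `random_generator`/zoned options are what the commenter actually used):

```python
import random

# "Random map" preconditioning: visit every 4 KiB-aligned block exactly
# once, in random order -- sampling without replacement, like drawing
# tiles from a bag without putting them back.

def random_map(num_blocks: int, seed: int = 0) -> list:
    order = list(range(num_blocks))
    random.Random(seed).shuffle(order)  # each block index appears exactly once
    return order

offsets = random_map(8)
assert sorted(offsets) == list(range(8))  # full coverage, no repeats
print(offsets)
```

Because every LBA is written exactly once, total bytes written is just the drive capacity (plus the spare-area fraction), which is why steady state arrives so quickly.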

I consistently over-provision every single SSD I use by at least 20%. I have had stellar performance doing this with 50-60+ SSDs over the years.

I do this on friend's/family's builds and tell anybody I know to do this with theirs. So, with my tiny sample here, OP'ing SSDs is a big deal, and it works. I know many others do this as well. I base my purchase decisions with OP in mind. If I need 60GB of space, I'll buy a 120GB. If I need 120GB of usable space, I'll buy a 250GB drive, etc.

I think it would be a valuable addition to the Anand suite of tests to account for this option that many of us use. Maybe a 90% OP write test and maybe an 80% OP write test, assuming there's a consistent difference between the two.

I should note that this works best after a secure erase, and during the Windows install, don't let Windows take up the entire drive. Create a smaller partition from the get-go. Don't shrink it later in Windows, once the OS has been installed. I believe the SSD controller knows that it can't do its work in the same way if there is/was an empty partition taking up those cells. I could be wrong - this was the case with the older SSDs - maybe the newer controllers treat any free space as fair game to do their garbage collection/wear leveling.

If the SSD has a good TRIM implementation, you should be able to reap the same OP benefits (as a secure erase followed by creating a smaller-than-SSD partition) by shrinking a full-SSD partition and then TRIMming the freed LBAs. I don't know for a fact that Windows 7 Disk Management does a TRIM on the freed LBAs after a shrink, but I expect that it does.

I tend to use linux more than Windows with SSDs, and when I am doing tests I often use linux hdparm to TRIM whichever sectors I want to TRIM, so I do not have to wonder whether Windows TRIM did what I wanted or not. But I agree that the safest way to OP in Windows is to secure erase and then create a partition smaller than the SSD -- then you can be absolutely sure that your SSD has erased LBAs, that are never written to, for the SSD to use as spare area.

Wouldn't it be better if you just paid half price and bought the 60GB drive (or 80GB if you actually *need* 60GB) for the amount of space you needed at the present, and then in a year or two when SSD's are half as expensive, more reliable, and twice as fast you upgrade to the amount of space your needs have grown to?

Your new drive without overprovisioning would destroy your old overprovisioned drive in performance, have more space (because we're double the size and not 30% OP'ed), you'd have spent the same amount of money, AND you now have an 80GB drive for free.

Of course, you should never go over 80-90% usage on an SSD anyway, so if that's what you're talking about then never mind...

Nice results and great pictures. Really shows the importance of free space/OP for random write performance. Even more amazing is that the results you got seem to fit quite well with a simplified model of SSD internal workings:

Let's assume we have an SSD with only the usual ~7% of OP which was nearly 100% filled (one could say trashed) by purely random 4KB writes (should we now write KiB just to make a few strange guys happy?), and assume also that the drive operates on 4KB pages and 1MB blocks (today's drives seem to use more like 8KB/2MB, but 4KB makes things simpler to think about), so there are 256 pages per block. If the trashing was good enough to achieve perfect randomisation, we can expect that each block contains about 18-19 free pages (out of 256). Under heavy load (QD32, using NCQ etc.) decent firmware should be able to make use of all the free pages in a given block before it (the firmware) decides to write the block back to NAND. Thus under heavy load and with the above assumptions (7% OP) we can expect, in the worst case (SSD totally trashed by random writes and thus free space fully randomized), a Write Amplification of about 256:18 ~= 14:1.

Now when we allow for 20% of free space (in addition to the implicit ~7% OP) we should see on average about 71-72 out of 256 pages free in each and every block. This translates to WA ~= 3.6:1 (again assuming that the firmware is able to consume all free space in a block before writing it back to NAND. That is maybe not so obvious, as there are limits on the max number of I/Os bundled in a single NCQ request, but it should not be impossible for the firmware to delay the block write a few msecs until the next request arrives, to see if there are more writes to be merged into the block).

Differences in WA translate directly into differences in performance (as long as there is no other bottleneck, of course), so with 14:3.6 ~= 3.9 we may expect random 4KB write performance nearly 4× higher for a drive with 20% free space compared to a drive working with only the bare 7% of implicit OP. May be just an accident, but that seems to fit pretty closely with the results you achieved. :)Reply
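The back-of-envelope model above can be sketched in a few lines of Python (the 256-pages-per-block geometry and the "fill every free page before rewriting the block" assumption are taken from the comment; the exact percentages are illustrative, not measured):

```python
PAGES_PER_BLOCK = 256  # 4KB pages in a 1MB block, per the simplified model

def worst_case_wa(free_fraction: float) -> float:
    """Worst-case write amplification when free space is spread
    uniformly over all blocks (the 'perfectly trashed' state).
    Each 256-page block rewrite retires only its free pages worth
    of new host data, so WA = pages_per_block / free_pages."""
    free_pages = free_fraction * PAGES_PER_BLOCK
    return PAGES_PER_BLOCK / free_pages

wa_7 = worst_case_wa(0.07)   # ~14.3 with only the implicit ~7% OP
wa_27 = worst_case_wa(0.27)  # ~3.7 with 20% extra free space on top
speedup = wa_7 / wa_27       # ~3.9x expected random-write gain
```

Note the ratio reduces to simply 0.27/0.07, which matches the ~4× random-write difference the comment derives.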

Yeah, there are a lot of assumptions and simplifications in the above... I surely wouldn't call it an analysis; perhaps even hypothesis would be a bit too much. Modern SSDs have lots of bells and whistles - as well as quirks - and most of them - particularly the quirks - aren't well documented. All that means the conformity of the estimation with your results may very well be nothing more than just a coincidence. The one thing I'm reasonably sure of, however - and it is the reason I thought it was worth writing the post above - is that for random 4K writes on a heavily (ab)used SSD, the factor limiting performance the most is Write Amplification (at least as long as we talk about decent SSDs with decent controllers - Phisons don't qualify here I guess :)

In addition to the obvious simplifications, there was one other shortcut I took above: I based my reasoning on the "perfectly trashed" state of the SSD - that is, one where the SSD's free space is spread in pages equally over all NAND blocks. In theory the purpose of GC algorithms is to prevent drives from reaching such a state. Still, I think there are workloads which may bring SSDs close enough to that worst possible state for it to remain meaningful as a worst-case scenario.

In your case, however, the starting point was different. AFAIU you first used sequential I/O to fill 100 / 80 % of the drive capacity, so we can safely assume that before the random write session started the drive contained about 7% (or ~27% in the second case) of clean blocks (with all pages free), and the rest of the blocks were completely filled with data with no free pages (thanks to the NAND-friendly nature of sequential I/O).

Now when random 4K writes start to fly... looking from the LBA-space perspective these are by definition overwrites of random chunks of LBA space, but from the SSD's perspective at first we have writes filling the pool of clean blocks, coupled with deletions of randomly selected pages within blocks which were until now fully filled with data. Surely such a deletion is actually reduced to just marking those pages as free in the firmware's FTL tables (any GC at that moment seems highly unlikely imho). At last comes the moment when the clean block pool is exhausted (or when the size of the clean pool falls below a threshold), which wakes up the GC algorithms to do their sisyphean work. At that moment the situation looks like this (assuming there was no active GC until now and that the firmware was capable enough to fill clean blocks fully before writing to NAND): 7% (or 27% in the second case) of blocks are fully filled with (random) data, whereas 93/73 % of blocks are now pretty much "trashed" - they contain (virtually - just marked in the FTL) randomly distributed holes of free pages.

The net effect is that - compared to the starting point - the free space condensed at first in the pool of clean blocks is now evenly spread (with page granularity) over most of the drive's NAND blocks. I think that state does not look much different than the state of complete random trashing I assumed in the post above... From that point onward, until the end of the random session, there is an ongoing epic struggle against entropy: on one side the stream of incoming 4K writes is punching more free-page holes in SSD blocks, effectively trying to randomize the distribution of the drive's available free space, while on the other side the GC algorithms are doing their best to reduce the chaos and consolidate free space as much as possible.

As a side note, I think it is a real pity that there is so little transparency amongst vendors in terms of communicating the internal workings of their drives to customers. I understand the commercial issues and all, but depriving users of the information they need to use their SSDs efficiently leads to lots of confused and sometimes simply disappointed consumers, and that is not good - in the long run - for the vendors either. Anyway, maybe it is time to think about an open source SSD firmware for the community?! ;-)))

ps. Thanks for the reference to fio. Looks like a very flexible tool, and maybe easier to use than iometer. Surely worth a try at least.Reply

So based on 36TB ending the warranty, you can basically only fill up your 512GB drive 72 times before the warranty expires? That doesn't seem like a whole lot of durability; reinstalling a few large games several times could wear this out pretty quickly... or am I misunderstanding something? According to my calculations, assuming a gigabit network connection running at 125MB per second storing data, that is 0.12GB per second, 7.3242GB per minute, 439GB per hour, or 10.299TB per day... Assuming this heavy write usage, that 36TB could potentially be worn out in as little as 3.5 days using a conservative gigabit network speed as the baseline.Reply
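The commenter's arithmetic can be checked with a quick script (the 125 MB/s gigabit figure is the comment's own assumption; decimal TB are used here, which is why this lands nearer 3.3 days than the 3.5 quoted, which appears to use binary TiB):

```python
# Back-of-envelope check: how long a saturated gigabit link
# (~125 MB/s of payload) takes to burn through a 36 TB write allowance.
GIGABIT_MB_PER_S = 125
WARRANTY_TB = 36

mb_per_day = GIGABIT_MB_PER_S * 60 * 60 * 24   # 10,800,000 MB/day
tb_per_day = mb_per_day / 1_000_000            # 10.8 TB/day (decimal TB)
days_to_limit = WARRANTY_TB / tb_per_day       # ~3.33 days
```

Either way, the conclusion stands: sustained line-rate writes would exhaust the stated endurance in days, which is why the replies below point out that this workload calls for enterprise SLC drives, not consumer ones.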

I assume warranties such as this assume an unrealistically high write amplification (e.g. 10x) (to save the SSD maker some skin, probably). Your sequential write example (google "rogue data recorder") most likely has a write amplification very close to 1. Hence, you can probably push much more data (still though the warranty remains conservative).Reply

The write amount does actually scale with capacity, OCZ just tried to simplify things with how they presented the data here. In actuality, even the smallest capacity Vector should be good for more than 20GB of host writes per day x 5 years.

These endurance tests that they use to generate the predicted life of the SSD are with 100% fill and full-span random writes. This prevents the SSD from efficiently performing many of the internal tasks that reduce write amplification. You would need to be doing full-span random writes to see these types of endurance numbers. Free capacity on the drive, and data types other than 4K random, will result in much higher endurance. These numbers are intentionally worst-case scenarios.Reply

If your usage case is saturating a Gigabit connection 24/7, you need to be buying SLC Enterprise drives (and get a better network connection :P).

36TB doesn't sound like much if you're making up crazy scenarios, but that is probably near a decade of use for a normal power user. Another way to put it is that you'd have to re-install a 12GB game 3,000 times to reach that number.Reply

So, this drive costs as much as an 840 Pro (or a little less for the 512GB version) and has slightly worse performance in most cases. But if I use more than 50% of its capacity, I get much worse performance? That's something that bugged me in the Vertex 4 reviews: you test with the performance mode enabled in pretty much all graphs, but I will use it without it, because if I buy an SSD, I intend to use more than 50% of the drive. I don't get it.Reply

You ONLY see the slowdown when you write to the whole of the drive in one go, so you will only ever see it if you sit running HDTach or a similar benchmark across the whole of the drive. The drive is actually intelligent: say you write a 4.7GB file, for instance; it writes the data in a special way, more like an enhanced burst mode. Once the writes have finished, it then moves that written data to free up this fast-write NAND so it's available again.

It does this continually as you use the drive; if you are an average user writing, say, 15GB a day, you will NEVER see a slowdown.

The way it works is that in the STEADY STATE, performance mode is faster than storage mode. This should be obvious, because why would they even bother having two modes if the steady-state performance were not different between the modes?

Now, there is a temporary (but severe) slowdown when the drive switches from performance mode to storage mode, but I don't think that is what Death666Angel was talking about.

By the way, if you want a simple demonstration of the STEADY STATE speed difference between the modes, then secure erase the SSD, then use something like HD Tune to write to every LBA on the SSD. It will start out writing at speed S1, then around 50% or higher it will write at a lower speed, call it Sx. But that is only temporary. Give it a few minutes to complete the mode switch, then run the full drive write again. It will write at a constant speed over the drive, call it S2. But the key thing to notice is that S2 is LESS THAN S1. That is a demonstration that the steady-state performance is lower once the drive has been filled past 50% (or whatever percentage triggers the mode switch).Reply

In the end you do NOT fully understand how the drives work; you think you do, but you do not. If a 100%-LBA write test is run on the 128s and 256s you get what Anand shows; the reason is that the drive is unable to move data around during the test. So... if you like running 100% LBA write tests on your drive all day, then knock yourself out: buy the 512 and, as you see, it delivers right through the LBA range. However, if you just want to run the drive as an OS drive and you average a few GB of writes per day, with coffee breaks and time away from the PC, then the drive will continually recover and deliver full speed with low write access for every write you make, right up until it's full. The difference is you are not writing to 100% of the LBA range in one go.

So what I said about it being a benchmark quirk is 100% correct. Yes, when you run that benchmark the 256s and 128s do slow up. However, if you install an OS and then load all your MP3s onto the drive, and it hits 70% of a 128, it may slow down if it runs out of burst-speed NAND to write to, BUT as soon as you finish writing it will recover. In fact, if you wrote the MP3s in 10GB chunks with a one-minute pause between each write, it would never slow down.

The drives are built to deliver with normal write usage patterns...you fail to see this though.

Maybe we need to give the option to turn the burst mode off and on; maybe then you will see the benefits.Reply

BTW the test was run on an MSI 890FX with SB850, so an old SATA3 AMD-based platform... this is my workstation. The drive is much faster on an Intel platform, due to the AMD SATA controller not being as fast.Reply

I show you a Vector with no slowdown, the same write access latency across 100% of the LBA range, and explain why the two other capacity drives work the way they do, and it's still not good enough.

Come to my forum, ask what you want and we will do everything we can to answer every question within the realms of not disclosing any IP we have to protect.

In fact Jwilliams email me at tony_@_ocztechnology.com without the _ and I will forward an NDA to you, sign it and get it back to me and I will call you and explain exactly how the drives work..you will then know.Reply

By the way, it seems like your explanation is, if you do this, and only this, and do not do that, and do this other thing, but do not do that other thing, then the performance of OCZ SSDs will be good.

So I have another question for you. Why should anyone bother with all that rigamarole, when they can buy a Samsung 840 Pro for the same price, and you can use it however you want and get good performance?

Heh, didn't think I'd set off such a discussion. jwilliams is right about what my question would be. And just showing a graph that does not have the slowdown in the second 50% is not proof that the issue does not exist (it has been shown by other sites, and you cannot tell us why they saw it). I also don't care about the rearranging of the NAND that takes place between the two operation modes; that slowdown is irrelevant to me. What I do care about is that there are two different modes, one operating when the disk is less than 50% full, the other operating over that threshold, and that I will only use the slower one, because I won't buy a 512GB drive just to have 256GB of usable space. And if the two modes have exactly the same speed, why have them at all? NDA information about something as vital as that is bullshit btw. :)Reply

...these drives idle a lot more of the time than they work at full speed. A considerably higher idle is just bad all around.

I don't think OCZ's part warrants the price they're asking. Its performance is lower most of the time, it's a power hog, it's obviously hotter, it has the downsides of their 50% scheme, and it has OCZ's (less than) "stellar" track record of firmware blitzkrieg to go along with it.

I wonder, how many times will I lose all my data while constantly updating its firmware? 10? 20 times?Reply

That was beta firmware that Samsung has admitted had a problem. They said all retail drives shipped with the newer, fixed firmware. There have been ZERO reported failures of retail 840 Pro drives.Reply

Anand(tech) did lots of good testing, but seems to have left out copy performance.

Copy performance can be less than one tenth of the read or write performance, even after taking into account that copying a file takes twice the interface bandwidth of moving the file in one direction over a unidirectional interface. (Seeing that one drive is only able to copy at less than 10MB/second, compared to 200MB/s for another drive, when each can read or write faster than 400MB/s over a 6Gb/s interface, is much more important than seeing that one can read at 500MB/s and the other only at 400MB/s.)

I use actual copy commands (for single files and trees) and the same on TrueCrypt volumes, as well as HD Tune Pro File Benchmark, for these tests. (For HD Tune Pro, the difference between 4 KB random single and multi is often the telling point.)

I'd also like to see the performance of the OCZ Vector at 1/2 capacity.

I'd also like to see how the OCZ Vector 512GB performs on the Oracle Swingbench benchmark. It would be interesting to see how the Vector at 1/2 capacity compares to the Intel SSD DC S3700.Reply

Copy performance is tied to the block size you use when reading and writing. I.e., if you read 4K at a time, then write 4K at a time, you will get different performance than reading 4MB at a time and then writing 4MB. So it largely depends on the specific app you are using. Copy isn't anything special, just reads and writes.Reply
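As an illustration of the block-size point, here is a minimal chunked copy in Python (the function name and paths are hypothetical; timing it with different `chunk_size` values, e.g. 4 KiB vs. 4 MiB, on the device under test is how you would observe the effect described):

```python
def copy_chunked(src_path: str, dst_path: str, chunk_size: int) -> None:
    """Copy a file using a fixed read/write chunk size.
    A copy interleaves reads and writes, and on many flash devices
    throughput varies widely with the chunk size chosen."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break  # end of file reached
            dst.write(chunk)
```

Wrapping two calls (one with `chunk_size=4096`, one with `chunk_size=4 * 1024 * 1024`) in a timer would reproduce the comparison the comment describes, since the only variable is the I/O granularity.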

Maybe I should have explained more: I have found that most USB keys and many SATA SSDs perform MUCH worse (a factor of 10, and even up to more than 300, decrease in performance) when reads and writes are mixed, rather than being a bunch of reads followed by a bunch of writes.

The reads and writes can be to random locations and there still can be a big performance hit.

I feel that a simple operating-system copy of a large sequential file, and of a tree of many smaller files, should be done, since those two tests have shown me large performance differences between two devices that have about the same sequential read rate, sequential write rate, reads/second, and writes/second when the reads and writes aren't mixed.

I also found that HD Tune Pro File Benchmark sometimes shows significant (factor of 10 or more) differences between the sequential, 4 KB random single, and 4 KB random multi tests.

(For my own personal use, the best benchmark seems to be copying a tree of my own data that has about 6GB in about 25,000 files, and copying from one 8GB TrueCrypt virtual disk to another on the same device. I see differences of about 15 to one between devices I have tested in the last year that all show speeds limited by my 7-year-old motherboards in sequential tests, yet all perform much slower in the tree copy tests.)

Since the tree is my ad-hoc data and my hardware is so old, I don't expect anyone to be able to duplicate the tests, but I have given results in USENET groups that show there are large performance differences that are not obviously related to bottlenecks or slowness of my hardware.

There could be something complicated happening that is due, for instance, to a problem with intermixing read and write operations on the USB 3 or SATA interface that is dependent on the device under test but not due to an inherent problem with the device under test. But I think that the low performance for interleaved reads and writes is at least 90% due to the device under test and less than 10% due to problems with mixing operations on my hardware, since some devices don't take a performance hit when read and write operations are mixed, and have sequential unidirectional performance much higher than 200MB/s on SATA and up to 134MB/s on USB 3.

There could be some timing issues caused by having a small number of buffers (much less than 1000), only 2 CPUs, having to wait for encryption, etc., but I don't think these add up to a factor of 4, and, as I have said, I see performance hits of much more than 15:1 for the same device, when all I did was switch from copying from another flash device to the flash device under test, to copying from one location on the flash device under test to another location on the same device. Similarly, the HD Tune Pro File Benchmark sequential and 4 KB random single tests, compared to 4 KB random multi with a multi of 4 or more, take a hit of up to 100× for some USB 3 flash memory keys, whereas other flash memory keys may run at about the same speed for random single and multi, and at about the same speed as the poorly performing device does for 4 KB random single.Reply

Anand, I just want to know what you make of the difference between the new CEO sending a formal, official letter compared to the hand-written notes by Ryan. To me (an outsider), official letters are boring, as they are just a carbon copy of the same letter sent to many others. A handwritten note would mean more to me. Now, given that the handwritten note was more of a nudge, I can understand that perhaps a less "nudging" note would be more appreciated, but I digress. Just curious.-MarchReply

Do you have more confidence this time that OCZ is actually being honest about the contents of their controller chip? Clearly last time you were concerned about OCZ's behaviour when you reviewed the Octane (both in terms of reviewing their drives and allowing them to advertise), and they outright lied to you about the contents of the chip; they lied to everyone until they got caught.

This time do you think the leopard has changed its spots or is this just business as usual for a company that cheats so frequently?Reply

If these are priced to compete with Samsung's 840 Pro, only a die-hard OCZ fanboy would buy one, since the 840 Pro beats it in almost every benchmark and is considered the most reliable brand, while OCZ has a long, rich history of failed drives, controllers, and firmware. Even if they were priced $50 below the Samsung I wouldn't buy one, at least not until they had 6 months under their belt without major issues. It gets old rebuilding your system every time your SSD has issues.Reply

I noticed that in the consistency testing, the Intel 330 seemed to outperform just about everything except the Intel 3700. That seems like a story worth exploring! Is the 330 a sleeper user-experience bargain?Reply

For one thing, it did not look to me like the 330 had yet reached steady-state in the graphs provided. Maybe it had, but at the point where the graph cut-off things were still looking interesting.Reply

In one of the podcasts (E10?) Anand talks about how SF controllers have fewer issues with these worst-case IO latency scenarios. So it's not necessarily an Intel feature but a SandForce feature, and the graph might look the same with a Vertex 3, etc. Also, it may behave differently if it were filled with different sequential data at the start of the test, and if the test were to run longer. I wouldn't draw such a positive conclusion from the test Anand has done there. :)Reply

Did they have to name their two drives Vector and Vertex? They couldn't have picked two names that looked more alike if they tried. I have to imagine this was done on purpose for some reason that I can't think of. Now that OCZ has its own controller, are they retiring the Vertex, or will they just use Barefoot controllers in Vertex SSDs going forward?Reply

It is great that the OCZ Vector is able to compete with the Samsung SSDs in terms of performance, but OCZ's past reliability record has been iffy at best: their drives fail prematurely and RMA rates have been quite high. I've known countless people suffering issues with OCZ drives.

I'll wait a while to see whether the OCZ Vector can match the reliability of Corsair, Intel or Samsung drives before recommending OCZ drives to anyone again. Until then, I'll keep recommending Samsung drives, as they exceed most manufacturers in both performance and reliability.Reply

If you check the review of the Vector on the hardwarecanucks website, page 11, you will see the Vector AND Vertex crush every other drive listed when filled to over 50% capacity. This is probably the most important benchmark by which to judge SSD performance.

"While the Vector 256GB may not have topped our charts when empty, it actually blasted ahead of every other drive available when there was actual data housed on it. To us, that’s even more important than initial performance since no one keeps their brand new drive completely empty. "Reply

You are right, and I wish they had included the 840 Pro, but they didn't. The point is that compared to all those other SSDs/controllers, the BF3 clearly outperforms everything in the real world with actual data on the drive. The 840 Pro uses faster NAND than the Vector, yet both drives are pretty much equal. The Toshiba Toggle version of the Vector can't come soon enough!Reply

The Samsung 840 Pro is significantly faster (about 33%) than the Vector for 4KiB QD1 random reads. This is an important metric, since small random reads are the slowest operation on a drive, and if you are going to take just one figure of merit for an SSD, that is a good one.Reply

Well, according to most sites, the Vector beats it on writes and in mixed read/write environments, especially under heavy use. Not to mention the 840 takes a long time to get its performance back after getting hammered hard, whereas the Vector recovers very quickly.Reply

I have seen nothing to suggest the Vector recovers more quickly. If anything, there is circumstantial evidence that the Vector has delayed recovery after heavy writes (assuming the Vector is similar to the Vertex 4), due to the Vector's quirky "storage mode" type behavior.

You don't. Most SSDs come in 128, 256 or 512 GB sizes. If an SSD shows a decreased size, usually 120, 240 or 480 GB, it means the controller has already over-provisioned the SSD for you.Reply
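A quick way to see the spare area implied by those capacity pairings (a hypothetical helper; the 256-vs-240 and 128-vs-120 figures follow the comment's examples):

```python
def op_percent(nand_gb: float, usable_gb: float) -> float:
    """Spare area (over-provisioning) as a percentage of usable capacity."""
    return (nand_gb - usable_gb) / usable_gb * 100

op_240 = op_percent(256, 240)  # ~6.7%: a "240GB" drive built on 256GB of NAND
op_120 = op_percent(128, 120)  # ~6.7%: the same ratio at half the capacity
```

Both work out to roughly the ~7% implicit OP discussed in the write-amplification comments earlier on this page.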

I have WAY too much scar tissue from this vendor to ever buy their products again. I bought five of their SSDs and went five for five RMAing them. I have the replacements, but don't trust them enough to use them in anything other than evaluation work, because they are just not dependable. I would avoid them like the plague.Reply

I have had multiple SSDs from OCZ, and none of them have failed to this day. I boot Mac OS X from my OCZ Vector, and from every OCZ SSD before that. In my experience, it's not the OCZ SSDs that have terrible reliability, it's Windows. Besides, have any of you complaining about OCZ SSDs ever tried turning off automatic disk defragmentation in Windows? Windows has an automated tool to defragment HDDs, but when you plug in an SSD, the tool is supposed to be automatically disabled. Chances are, those of you with SSD problems have a Windows PC that did not successfully disable automated defragmentation, and have had your SSDs killed by that. Mac OS X does not have an automated disk defragmenting tool, as it generally tries not to write in fragments. Without the automated defragmentation tool, my OCZ SSDs have never failed.Reply

My OCZ VTR1-25SAT3-512G failed after just 33 days, three days after the vendor's replacement agreement expired. It had to go to OCZ; OCZ is replacing the drive, but they are following a delayed time frame to get the new drive into my hands.Reply