I'm not sure what it is about SSD manufacturers and overly complicated product stacks. Kingston has no fewer than six different SSD brands in its lineup: the E Series, M Series, SSDNow V 100, SSDNow V+ 100, SSDNow V+ 100E and SSDNow V+ 180. The E and M Series are just rebranded Intel drives; they use Intel's X25-E and X25-M G2 controllers respectively, with a Kingston logo on the enclosure. The SSDNow V 100 is an update to the SSDNow V Series drives, both of which use the JMicron JMF618 controller. Don't confuse it with the 30GB SSDNow V Series Boot Drive, which actually uses a Toshiba T6UG1XBG controller, the same controller found in the SSDNow V+. Confused yet? It gets better.

The standard V+ is gone and replaced by the new V+ 100, which is what we're here to take a look at today. This drive uses the T6UG1XBG controller but with updated firmware. The new firmware enables two things: very aggressive OS-independent garbage collection and higher overall performance. The former is very important as this is the same controller used in Apple's new MacBook Air. In fact, the performance of the Kingston V+100 drive mimics that of Apple's new SSDs:

Apple vs. Kingston SSDNow V+100 Performance

Drive                         Sequential Write   Sequential Read   Random Write   Random Read
Apple TS064C 64GB             185.4 MB/s         199.7 MB/s        4.9 MB/s       19.0 MB/s
Kingston SSDNow V+100 128GB   193.1 MB/s         227.0 MB/s        4.9 MB/s       19.7 MB/s

Sequential speed is higher on the Kingston drive, but that is likely due to the capacity difference. Random read/write speeds are nearly identical. And there's one phrase in Kingston's press release that sums up why Apple chose this controller for its MacBook Air: "always-on garbage collection". Remember that NAND is written at the page level (4KB) but erased at the block level (512 pages). Unless told otherwise, SSDs try to retain data as long as possible, because erasing a block of NAND usually means wiping a mix of valid and invalid data and then re-writing the valid data to a new block. Garbage collection is the process by which a block of NAND is cleaned for future writes.
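The block-erase penalty is easy to quantify. Here is a minimal sketch (not any vendor's actual firmware, just the page/block geometry described above) of what reclaiming a single block costs:

```python
# Toy flash-translation math (hypothetical, not Kingston's firmware):
# pages are 4KB, a block holds 512 pages, and reclaiming a block means
# copying its still-valid pages elsewhere before the erase can happen.

PAGE_SIZE_KB = 4
PAGES_PER_BLOCK = 512

def gc_cost(valid_pages_in_block):
    """KB of extra NAND writes needed to reclaim one block."""
    # Every valid page must be re-written to a fresh block first.
    return valid_pages_in_block * PAGE_SIZE_KB

# A block that is 75% valid forces 384 pages of copying just to
# free the 128 pages' worth of stale space.
valid = int(PAGES_PER_BLOCK * 0.75)
print(gc_cost(valid))        # 1536 KB copied to reclaim one block
```

The worse the fragmentation (the more valid pages stranded per block), the more copying each reclaim costs, which is exactly why a lazy collector sees write latency balloon.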

If you're too lax with your garbage collection algorithm, write speed will eventually suffer: each write picks up a large cleanup penalty, driving write latency up and throughput down. Be too aggressive with garbage collection and drive lifespan suffers instead. NAND can only be written/erased a finite number of times; aggressively cleaning NAND before it's absolutely necessary keeps write performance high at the expense of wearing out the NAND more quickly.

Intel was the first to really show us what realtime garbage collection looked like. Here is a graph showing sequential write speed of Intel's X25-V:

The almost periodic square wave formed by the darker red line above shows a horribly fragmented X25-V attempting to clean itself up at every write. At each write request the controller tries to clean some blocks, and with enough writes the X25-V eventually returns to peak performance. The garbage collection isn't seamless, but it does restore performance over time.

Now look at Kingston's SSDNow V+100, both before fragmentation and after:

There's hardly any difference. Actually, the best way to see this at work is to look at power draw while firing random write requests all over the drive. The SSDNow V+100 shows wild swings in power consumption during our random write test, ranging from 1.25W to 3.40W, with several swings within a window of a couple of seconds. The V+100 tries to reorganize writes and recycle dirty blocks more aggressively than any other SSD we've seen.

The benefit of this is you get peak performance out of the drive regardless of how much you use it, which is perfect for an OS without TRIM support - ahem, OS X. Now you can see why Apple chose this controller.

There is a downside, however: write amplification. For every 4KB we randomly write to a location on the drive, the actual amount of data written to NAND is much, much greater. It's the cost of constantly cleaning/reorganizing the drive for performance. While I haven't had any 50nm, 4xnm or 3xnm NAND physically wear out on me, the V+100 is the drive most likely to blow through those program/erase cycles. Keep in mind that at the 3xnm node you no longer have 10,000 cycles, but closer to 5,000 before your NAND dies. On nearly all drives we've tested this isn't an issue, but I would be concerned about the V+100. Concerned enough to recommend running it with 20% free space at all times (at least). The more free space you have, the better job the controller can do wear leveling.
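To put the endurance concern in perspective, here is a back-of-the-envelope estimate. The capacity and P/E figures come from the article; the write-amplification factor and daily workload are purely assumed for illustration:

```python
# Back-of-the-envelope endurance estimate. The 128GB capacity and
# ~5,000 P/E cycles (3x-nm node) come from the article; the write
# amplification and daily host writes are guessed, not measured.

capacity_gb = 128
pe_cycles = 5_000
write_amplification = 5           # hypothetical; aggressive GC pushes this up
host_writes_per_day_gb = 20

total_nand_writes_gb = capacity_gb * pe_cycles                    # 640,000 GB
host_writes_before_wearout_gb = total_nand_writes_gb / write_amplification
lifespan_years = host_writes_before_wearout_gb / host_writes_per_day_gb / 365

print(round(lifespan_years, 1))   # ~17.5 years under these assumptions
```

Even with a pessimistic amplification factor the raw endurance looks comfortable, but note how directly the lifespan divides by write amplification: double it and you halve the drive's life, which is why an unusually aggressive collector is worth watching.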


96 Comments

I've read your other posts, and you are selecting a very specific situation which most people will not encounter (that is, copy/paste under XP with no SSD tweaks). That's like putting in an NVIDIA video card, not installing the drivers, and then complaining about performance. XP was designed so far before SSDs were available even to enterprise markets, let alone the consumer space, that it's no wonder they don't perform well on it.

The vast majority of people are no longer on XP (trust me, I was one of the last to hold out, due to my distaste for Vista). But you are artificially setting limitations on a product and then using them to generalize performance in all the situations that to you are "real world".

I have real concerns about the SandForce drives due to variable performance. I have an Intel 80GB G2 SSD and am still amazed at how well it does (to me) in the most important benchmarks, that is, random read/write and sequential read. Even with the cheap(er) pricing of SSDs and the *hopefully* significant decrease once we get to the next node with Intel's 3rd generation, these are still not capable of storing the majority of our data (unless you think 1-2 grand is chump change). That being said, these are mainly going to be access drives and rarely used for moving large amounts of data around (which is why sequential write becomes much less important for day-to-day use, besides the odd large install such as a video game).

I keep my 80GB with about 45GB free: the Win7 install (including user data), a couple of programs, and the current 1-3 games I'm playing. I have a 250GB secondary 7200rpm drive for music/movies/etc. and a larger external HDD for backup and infrequently used data.

Quote: "these are mainly going to be access drives and rarely used for moving large amounts of data around (which is why the sequential write becomes much less important for day to day use, besides the odd large install such as a video game)."

In that case, I'd grab a Crucial RealSSD. Best read speeds now on SATA 2 and reasonable for SATA 3.

Quote: "The vast majority of people are no longer on XP (trust me, I was one of the last to hold out due to my distaste for Vista). But you are artificially setting limitations on a product and then using it to generalize performance in all situations that are to you 'real world'."

Where did you get that incorrect data? It will be a few years before Windows 7 sells more copies than XP.

XP is still used on more computers than Vista and Windows 7 combined!

Any SSD manufacturer who misleads customers into thinking that its SSDs are plug-and-play compatible with XP, by omitting pertinent information, should go into spam marketing!

Why should the consumer spend another $200 for a new OS to use a Vertex when Intel SSDs work just fine with XP without all the tweaks?

I can't stand all the SSD articles with no real-world benchies at all. I don't care about transfer speed or even IOPS. I care about how fast it boots, how fast it loads my work, displays my porn, plays my games, etc.

I only see theoretical performance numbers for the drives; while that is some indication of how they would perform in real-world scenarios, it's not the most accurate measure and it's hard to translate into everyday use.

I would like some tests that exercise all of the SSD's abilities at once. For example: running a virus scan while pasting 2GB of big and small files, while moving hundreds of pictures into an image viewer, all on top of downloading a torrent and surfing the web.

Objective performance measurements are good to have; however, these comments suggest some difficulty of interpretation, because the majority of us don't know how to relate to a synthetic benchmark. My recommendation is to create some user profiles (gamer, video artist, graphics artist, serious web surfer, media PC, low power, etc.), have some people use the machines, and ask for their feedback.

Because here's my bottom line: I don't want to pay for more than I'll notice. If people can't tell the difference between a top synthetic-rated drive and one in the middle of the pack, then I don't want to pay extra.

For example: if a more expensive SSD will not speed up my Media Center PC, I don't want to pay the extra. I want to know what a real person thinks and experiences. I'd rather talk to my buddies than get comments from a robot performing synthetic tests.

Dear Anand, I would love a reply here, since I've raised these points in the past and been ignored.

"Concerned enough to recommend running it with 20% free space at all times (at least). The more free space you have, the better job the controller can do wear leveling."

I feel the need to critique this statement. If TRIM is not active, there is no difference whatsoever whether you use 10% or 100% of the LBAs available to the user, since the drive can't tell the difference. The only way you could make sure 20% of the NAND is free at all times is to not partition it in the first place (or not include it in a partition after a secure erase).
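The commenter's point can be made concrete with a little arithmetic. Without TRIM, the only NAND the controller can always count on is the factory spare area plus whatever LBAs are never included in a partition (the figures below are illustrative, not measurements of any specific drive):

```python
# Effective spare area without TRIM (hypothetical figures): the controller
# can only count on NAND that is never addressed by the partition, i.e.
# factory over-provisioning plus any LBAs left out of the partition table.

def spare_pct(raw_nand_gb, partitioned_gb):
    """Percent of raw NAND the controller can always treat as free."""
    return 100.0 * (raw_nand_gb - partitioned_gb) / raw_nand_gb

# 128GiB of raw NAND exposed as ~119GiB of user LBAs, fully partitioned:
print(round(spare_pct(128, 119), 1))         # 7.0 -> only the factory spare

# Same drive with 20% of the LBAs left unpartitioned after a secure erase:
print(round(spare_pct(128, 119 * 0.8), 1))   # 25.6 -> far more room for GC
```

Free space inside a partition, by contrast, is invisible to a TRIM-less controller: every LBA that has ever been written still looks "in use", which is exactly the distinction the comment is drawing.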

Also, you've included 4KB random write at QD 32, but still not 4KB random read. If you compare your results, you will see a much larger difference in read performance as QD scales than you do for write. This is due to write coalescing and attenuation, a benefit reads don't get. I feel both random read and write @ QD 32 should be included to show what the drives are capable of within the NCQ spec. Ideal would be a graph showing IOPS as QD scales, but I understand that may be a bit much work. When benchmarking my own SSDs, I scale 4 dimensions in IOmeter: read:write ratio, seq:ran ratio, block size, and QD. Due to the size of the data set, I only scale them against each other at 4KB and a couple of larger sizes. A few thousand data points is enough per drive :P
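The four-dimensional sweep the commenter describes grows quickly even at coarse granularity. A small sketch of such a test matrix (the axis values are made up for illustration; nothing here drives IOmeter itself):

```python
# Sketch of a four-dimensional benchmark sweep like the commenter's:
# read:write ratio, sequential:random ratio, block size, and queue depth.
# The axis values below are arbitrary examples, not IOmeter defaults.
from itertools import product

read_pcts    = [0, 50, 100]          # read share of the I/O mix
seq_pcts     = [0, 100]              # sequential share
block_sizes  = ["4K", "128K", "2M"]
queue_depths = [1, 4, 32]            # up to the NCQ limit of 32

matrix = list(product(read_pcts, seq_pcts, block_sizes, queue_depths))
print(len(matrix))                   # 54 points even for this coarse grid
```

Each tuple in `matrix` is one benchmark configuration to run, which shows why the commenter limits the full cross-product to 4KB plus a couple of larger block sizes: finer steps on any axis multiply the run count.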

Quote: "SandForce's partners, who have to pay a big chunk of their margins to SandForce as well as the NAND vendor, are actually delivering the best value in SSDs. Kingston and Western Digital also deliver a great value. Not Crucial/Micron and not Intel, which is not only disappointing but inexcusable. These companies actually own the fabs where the NAND is made and in the case of Intel, they actually produce the controller itself."

Are you forgetting that Intel spent over 2 billion on the JV with Micron? They are the ones carrying the investment risk, not the likes of OCZ.

Also, Intel spend money and time on getting their products right before market and use much higher quality components throughout, not least better QA'd NAND. (Intel do not sell SSD-grade NAND; they sell NAND that has to be further processed by whoever buys it.)

You have consistently ignored these facts and they make a big difference.

Whilst I'm at it, I can't understand your assertion that SF drives are so good. There are countless problems with them even now, after numerous f/w updates. They degrade badly with incompressible data, they have weak GC, and they don't TRIM in the same way as other drives. It's quite easy to see performance drop by 50%; even OCZ have admitted that. Read performance degrades as well, even without writing.

I used to look forward to your reviews, but it seems your objectivity is seriously lacking when it comes to the X25 and C300 versus the SF drives you favour.

"Here is the thing: if you hammer the drive within a few hours with, say, as many writes as it would see in 7 days of normal use, the drive will slow down for 7 days, maybe longer. It does this to protect the NAND life. So you guys seeing a 50% drop may actually be seeing 30%, which is the normal drop, plus a further 20% because at some stage you have hammered the drive and not realised it's going to take 5 days or longer for the speed to creep back up. Also remember this write-quantity slowdown is further affected by how you use the drive after you have hammered it."