Our first screenshot is of the Super Talent 16GB drive and indicates an average transfer rate of 20.5 MB/sec, which is slightly lower than our HD Tach results below. The drive features an outstanding access time of 1ms or lower, which greatly assists random read times. The lack of higher sustained or maximum transfer rates will adversely affect the drive's performance in most of our write tests, but we must temper our performance expectations. The applications this drive is designed to run will not necessarily require high read or write speeds, although they generally will respond well to the low access times. The second screenshot is of the Seagate Momentus 7200.2 drive and is shown for reference only.

Hard Disk Performance: HD Tach 3.0

We are also including HD Tach results for review. Once again, the order of the screenshots is the same as in our HD Tune results. In this benchmark we see a sustained transfer rate of 24.1 MB/sec, which is in line with the drive's 25 MB/sec rating, and a burst rate of 26.5 MB/sec, close to Super Talent's maximum throughput rating of 28 MB/sec. Super Talent is still tuning the flash controller, yet HD Tach is already hitting the advertised ratings. Keep in mind that HD Tach and HD Tune report MiB/s while drives are rated in MB/s, so 24.1 MiB/s is actually 25.3 MB/s.
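To make the unit conversion concrete, here is a quick sketch of the MiB-to-MB arithmetic, using the HD Tach figures quoted above:

```python
# Illustrative MiB/s -> MB/s conversion for the HD Tach discussion.
# Benchmarks like HD Tach report mebibytes (2**20 bytes) per second,
# while drive makers rate their drives in megabytes (10**6 bytes) per second.

MIB = 2**20  # bytes in one mebibyte
MB = 10**6   # bytes in one megabyte

def mib_to_mb(rate_mib_s: float) -> float:
    """Convert a MiB/s benchmark figure to the MB/s a drive is rated in."""
    return rate_mib_s * MIB / MB

sustained = mib_to_mb(24.1)  # HD Tach sustained rate from the review
burst = mib_to_mb(26.5)      # HD Tach burst rate

print(f"24.1 MiB/s = {sustained:.1f} MB/s")  # ~25.3 MB/s, at the 25 MB/s rating
print(f"26.5 MiB/s = {burst:.1f} MB/s")      # ~27.8 MB/s, near the 28 MB/s cap
```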

There are not many SSDs that can benefit from RAID 0. The issue is that the controllers used in these disks max out in speed before the NAND chips do. That means the Samsung NAND chips, while capable of 60+ MB/s, are throttled by a controller that in some cases will only do 25 MB/s. In a hard disk, the media transfer rate is lower than the controller's bandwidth. The hard disk controller can do 150 MB/s+. So in hard drive land, a 50 MB/s hard disk + another 50 MB/s hard disk = about 100 MB/s in RAID 0. But I've seen a 25 MB/s SSD + 25 MB/s SSD =, you ready for this? 17 MB/s. DV Nation is predicting they will have a RAID 0 box out later this year that can outperform a single SSD. They couldn't get the ultra-fast IDE Samsungs to RAID up. I told them I wanted to run 2x SATA SSDs in RAID and they said their customers had not had success with that.
I'm thinking newer models might in the future.

Also, don't get bent out of shape over SATA vs. IDE in SSDs. IDE drives are just as fast, if not faster, than SATA. Even in the world of hard drives, IDE vs. SATA does not matter for speed. Drive makers CHOOSE to make their fastest consumer drives SATA, but even the aging IDE interface is capable of 133 MB/s, right? My 10,000 RPM SATA Raptor can only do 75 MB/s, so IDE would be just as fast for it.

Modern SSDs will outlast hard disks. Forget the write cycles. They are rated between 1,000,000 and 5,000,000 write cycles. The problem is, hard disks are not rated in write cycles, so for an apples-to-apples comparison you need to use MTBF (mean time between failures). SSDs are rated much, MUCH higher in that regard. Look at the documentation on SanDisk's site, Samsung's, all the big manufacturers and independent reviewers. I've seen math done that shows a life of up to 144 years! (!??!!)
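As a back-of-envelope sketch of where multi-decade figures like that come from (the capacity and cycle rating are the ones quoted above; the 10 GB/day workload is an assumption for illustration, not any vendor's spec):

```python
# Endurance estimate under ideal wear leveling: total writable data is
# roughly capacity * rated cycles, then divide by the daily write volume.

capacity_gb = 16          # drive capacity from the review
rated_cycles = 1_000_000  # low end of the 1M-5M range quoted above
writes_per_day_gb = 10    # ASSUMED daily write volume, for illustration only

total_writes_gb = capacity_gb * rated_cycles
lifetime_years = total_writes_gb / writes_per_day_gb / 365

print(f"~{lifetime_years:,.0f} years")  # thousands of years at this workload,
                                        # i.e. MTBF fails long before the NAND
```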

I think many of us would be interested in seeing exactly what RAID 0 can do for these things. It would be good to compare 2x RAID 0 of this drive vs 2x RAID 0 of the Sandisk and/or Samsung ones, and compare that to 2x RAID Raptors too.

Just be particularly flattering to SanDisk or Samsung to get another drive off them if you can.

If I recall, the price point for the current (OEM) SanDisk 32GB SSD is $350 in volume. If those (which are shipping in laptops today) have much better performance than this, why would anyone use this in an industrial/medical/etc. application - pay $150 more for 1/2 the space and a slower drive? Am I missing something here?

Also, any idea when the SanDisk/Samsung/etc. consumer SSDs are coming out?

Yeah, longer life span if you do not read/write a lot. HDs wear out regardless of use, but flash usually doesn't. Also, industrial environments don't usually use a lot of storage but have a lot of packaging limitations (can't fit a large HD or don't have enough cooling) that rule out HDs.

Check out Hitachi's Endurastar HDs; they are rated for industrial use but are more expensive and smaller capacity. Now that is a better comparison.

Interesting review, but I have a small problem with it:
Please, compare the cost per gigabyte of the 2.5" SSD drive with the cost per gigabyte of other 2.5" mechanical hard drives.
While technically correct, the comparison overlooks form factor: the $0.40/GB cost of current high-capacity 3.5" hard drives is much lower than the cost of 2.5" mechanical hard drives (somewhere around $1/GB, or slightly higher for low-capacity drives).
A 16GB 2.5" SSD doesn't fit in the place of a Raptor, and a Raptor won't fit in the place of a 16GB SSD.
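A quick sketch of the $/GB math being asked for. The $500 SSD price is inferred from the "$150 more than the $350 SanDisk" remark elsewhere in the thread, so treat it as an assumption; the mechanical-drive figures are the ones quoted here:

```python
# Cost-per-gigabyte comparison across form factors.
# The SSD price is an ASSUMPTION inferred from this comment thread.

drives = {
    '16GB 2.5" SSD (assumed $500)': 500 / 16,  # $31.25/GB
    '2.5" mechanical (quoted)':     1.00,      # ~$1/GB per the comment
    '3.5" high-capacity (quoted)':  0.40,      # $0.40/GB per the comment
}

for name, cost_per_gb in drives.items():
    print(f"{name}: ${cost_per_gb:.2f}/GB")
```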

I'm just very disappointed with performance on these for consumer PC usage.
I mean this is solid state memory.
Somebody is going to break this wide open with performance someday, because flash is just so damn slow it's painful to write this.

Making a RAMDRIVE today (using a portion of system RAM) on our PCs is thousands of times faster; its only drawback is volatility, which rules it out for persistent data.

Just duct tape some RAM sticks together on a PCB, hook a Duracell to it, and we should be good. ;) Well, you get the idea... We need to leverage the performance of RAM today.

The article seems to imply that transfer rates are the problem with performance. In this case a RAID of 2 or 4 of these in RAID-0 would drastically increase performance. 4 of these in a Raid 0 should crush a standard hard drive as the transfer rate would always be higher and it would have blazing access times.

Though I must wonder why the CF cards inside this drive are not RAIDed. Why wouldn't the manufacturer use four 4GB cards in a RAID array to boost the speeds themselves inside the box?

quote:The article seems to imply that transfer rates are the problem with performance. In this case a RAID of 2 or 4 of these in RAID-0 would drastically increase performance. 4 of these in a Raid 0 should crush a standard hard drive as the transfer rate would always be higher and it would have blazing access times.

Yeah, sure, let's take something with an already severely limited lifespan and decrease that lifespan by abusing it with RAID . . . Let's not forget that four of these drives would set you back over $2000, which makes it even less sensible.

I have done intensive testing of my own in this area, and to tell you the truth, *you* do not need that type of performance. *You*, of course, meaning you, me, or the next guy. Besides all this, if you really want to waste your money in the name of performance, why don't you get 4x or more servers capable of supporting 32GB of memory each, use iSCSI, export 31GB of RAM from each server, and RAID across those. If you're worried about redundancy, toss a couple of Raptors into the initiator and run RAID 0+1, or RAID 10, for redundancy . . .