I suspect the adaptivity feature applies to fewer applications than the array in your test suite, and also requires more than five rounds to settle.

You could do a test where two or three typical applications are set to autostart, and then re-boot the computer ten times to see how/if there's a difference in startup time from first to last boot.

I don't even see the OS boot time as the "killer functionality" of this drive, instead the quick startup of the common apps is the key. On my PC I might open web browser, email, paint.net etc. several times a day. Launching those apps blazingly fast during normal usage is the essential thing. So the reboots might be unnecessary, but instead launch Firefox 10 times and Outlook 10 times etc. And then test those again.

I considered this drive when I built my PC a month ago. However, its availability in Europe wasn't that great in July/August, so I bought an ordinary HDD.

But I read all the reviews I could find, and it seems to me that many reviews approach the drive wrongly. It is not an SSD. It is a 500 GB traditional HDD with a 4 GB permanent read cache and a smart allocation algorithm, which should make frequently used applications float into the cache.
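Seagate hasn't published how its allocation algorithm decides what to keep in the flash, so the following is only a toy sketch of the idea described above: promote the most frequently read blocks into a small fast tier. All class and block names here are made up for illustration.

```python
from collections import Counter

class FrequencyReadCache:
    """Toy model of a frequency-based read cache: the most often
    read blocks 'float' into a small fast tier (the flash)."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.read_counts = Counter()  # lifetime read count per block
        self.cached = set()           # blocks currently held in flash

    def read(self, block):
        self.read_counts[block] += 1
        hit = block in self.cached
        # re-rank: keep only the most frequently read blocks in flash
        self.cached = {b for b, _ in
                       self.read_counts.most_common(self.capacity)}
        return hit

cache = FrequencyReadCache(capacity_blocks=2)
for _ in range(3):          # simulate a few days of normal use
    cache.read("firefox.exe")
    cache.read("outlook.exe")
cache.read("rarely_used.dat")
print(cache.read("firefox.exe"))  # the frequently used app is now a hit
```

A real drive presumably ranks at block level and ages old counts, but the effect is the same: repetition, not raw throughput, is what the cache rewards, which is why synthetic benchmarks tend to miss it.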

SilentPCReview's test approach was rather reasonable, but probably involved too much data like Olle speculated.

I would otherwise upgrade my current HDD to a Momentus XT in a few weeks, but I have decided to wait until the firmware stabilises and Seagate solves the compatibility problems that seem to burden some users. I also dislike the notebook-oriented power-saving auto-spin-down features, so I would prefer Seagate to create a "desktop version" of this.

On my PC I might open web browser, email, paint.net etc. several times a day. Launching those apps blazingly fast during normal usage is the essential thing. So the reboots might be unnecessary, but instead launch Firefox 10 times and Outlook 10 times etc. And then test those again.

If you restart apps often, lots of RAM tends to shield you against a hard disk bottleneck in Windows. Apps tend to stay in memory, so the hard disk is much less of a factor for the second launch of anything, unless it's really huge (or you have a swap file active for no good reason). Not sure what you'd actually be testing by closing and launching apps repeatedly. Windows memory management?

Faster first launch of an app, now that's something you cannot get around by adding RAM, so yeah, it is a very important factor. I'd actually like to see whether the drive can optimize that flash cache better given enough time and a stable environment (think a few apps always launched with Windows plus a few random ones).

_________________
Can you keep it down? I'm having trouble hearing the artillery.

I picked one up pretty quickly once they became available. I've been a Seagate fan for a while, and I really needed an upgrade from the 250G 5400 RPM drive I was using before. I would have a hard time being more pleased with this drive. I have an Antec P150 case and have it rubber-band mounted; it's well below my ambient noise floor. It is obviously much faster than the drive it replaced, but I don't have real numbers to back that up.

The guy asking about survivability of the data in the flash.. it really isn't a concern, as the flash is used as a read cache. Writes go directly to the disk, and worst case on the flash going corrupt is you have to read it from the platters again.

Supposedly, the flash caches data that is used multiple times, and I've seen benchmarks where it has improved performance in real-world use cases over time. Most test suites do a bunch of raw I/O, which is not a good case for caching on the flash chip. In theory, instead of everything getting faster, launching Firefox should get faster... and if you reboot once a day, that should get faster too. But it's all fuzzy...
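That write-through, read-cache-only design is also why flash corruption only costs speed, not data. A minimal sketch of the behavior (illustrative names, not Seagate's actual implementation):

```python
class WriteThroughReadCache:
    """Toy model: writes always go to the platters; flash only
    mirrors already-persisted data, so losing it loses no data."""

    def __init__(self):
        self.platters = {}   # authoritative storage
        self.flash = {}      # read cache; may vanish at any time

    def write(self, key, value):
        self.platters[key] = value   # write goes straight to disk
        self.flash.pop(key, None)    # invalidate any stale cached copy

    def read(self, key):
        if key in self.flash:        # fast path: serve from flash
            return self.flash[key]
        value = self.platters[key]   # slow path: read the platters
        self.flash[key] = value      # promote for next time
        return value

    def flash_failure(self):
        self.flash = {}              # worst case: the cache is gone

disk = WriteThroughReadCache()
disk.write("doc", "important data")
disk.read("doc")          # promoted into flash
disk.flash_failure()      # flash dies...
print(disk.read("doc"))   # ...but the data survives on the platters
```

Because `platters` is always authoritative, `flash_failure()` changes nothing except which path `read()` takes on its next call.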

The guy asking about survivability of the data in the flash.. it really isn't a concern, as the flash is used as a read cache. Writes go directly to the disk, and worst case on the flash going corrupt is you have to read it from the platters again.

Probably I wasn't clear (maybe because English isn't my first language).

However, my concern relates to flash/SSDs' allegedly relatively high failure rate (I've never seen an SDRAM cache fail): even though we're talking about SLC flash and not MLC NAND, I was wondering what happens if that huge cache ceases to work permanently.

Does the entire disk become inaccessible, or could the drive survive and continue to work without it (more slowly)? Is that flash cache introducing a further point of failure?

If I lost a 30-64 GB boot SSD, it would be a problem; but losing 500 GB of data is definitely a bigger one. IMO, of course.

My take is that this is maybe a decent replacement for the stock/slow hdd in a laptop. If you have a desktop, just get a small SSD for OS/apps and slow/quiet HDD for data and get the best of both worlds.

It's an interesting tradeoff between spending money on a flash cache and adding to system RAM. System RAM can be used as a cache, much faster than flash, but doesn't survive (non-hibernate) power cycles / reboots.
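That reboot-survival difference is the whole tradeoff in miniature; a deliberately tiny, purely illustrative sketch:

```python
class VolatileCache(dict):
    """Stands in for the OS file cache held in system RAM."""
    def reboot(self):
        self.clear()      # RAM loses its contents on a power cycle

class PersistentCache(dict):
    """Stands in for the drive's onboard flash cache."""
    def reboot(self):
        pass              # flash keeps its contents across reboots

ram, flash = VolatileCache(), PersistentCache()
ram["firefox"] = flash["firefox"] = "cached blocks"
ram.reboot()
flash.reboot()
print("firefox" in ram, "firefox" in flash)  # False True
```

So extra RAM wins for anything launched twice in one session, while the flash cache is the only one of the two that can speed up the first launch after a boot.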

The guy asking about survivability of the data in the flash.. it really isn't a concern, as the flash is used as a read cache. Writes go directly to the disk, and worst case on the flash going corrupt is you have to read it from the platters again.

Probably I wasn't clear (maybe because English isn't my first language).

However, my concern relates to flash/SSDs' allegedly relatively high failure rate (I've never seen an SDRAM cache fail): even though we're talking about SLC flash and not MLC NAND, I was wondering what happens if that huge cache ceases to work permanently.

Does the entire disk become inaccessible, or could the drive survive and continue to work without it (more slowly)? Is that flash cache introducing a further point of failure?

If I lost a 30-64 GB boot SSD, it would be a problem; but losing 500 GB of data is definitely a bigger one. IMO, of course.

Regards, Luca

I wouldn't worry about the cache dying. It likely has a lifespan measured in decades.

I'd say the failure modes for this drive are the same as for any rotating disk drive: the controller board as a whole can fail, sectors can go bad, the motor can fail, etcetera.

The thing is, to me this drive isn't worth much of a premium over traditional hard drives. Maybe 20%.

I can get a traditional hard drive for $50 that would perform closely to this so I'd be willing to pay no more than $60 for the XT.

Now I'm happy to take the 250 or 320GB version. I don't care about the capacity as much as performance.

They either need to drop the price on the XT series or seriously bump the performance. It needs to beat the velociraptor in the majority of benchmarks or be cheap enough to make me consider it instead of a Samsung 3.5" hard drive (which is a $35 to $50 item).

_________________
Please put a country in your profile if you haven't already. This site is international, but I'll assume you are in the US if you don't tell me otherwise. RAID levels thread: http://www.silentpcreview.com/forums/viewtopic.php?p=388987

I don't even see the OS boot time as the "killer functionality" of this drive, instead the quick startup of the common apps is the key. ... So the reboots might be unnecessary, but instead launch Firefox 10 times and Outlook 10 times etc. And then test those again.

You missed my point.

Modo wrote:

If you restart apps often, lots of RAM tends to shield you against a hard disk bottleneck ... Faster first launch of an app, now that's something you cannot get around by adding RAM, ...

That's one of the reasons I suggested re-boots for the test sequence.
I also suspect that the adaptation software is aimed more at predicting what software will be used in most sessions (between re-boots) than at the number of times it's used per session. That's why it's necessary to re-boot the computer multiple times.
The auto-start I threw in there just to make it easier for the person conducting the test, and it probably shouldn't be used for the first and last test sequence. It's the time required to launch the application for the first time after a fresh boot that matters.
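One way to instrument that procedure: after each reboot, time a cold sequential read of each application's binary, which is roughly the operation the flash cache accelerates on first launch. The function below is a rough proxy, not a proper benchmark, and the file path in the comment is just a placeholder.

```python
import time

def timed_read(path, chunk=1 << 20):
    """Time a sequential read of a file, e.g. an application binary.
    Run right after a fresh boot; if the drive has adapted, the
    time for frequently launched apps should drop across reboots."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk):   # read 1 MiB at a time until EOF
            pass
    return time.perf_counter() - start

# Log this across ten reboots and compare the first run to the last:
# timed_read(r"C:\Program Files\Mozilla Firefox\firefox.exe")
```

It won't capture seek-heavy launch patterns, but it gives a repeatable number for "first access after boot", which is the case the drive is supposed to learn.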

The thing is, to me this drive isn't worth much of a premium over traditional hard drives. Maybe 20%.

I can get a traditional hard drive for $50 that would perform closely to this so I'd be willing to pay no more than $60 for the XT. ...It needs to beat the velociraptor in the majority of benchmarks or be cheap enough to make me consider it instead of a Samsung 3.5" hard drive ...

I think that's a matter of priorities.
- Can you find a $50, 500GB drive with similar speed, noise and power consumption (regardless of size)?
- Can you find another 2.5" HDD that's similar in all aspects but price?

Sure the Velociraptor is faster, but I wouldn't stick one into my laptop...

My take is that this is maybe a decent replacement for the stock/slow hdd in a laptop. If you have a desktop, just get a small SSD for OS/apps and slow/quiet HDD for data and get the best of both worlds.

Yeah, my wife's laptop has slow bootup and application start times, partly because it's a CULV-based system. Options like this appeal to me: you still have tons of space, but it sounds like it should give you the performance boost too.

I can't recommend the Momentus XT without reservations. What drive would you recommend in its place?

Even if you get a Momentus XT you are driven to get the 500GB version based on the (lack of) price difference.

Conclusion / TL;DR
I'd probably look at an SSD for performance, or the Samsung Spinpoint M4 drives for lower-cost storage in a 2.5" form factor with decent performance, instead of getting a hybrid HD. In 3.5" I'd look at the Samsung EcoGreen F4. Maybe the next-generation hybrid will be better, but until then traditional hard drives are the safe bet and SSDs the option for those with money to spare.

Considering Western Digital and Seagate will have a practical duopoly in this space within a year's time, I really hope one or both of them figure out how to make hybrid drives a no-brainer. I'd love to stop using traditional HDs. I'd love for OEMs to stop putting traditional drives in prebuilt PCs/laptops/netbooks/etc.


Until prices come down, SSDs are going to be an enthusiast/niche market, and it's going to take a while before they're appealing, with enough capacity at more realistic prices. I'd say we're quite a few years away from them dominating the market. Yes, they're great for no noise, but the technology is not mature enough IMO; reliability is also a concern.

It is a shame to see Hitachi absorbed into WD and Samsung into Seagate, as I've used both makers in preference to other ones. At this stage I'm not convinced by Seagate, as I've seen too many problems in the last few years. WD I'd be a bit more comfortable with. The hybrid solution might make sense during the transition period.

Considering Western Digital and Seagate will have a practical duopoly in this space within a year's time, I really hope one or both of them figure out how to make hybrid drives a no-brainer. I'd love to stop using traditional HDs. I'd love for OEMs to stop putting traditional drives in prebuilt PCs/laptops/netbooks/etc.

This Silverstone product has been around since long before Seagate came out with theirs:

Time has moved on; the SSD caching on the Z68 chipset removes the need for hybrid drives. It functions better and removes several of the downsides, e.g. you can defrag.

There are already a few desktop motherboards shipping with a 20GB SSD installed, and a few laptops contain the prerequisite hardware but do not have it enabled. The Lenovo X220 has a hard disk, an mSATA slot (see the Intel 310 SSD), and the laptop's chipset supports SSD caching. But Lenovo is yet to enable the option. I don't know why; maybe the extra validation/licensing costs too much, or maybe selling normal SSDs is more profitable.

Time has moved on; the SSD caching on the Z68 chipset removes the need for hybrid drives. It functions better and removes several of the downsides, e.g. you can defrag.

Not true unless you can somehow magically replace the millions upon millions of existing motherboards in use in business and home PCs that don't have the Z68 chipset.

Look at it another way: SSD caching would require at least one SSD per motherboard. Annual sales of motherboards are in the range of 100 million a year, but SSDs sold in 2010 numbered under 7 million, and estimates for 2011 are only 15 million.

Consider also that many SSD users will buy more than one SSD per motherboard they own (RAID, testing, backups, etc.). It'll be a long time until every motherboard or OS that can do some sort of SSD caching has an SSD to go with it. And it'll be an even longer time until all the ones that can't stop being used.
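Those shipment figures make the timescale concrete. Using the numbers quoted above, plus an optimistic and purely illustrative assumption that SSD sales double every year:

```python
motherboards_per_year = 100_000_000  # rough annual motherboard sales
ssds = 15_000_000                    # estimated SSD sales for 2011
year = 2011
while ssds < motherboards_per_year:  # assume sales double each year
    ssds *= 2
    year += 1
print(year)  # first year annual SSD sales would match motherboard sales
```

Even under that generous doubling assumption it takes several more years just for annual sales to match, never mind working through the installed base of existing machines.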

