
Of course, if you cherry-pick 1996-2012 you can get a small trend line... but if you start in 1996 (instead of 1998 like the article states; most skeptics avoid 1998 since it's such an easy counter-point) you have no statistically significant warming for 17 years. Benjamin Santer, in http://onlinelibrary.wiley.com/doi/10.1029/2011JD016263/abstract, declared that "Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature."

Translated, it essentially means that if there is no significant warming over a 17-year period, we need to start searching for the real causes and not just sink money into finding more human causes to blame.

Then you add in that the sun goes into a lull and suddenly we have no more warming and a huge number of record colds being recorded in the northern hemisphere, yet the alarmists have been shouting it from the rooftops that changes in the sun are too small to affect climate, citing the TSI changes rather than the changes in particular frequency bands (which are quite large). http://www.bbc.co.uk/news/science-environment-25771510

Maybe instead of people having a decreased scientific understanding, they are just waking up to the facts, and as they learn more they realize the alarmists are hand-waving ninnies.

So you mean on this page, where they estimate the life span of a perfectly empty 128GiB drive using TLC NAND at 2.5 years... but if it were 75% full it would be a quarter of that, which is pretty close to my earlier estimate of it being likely to fail between 0.85 and 2.55 years?

Of course that math is done on the assumption that 10GB per day can be spread over the entire drive, which isn't the case once you have 100GB of data on it; suddenly that lifespan gets reduced to 1.7 years, and that's the estimated mean time to failure, meaning the actual failure point is probably within ±50% of that, so somewhere between 0.85 and 2.55 years is likely. That's bordering on the realm of "not a reliable place to put data."
Of course, your important data should probably be stored in multiple locations locally, with an additional copy in another physical location, if you really want to keep it anyway; but citing those figures is not anywhere near a reasonable usage pattern for most drives.
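The scaling argument above can be sketched in a few lines. This is a naive back-of-the-envelope model, not the article's actual math: the P/E cycle count and daily write rate are illustrative assumptions, and it pretends static data never moves, so only the free cells absorb write wear (real firmware levels wear more cleverly).

```python
def lifespan_years(capacity_gib, free_fraction, pe_cycles, daily_write_gib):
    # Naive model: only the free portion of the drive soaks up writes,
    # so the write budget scales with free space.
    total_write_budget_gib = capacity_gib * free_fraction * pe_cycles
    return total_write_budget_gib / daily_write_gib / 365

# Assumed inputs: 128 GiB drive, ~1000 P/E cycles for TLC, 10 GiB written per day.
empty = lifespan_years(128, 1.00, 1000, 10)
mostly_full = lifespan_years(128, 0.25, 1000, 10)
print(round(empty / mostly_full))  # → 4: a 75%-full drive lasts a quarter as long
```

Whatever the absolute numbers, the ratio is the point: under this model, lifespan is directly proportional to free space.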

Did they mean 4 cups as in 4 250ml units of coffee... or 4 cups as in 4 actual-sized coffees as sold at retailers, where the large and extra-large sizes that seem so popular are generally 3 measured cups each?

1. It's quick: after just one boot you start to notice it, and as your general use kicks in, so does the cache. I know because, as a troubleshooting step, I had to disable SRT on my main drive a while back, and my machine was instantly sluggish and unresponsive; right after I turned it back on and restarted you could notice it speeding up again. After a couple of hours I couldn't tell it had ever been disabled.

2. SRT runs in two modes: one where writes go directly to the drive, and one where they are cached to flash first. I use the former since it's my boot drive, and if something bad happens to the flash I don't want to be left unbootable. I'm not sure how this works with the Seagate SSHDs.
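The difference between those two modes is just write-through vs write-back caching. Here's a minimal sketch of the idea; the class and method names are purely illustrative, not Intel's API, and real SRT does this in the storage driver, not in application code:

```python
class CachedDrive:
    """Toy model of a flash cache in front of a spinning disk."""

    def __init__(self, write_back=False):
        self.disk = {}        # slow, durable storage
        self.cache = {}       # fast flash cache
        self.dirty = set()    # keys written to flash but not yet to disk
        self.write_back = write_back

    def write(self, key, value):
        if self.write_back:
            # Write-back: land in flash first; disk is updated later.
            # Fast, but losing the flash loses unflushed writes.
            self.cache[key] = value
            self.dirty.add(key)
        else:
            # Write-through: disk always has the data, so a dead
            # flash cache never costs you writes (the mode I use).
            self.disk[key] = value
            self.cache[key] = value

    def read(self, key):
        if key in self.cache:
            return self.cache[key]   # fast path: served from flash
        value = self.disk[key]
        self.cache[key] = value      # populate the cache on a miss
        return value

    def flush(self):
        """Push dirty write-back data down to the disk."""
        for key in self.dirty:
            self.disk[key] = self.cache[key]
        self.dirty.clear()
```

In write-back mode the disk lags behind until a flush, which is exactly why I don't use it on a boot drive.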

3. A 120GB SSD and a 1TB SSHD are around the same price (give or take 20 bucks depending on sales, brands, and so on) . . . so I guess it depends on whether you need more space than the SSD provides. On my own work laptop I have about 45GB of data. But do I really need my Windows Update uninstall files to be fast, or the ISO of the Win7 install? How about my 2008 mail archives; how fast do they really need to be when I use them once a year? 8GB of SSHD cache would make this machine feel pretty snappy, since the vast majority of what I do every day is the same applications. Most of my actual work data isn't even on my laptop anyway, so accessing it over a VPN or the corporate network is terribly slow regardless; no local storage will fix that. Plus, what if I want to take a few HD movies on a work trip, or someone asks me to record a full-res video of something for training purposes? Having the storage available is a huge plus in a lot of cases. An external drive would work, but you have to carry it around at all times and it adds to the cost.

Power to the drive is cut in sleep states and power-downs, so how would that help make your machine feel snappy? Sure, SRT and hybrid drives are slower than a pure SSD, but where cost, capacity, and physical space are concerns, they each have their place. Would you rather boot in 9 seconds with 120GB, 10 seconds with 1TB, or 45 seconds with 1TB and save a little cash?
Personally, on my desktop I use a 60GB+1TB SRT setup and a second 1TB spinning disk for data I want to keep semi-safe from a drive failure. When I first built the machine I timed my boots with the SSD straight up versus SRT, and I couldn't even detect a difference with a stopwatch. Plus I don't have to worry about space management. Caching works, and fairly well. I'm not sure about their claim of 8GB being enough, but for a general-purpose office machine I'm sure it works wonders to make the machine feel responsive.

Even my OC'd 3570K with a 670 in it uses under 300 watts (~290) under benchmarks and demanding games. That's measured from the wall, and since my PSU is about 85% efficient at that level of usage, my computer is actually using 246 watts or so. This means 20 watts for a drive is still going to be 8.1%.
At idle with the drive still spinning, my machine draws around 80 watts from the wall. My PSU would be less efficient here, probably closer to 80%, which puts actual machine usage at 64 watts, making 20 watts a 31% increase.
I'm not sure of the internationally agreed-upon amount that falls under the label "jack" . . . but I'm pretty certain this isn't included.

The whole point of SRT is to get fast read times, and optionally fast write times if you want to risk your data. It also eliminates the need to actively manage which drive your data is on (as opposed to manually putting certain programs on an SSD), since it actively changes the cached data to match your usage pattern. It works incredibly well for its intended purpose. In my own testing I could not tell the difference in day-to-day use between a pure SSD and SRT. The difference is easy to see if you benchmark it, but boot times and app launch times are essentially the same.

Look at it this way: if you are putting your Windows OS on an SSD, why do you care whether your KB uninstall files are accessed quickly? Do you really care that some DLL in your system32 directory that never gets accessed is speedy? All that garbage can sit nicely untouched on your spinning disk while the stuff you use all the time is fast.

It's the same theory behind why defrag programs for spinning disks like Ultimate Defrag work well: they keep the stuff you want accessed quickly fast, and the stuff you don't care about at normal speed.
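The mechanism underneath both SRT and that defrag strategy is just keeping recently/frequently used data in the fast tier. A tiny least-recently-used cache captures the idea; the names here are illustrative, and real cache firmware uses fancier heuristics than plain LRU:

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache: hot items stay in the fast tier, cold ones get evicted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()   # insertion order doubles as recency order

    def access(self, key, load):
        if key in self.data:
            self.data.move_to_end(key)      # hit: mark as most recently used
            return self.data[key]
        value = load(key)                   # miss: fetch from the slow tier
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict the coldest item
        return value

# Two "slots" of fast storage; 'a' stays hot, so 'b' is the one evicted.
slow_tier = lambda key: key.upper()
cache = LRUCache(2)
for key in ['a', 'b', 'a', 'c']:
    cache.access(key, slow_tier)
print(list(cache.data))  # → ['a', 'c']
```

The upshot is the same as in the comments above: the DLL nobody touches falls out of the fast tier on its own, and the apps you launch every day stay in it without you managing anything.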