The flaw is not really a big one. It reportedly only affects USB 3.0 devices that are attached when you put the system 'to sleep', and the short-term fix is just restarting any programs that misbehave... and that's the scope of the problem. It reportedly shouldn't affect USB 2.0 devices. So as long as you don't leave your USB 3.0 storage drive plugged in, and/or make sure to save your work before putting the system into low-power mode... you will be fine... or at least that is what I have heard. Won't know for sure UNTIL it lands... but that's what I have heard :)

How about looking into any issues regarding this, plus anything else you can find, and adding them to the summary?

Is it at all possible that the graphics integrated into the Sandy Bridge, Haswell, etc. platforms are part of why processor speed hasn't really seemed to increase significantly? I don't think it's as simple as 'it's taking horsepower space on the chip', but I'm curious whether it's part of why the clock speed increases haven't been dramatic, or if it's a matter of the limits of silicon, marketing, or something completely different (I'm not particularly knowledgeable about CPU structures, limits and inner workings).

A lot of the reasons why clock speed hasn't ramped boil down to the fact that frequency isn't the limiting factor in making things run faster. That sounds like a contradiction, but it's not; the bottleneck is usually somewhere else, not at the 'processing' level due to speed. It is referred to as "transmission delay" (or at least that is how it was relayed to me). This is why more cores started getting added: you can process more information in the same amount of time. For example, in perfect theory (and perfect parallelism) it takes two cores at 2GHz to process the same amount of information as one core at 4GHz. That obviously doesn't account for a slew of factors that change the picture, but it gives the general idea.
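Just to make the 2GHz-vs-4GHz arithmetic concrete, here's a toy Python sketch (the function names and numbers are mine, purely illustrative, not a real benchmark):

Code:

# Toy illustration, not a benchmark: under ideal parallelism,
# throughput ~ cores * clock, so 2 cores @ 2GHz ~ 1 core @ 4GHz.
def ideal_throughput(cores, clock_ghz):
    """Work units per second, assuming perfect scaling (hypothetical)."""
    return cores * clock_ghz * 1e9

print(ideal_throughput(1, 4.0))  # 4e9
print(ideal_throughput(2, 2.0))  # 4e9 -- identical, in perfect theory

# Real workloads never parallelize perfectly; Amdahl's law caps the gain.
# If a fraction p of the work runs in parallel across n cores:
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

print(amdahl_speedup(0.9, 2))  # ~1.82x, not the ideal 2x

That last line is the "slew of factors" part: even with 90% of the work parallelized, two cores don't actually double your speed.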

In addition to that, there are also physical limitations. Thermally, things get out of hand as clock speeds ramp up. Having transistors switch on and off (going from 0V to 1V, in real terms) takes time. Gate leakage in your transistors grows. And so on. Essentially we're limited by the properties of the silicon and the laws of physics.
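The thermal side has a well-known back-of-envelope formula: dynamic power in CMOS scales roughly with capacitance times voltage squared times frequency. A rough sketch, with values I made up just to show the shape of it:

Code:

# Back-of-envelope CMOS dynamic power: P ~ C * V^2 * f.
# All values below are invented for illustration, and static/leakage
# power (which gets worse as transistors shrink) is ignored entirely.
def dynamic_power(cap_farads, volts, freq_hz):
    return cap_farads * volts ** 2 * freq_hz

base = dynamic_power(1e-9, 1.0, 3e9)  # hypothetical 3GHz part
hot = dynamic_power(1e-9, 1.3, 5e9)   # pushed to 5GHz, needing more volts
print(hot / base)  # ~2.82x the heat for only ~1.67x the clock

Because higher clocks usually demand higher voltage, and voltage enters squared, the heat ramps much faster than the performance does.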

The Prescott-era processors were a great example of the problems that come with ramping clock speeds unduly.

__________________
My Disclaimer to any advice or comment I make;

Quote:

Originally Posted by CroSsFiRe2009

I'm a self certified whizbang repair technician with 20 years of professional bullshit so I don't know what I'm talking about

I guess you could always go back to the IB or SB reviews and just scale the numbers roughly. It might not be a perfect comparison, but it shouldn't be amazingly far off.
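If anyone wants to do the napkin math, something like this little Python one-liner is all I mean by "scale the numbers" (the score, clocks, and IPC guess are placeholders, not real review data):

Code:

# Napkin estimate only -- every number here is a placeholder.
# Scale an old review score by clock ratio and a guessed IPC gain.
def scaled_score(old_score, old_clock_ghz, new_clock_ghz, ipc_gain=1.0):
    return old_score * (new_clock_ghz / old_clock_ghz) * ipc_gain

# e.g. an SB chip scoring 100 points at 3.4GHz, guessing ~5% IPC uplift:
print(scaled_score(100, 3.4, 3.5, ipc_gain=1.05))  # ~108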

Yea but I'm Laaazzzzzzeeeeeeee :P

I know I can, but then I have to match up the few tests that are the same, and it also doesn't take into consideration any software-based performance increases that the 920 (or whatever) may not have benefited from... i.e. I'd be guessing.

That said, I will have to hold my opinion until I see what they are really capable of with regard to OCing. I have had my 2600K@4.8GHz stable for ages and can even get it up to 5.1GHz on occasion. For me to even consider one of these, I would have to be able to get at least an equivalent OC.

__________________

"Nothing sucks more than that moment during an argument when you realize you're wrong."