I've reconsidered and I'm not going to buy the Icy Vision. The lowest I've found it for is about $50, and I don't think I'd get my money's worth unless I could OC to some crazy speed like >900MHz (doubtful).

Instead I replaced the TIM on the stock heatsink and temps went down about 3°C with CM Ice Fusion, which is pretty low end. The old thermal paste was all hardened up and cooked.

I think getting a better TIM is the way to go. Even if I can't get my HIS above 824MHz, I'd like it to run cooler. What's a good TIM for GPUs?

Yes, 3DMark06 is designed to test the full suite of DirectX 9 capabilities of your graphics card, much like Crysis 2. This means that all of the GPU's DirectX code paths are stressed when you run 3DMark06 - more so than with 3DMark05, 03, etc. Many older games also fail to exercise the full GPU.

Crysis 2 is now the leader in stressing your DX9 hardware. If you are Crysis 2 stable (no hanging, no crashing), then your equipment is truly stable.

I was only able to run 3DMark06 at 837MHz GPU with my room very cold at 18°C. I cannot run that speed at a 20°C room temperature.

I believe the GELID TC-GC03-A Extreme TIM is the best you can buy now - just do a search for the many good reviews. I have always used Arctic Silver 5, but the GELID product is claimed to be better.

I am sorry to hear you will not use the GELID Icy Vision 2 - it is the King of coolers today. I am sure you can keep it for your next card upgrade.

Maybe if I find a used one I'll bite. I don't want to sink a lot of money into this since there's no upgrade path left; I only bought the QX6700 because I got it for the same price the Q6700 goes for, so I basically got a 600MHz bump for free.

Thanks for the suggestion on the GELID GC03, I'll look into it. I'll test with Crysis 2 later.

Let me first say that I have never used GELID's TIMs.
However, with so many positive reviews, I can't imagine it's bad; there are many reputable sites on that list. I still use AS5 regardless of its many shortcomings (I still have a lot left).

Would that work fine with ATi GPUs? I recall a TIM roundup at X-bit Labs where IC Diamond came out on top among the CPU TIMs but fared badly with GPUs.


If you are directing your question at me: I have used MX-4 on one 4870, two 5850s, and one 8800GTS (G92), on my laptops, on my old Q9550, on my current i5 rig, and on every other rig I've built or rebuilt over the last year or so.

I would put MX-4 on my 6970s, but XFX won't allow me to take the cooler off without voiding the warranty.

There's nothing I like better than watching my £300+ apiece graphics cards overheat and die... courtesy of those great people at XFX, who truly are 120% behind the enthusiast.

ATI GPUs have very "poor" mating surfaces. This is because ATI laser-marks their GPUs with an ENORMOUS company logo (ATI). Also, almost all ATI GPU surfaces are "dished", i.e. the corners of the die are "higher" than the center of the die. This means that no matter how flat your heatsink may be (polished, mirrored, etc.), you cannot achieve an excellent interface without dishing your heatsink to match the GPU's dish contour.

The only way to achieve the best possible thermal interface on an ATI GPU die is to apply a small bead of TIM at the center of the die (as close as you can - use a popsicle stick to push the bead into position).

Then assemble the heatsink completely. If you are using a thick TIM (like AS5), use a hair dryer or other heat gun to warm the heatsink as much as possible (carefully - do not melt any plastics) to ensure the bead of TIM spreads completely throughout the "dish" gap between the GPU and the heatsink.

The real secret to success is to use as small a bead of TIM as possible while still achieving full die coverage. Remember, you need enough to fill in the ATI logo and cover the rest of the die. I estimate the best bead size for the HD3850 is about the size of a grain of couscous (dry) or a red lentil (dry).

Well, apparently the system isn't 100% stable. I've been replaying Crysis 2 (on Post-Human Warrior difficulty - is it just me, or is it actually easier?) and from time to time the game crashes. Sometimes I can go for hours without a crash and then bam! At other times it crashes within five minutes, and sometimes I finish my play session without any issue.

The crashes seem random; I've gotten them in calm scenes and in heavy-fire situations, so I can't really pin down where the blame lies. It might also be the motherboard or CPU, as the system crashes at FSB 272 (currently I'm running 270x12). I think I'll lower the FSB to stock values and see if that changes anything.

I haven't had that problem in any other game or application, so it might just be a software conflict instead.

Have you tested that your OC is stable for your graphics card? (Just out of curiosity.)
Whenever I OC my cards, I usually give them 30-45 minutes in FurMark at maximum stress before I deem them stable, and it's always worked out fine for me.

I am still looking for a better DirectX stability test (hopefully freeware).

And to make matters worse, that stability test needs to have ALL the DX9, DX10, and DX11 code paths available for testing (including the ability to turn off paths the GPU does not support).

jtleon


After the stress test I load up a few games like BC2/BF3 or L4D. I rarely have an occasion where an OC is not stable enough for gaming after 30-45 minutes of FurMark, but it's only a test of time anyway. I've done the same thing with every GPU I've ever owned, and it's only come up once, when I was testing a 5850 Xtreme - stable in FurMark but not so good in games.

Keep in mind that many games are DX "lightweights" on purpose - to ensure the "mainstream" user can enjoy the game. For example, in Crysis 2 I can drop the "post processing" setting from Ultra to Extreme and run a much higher overclock without crashes. However, the game looks so much better on the Ultra setting (debris flying everywhere - lots of birds, smoke, etc.). I prefer to have no limits on my overclock.

Also, I think the problem we are facing is the amount of overclock. We AGP people have no more powerful card option than the HD3850, so our only choice is to maximize the overclock. Given that this GPU is factory-set at 669MHz, by the time we break 800MHz we are quickly approaching the 33% overclock mark. Consider all your OC experience - how many cards were stable at a 33% overclock on a factory heatsink?
jtleon
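If you want to see where a given clock lands, the percentages are simple arithmetic. A quick sketch (Python; the 669MHz stock clock and the other speeds are just the ones quoted in this thread):

```python
# Rough overclock-percentage calculator for the clocks discussed in this thread.
STOCK_MHZ = 669  # HD3850 factory core clock, as quoted above

def oc_percent(clock_mhz, stock_mhz=STOCK_MHZ):
    """Return the overclock as a percentage over the stock clock."""
    return (clock_mhz / stock_mhz - 1) * 100

# Clocks mentioned in the thread
for clock in (796, 824, 837):
    print(f"{clock} MHz -> +{oc_percent(clock):.1f}% over stock")
```

By the same math, a full 33% overclock over 669MHz would land around 890MHz.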

LOL, I had an X850XT PE that could overclock like a boss, though I never really took it past the ATi Overdrive limits - but they were set pretty high back then. I think I managed to drag the core slider all the way to the right and still not suffer any instabilities, though I did have an Arctic Silencer on it.

Good card; I still have it. Unfortunately, the board I paired it up with failed to boot after 2 or 3 years in storage.

This power vs. voltage relationship is why TRWOV's card is so great - 824MHz @ 0.974V is outstanding, as its VRMs are basically running cooler than they would at the factory voltage (1.214V).
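For anyone wondering why the lower voltage matters so much: to first order, CMOS dynamic power scales with frequency times voltage squared. A rough sketch (Python; the model is a textbook approximation, not a measurement of these cards - the clocks and voltages are just the ones quoted in this thread):

```python
# First-order CMOS dynamic power model: P is proportional to f * V^2.
# This is a standard approximation, not data measured on these cards.

def relative_power(freq_mhz, volts, ref_freq_mhz, ref_volts):
    """Dynamic power relative to a reference operating point (ratio)."""
    return (freq_mhz / ref_freq_mhz) * (volts / ref_volts) ** 2

# TRWOV's 824MHz @ 0.974V vs. a factory-voltage card at 796MHz @ 1.214V
ratio = relative_power(824, 0.974, 796, 1.214)
print(f"~{ratio:.0%} of the factory-voltage card's dynamic power")
```

So even at a higher clock, the undervolted card's logic should be dissipating roughly two-thirds of the factory-voltage card's dynamic power, which fits with cooler-running VRMs.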

My current Sapphire card is running the factory 1.214V @ 796MHz. Comparing the HIS card to the Sapphire card, I believe the Sapphire is using much cheaper VRMs. So far I have been unable to remove the Sapphire VRM heatsink.

Thus we must consider our VRMs' performance limits. Do the factory VRM sinks also need a TIM upgrade? I seriously doubt the factory designed for a 33% overclock - at the time, such an overclock would have pushed this card well ahead of the HD3870 in performance, which would have hurt sales of their flagship.

I suspect we need to improve the VRM cooling if we want better stability at this high an OC. The HIS card has a much better VRM heatsink than the Sapphire (they are not interchangeable), even though the Sapphire sink is bigger (another sign the VRMs are cheaper, i.e. bigger sinks are needed on less efficient VRMs).

The HIS VRMs are much higher quality (see attached photo). Sapphire and PowerColor both use the same cheap VRMs.

I ran a few CPU tests and I think it's the CPU or my mobo's VRMs. From a cold boot, IBT standard passes, but after a few minutes of warm-up, running IBT again crashes almost immediately. Maybe my VRMs can't supply stable power once they get warm, or some part of the CPU logic goes beyond its operating thermal threshold.

I've lowered my CPU clock to 270x11 (2.97GHz) and it manages to complete 50 runs of IBT maximum. Sadly there is no 11.5 multiplier; I tested and the CPU can do 3.12GHz (260x12, IBT stable). Crysis 2 is working fine (for now).
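Since the resulting core clocks are just FSB × multiplier products, it's easy to enumerate the options. A quick sketch (Python; the FSB and multiplier values are the ones mentioned in this thread, and the 3200MHz ceiling is only illustrative):

```python
# Enumerate FSB x multiplier combinations around the values discussed above.
# Purely arithmetic; whether a given combo is actually stable on the board
# is a separate question answered by testing (IBT, etc.).

def core_mhz(fsb, multi):
    """Core clock in MHz from FSB (MHz) and CPU multiplier."""
    return fsb * multi

TARGET_MHZ = 3200  # illustrative ceiling, not a thread value

for fsb in (260, 266, 270, 272):
    for multi in (11, 12):
        clock = core_mhz(fsb, multi)
        flag = "ok" if clock <= TARGET_MHZ else "over"
        print(f"{fsb} x {multi} = {clock} MHz ({flag})")
```

This makes it easy to see, for example, that 270x11 (2970MHz) and 260x12 (3120MHz) bracket the missing 11.5 multiplier's 270x11.5 = 3105MHz.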

These benchmarks load the full DX9, 10, or 11 feature sets and push your GPU to its limits. They provide an FPS counter as well as many other features.

They run endlessly, so you can run GPU-Z alongside and watch your GPU temps climb, fan %, etc.

jtleon

PS. This benchmark (Tropics) crashed my machine today after running for about 30 minutes. It is very warm today (73°F) in my office. I think this benchmark should replace FurMark for stability testing.