Personally I think it doesn't matter, though; once you put a waterblock on there, all you need is a water temp sensor. I am going to either get one and screw it into a free port and use that to drive the fans, or use the PWM jack on the temp controller and control both the fans and the pump speed from a single water-temperature thermocouple. I might also just tape the thermocouple to one of the block fittings and call it good; I didn't put any drain fittings into the system, so I don't plan to take it apart any time soon!

I could use a fan controller to take the fans down a notch; so far the hottest I have seen on the GPU block is about 45 C running a double-precision fullscreen n-body simulation with the fans at full tilt and the pump doing whatever the ASUS board tells it to over the PWM jack. Right now I have the CPU fully loaded with eight threads of BOINC and another BOINC thread running on the Tesla, and I can't get it over 35 C.

So I might just put a resistor in there and give the fans 7 volts instead of 12; the kit I bought has resistors prewired for 7 and 5 volts on separate harnesses, so it's plug and play. Pretty nice.
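For anyone curious how those prewired adapters are sized, here is the back-of-envelope arithmetic for a series dropping resistor. The fan current is an assumed figure for a typical 120mm fan; check the label on yours.

```python
# Rough sizing of a series resistor to drop a 12 V fan line to ~7 V.
# The 0.15 A fan current is an assumption, not a measured value.
supply_v = 12.0
target_v = 7.0
fan_current_a = 0.15  # assumed draw of a typical 120 mm fan

resistance_ohms = (supply_v - target_v) / fan_current_a  # V = I * R
power_w = (supply_v - target_v) * fan_current_a          # heat in the resistor

print(f"series resistor: {resistance_ohms:.1f} ohm, dissipating {power_w:.2f} W")
```

Note the resistor itself has to burn off the dropped voltage as heat, which is why those adapter harnesses use beefy resistors rather than tiny ones.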

Thanks again everybody. Sorry I don't have a pic of the block on the card before it went in, but that part is really easy.

PS up is to the left on the photo

Quote:

Originally Posted by Tharic-Nar

This is part of the problem atypicalguy had. Although he was in Linux, he couldn't get any sensor information that correlated with the Tesla. This may be either a driver problem or it just doesn't have a sensor. Only thing that could be done was to attach a sensor to a fan controller or something, that's stuck to the underside of heatsink/block.

Not bad, the 240 should be enough for the most part, just don't go adding things to the loop lol. The rule is basically 120mm of a good thick rad per component, none of that "slim rad" junk. After that it's fit as much rad as possible lol. That's why I have a 360 and a 280, both of which need new fans though.

I just put "Core Temp" on my Windows desktop. This morning with about 22 deg C ambient, the highest core is at 77 deg C with all eight cores at 3.6 GHz, 100% all night, and the Tesla grinding away at 40 deg C on my poor man's temp sensor.

So there is a big difference between what software reports from sensors embedded in the CPU chip and what a thermocouple on the GPU heat spreader just outside the waterblock reports for component temps.

From what I understand the CPU die is a lot denser, so the heat is more concentrated and it responds better to increased flow rate, which is why you always want it in the main flow line. This is from the technical papers on Swiftech's site.

But I also suspect the actual GPU chip is pretty hot and I am just measuring the heat sink outside of the center, so lots of heat has been taken out by the water block before it ever gets to the thermocouple.

For now there is just no known way to get a good heat reading on the GPU chip itself. But if the cooling were inadequate the temp would equalize across the heat spreader and I would see higher readings. I think a thermocouple on the CPU spreader would probably show a lot less than 77 also.

If safety is the main concern, then the CPU temp would appear to dominate the system limits (I hope). I have intentionally loaded it all up and put a temp limit on the CPU monitor to shut the computer down if it hits 90 deg on the CPU core during the day today. There is just no good way to compare the temp values from the two components directly the way I am measuring them.
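The gap between the on-die reading and the thermocouple reading can be sketched with a simple thermal-resistance estimate. All the numbers below are illustrative assumptions, not measured figures for this card; the point is just that a modest junction-to-spreader resistance at high power makes the die read much hotter than the spreader.

```python
# Back-of-envelope estimate of why the die reads far hotter than a
# thermocouple taped to the heat spreader. All values are assumptions
# chosen for illustration.
power_w = 200.0            # assumed package power under load
r_die_to_spreader = 0.15   # assumed junction-to-spreader resistance, degC/W
spreader_temp_c = 40.0     # roughly what the taped-on thermocouple reads

# Steady-state conduction: temperature drop = power * thermal resistance
die_temp_c = spreader_temp_c + power_w * r_die_to_spreader
print(f"estimated die temperature: {die_temp_c:.0f} C")
```

So a 40 C spreader reading is entirely consistent with a die running decades hotter, which fits the "thermocouple outside the waterblock" observation above.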

Nice atyp. My experience with the C2070 is that it just locks up when it gets too warm. I managed to do that a few times when I was setting the OC settings.

I have never managed to control the fan speeds on the 2 systems I have with CPU watercooled. My systems are push/pull fans on the radiators but they are not noisy ... there is some noticeable fan noise but it is more of a whisper and everything is in dedicated office space. I had considered fan speed control just to reduce dirt accumulation in the radiators, but the radiators are much easier to blow out than the air cooled heatsinks for CPUs. And, I put mesh filters on the fans so dirt accumulation is pretty minimal anyway. This all caused me to lose interest in any sort of fan speed control.

I used a pyrometer all over my systems during setup. With high water flow the temp is quite uniform. Some argue that more heat is removed if the water flow is slow as it is moved thru the radiator ... good argument. But the water is also slow thru the water block & absorbs more heat.
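The flow-rate argument can be put in numbers: at steady state the loop removes the same heat either way, but faster flow means a smaller water temperature rise across the blocks (and the same smaller drop across the radiator). The heat load and flow rates below are illustrative assumptions.

```python
# Water temperature rise vs flow rate for a fixed heat load.
# heat = m_dot * c_p * delta_T, rearranged for delta_T.
heat_w = 320.0   # assumed total heat load, CPU + GPU
c_p = 4186.0     # specific heat of water, J/(kg*K)

delta_t = {}
for flow_lpm in (1.0, 4.0):                 # litres per minute
    m_dot = flow_lpm / 60.0                 # ~1 kg per litre, so kg/s
    delta_t[flow_lpm] = heat_w / (m_dot * c_p)
    print(f"{flow_lpm:.0f} L/min -> water temperature rise {delta_t[flow_lpm]:.1f} C")
```

Either way the radiator rejects the same wattage; the slow-flow "absorbs more heat per pass" effect just shows up as a larger in-loop temperature gradient, which matches the pyrometer observation that high flow gives a very uniform loop temperature.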

I decided not to mess with any more gadgets & controllers & just let it pump. My CPU OCs are about as good as they get & they can run for days 100% maxxed. Running from 4.2 GHz to 4.5 GHz so another 100 to 200 MHz is not interesting.

And, the OC for the C2070 ... the GPU is set to the maximum setting of the software I am using. I think it is the memory clock that had to be backed down a bit for stability. I also do not think the issue is temperature related although if some sort of peltier device was used higher clocks might be had .... ehhh! I made a thread on that someplace in this forum.

OK thanks for the reassurance. I am definitely on the "let it roll" plan today also.

Maybe I need some quieter fans; the ones with the kit looked pretty basic.

Just checking my home machine now: so far the CPU is hovering around 80 fully cranked up; for some reason it posted an 84 max on core 3 earlier today, but it is down now. I had it backing up the hard drive, so maybe that was the issue.

I can probably do better with airflow. Fans are pulling down into the case to make sure the rad sees the coolest air possible, but the inside of the case is now warmer. I had to take off the rear case exhaust fan to fit the radiator in; maybe I will mount it on the outside with a grate to keep little fingers out =:-) That should help get the warm air out and potentially increase the flow across the radiator as well.

So DarkStarr was pretty spot-on: the dual-120mm radiator is just enough for this job. The card draws up to 250 W per NVIDIA documentation and the CPU draws ~70 W at full tilt, so that is a fair amount of energy to dissipate, assuming most of it ends up as heat (surely some of it goes into increasing the order of the electrons in the CPU, but I have no idea how much. Hmmm).
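Checking that load against the "120mm of rad per component" rule of thumb from earlier in the thread: the wattages come from the posts above, while the per-120mm dissipation figure is an assumed ballpark for a thick rad with decent fans.

```python
# Sanity check of the heat load against the radiator's rough capacity.
# watts_per_section is an assumed ballpark, not a datasheet number.
gpu_w = 250.0                # NVIDIA documentation figure from the post
cpu_w = 70.0                 # CPU draw at full tilt, per the post
total_w = gpu_w + cpu_w

rad_sections = 2             # a 240 rad = two 120 mm sections
watts_per_section = 150.0    # assumed comfortable dissipation per section

headroom_w = rad_sections * watts_per_section - total_w
print(f"load {total_w:.0f} W vs capacity ~{rad_sections * watts_per_section:.0f} W "
      f"-> headroom {headroom_w:.0f} W")
```

With these assumed numbers the 240 comes out right at (or a touch past) its comfortable capacity, which lines up with "just enough."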

I can see why people go with big external radiators and large fans. I just want it all inside the case to keep my kids away from it.

Quote:

Originally Posted by Psi*

Nice atyp. My experience with the C2070 is that it just locks up when it gets too warm. I managed to do that a few times when I was setting the OC settings.

I have never managed to control the fan speeds on the 2 systems I have with CPU watercooled. My systems are push/pull fans on the radiators but they are not noisy ... there is some noticeable fan noise but it is more of a whisper and everything is in dedicated office space. I had considered fan speed control just to reduce dirt accumulation in the radiators, but the radiators are much easier to blow out than the air cooled heatsinks for CPUs. And, I put mesh filters on the fans so dirt accumulation is pretty minimal anyway. This all caused me to lose interest in any sort of fan speed control.

I used a pyrometer all over my systems during setup. With high water flow the temp is quite uniform. Some argue that more heat is removed if the water flow is slow as it is moved thru the radiator ... good argument. But the water is also slow thru the water block & absorbs more heat.

I decided not to mess with any more gadgets & controllers & just let it pump. My CPU OCs are about as good as they get & they can run for days 100% maxxed. Running from 4.2 GHz to 4.5 GHz so another 100 to 200 MHz is not interesting.

And, the OC for the C2070 ... the GPU is set to the maximum setting of the software I am using. I think it is the memory clock that had to be backed down a bit for stability. I also do not think the issue is temperature related although if some sort of peltier device was used higher clocks might be had .... ehhh! I made a thread on that someplace in this forum.

Finding fans can be more challenging than it should be. I check Newegg (which has a great return policy) and have filtered all the other WC sites down to mostly Performance PCs. There are always jab-tech & frozencpu, which come up too. Performance PCs has always shipped the same day.

Also, the Zalman VF1000 I just received was missing 1 compression spring ... received 3 of 4. I contacted Performance, they contacted their supplier, who in turn emailed back, copying me, indicating that a spring was going out priority UPS. All of that transpired in the same day ... hard to ask for much more than that.

For radiators, thick fans are best. I have the low speed version of these Fesers. According to Performance (I asked them), those low speed fans are not available any longer as "the company is difficult to work with". The high speed fans in that link would be much too loud for any normal environment. 65 dBA!!!!! They also offer Papst fans, which are much quieter. I will admit that I have forgotten how loud might be too loud. Those Papst at 39 dB still seem a little too loud tho, and I do not believe that you need that kind of air flow ... I would stay below that noise level, which is why I used push/pull.

Because I have the push/pull which is overkill I added these filters just to keep the dirt out.

For fans, there are three choices that I'm aware of if you want to push air through the rad rather than pull. These fans are designed for high static pressure; their CFM will look lower, but they are more effective at forcing air through tight spaces.

I am getting BitFenix Spectre Pros; I need some 140s and 120s. If you want to go cheap then grab several CM R4 fans; they work great, just don't expect them to last forever. I had 6 of them on my rad push/pull and got better temps with 2 Deltas at 7v, but the Deltas were sooooo loud. That's why I added in the extra rad, so I can run slower fans and bring in less dust.

The Zalman VF1000 keeps the M2090 ... uh ... working ok. As discussed there is no way to know the GPU temp like there is with the C2070.

I have compared the speed of the M2090 to the C2070 (overclocked, I think) & the M2090 is faster. Almost twice as fast. But the M2090 is in a faster system; the M2090 is in an i7-990X versus the C2070 in an i7-920. Both CPUs are OC'ed to well over 4 GHz, but a utility that came with a commercial number-crunching software shows the PCIe bus in the i7-990X is almost 10X faster than the i7-920. Motherboards are identical.

When there is a lull in the current crunching I will upload a few screen shots. Still, the C2070 is substantially faster than the i7-990x crunching by itself which had been the comparison. And, I must def find the time to drop in the extra i7-990x.

I ended up going push/pull on the two-fan radiator with a couple of silent tornados (can't remember the name, but the ones that get good reviews; 1450rpm) in push, and one of the Swiftech fans in pull. The pull fan is thermostat-switched along with a bunch of case exhaust fans, with the temp sensor on the GPU. The push fans are plugged into the motherboard so they run continuously. Makes a pretty quiet machine unless it is crunching.

I would put the temp sensor on the CPU next time and put the CPU first in the loop. Maybe go with two pumps and a bigger case with a triple radiator. But what I have will keep it running even on hot days, so it is just enough. CPU temp is the limiting factor and it is motherboard-protected, so it just powers down if too hot. CPU runs 81-82 deg with everything cranking in hot weather.

.
.
.
CPU temp is the limiting factor and it is motherboard-protected, so it just powers down if too hot. CPU runs 81-82 deg with everything cranking in hot weather.

Assuming that is degrees C, that is pretty hot, but I have run my i7-920 like that for days. It was in the system for a couple of years. There is an old thread here where Rob suggested I go with that initially, then upgrade later. Later was finally this morning. I now have an i7-990X as the host for the C2070, with similar speeds to the system hosting the M2090.

I use Indigo Extreme as the TIM, which required running Prime95 on all cores with the water pump turned off. The CPU maxxed out for several seconds ... impressive. That is 100 deg C. I used it on the i7-920 I just pulled & on the other i7-990X. Pretty sure water was starting to boil in the CPU water block, because when I turned the pump back on a burst of water with fine bubbles came out. We need a funny face with sweat. Like I said tho, I have done this twice before.

The M2090 system is faster tho. Still looking into that. These systems are quite similar: same m/b, same CPUs, same RAM, etc. ... so manufacturing differences could be the root cause of the performance difference, but I am still checking things. Both boxes are running now & that is all that counts at the moment.

Last, Prime95 is now available as 64-bit & can automagically spawn threads on all available cores. Humbling to see 12 threads fully pegged in Task Manager.

Regarding monitoring the temperatures on these cards, NVIDIA told me that the "nvsmi" tool has existed for a little while, and I don't remember it being mentioned here. I was told that the tool was only for Linux, but it does seem to be available for Windows as well:

nvidia-smi.exe, the actual application, is installed with the Windows NVIDIA driver. I have used it on both the C2070 and the M2090 systems to compare what can be reported. As a result, I am pretty confident that the M2090 does not report temperature. Percent of memory usage is reported, which is especially important to know when a project exceeds the memory of the GPGPU.
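For anyone scripting around it, here is a minimal sketch of pulling the temperature out of `nvidia-smi -q` style output. The sample text below is an assumed approximation of the tool's format, which varies across driver versions; boards that don't report temperature (as described above for the M2090) typically show "N/A" on that line, which this parser simply skips.

```python
import re

# Assumed approximation of `nvidia-smi -q -d TEMPERATURE` output;
# the real layout differs between driver versions.
sample_output = """\
==============NVSMI LOG==============
Attached GPUs                   : 1
GPU 0000:02:00.0
    Temperature
        Gpu                     : 62 C
"""

def gpu_temps(smi_text):
    """Return all GPU temperatures (deg C) found in nvidia-smi -q text.

    Lines reading "Gpu : N/A" produce no match, so cards that don't
    report temperature yield an empty list.
    """
    return [int(m) for m in re.findall(r"Gpu\s*:\s*(\d+)\s*C", smi_text)]

print(gpu_temps(sample_output))
```

In practice you would feed this the captured stdout of the tool (e.g. via `subprocess.run(["nvidia-smi", "-q"], capture_output=True)`) instead of the hard-coded sample.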

That's a bit strange, given that it explicitly states "GPU Monitoring" on that page. It came up in the discussion of the K-series... I asked if the new cards could have their temperatures monitored, since the older ones could not. Then I was told about smi, and that yes, in fact, you could monitor the temperatures on previous cards.

Now, as I mentioned above, I was told that smi was for Linux-only, but possibly I misunderstood. I'm wondering if temperature monitoring, for some reason, is Linux-only, and that's what he meant.