- TDP: I am not sure how to directly influence TDP, or where to read the actual values for the input field.

Thermal Design Power, Power Target, Power Limit and so on are basically the same thing. If a card's TDP is rated at 200 W and you set the TDP/Power Target/Power Limit/etc. to 110%, then the card will draw a maximum of 220 W. You can change it in pretty much every overclocking tool (EVGA Precision, MSI Afterburner, Gigabyte OC Guru, NVIDIA Inspector, etc.).
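The relationship described above is just a percentage scaling of the rated TDP. A minimal sketch (the function name is mine, purely for illustration):

```python
def power_limit_watts(rated_tdp_w: float, power_target_pct: float) -> float:
    """Maximum board power draw for a given power target setting:
    the rated TDP scaled by the power target percentage."""
    return rated_tdp_w * power_target_pct / 100.0

# A 200 W card with the power target set to 110% may draw up to 220 W.
print(power_limit_watts(200, 110))  # -> 220.0
```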

- Power Consumption: you should define what to fill in there. It is hard to compare different PC systems, and with a simple consumption measurement device we only get the total consumption of the whole system. I suggest defining Power Consumption as the increase in consumption while cudaminer is working: PowerConsumption_additionalForMining := PowerConsumption_mining - PowerConsumption_idle
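The suggested definition is a simple subtraction of two wall-socket readings. A sketch (function name and example wattages are invented for illustration):

```python
def mining_power_draw(total_mining_w: float, total_idle_w: float) -> float:
    """Additional system power draw attributable to mining:
    total draw while cudaminer is running minus the idle draw,
    both measured at the wall with the same meter."""
    return total_mining_w - total_idle_w

# e.g. 310 W measured while mining, 120 W measured idle
print(mining_power_draw(310, 120))  # -> 190
```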

I know my idea is not that great, and I thought about the same thing you pointed out, but what if the GPU/CPU/fans are idle (or not idle) when there is no mining going on? Or what if the CPU and CPU fan increase the power consumption when you have cudaminer set to offload SHA256 to the CPU?

Basically I can't think of a good solution to filter only the GPU's power consumption.

- Frequencies: clock speeds of graphics cards of the same model differ between manufacturers. It would be easier to evaluate absolute values instead of offsets from an unknown base. Some tools show effective memory clock speeds much higher than the real ones, which are usually listed as part of the vendor information. It should be clear what to insert in the sheet. Personally I am using Firestorm, so I vote for using the real values.

The problem with Keplers is that they dynamically change their frequencies to stay within TDP and temperature limits. That's why you see fluctuating core clock speeds when you're hitting your TDP limit, and that's why they are so difficult to overclock properly on a stock BIOS.

For example: if I overclock only the core clock on my GTX 660 from, let's say, 1098 MHz to 1180 MHz (let's forget boost for simplicity) without touching anything else, my clock will jump around 1137-1150 MHz instead of 1180 MHz, depending on fan speed. Yes, fan speed: the fans need more power at higher RPM (duh), and fan speed is prioritized over clock speed for obvious reasons. The card has a limited amount of power it can draw, which is the TDP limit, so essentially it will downclock the core to power the fan(s). It's not temperature throttling; you can set manual fan speeds at low temps and see the same results. So basically the core clock, the memory clock and the fan speed all have to fit under the TDP limit.

So getting back to my example: if I set my fan speed to a fixed 40%, my clock will sit at 1150 MHz, BUT if I downclock my memory from 6010 MHz (effective) to 4010 MHz, the memory will draw slightly less power, which means the card can give more power to the GPU, and the clock will get closer to my overclock target, which in my case was 1167 MHz. (Interestingly, a -2000 MHz memory downclock only dropped the hashrate from 208 kH/s to 202 kH/s in a short benchmark, but more about that later.) Lower fan speeds and, obviously, a higher TDP limit can also help in getting closer to a targeted core overclock. The bottom line is that Kepler changes things dynamically, so offset values make more sense to me: working with offsets makes it easier to reproduce certain scenarios (sweet spots), while getting to a fixed clock speed is hard and can happen in different ways.

Anyway, all of that is just my personal preference. Obviously I'll change things around on demand, and I'm planning on giving privileges to the survey+sheet to some people.

The -C 1 switch only works with the version on GitHub. I got 283 kHash/s before, and 292 kHash/s after adding back the texture cache feature. The GitHub version also has a more efficient -H 2 (SHA256 hashing on the GPU) feature. That may also make a small difference.

I might release another cudaminer version early next year.

Christian

First off, you are awesome, and second, I can't wait for the new version, since I've spent way more time than I care to admit trying to compile the GitHub version, but I couldn't.

Just curious: how do I compile the GitHub version of cudaminer on Windows? Also, I see CUDA 6.0 is coming soon with a few improvements; will you implement CUDA 6.0 in your program when it's available?

If you're still lurking here, and if you're interested in scrypt-jane (ChaCha 20/8, Keccak) for YaCoin and other clones:

Maybe you can figure out what is wrong with my implementation of chacha_xor_core() in kepler_kernel.cu. I pretty much tried to do it in full analogy to your existing salsa_xor_core() routine, but something is amiss. The code sits in the GitHub repo.
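For debugging a GPU ChaCha core like this, a CPU-side reference against a known test vector can help isolate where the kernel diverges. Below is a plain-Python quarter-round as specified in RFC 8439, with that RFC's test vector; this is a generic reference, not code from kepler_kernel.cu, and it assumes the kernel follows the standard ChaCha round structure:

```python
MASK = 0xFFFFFFFF  # work in 32-bit words, like the GPU kernel

def rotl32(x, n):
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(a, b, c, d):
    """ChaCha quarter-round (RFC 8439): add, xor, rotate by 16/12/8/7.
    Note the rotation constants and operand order differ from Salsa20,
    a common source of bugs when porting a salsa core to chacha."""
    a = (a + b) & MASK; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK; b = rotl32(b ^ c, 7)
    return a, b, c, d

# RFC 8439 section 2.1.1 test vector:
print([hex(v) for v in quarter_round(0x11111111, 0x01020304,
                                     0x9B8D6F43, 0x01234567)])
# -> ['0xea2a92f4', '0xcb1cf8ce', '0x4581472e', '0x5881c4bb']
```

Comparing the kernel's per-round output words against a reference like this, one round at a time, usually pinpoints a wrong rotation constant or a swapped operand quickly.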

I would get 1.93 kHash/s in scrypt-jane hashing with the Kepler kernel on my GT 750M card, but the results won't validate (slightly more than 2 GB of video RAM is required).

NOTE: the most power-efficient cards for scrypt-jane are those with few shaders (Fermi: 96 shaders, like older GT 630 models; Kepler: 384 shaders, like the GT 640), equipped with lots of GDDR3 memory on a 128-bit bus, or with GDDR5 RAM (2 GB or more).