I'm pointing my measly 5 GH/s here now. So far I like the interface. Note to self (and others): if switching from a pool where the workers were named user.miner to Eclipse, change it to user_miner if you expect to see anything. Otherwise it happily mines away, presumably to never-never land.

Q: are you keeping the transaction fees? I assume you need something to pay for this, unless you have some happy donors.

What he probably means is that cgminer's reported hash rate is approximately half; in my case it went from 320 to 160 MH/s.

I'm seeing this too with cgminer 2.4.1, but directly against the stratum proxy (tested both 0.5.0 and 0.8.3, against BTC Guild). The only clue I have so far is that it's a 6950; three other 5850s with the same setup have no problem. Maybe you also have a 6950?

Never mind, too tired; I had launched an earlier cgminer. Sorry for the fuss.

I have an older cgminer (2.4.1) running on OpenWRT that was working great with EMC until last weekend (which I assume was when vardiff got turned on). Since then I get roughly 50% of my hashing power reported on the workers page.

I tried upgrading to the latest git version, which produced the exact same result (and random segfaults), so I moved back to my trusted version.

What am I missing here? Vardiff should work fine even with 2.4.1 if I understand it correctly.
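
For reference, a pool's workers page typically estimates hash rate from accepted shares weighted by their difficulty, roughly like the following. This is a rough Python sketch of the usual formula, not EMC's actual code; the example numbers are made up.

# Rough sketch: how a pool typically estimates a worker's hash rate
# from accepted shares, weighting each share by its difficulty.
# A diff-N share represents on average N * 2^32 hashes of work.

def estimated_hashrate(accepted_share_diffs, window_seconds):
    """accepted_share_diffs: difficulties of shares accepted in the window."""
    hashes = sum(diff * 2**32 for diff in accepted_share_diffs)
    return hashes / window_seconds  # hashes per second

# Example: 60 diff-1 shares in 10 minutes is roughly 430 MH/s
print(estimated_hashrate([1] * 60, 600) / 1e6, "MH/s")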

Try 2.7.5 ...

On 2.7.5 now; I'm putting 2 GH/s+ in (10x ZTEX singles) and it all looks good on the miner side (apart from a few shares rejected as high-hash, which is new to me). On EMC, however, the reported hash rate fluctuates between 1 and 1.4 GH/s, and the average difficulty is 1.088. I expected it to fluctuate a bit higher, obviously.
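
As a sanity check, here is a back-of-the-envelope Python calculation of the share rate those figures imply; the 2 GH/s and 1.088 values are just the numbers quoted above.

# Expected share rate at a given difficulty.
# A diff-D share takes on average D * 2^32 hashes to find.

hashrate = 2e9        # 2 GH/s, as quoted above
avg_diff = 1.088      # average vardiff share difficulty, as quoted above

shares_per_min = hashrate * 60 / (avg_diff * 2**32)
print(round(shares_per_min, 1), "shares/min expected")   # ~25.7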

That said, how does that affect efficiency calculations going forward? Stratum is effectively the same in that regard, so if you pull a template and send back the equivalent of getworks, how is cgminer going to calculate efficiency, or does it just become a redundant metric at that point?

I haven't decided what to do with the efficiency metric. Either I'll make up something or just not use it.

Halfway through implementing the stratum protocol, I've decided that each mining.notify message will be counted as the equivalent of a getwork. Of course, efficiency is increasingly becoming a figure that isn't of much use to miners and pool ops alike, but perhaps a target efficiency will end up being the endpoint for tuning what variable diff to set.
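
To illustrate the idea, here is a hypothetical Python sketch (not cgminer's actual code) of what that bookkeeping could look like, assuming efficiency is defined as accepted shares per work item fetched; the class and method names are made up for illustration.

# Hypothetical sketch: count each stratum mining.notify as the
# equivalent of one getwork for the efficiency stat.

class EfficiencyCounter:
    def __init__(self):
        self.work_items = 0
        self.accepted = 0

    def on_getwork(self):          # legacy getwork fetched
        self.work_items += 1

    def on_stratum_notify(self):   # stratum job notification, counted the same
        self.work_items += 1

    def on_share_accepted(self):
        self.accepted += 1

    def efficiency(self):
        if self.work_items == 0:
            return 0.0
        return 100.0 * self.accepted / self.work_items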

I mine with a 6770 and get 200 MH/s, so 1/10 of yours, but my U (utility) suggests the real shares submitted to the pool are proportionally much higher; I always get 2.6-2.7 shares/min. Can anybody confirm this? Is it a matter of GPU vs FPGA, or am I just too lucky?

Too lucky? No, you're just using the wrong software

Sorry, but I don't understand: wrong software? Are the measurements different if bfgminer is used with a GPU vs an FPGA?
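
For what it's worth, 2.6-2.7 shares/min is roughly what 200 MH/s should produce on diff-1 shares, so it doesn't look like luck. A quick Python check, using only the figures quoted above:

# Expected utility (shares/min) for 200 MH/s on diff-1 shares.
hashrate = 200e6   # 200 MH/s, as quoted above
diff = 1

shares_per_min = hashrate * 60 / (diff * 2**32)
print(round(shares_per_min, 2), "shares/min expected")   # ~2.79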

I'd argue efficiency isn't even a meaningful stat on Stratum. Pools sending you more job notifications aren't less efficient; they're actually MORE efficient (more frequent jobs = more up to date on the transactions in the network).

Indeed efficiency is already confusing enough in the light of rolltime and vardiff, and not even defined in any meaningful fashion for stratum. It looks like it might be time to retire it as a metric.
