I was at 4.5% dupes, now at about 0.4% for stales and dupes. Did you fix something on your end? The only thing I did was move the proxy to a faster computer.

Question: I have a friend with 2.5 GH/s who has his rig here at my location. He should run through my proxy rather than start another one on the same network, correct?

I do not understand the whole thing yet, but it seems like the worker name and password are just passed through the proxy, so it should be OK.

Your friend can send work through your proxy, yes. The proxy will relay it to the proper worker. I did make one change regarding dupes that may have been caused by how the proxy incremented the work.

As for other dupes, so far I have yet to see a case of a dupe that wasn't a resubmission. Unfortunately, WHY shares are being resubmitted is a mystery. In every case so far, it has been the fault of cgminer: you can see in the logs that it sent in the share and it was accepted, and then it was sent again some time later and rejected (because it was already submitted).
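The "accepted, then rejected as duplicate" behavior described above is what a pool-side duplicate check produces: a resubmission over the same connection matches an already-seen share. A minimal sketch, with illustrative names (not the pool's actual code), where each submission is keyed by its (job_id, extranonce2, ntime, nonce) tuple:

```python
# Minimal sketch of per-connection duplicate-share detection.
# A share resubmitted with identical fields is rejected the second time.

class Connection:
    def __init__(self):
        self.seen_shares = set()  # submissions already accepted on this connection

    def submit(self, job_id, extranonce2, ntime, nonce):
        key = (job_id, extranonce2, ntime, nonce)
        if key in self.seen_shares:
            return "rejected: duplicate"  # the resubmission case from the logs
        self.seen_shares.add(key)
        return "accepted"

conn = Connection()
print(conn.submit("job1", "00000000", "504e86ed", "b2957c02"))  # accepted
print(conn.submit("job1", "00000000", "504e86ed", "b2957c02"))  # rejected: duplicate
```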

Works great once I remembered to close bitcoin-qt. 2 BFL Singles running at 1300% E, 1 HD5870 at 390% E, 2 HD6870s at 590% E. 11 stales and 53 dupes across the 5 devices using CGMiner 2.7.5. I stopped and started the proxy once by mistake in the roughly 10-12 hrs it's been running.

Proxy server PC is an old Core2Duo E7400 2GB ram running vista 32bit that is mining with 2 BFL's and a 5870. GBE LAN, Comcast Cable ISP, with a DD-WRT flashed WRT310n router.

I have been testing the cgminer + proxy + my server stack heavily and didn't get a single duplicate share. So far there are a few possible reasons: cgminer doesn't receive a response fast enough, so it decides to resubmit the share. Another possibility is that the extranonce1 given by the server is not unique for some reason (threading issue?). There could also be a bug in the proxy where extranonce2 is not generated uniquely, but I doubt it; as far as I know, about 500k shares have been generated by cgminers on my pool without a single dupe.
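The extranonce2-uniqueness concern above comes down to how the proxy hands out values to its mining threads. A thread-safe counter is enough to rule it out; a minimal sketch with illustrative names (not the actual proxy code):

```python
import itertools
import threading

class Extranonce2Counter:
    """Hand out unique extranonce2 values. The lock makes this thread-safe,
    so two mining threads can never receive the same value (the threading
    concern mentioned above)."""

    def __init__(self, size=4):
        self.size = size  # extranonce2 size in bytes, as announced by the pool
        self.counter = itertools.count()
        self.lock = threading.Lock()

    def next_hex(self):
        with self.lock:
            n = next(self.counter)
        return n.to_bytes(self.size, "big").hex()

gen = Extranonce2Counter(size=4)
print(gen.next_hex())  # 00000000
print(gen.next_hex())  # 00000001
```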


Hope this helps in debugging.

cgminer is receiving the response; we can see it quite clearly getting the share accepted, sometimes 20+ seconds before it resubmits the duplicate share.

The extranonce1 is absolutely unique, and duplicate shares are only checked against the submissions made over that connection (since sending it over a different connection would mean a different extranonce1, and thus a completely different hash to validate). I've also made sure that there is never a new work push with the same coinbase as one previously sent (i.e., if there were no new transactions on the network since the last work push) by including the job_id inside the coinbase TX.
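The reason a per-connection check is sound is that both extranonces are spliced into the coinbase transaction, so any change to either one changes the coinbase hash and hence the merkle root. A minimal sketch of the assembly, with illustrative hex values (real coinb1/coinb2 are full transaction halves from mining.notify):

```python
import hashlib

def sha256d(b):
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def build_coinbase(coinb1, extranonce1, extranonce2, coinb2):
    # The extranonces are spliced between the two coinbase halves,
    # so every (extranonce1, extranonce2) pair yields a distinct transaction.
    return bytes.fromhex(coinb1 + extranonce1 + extranonce2 + coinb2)

# Illustrative values only.
coinb1, coinb2 = "01000000", "ffffffff"
cb_a = build_coinbase(coinb1, "a1b2c3d4", "00000000", coinb2)
cb_b = build_coinbase(coinb1, "a1b2c3d4", "00000001", coinb2)

# Different extranonce2 -> different coinbase -> different merkle root/hash,
# so a share can only collide with one submitted over the same connection.
assert sha256d(cb_a) != sha256d(cb_b)
```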

Hopefully somebody can submit a log of their proxy -and- cgminer at some point so we can clearly see the timing of:
1) When cgminer submitted it to the proxy
2) When the proxy received the response
3) When cgminer received the response from the proxy
4) When the proxy received the duplicate submission from cgminer

Does the same hardware produce dupes on the old mining API? Can you check? Hopefully it doesn't; that would confirm the bug is in cgminer and not on the proxy/pool side.

The work format produced by the proxy should be exactly the same as from a standard pool, so if we assume that cgminer is not broken (and it probably isn't, *if* it doesn't produce stales on the old pool), then the same work payload must be arriving at the miner more than once.

Doing one more restart to the pool server in about 5 minutes. Estimated restart time is about 2 seconds. You'll receive a batch of unknown work rejects immediately after the restart, then it should stabilize.

I'll also be doing a stats reset when this happens, so we can get a new look at acceptance rates.

UPDATE: Restart has happened, and the stats have been reset. Since the beta pool doesn't have a 'Reset Stats' button, all previously submitted shares were adjusted to be 1 less than the actual difficulty, and the stats filtered out that difficulty. This means your earnings are probably a few satoshis higher as a result.

Tested with 40 GH/s (one 10 GH/s node and one 30 GH/s node); unfortunately, for some reason, the 30 GH/s node performed badly. I installed the proxy, but it was not able to upload more than 5-6 results per second (20-25 GH/s), with a lot of shares spending too much time in the upload queue, to the point where they would become obsolete after a new job. This does not happen on a direct connection to the BTC pool, where the upload queue is almost always empty.

Restarted the stratum pool again (keeping restarts at a minimum to avoid too much disruption of mining). Three new updates, one affecting miners:

1) Removed a fringe case where duplicate work may be provided after a longpoll-type work push.
2) Added some extra memory tracking to try to identify a stray memory allocation that never gets removed.
3) Added the "Mined by BTC Guild" tag to the coinbase that has been present on the normal pool servers.

I will be doing another stat reset in 5 minutes. The last stats looked great, users who were reporting dupes before had 0 or 1 dupes over the last 5 hours.

vphen: The proxy can ignore the request, but the server can ban the connection if it floods shares that don't meet the requested difficulty. It is not about the protocol, but more about the server implementation. As a Stratum developer, I'd like to ask why you would want to ignore such a command from the server?
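For context on what ignoring the command means in practice: mining.set_difficulty is a server notification that raises the minimum difficulty for subsequent shares. A minimal sketch of a client honoring it (the message shape follows the Stratum mining protocol; class and method names are illustrative):

```python
import json

class StratumClient:
    """Minimal sketch of a client that honors mining.set_difficulty."""

    def __init__(self):
        self.difficulty = 1

    def handle_line(self, line):
        msg = json.loads(line)
        if msg.get("method") == "mining.set_difficulty":
            # Ignoring this would mean submitting shares that no longer
            # meet the server's target, which is what gets connections banned.
            self.difficulty = msg["params"][0]

    def should_submit(self, share_difficulty):
        return share_difficulty >= self.difficulty

client = StratumClient()
client.handle_line('{"id": null, "method": "mining.set_difficulty", "params": [8]}')
print(client.should_submit(4))   # False - below the requested difficulty
print(client.should_submit(16))  # True
```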

Thank you everybody so far for helping test this new software out. We've found the 2nd block on the Stratum pool so far. I'll be doing another restart in a few minutes (It's been almost 24 hours, woohoo!).

No major changes for miners, but it does plug a small memory leak I didn't catch previously. It wasn't a huge problem, but it would have eventually become one.

On the https://www.btcguild.com/stratum_beta.php page, I just noticed the difficulty of each new row increasing by 1 and the PPS rate column changing accordingly - did I miss something about how difficulty works?
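One piece of background that may explain the pattern: under PPS, a share's value scales linearly with its difficulty, so each difficulty step raises the per-share rate while expected earnings per hash stay the same. A worked sketch with illustrative numbers (not the pool's actual rates):

```python
# Expected value of one share at difficulty d under pure PPS:
#   pps(d) = d * block_reward / network_difficulty
# Illustrative numbers only, not BTC Guild's actual parameters.
block_reward = 50.0            # BTC per block (2012-era reward)
network_difficulty = 3_000_000

def pps_rate(share_difficulty):
    return share_difficulty * block_reward / network_difficulty

# Doubling the share difficulty doubles the per-share rate...
assert pps_rate(2) == 2 * pps_rate(1)
# ...but shares are found half as often, so expected earnings are unchanged.
```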