Quote

...D. Diff falls in the round-trip window period as cgminer has discarded a share that would now be adequate as cgminer is not aware of the new diff...

Scenario D: cgminer can't do anything about it, and this work is lost. Any decrease of diff by a pool can elicit this, and the more often the difficulty is lowered, the more shares are lost. No rejects are induced.

In fact, there is something that may be done, and it would prevent any share from being lost if the difficulty changes are bounded. Keep shares from the last few seconds (this window should probably be tunable) that satisfy difficulty >= target / max_retarget_factor, where max_retarget_factor is either a constant, a user parameter, provided by the pool, etc. If there is a decrease in diff, submit the buffered shares that satisfy the new target.
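The retention idea above could be sketched roughly as follows. This is a hedged Python illustration only; `ShareBuffer`, its method names, and the default values are hypothetical, not cgminer internals:

```python
import time
from collections import deque

class ShareBuffer:
    """Sketch of the proposed retention window: keep near-miss shares for a
    short, tunable window so they can be submitted if difficulty drops.
    All names and defaults here are hypothetical."""

    def __init__(self, window_secs=10.0, max_retarget_factor=4.0):
        self.window_secs = window_secs              # tunable retention window
        self.max_retarget_factor = max_retarget_factor
        self.buffer = deque()                       # (timestamp, share_diff, share)

    def on_share(self, share_diff, share, current_diff, now=None):
        now = time.time() if now is None else now
        if share_diff >= current_diff:
            return [share]                          # adequate now: submit immediately
        # Keep shares that could become valid under a bounded diff decrease.
        if share_diff >= current_diff / self.max_retarget_factor:
            self.buffer.append((now, share_diff, share))
        return []

    def on_set_difficulty(self, new_diff, now=None):
        """On a diff decrease, return buffered shares that now qualify."""
        now = time.time() if now is None else now
        # Drop entries older than the retention window.
        while self.buffer and now - self.buffer[0][0] > self.window_secs:
            self.buffer.popleft()
        ready = [s for (t, d, s) in self.buffer if d >= new_diff]
        self.buffer = deque((t, d, s) for (t, d, s) in self.buffer if d < new_diff)
        return ready
```

With max_retarget_factor bounded, nothing outside the window is ever retained, so memory stays small and the miner-side exploit surface discussed later in the thread is limited to that window.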

ckolivas, maybe you can now simplify the vardiff code by storing the currently known difficulty with the job that has been notified by the server? In practice it works as if difficulty were part of the notify message itself. This will still work with existing pools, the code for managing it is much simpler, and it resolves some uncertainty in edge cases...
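As a rough illustration of this suggestion, stamping the last-seen difficulty onto each job as it arrives might look like the following Python sketch. The class and method names are hypothetical, not actual cgminer or pool code:

```python
class StratumJobTracker:
    """Sketch of the suggested simplification: remember the most recent
    mining.set_difficulty and stamp it onto each job as it arrives, so
    every job carries the difficulty it was issued under."""

    def __init__(self, initial_diff=1):
        self.pending_diff = initial_diff   # last difficulty seen on the socket
        self.jobs = {}                     # job_id -> diff in force for that job

    def on_set_difficulty(self, diff):
        # Applies to every *subsequent* job, per the stratum docs.
        self.pending_diff = diff

    def on_notify(self, job_id):
        # Messages are pipelined on one socket, so any set_difficulty that
        # preceded this notify has already updated pending_diff.
        self.jobs[job_id] = self.pending_diff

    def share_target_for(self, job_id):
        # The target a share for this job must meet, regardless of any
        # difficulty change that arrived after the job did.
        return self.jobs[job_id]
```

Because the two message types share one ordered stream, a job can never be observed before the set_difficulty that governs it, which is what makes this per-job stamping race-free.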

Quote

ckolivas, maybe you can now simplify the vardiff code by storing currently known difficulty with the job which has been notified by the server? Practically it may work as if difficulty is a part of the notify message itself. This will still work with existing pools, code for managing it is much simpler and solves some uncertainty in edge cases...

There are 4 distinctly different scenarios with changing diff separate from work:

A. Diff rises before cgminer submits a lower-diff share, and cgminer is aware of it.
B. Diff falls before cgminer submits a now-adequate diff share, and cgminer is aware of it.
C. Diff rises in the round-trip window period, as cgminer has already submitted a share below difficulty and is not aware of the new diff.
D. Diff falls in the round-trip window period, as cgminer has discarded a share that would now be adequate because it is not aware of the new diff.

If I understand it correctly, these scenarios cannot happen after the change in set_difficulty meaning, because difficulty is always received before the job (there cannot be any race condition because both messages are pipelined over the same socket). Am I correct?

That said, I feel that there's some misunderstanding, and I'd like to make sure we are all working with the same information.

Ok... I think maybe part of the fundamental problem here is that Slush's view is that all work sent out to a miner is related to work that has come before it (and work that will be sent after it), whereas, at least in my view, work that is sent out is discrete; it's completely immaterial what's come before or what is going to come after. Correct me if I'm wrong here, but it seems that this is the root of the issue and why difficulty is not tied to work in Stratum.

Maybe if you (Slush) can explain why a work packet being sent to a miner needs to be related to another work packet (either being sent to the same miner or someone else) as opposed to being discrete? I'm just not seeing why work at difficulty X needs to be related to work at difficulty X + 1. The miner is assigned work at difficulty X. If they do the work, they should be rewarded for that work, regardless of the fact that difficulty changed since the work was sent out... they still did the work! So rejecting the work because the pool arbitrarily decided to change the difficulty on the miner is rude if not outright dishonest.

Rejecting shares because difficulty changed AFTER the fact could easily be exploited by a malicious pool and amount to a very, very significant income stream at a miner's expense. For example, say I set up a malicious pool... every 10th share, I code the pool to raise the difficulty (then reduce it a short while later), thereby "rejecting" 1 out of every 10 shares that miner sends back... but I keep those shares and assign them to another user or the pool itself, etc., because those shares are still valid. Now the pool gets 10% of the shares a user sends, the user thinks "crap, I'm getting 10% rejects because of my hashrate" or whatever reason you want to assign to it, and it looks totally legitimate. Now reduce that to 0.1% and spread it across 100 TH and 2000 miners... the miners won't notice such a small amount, but the pool op gets the equivalent of 100 GH/s for free and no one is the wiser. This is what the Stratum protocol allows as designed, and if I'm wrong about this please let me know and explain why this can't be done. This is also why difficulty needs to be tied to the work, so that work performed is never rejected because the pool decided to change the rules after the fact.
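Working through the arithmetic in the example above (figures taken from the text; the variable names are just for illustration):

```python
# 0.1% of shares skimmed across a 100 TH/s pool, as in the example above.
pool_hashrate_th = 100.0        # total pool hashrate in TH/s
skim_fraction = 0.001           # 0.1% of shares silently re-credited
stolen_gh = pool_hashrate_th * 1000.0 * skim_fraction   # TH/s -> GH/s
print(stolen_gh)                # 100.0, i.e. 100 GH/s skimmed for free
```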

A more extreme example would be: Pool keeps increasing the difficulty every set interval, causing all shares to be discarded by a miner until a valid block share is found. Now that miner gets credit for 1 share, even though they should have sent 20,000 shares. Yeah, this is unrealistic, but it illustrates the point that it's possible to exploit the protocol to the detriment of the miner.

I will be happy if I'm wrong and if someone can point out where I'm mistaken. This is really my only major problem with Stratum at the moment; everything else is fairly minor in comparison. I apologize if my tone appears aggressive towards you, Slush. It was not intended to be, beyond my minor irritation with the way Stratum was rolled out... and that irritation is minor and not worth complaining about. My "bashing" of Stratum is because the problem has been beaten to death and you basically ignored anyone who disagreed with work and difficulty being uncoupled. Regardless, moving on to the solution you mentioned:

How does it address the following scenario:

Miner is given work at difficulty 2. Miner begins processing work at difficulty 2 and finds a valid share... while processing the work Stratum sends mining.set_difficulty 3.

What happens to that share found at difficulty 2, which was valid at the time it was issued? The miner did the work based on the rules it was given, but the rules got changed after it already agreed to the work.


Quote

Miner is given work at difficulty 2. Miner begins processing work at difficulty 2 and finds a valid share... while processing the work Stratum sends mining.set_difficulty 3.

What happens to that share found at difficulty 2, which was valid at the time it was issued? The miner did the work based on the rules it was given, but the rules got changed after it already agreed to the work.

What exactly is unclear about this sentence (from the stratum docs):

Quote

This means that difficulty 2 will be applied to every next job received from the server.

If the pool sends a new difficulty but then receives a solution for a previous job, the user must be credited at the previous difficulty.

Edit: Although I don't fully understand the scenarios you described, I think understanding ^^ will give you the answer to some of your points. Also, if I understand some of your questions correctly, the Stratum proposal doesn't target scenarios where a pool op cheats when counting shares.

Ok, if that's the case then that solves the issue. Conman, does that resolve the rapid difficulty change issue with EMC then if that particular method is implemented to handle difficulty from the Stratum protocol side of things?

Quote

Edit: Although I don't fully understand the scenarios you described, I think understanding of ^^ will give you the answer to some of your points. Also, if I understand some of your questions correctly, the Stratum proposal doesn't target scenarios where a pool op cheats when counting shares.

It needs to... not to keep pool ops honest so much as to allow miners to verify that what they THINK is happening is really happening. It's about empowering the miners, not policing the pool ops.

Quote

No, this still does not address D.

Is this relevant though? I'm not sure I'm comfortable with this, as it flips the exploit over to the miner's side (although with much less impact). I don't want to issue difficulty-32 work, have miners reduce their hashrate while saving up a bunch of shares until their difficulty drops, then submit them all as valid. Again, we come back to difficulty being tied to the work sent out. Allowing it to "leak" into other work/difficulty relationships is not the correct way to go about it, IMHO.


Quote

Ok, if that's the case then that solves the issue. Conman, does that resolve the rapid difficulty change issue with EMC then if that particular method is implemented to handle difficulty from the Stratum protocol side of things?

For scenario A, if the old diff applies to work given out prior to the diff change it's almost the same as tying it with the work, and is how cgminer currently manages difficulty already for rising diff. Scenario C will sort itself out as well. So this works for me.

What about if diff drops as in scenario B? Do you credit work to the new difficulty or does the old difficulty still apply?

There is still the potential for lost opportunity in scenario D, and this would be no different whether difficulty is tied to work or not, with GBT or stratum, as every window lost is the time of a one-way trip from the server to the miner. While small, if done often it would start to add up. Thus it would still be prudent to minimise diff changes to avoid this.

For scenario B, the old difficulty would still apply for that discrete issue of work. I definitely do not want to create a potential scenario where it could be exploited by forcing a difficulty change. One way can be exploited by pool ops (not accepting work once difficulty has changed), one way can be exploited by miners (having higher-difficulty work accepted as valid once difficulty drops), and one way (tying difficulty to work) cannot be exploited by either, and this seems the most sane way to handle it.


Quote

What about if diff drops as in scenario B? Do you credit work to the new difficulty or does the old difficulty still apply?

I think for scenario B the pool must either a) credit the miner's shares at the new difficulty, or b) push cleanjobs=true. There is an advantage for the pool op in pushing cleanjobs=true, but the visible result will be more stale rejects.
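The two options could be sketched like this. `FakePool` and `handle_diff_decrease` are hypothetical stand-ins for illustration, not a real pool API:

```python
class FakePool:
    """Minimal stand-in so the sketch runs; not a real pool implementation."""
    def __init__(self):
        self.sent = []                      # messages pushed to the miner
        self.credit_policy = "job_diff"     # default: credit at the job's diff

    def send_set_difficulty(self, diff):
        self.sent.append(("set_difficulty", diff))

    def send_notify(self, clean_jobs=False):
        self.sent.append(("notify", clean_jobs))

def handle_diff_decrease(pool, new_diff, invalidate_old_work=False):
    """The two options for a diff decrease (scenario B):
    option b) push a job with clean_jobs=True so miners abandon in-flight
    work and nothing straddles the change, at the cost of stale rejects;
    option a) let old jobs run and credit their shares at the new,
    lower difficulty when they arrive."""
    pool.send_set_difficulty(new_diff)
    if invalidate_old_work:
        pool.send_notify(clean_jobs=True)               # option b
    else:
        pool.credit_policy = "new_diff_for_old_jobs"    # option a
```

Option b trades some visible stale rejects for a clean cut-over; option a keeps all work but requires the pool to credit late shares consistently.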

Noticed a share rejection problem with 2.9.5. I removed failover-only from my config because share leakage is lower, only ~0.01% of all accepted shares are sent to backup pools now. My setup on all rigs is the same, CoinLab as the main pool, BitMinter as first backup and Eclipse as last resort. All the accepted leaked shares go to BitMinter, and that's no problem anymore because they are so rare. However, my miners only gather rejects on Eclipse, about 10 rejects for every share leaked to BitMinter. No shares have been accepted on Eclipse so far, and one rig has already marked it as Rejecting.

If I also account for rejected leaked shares, the total amount is still only ~0.1%, so it doesn't matter much. But this definitely isn't behaving correctly. I can help test this if it's not obvious what the problem could be.