How much of that is down to the shares? If you created a pool accepting difficulty 100 shares then would your bandwidth requirements drop significantly?

Basically all of it. The website is doing virtually nothing compared to the poolserver. In the last 12 hours (the span of my log rotation) I've had 43,000 hits to the website. Most of them are API hits, which are pretty small (a couple of hundred bytes). I don't bother running detailed stats on it at the moment, though.

In the same timeframe I've had 1,400,000 hits to the poolserver. A getwork request is about 600 bytes, and a submit work is about 40 bytes.
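For scale, those figures give a rough bandwidth estimate. The 70/30 getwork-to-submit split below is purely an assumption for illustration; the thread doesn't say how the 1,400,000 hits break down.

```python
# Back-of-the-envelope bandwidth for 1,400,000 poolserver hits in 12 hours,
# using the per-request sizes quoted above.
GETWORK_BYTES = 600
SUBMIT_BYTES = 40

hits = 1_400_000
getworks = int(hits * 0.7)   # assumed split, not measured
submits = hits - getworks

total_mb = (getworks * GETWORK_BYTES + submits * SUBMIT_BYTES) / 1e6
print(f"~{total_mb:.0f} MB per 12 hours")  # ~605 MB
```

Even under this assumption the total is modest, which fits the later remark that load and network usage are well within acceptable parameters.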

Interesting. I wonder why pools use such low difficulty for their shares, then. With most pools, people are submitting many more shares than they are receiving payments, so the reason is certainly not variance. Higher difficulty shares would ease server resources, which would let pools operate more cheaply too.

The more you raise the difficulty miners solve at, the less granular your view of their hashrate becomes. Low hashrate miners may not even get to submit a share between longpolls in extreme instances. I dare say you could safely raise it to 2 or 4 without too much trouble, though.
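The longpoll concern comes straight from the expected share time: a difficulty-d share takes d * 2^32 hashes on average, so the mean time between shares grows linearly with difficulty. A quick sketch:

```python
# Mean time between shares for a given hashrate and share difficulty.
# A difficulty-d share takes d * 2**32 hashes to find on average, so at
# high difficulty a slow miner can go a whole longpoll interval without
# submitting anything.
def mean_seconds_per_share(hashrate_hs: float, difficulty: float = 1.0) -> float:
    return difficulty * 2**32 / hashrate_hs

# A 100 MH/s miner at difficulty 4 averages one share every ~172 seconds:
print(round(mean_seconds_per_share(100e6, 4)))  # 172
```

At difficulty 1 the same miner averages a share every ~43 seconds, which is why 2 or 4 still looks safe for most hardware of the day.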

Probably the reason most don't do it is that pushpool defaults to difficulty 1 and few pool ops are skilled enough to change it without causing all manner of side effects. Personally, my load and network usage are well within acceptable parameters, so raising it would just cost share granularity for no real gain.

I certainly understand if it's not an easy parameter to change and the servers can take it anyway.

I don't see how share granularity is much of a plus, though. There is a bonus in that the pool can quickly detect when a user has stopped mining and send out a cautionary e-mail. Low difficulty shares can be helpful for people trying to measure stales, too. Other than that I don't see the problem with submitting no shares between longpolls or why the pool needs to know users' hashrates.

Given the low fee that most pools ask for, I might expect that a pool server would have to watch its BTC/watt in a similar way to a miner, so surely there is incentive for making the servers more efficient.

Ah well, this is just curiosity. I don't run a pool server nor do I intend to start.

Other than that I don't see the problem with submitting no shares between longpolls or why the pool needs to know users' hashrates.

I suppose mostly that's psychological. A user likes to see that the pool knows their approximate hashrate; it's the fastest way to confirm that their setup is working properly and their shares are being accounted for. My users tend to use the hashrate estimate as their go-to stat - if it's more than 20% out (acceptable variance given the way we work it out), they know something is wrong and can investigate further.

If a user were to go more than 10 minutes without submitting a single share, it would be very difficult for us to work out this figure to any acceptable accuracy over a reasonably short timeframe.
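The pool-side estimate described above can be sketched in a few lines; the names here are illustrative, not any pool's actual code.

```python
# Estimate a worker's hashrate from accepted shares over a time window.
# Each accepted share represents ~difficulty * 2**32 hashes on average;
# the statistical noise in this count is why a tolerance like +/-20%
# is needed before flagging a problem.
def estimate_hashrate(shares: int, window_seconds: float, difficulty: float = 1.0) -> float:
    return shares * difficulty * 2**32 / window_seconds

# 84 difficulty-1 shares in a 10-minute window:
print(round(estimate_hashrate(84, 600) / 1e6))  # ~601 MH/s
```

Raising the share difficulty shrinks the share count in any fixed window, so the same percentage accuracy requires a proportionally longer window, which is the granularity loss being discussed.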

That's very true. I was used to this effect from solo mining, and I admit it was much easier to be sure everything was working when you could see shares rolling in.

Still, if a server is having resource issues then dropping to difficulty-2 shares seems like a much better idea than renting/buying a second server.

The last quote I had, I fell off my chair: they were pricing bandwidth in 50 GB blocks. I think there is a difference in USA bandwidth; if I remember correctly, northern USA IDCs are faster to Australia than the rest. But do correct me, old folks do not have good memories. BTW, are you planning merged mining? I was about to sign up with you yesterday, but found no merged mining.

A getwork request is about 600 bytes, and a submit work is about 40 bytes.

submit work should be a lot more than 40 bytes... Unless you mean outbound...

Getwork could be reduced to about 40 bytes, plus maybe another 40 for TCP overhead, with a proper differential binary protocol: 640 bytes down to 80 is roughly an 88% reduction. I'm thinking of working on that as part of the next phase of poolserverj development, but I'd be interested to hear whether bandwidth costs really are an issue for pool ops... If so, we just have to hope some miner devs will step up and implement the client side of it.

Basically, the first request contains all the stuff in a normal getwork (though in binary) except the midstate, which is redundant. Subsequent requests contain only a new merkle root and timestamp; these are the only fields that actually change except at longpoll time.
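A minimal sketch of that differential idea; the field layout below is hypothetical, not poolserverj's actual wire format. The first message carries the full header fields minus the midstate, and deltas carry only the merkle root and timestamp.

```python
import struct

# Full message: version, previous block hash, merkle root, timestamp, bits.
# 4 + 32 + 32 + 4 + 4 = 76 bytes -- no midstate, since the miner can
# recompute it from the header fields.
def pack_full(version: int, prev_hash: bytes, merkle_root: bytes,
              timestamp: int, bits: int) -> bytes:
    return struct.pack("<I32s32sII", version, prev_hash, merkle_root,
                       timestamp, bits)

# Delta message: only the fields that change between getworks.
# 32 + 4 = 36 bytes, in line with the ~40-byte figure above.
def pack_delta(merkle_root: bytes, timestamp: int) -> bytes:
    return struct.pack("<32sI", merkle_root, timestamp)

print(len(pack_full(2, b"\x00" * 32, b"\x00" * 32, 0, 0)))  # 76
print(len(pack_delta(b"\x00" * 32, 0)))                     # 36
```

Compared with a ~600-byte JSON getwork, the steady-state delta is where nearly all of the claimed savings come from.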

One thing to consider is that higher share difficulty punishes smaller miners.

Pools don't pay for partial shares, so there is already an advantage to being a higher hashrate miner.

Difficulty 1 = ~4.3 billion (2^32) hashes on average.

10 minutes per block change means that at 100 MH/s a miner will complete ~14 shares per block change on average.

At 400 MH/s a miner will complete ~56 shares per block change on average.

The reality is the miner technically has completed some fraction of shares which are lost in the block change. However, for the slower miner it is a larger percentage of their aggregate output.
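The arithmetic above can be checked in a couple of lines, using 2^32 hashes per difficulty-1 share on average and a ~10-minute block interval:

```python
# Expected number of full difficulty-1 shares completed between block
# changes, for a given hashrate.
def shares_per_block(hashrate_hs: float, block_seconds: float = 600.0) -> float:
    return hashrate_hs * block_seconds / 2**32

print(round(shares_per_block(100e6)))  # ~14
print(round(shares_per_block(400e6)))  # ~56
```

The ratio is exactly 4:1, matching the hashrate ratio; the asymmetry being argued about is in the fractional remainder, not the expected counts.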

The reality is the miner technically has completed some fraction of shares which are lost in the block change.

There is no such thing as a partially completed share.

Exactly, that is the point: when a block changes, a miner will have "wasted" whatever hashes were being worked.

Maybe my wording is unclear, but since shares are an artificial unit of measurement there is some "loss", which means a lower throughput miner spends more hashes per accepted share, on average, than a higher throughput miner does.

This is because the pool only sees work in "full share" steps. If shares were smaller the effect would be smaller, and if shares were larger the effect would be larger.
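Taking the "lost fraction" claim at face value (the earlier reply disputes it, since each hash is an independent trial), the size of the effect would shrink as hashrate grows; a sketch under that assumption:

```python
# Model the disputed claim: assume ~half a share's worth of in-progress
# work is "wasted" at each block change. The wasted fraction of total
# output then falls off inversely with hashrate.
def wasted_fraction(hashrate_hs: float, block_seconds: float = 600.0) -> float:
    expected_shares = hashrate_hs * block_seconds / 2**32
    return 0.5 / expected_shares

print(f"{wasted_fraction(100e6):.1%}")  # ~3.6%
print(f"{wasted_fraction(400e6):.1%}")  # ~0.9%
```

These model figures land in the same few-percent range as the experiment reported below, though the model itself is only a restatement of the contested assumption, not evidence for it.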

Currently a 400 MH GPU outperforms 2x 200 MH GPUs in shares earned by ~3%, and outperforms 4x 100 MH GPUs by about 5%. I know because I experimented by downclocking GPUs to simulate slower ones and running them for nearly a week to compare shares against hashrate.