What's responsible for P2Pool's current run of bad luck? Currently, it's been 29 hours (and counting) since the pool last found a block. The last three blocks (excluding the orphan) also took longer than expected...in one case, more than 2.5x longer. Is this a sign of something going wrong, an indication of an anomaly with the larger Bitcoin network, or is it just some unusually bad luck that we're stuck riding out?

My theory, based on experience, is that when the pool hashrate gets above 300 GH/s, things go south. I consider it a scaling problem. Others seem to disagree.

Eventually enough people leave because of the bad run that the hashrate drops back below 300 GH/s, and then things get better. The thing is, they get too good, people come back, and it climbs back above 300 GH/s.

M

I mine at Kano's Pool because it pays the best and is completely transparent! Come join me!

I can't conceive of a scenario where a scaling problem could be responsible for bad luck. Scaling problems causing more stale shares or more orphaned blocks? Sure. Scaling problems making math stop working? How, exactly, could the size of the network have an impact on the likelihood that random numbers fall below a defined target?

Just because I can't conceive of it, doesn't necessarily mean it isn't possible, but given an understanding of how this all works, it seems extremely unlikely to be related to scaling.
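To put some numbers on "just bad luck": under the standard assumptions, block discovery is a memoryless Poisson process, so the chance of a dry spell depends only on the pool's expected time per block, not on how the pool is structured or how many nodes it has. A quick Python sketch (the 12-hour expected block interval here is an invented figure for illustration, not P2Pool's actual one):

```python
import math

def p_no_block(hours_elapsed, expected_hours_per_block):
    # Poisson process: P(zero blocks in a window of length t) = exp(-t / T),
    # where T is the expected time between blocks.
    return math.exp(-hours_elapsed / expected_hours_per_block)

# Invented figure: if the pool expects one block roughly every 12 hours,
# a 29-hour dry spell still happens by pure chance around 9% of the time.
print(round(p_no_block(29, 12), 3))
```

So droughts like the current one are unremarkable; changing the hashrate changes the expected interval but not the shape of that distribution.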

I'm just stating what I've observed. Every time the pool hashrate goes above 300 GH/s, things go south. When it drops back down, things get better. It could be a coincidence that it happens every single time, I guess.

When we're below 300 GH/s, yes, we have bad runs, but the good runs always outnumber the bad. When we get above it, the trend reverses.

<shrug>

-w tells p2pool which port to listen on for miners, not which port to connect to.

try --bitcoind-rpc-port 10332 --bitcoind-p2p-port 10333

I'm just looking at main.py. I don't know if specifying the --net litecoin option changes the names of those two options or not. It doesn't appear to.
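Putting the options above together, a full invocation might look like this (the worker port 9327 and the script name run_p2pool.py are assumptions; adjust to your install):

```
python run_p2pool.py --net litecoin \
    --bitcoind-rpc-port 10332 --bitcoind-p2p-port 10333 \
    -w 9327
```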

it runs successfully!

If you have a correct litecoin.conf, run_p2pool --net litecoin should be enough to run p2pool properly. It reads the data it needs (ports, user, password) from the config file. Also remember to restart litecoind after you change the config file.
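For reference, a minimal litecoin.conf that p2pool could read ports and credentials from might look like this (all values are placeholders; set your own):

```
# litecoin.conf - placeholder values
server=1
rpcuser=someuser
rpcpassword=somelongpassword
rpcport=10332
port=10333
```

Restart litecoind after editing it so the changes take effect.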

My LAN and WAN miners have deepbit as a backup pool. When DOA goes to 100%, I manually close the port; then they automatically switch to deepbit and work with 0.45% DOA.

As far as I can see, p2pool and deepbit are not compatible in a cgminer multipool scheme: if cgminer fails over from p2pool (main) to deepbit (backup) and then goes back to p2pool, longpolling stops functioning and all work gets rejected.

Everything (bitcoind, p2pool, cgminer) is the latest version.

Correct. You can not mix p2pool and non-p2pool entries in the same cgminer instance.

P2Pool, the distributed network itself, is essentially never "down". If you want a backup to your personal p2pool node being down, use someone else's public p2pool node as a backup (there is a thread here somewhere with a list of public p2pool servers) and specify your bitcoin address as the username.
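That setup might look something like this on the cgminer command line (the public-node hostname is a placeholder; substitute one from the public-servers thread):

```
cgminer -o http://127.0.0.1:9332 -u YourBitcoinAddress -p x \
        -o http://some-public-p2pool-node:9332 -u YourBitcoinAddress -p x
```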

Um.. I have p2pool and ozcoin in my config, and I switch from one to the other without a problem (usually manually, as both have always been up as long as I've been using both). Yeah, I get a few rejects before it syncs up, but it always works.

Maybe you mean something other than "can not mix". Like load balancing maybe?

You must ONLY be mining normal BTC or P2Pool at any one time. Thus any sort of load balancing must use only the same type of pool as the "Priority 0" pool. You also need to use --failover-only, or disable all pools that could be failed over to that are of a different type.

Switching may or may not work as expected - YMMV

But think of it in terms of: at any time, you MUST disable getting work from ALL pools that are not the same type as the "Priority 0" pool.
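That rule can be sketched as a cgminer config (the hostnames are placeholders, and exact config syntax varies by cgminer version): both entries are p2pool nodes, i.e. the same type as the priority-0 pool, and failover-only stops cgminer from pulling work from anything else.

```
{
  "pools": [
    { "url": "http://my-p2pool-node:9332",     "user": "YourBitcoinAddress", "pass": "x" },
    { "url": "http://public-p2pool-node:9332", "user": "YourBitcoinAddress", "pass": "x" }
  ],
  "failover-only": true
}
```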

Pool: https://kano.is | Here on Bitcointalk: Forum | BTC: 1KanoPb8cKYqNrswjaA8cRDk4FAS9eDMLU | FreeNode IRC: irc.freenode.net channel #kano.is
Majority developer of the ckpool code
Help keep Bitcoin secure by mining on pools with full block verification on all blocks - and NO empty blocks!

I can't conceive of a scenario where a scaling problem could be responsible for bad luck. Scaling problems causing more stale shares or more orphaned blocks? Sure. Scaling problems making math stop working? How, exactly could the size of the network have an impact on the likelihood that random numbers are below a defined target?

The only thing that could make luck correlate with scale would be some large miner purposefully mining on p2pool and withholding blocks. That would be very costly to do, but it is possible.