Peter is looking at the system as a black box and analyzing its emergent properties. Gmax is looking inside the box. We need to resolve the contradiction.

Thanks for pointing out the difference between the black-box approach and "looking inside the box." I think the black-box approach is useful because it forces us to consider the actual behaviour of the complete system, and then ask whether our understanding of the inner pieces makes sense given our observations.

The other thing is that in my model, τ is more properly the time delay between when the miner has enough information to begin mining on the previous blockheader, and when he has fully verified the previous block, created a new non-empty block to mine on, and sent the block template for that non-empty block to his hash power to begin working on it. Gmax was only considering one component of this time delay.
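The decomposition of τ described above can be sketched as a simple sum (the function name and all numbers here are illustrative, not measurements):

```python
def block_template_delay(verify_prev_secs: float,
                         build_template_secs: float,
                         dispatch_secs: float) -> float:
    """The tau of the model above: total delay between learning the
    previous blockheader and having hash power working on a non-empty
    successor block. It sums three components:
      (1) fully verifying the previous block,
      (2) assembling a new non-empty block template, and
      (3) sending that template out to the hash power.
    """
    return verify_prev_secs + build_template_secs + dispatch_secs

# Hypothetical numbers: a 2 s verification figure alone understates
# tau once template construction and dispatch are counted.
print(block_template_delay(2.0, 0.5, 0.5))  # 3.0
```

The point of the sketch is just that measuring component (1) in isolation, as Gmax did, gives a lower bound on τ, not τ itself.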

The #bitcoin-wizards have fallen into that trap before, when the btcd team announced the results of their benchmarking on large simnet blocks.

Their complete system measurements were quite a bit slower than what the individual component microbenchmarks suggested.

If you look at the comments made regarding their blog post, you'll see the same dynamic playing out as is happening here.

Conclusion: As the average blocksize gets larger, the time to verify the previous block also gets larger. This means that miners will be motivated either to improve how quickly their nodes can perform the ECDSA operations needed to verify blocks, or to trick the system.
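The scaling claim above can be sketched with a rough linear model (the per-MB rate is purely illustrative, not a benchmark):

```python
def est_verify_secs(block_mb: float, secs_per_mb: float = 0.25) -> float:
    """Rough linear model of the conclusion above: verification time
    grows with block size because the number of ECDSA signature checks
    grows with it. The default 0.25 s/MB is a made-up illustrative
    rate, not a measured figure for any real node.
    """
    return block_mb * secs_per_mb

print(est_verify_secs(1.0))  # 0.25
print(est_verify_secs(8.0))  # 2.0
```

Under any such linear model, an 8x increase in average blocksize means roughly 8x the verification delay, which is the pressure on miners the conclusion describes.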

you missed the most important alternative of all.

miners will pare down block sizes to keep verification times and propagation times manageable.

Don't get put off from posting by Greg's feedback, as it is easy to be silenced by someone-who-knows-best and let them run the show. My big takeaway from his comment is how you got him into arguing from a position that big blocks are OK (handclap and kudos to you). In fact, I have learnt now that even 7.5GB blocks are theoretically tolerable today in at least one respect (validation where txs are pre-validated), although I suspect not in quite a number of other crucial respects. Wasn't Gavin's doubling end-point 8GB in 2036? Effectively the same end-point!

Scaling Bitcoin can only be achieved by letting it grow, and letting people tackle each bottleneck as it arises at the right times. Not by convincing ourselves that success is failure.

One less bottleneck to expect with larger blocks.

ok, we've all been led to believe up to now that validation of txs had to occur twice by full nodes: first, upon receipt; second, upon receipt of the block. this was crucial to the FUD scare tactic of shrinking full-node counts and thus "centralization" we were getting last month from the BS core devs via the "large miner w/ superior connectivity attack on small miners". now we hear about this new mechanism of "pre-validation" that allows txs to be validated only once, upon receipt, but not necessarily again on receipt of a block?

We will continue to do SPV mining despite the incident, and I think AntPool and BTC China will too.

Another very good reason people should not mine on Chinese pools. This is EXTREMELY bad for the Bitcoin network.

yes they will. and do you blame them? until the 1MB limit stops jacking up the unconf tx set thru either spamming or real demand, they can't be bothered to go thru the computation to order or pare down this larger-than-normal set. SPV mining is the easiest, simplest way to get around constructing a full block that has the potential to be orphaned.

We just need someone to figure out how to constantly feed them invalid blocks.

I'm surprised to hear this from you, Holliday. Can you explain why you think mining on the blockheader during the short time it takes to validate the block is bad? It is clearly the profit-maximizing strategy, and I also believe it is ethically sound (unlike replace-by-fee).

Are they mining on the blockheader during the short time it takes to validate the block, or are they simply skipping validation entirely?

Miners are supposed to secure the network, ehh? If they are just blindly mining on whatever they are fed, they aren't doing a very good job, are they?

i think they're reacting to the larger unconf tx sets being created as a result of the 1MB cap. they don't want to have to verify the full blocks coming thru or construct efficient blocks from that set. if the cap were lifted, the unconf tx set should drop in size, as faster-validating miners could swallow the large unconf sets in one gulp and stuff them into a single block, keeping the size of the set down and thus discouraging further spamming and the deviant SPV behavior.

I see no problem with SPV mining while verifying the previous block. It would also make my above suggestion pointless.

Yes.

Moreover, I don't see why the time spent verifying is very significant. The vast majority of transactions would have been verified by a full node before being included in the block, and then it is just a case of checking the hash. The verification time for a block would then be insignificant relative to its download time.

Great point. Does anyone know how Bitcoin Core currently works in this regard? Is every transaction in a block (re-)verified? Or are the transactions first hashed and, if matched to a tx already in the mempool, not re-verified, thus saving time?
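I can't speak to exactly what Bitcoin Core does here, but the pre-validation idea the question describes can be sketched as a cache keyed by txid (the class and method names below are hypothetical, not Bitcoin Core's):

```python
import hashlib

def txid(raw_tx: bytes) -> str:
    # Bitcoin txids are the double-SHA256 of the serialized transaction
    return hashlib.sha256(hashlib.sha256(raw_tx).digest()).hexdigest()

class ValidationCache:
    """Remember txids that were fully verified on mempool acceptance,
    so block validation can skip re-checking their signatures and only
    needs to match hashes."""

    def __init__(self):
        self._validated = set()

    def accept_to_mempool(self, raw_tx: bytes) -> None:
        # ... full script/signature verification would happen here ...
        self._validated.add(txid(raw_tx))

    def count_needing_reverification(self, block_txs) -> int:
        """Return how many txs in a block were NOT seen in the mempool
        and therefore still need full signature verification."""
        return sum(1 for tx in block_txs if txid(tx) not in self._validated)

cache = ValidationCache()
cache.accept_to_mempool(b"tx-a")
cache.accept_to_mempool(b"tx-b")
# A block containing both cached txs plus one never-seen tx:
print(cache.count_needing_reverification([b"tx-a", b"tx-b", b"tx-c"]))  # 1
```

Under this scheme, a block made entirely of already-relayed transactions costs only hash lookups at connect time, which is the situation where verification time becomes insignificant next to download time.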

but, but pwuille told us that a large block attack from a large miner would choke off small miners!

don't forget that the top 5 largest miners in the world have inferior connectivity, not superior, like he told us.

Yes, it's called cutting off the nose to spite the face. I've learned loooooooooong ago that miners are bottom feeders who have no concept of the idea of protecting their investment.

Remember, there are two issues here: SPV mining empty blocks while you're validating the previous block, and what F2Pool appeared to do that caused the problem: never getting around to actually validating the previous block at all!

I don't see why SPV mining empty blocks for the short amount of time it takes to validate the previous block is harmful to the network's health (I think it is actually helpful).

On the other hand, I see what F2Pool did as hurtful to the network, and they were punished accordingly with a loss of 100 BTC.
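The distinction between the two behaviours can be sketched as a simple check (the function name and the time bound are illustrative; no real pool publishes such a number):

```python
def spv_mining_acceptable(validated_prev_block: bool,
                          secs_mining_on_bare_header: float,
                          max_header_only_secs: float = 30.0) -> bool:
    """Separates the two issues in this thread:
    - mining empty blocks on a bare header only while validation of
      the previous block is in progress (bounded, arguably harmless),
    - versus never validating the previous block at all (what F2Pool
      appeared to do), i.e. mining on the bare header indefinitely.
    The 30 s bound is an illustrative cutoff, not a real parameter.
    """
    if validated_prev_block:
        return True  # validation finished: normal full-block mining
    return secs_mining_on_bare_header <= max_header_only_secs

print(spv_mining_acceptable(False, 5.0))    # True: brief SPV mining
print(spv_mining_acceptable(False, 600.0))  # False: validation skipped
```

The first case is the profit-maximizing behaviour defended above; the second is the one that earned F2Pool its 100 BTC loss.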

The banking crisis in Greece and the proposed 30% bail-in on balances of 8,000 euros got me thinking…There's actually a euro banknote printing facility run by the Bank of Greece in Athens. Is there any chance, given the political mess, that the Bank of Greece directly prints banknotes to meet withdrawal demands, thereby ending the bank runs? I realize this would be a no-no according to rules for eurozone membership but I wouldn't be surprised if such an idea gained popular support.

According to ZeroHedge, it looks like there might be something to this Euro Banknote Printing Facility in Athens:

Very dangerous. The ECB could respond by declaring all Y-series notes not legal tender. Of course, Greece could presumably print other serial numbers easily. And the ECB could invoke the nuclear option, where all Greek overseas bank accounts are frozen and ultimately confiscated.