Rewrite the old parts of the blockchain so that all transactions more than 24 months old are condensed: merged together somewhat while still summing to the same amounts. Then slap a warning on them saying they may be inaccurate.

I'm inclined to agree with gmaxwell that an off-blockchain transaction infrastructure is the answer. Seems like it would be much cheaper, more convenient, and more private/anonymous anyway. And with multisig/P2SH, it seems like it could be very secure against operators running off with the bitcoins people have bailed onto the tx servers.

OTOH, if this infrastructure isn't available when the block size limit is bumped up against and transactions start getting delayed and expensive, I doubt developers will be able to resist demands to increase the limit.

If it's not ready in time, could we ever revert back when it is, or would there be kind of a ratchet effect to this?

A change in the block size must be supported by a supermajority of miners to avoid a split in the network (yes, technically 50% + 1 hash is sufficient, but it would be a disaster).

Fees are essentially 0. The few satoshis paid in fees per block are a rounding error. I doubt many miners will be supporting raising the block size any time soon especially w/ the subsidy being cut in half.

Still, it is a total non-issue. The block size is 500 KB. An average tx is ~500 bytes, so the current block size is good for ~144K daily tx (about 1,000 tx per block, 144 blocks per day). We are a small fraction of that. If (due to economic pressure) some of the spam (SatoshiDice, miners taking 2-bitcent payouts, etc.) were reduced, we likely wouldn't even see 2K tx.
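Spelling out that capacity arithmetic (taking the 500 KB block size and ~500-byte average transaction at face value, with the usual ~144 blocks per day):

```python
# Back-of-the-envelope block capacity check, using the figures from
# the post above (500 KB blocks, ~500-byte average transaction).
BLOCK_SIZE_BYTES = 500 * 1000
AVG_TX_BYTES = 500
BLOCKS_PER_DAY = 24 * 60 // 10   # one block every 10 minutes on average = 144

tx_per_block = BLOCK_SIZE_BYTES // AVG_TX_BYTES   # 1,000 tx per block
tx_per_day = tx_per_block * BLOCKS_PER_DAY        # 144,000 tx per day
tps = tx_per_day / (24 * 60 * 60)                 # ~1.67 tx/sec

print(tx_per_day, round(tps, 2))
```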

Quote

OTOH, if this infrastructure isn't available when the block size limit is bumped up against and transactions start getting delayed and expensive, I doubt developers will be able to resist demands to increase the limit.

I haven't done the benchmarking to figure out exactly where a standard PC peters out, but I'm pretty sure they can process somewhat more than the current limit, at least if they're SSD-equipped. So even if you're a full card-carrying member of my Church of Forever Decentralization, whose doctrines require that maximum block sizes stay quite small, you could still support a bit of a bump.

Quote

pushing fees up and txs to occur off the blockchain on, e.g. Open Transactions servers

It's worth mentioning that beyond escaping the limits, external systems can have other advantages too. For example, even getting a _single_ confirmation in Bitcoin (the minimum required to resist reversal attacks without using a trusted certification service) can take a long time— 10 minutes is an _average_, but 30 minutes or longer happens about 7 times per day, an hour or longer every 2.8 days, etc. And even though Bitcoin with the block size limits removed could be coerced to insane scaling levels, it would be a fairly storage- and computation-inefficient way to process all the world's transactions. D'aniel also points out the considerable privacy/anonymity advantages other systems can have over Bitcoin (and add to Bitcoin when used along with it).
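Those waiting-time figures follow from the fact that inter-block intervals are (approximately) exponentially distributed with a 10-minute mean, so they can be reproduced directly:

```python
# Expected frequency of long inter-block gaps, assuming exponentially
# distributed block intervals with a 10-minute mean (a standard
# approximation for Bitcoin's Poisson block arrival process).
import math

MEAN_MINUTES = 10.0
BLOCKS_PER_DAY = 144

def gaps_per_day_longer_than(minutes):
    """Expected number of inter-block gaps per day exceeding `minutes`."""
    return BLOCKS_PER_DAY * math.exp(-minutes / MEAN_MINUTES)

print(gaps_per_day_longer_than(30))       # ~7.2 gaps of 30+ minutes per day
print(1 / gaps_per_day_longer_than(60))   # one 60+ minute gap every ~2.8 days
```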

Quote

Or will it be raised somewhat after some scalability optimizations are implemented?

The limit can't be raised without a hardforking change (old nodes will not accept the new chain at all once the first oversized block is mined).

It's not sufficient to change miners, as DeathAndTaxes suggests— lifting the 1 MB protocol rule is a change unlike the BIP16/P2SH change, which was fully compatible with old nodes. It's technically the same kind of change that would be needed to adjust Bitcoin from 21M total BTC to 42M total BTC (though obviously not politically equal). Every single piece of Bitcoin software produced would have to be updated to allow the oversized blocks.

If the Bitcoin system were to take a hardforking change, switching to Ed25519 would remove ECC signature validation as a performance bottleneck, as a fast quadcore desktop from today can do about 50k Ed25519 validates per second, compared to perhaps a thousand for the curve we use... though the random IO is still an issue.
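To put those throughput numbers in perspective: the verifies-per-second figures are the ones quoted above, while the assumption of ~2,000 signatures in a full block is a hypothetical round number for illustration.

```python
# Rough illustration of signature validation as a bottleneck.
# Per-second figures are from the post; ~2,000 signatures per full
# block is an assumed round number, not a measured value.
SIGS_PER_BLOCK = 2000
SECP256K1_VERIFIES_PER_SEC = 1000    # "perhaps a thousand" for the current curve
ED25519_VERIFIES_PER_SEC = 50000     # fast quad-core desktop, per the post

secp_seconds = SIGS_PER_BLOCK / SECP256K1_VERIFIES_PER_SEC  # ~2.0 s per block
ed_seconds = SIGS_PER_BLOCK / ED25519_VERIFIES_PER_SEC      # ~0.04 s per block
print(secp_seconds, ed_seconds)
```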

More recently a number of people have independently invented the idea of committing to a merkle tree of open transactions. If we do adopt some form of this it would allow the creation of nodes which are someplace in between SPV and a pruned full node in terms of security and decentralization benefit— so lower operating costs for nodes that validate. (In particular these nodes would have greatly reduced storage requirements)
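As a rough illustration of the idea, here is a toy commitment to a set of open outputs via a Merkle root. This is a sketch only: the serialization and tree-update rules here are made up, and real proposals differ in layout and hashing details.

```python
# Toy sketch: commit to a set of open (unspent) transaction outputs
# with a Merkle root. Illustrative only -- not any specific proposal.
import hashlib

def h(data: bytes) -> bytes:
    """Double-SHA256, Bitcoin-style."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves):
    """Merkle root of a list of leaf hashes (duplicating the last
    element on odd-sized levels, as Bitcoin's tx tree does)."""
    if not leaves:
        return h(b"")
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0]

# Hypothetical open outputs, serialized here as "txid:index:amount".
utxos = [b"aa11:0:50", b"bb22:1:25", b"cc33:0:12"]
root = merkle_root([h(u) for u in utxos])
print(root.hex())
```

A node committing to such a root could then serve a short Merkle branch proving that a particular output is in the open set, so a verifier need not store the whole set itself.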

Quote from: Theymos

No. IIRC Mike Hearn supports moving most nodes to SPV. My impression was that Satoshi also expected most nodes to use SPV. Not sure about the opinions of other developers besides gmaxwell.

Indeed, and Mike's position has gotten us (rightfully) flamed as not-decentralized by e.g. Dan Kaminsky.

Gavin and Jeff have taken less strong positions than I have on the importance (and viability) of maintaining decentralization in Bitcoin. Although I expect to convince them eventually, I think _everyone_ is in a wait-and-see mode. Who knows what will happen? At the moment I would aggressively argue against raising the limit— without it I don't see any alternative to Bitcoin becoming a particularly inefficient distributed system of establishment central banks— but I fully admit my position may change as things develop.

I expect most Bitcoin users by count to be not even SPV— I expect most by count to be semi-SPV thin clients (which may connect to a couple of independent services). But expecting most users not to be on full nodes does not preclude there being hundreds of thousands of nodes which perform complete validation; gigabyte blocks surely would.

Quote

Still, it is a total non-issue. The block size is 500 KB. An average tx is ~500 bytes, so the current block size is good for ~144K daily tx. We are a small fraction of that. If (due to economic pressure) some of the spam (SatoshiDice, miners taking 2-bitcent payouts, etc.) were reduced, we likely wouldn't even see 2K tx.

Ah, I didn't realize so much of it was spam that wouldn't occur if transactions weren't basically free. Still, though, ~144K transactions/day is only ~1.7tps, or less than 0.1% of Visa, so hopefully this issue won't arise too far into the future.

Another reason I can think to keep the limit is I believe the client software that talks with the tx servers would be engaging in real-time audits (for OT, anyway), and would thus require running a bitcoin client (something the average PC would be able to do because of the block size limit). While smart phones would use SPV (or the merkle tree of open transactions gmaxwell just mentioned) to audit, there would still be a lot more fully verifying clients out there. This is important, I think, because

Quote

Lightweight clients can't efficiently calculate the size of the coinbase for a block without downloading the whole block and then downloading the dependencies of every transaction in that block, along with the Merkle branches linking them to the relevant block headers (which may also need to be fetched because I think in future lightweight clients will throw away very old headers).

This means the inflation schedule can be much better enforced by staying decentralized, correct?

Quote

This means the inflation schedule can be much better enforced by staying decentralized, correct?

Yes, but I am not expecting key players to leave Satoshi's code behind any time soon: miners, trading platforms, and merchants should all be sticking with it. So even if mobile and desktop users followed some kind of inflationary fork without realizing it, you'd still have to convince a majority of the miners, AND the merchants, AND the exchange operators.

That said, I expect using libraries like bitcoinj in combination with a regular Satoshi node to be quite common in future just because the programming model is simpler.

I don't think we need to worry about the block size limit any time soon. There are quite a few ways of using bitcoin that let you push non-time-sensitive transactions off to the night time when blocks should be less full - even very simple tricks like that could buy plenty of time to introduce a hard forking change.