Hidden Agendas: The Bitcoin Blocksize

Last week I discussed matters concerning market manipulation, and stressed that although Bitcoin is immune to direct centralized governance, it is, like all markets, not impervious to manipulation and interference via social engineering.

One point concerning “social engineering” that deserves its own write-up is the blocksize debate.

The ever-intensifying Bitcoin scaling debate seems to have no end. By design, any fork requires a clear majority one way or the other, which makes major protocol changes difficult to implement across the board.

While Blockstream’s Core team push Segwit as their ‘scaling’ solution, they refuse on all counts to increase the blocksize in the interim. It can be argued that Segwit increases the effective blocksize capacity to just over 2MB, but according to the latest data on transactions and fees, even this limit would soon be hit, forcing a revisit of the same debate.

Segwit’s main competition at the moment comes from the Bitcoin Unlimited client, which is in many regards the same as the pre-Segwit Core client, except that it allows miners to vote on what the blocksize should be via emergent consensus. The details of this consensus method are best described on Bitcoin Unlimited’s official website, but in short, it means that miners, who form the backbone of the Bitcoin ecosystem, can vote on and, with consensus, choose to fork to a bigger blocksize, allowing much greater transaction throughput.
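The acceptance rule at the heart of emergent consensus can be sketched roughly as follows. This is a deliberate simplification of the real client's behaviour; the two node-local settings, EB (Excessive Block size) and AD (Acceptance Depth), are the names Bitcoin Unlimited's documentation uses:

```python
def accepts_block(block_size: int, eb: int, depth_on_top: int, ad: int) -> bool:
    """Simplified emergent-consensus acceptance rule (illustrative only).

    eb: this node's Excessive Block size, the largest block it considers normal.
    ad: the Acceptance Depth, how many blocks must be mined on top of an
        "excessive" block before the node accepts it anyway.
    """
    if block_size <= eb:
        return True            # within this node's limit: accept immediately
    return depth_on_top >= ad  # excessive, but the majority chain has moved on

# A 2MB block is "excessive" for a node with EB = 1MB...
assert not accepts_block(2_000_000, eb=1_000_000, depth_on_top=0, ad=4)
# ...until enough blocks are built on top of it, at which point the node
# follows the longer chain rather than splitting off.
assert accepts_block(2_000_000, eb=1_000_000, depth_on_top=4, ad=4)
```

The key design point is the fallback: a node with conservative settings eventually converges on whatever the hashpower majority builds, which is how consensus "emerges" without a flag day.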

It should be noted that there are now a large number of Bitcoin dev teams, and that Core and BU are just two names among many. As time passes, failure to reach consensus means this number will only grow. But this isn’t necessarily a bad thing just yet.

At present, Bitcoin transactions are stifled at a mere 2–3 transactions per second, with a theoretical potential of 7 tps under optimal conditions. To get a transaction through the system, users are forced to pay ever-increasing, competing fees. For a global payment system, this throughput is at best a joke. For this reason, any serious discussion of scalability requires both on-chain and off-chain scaling solutions.
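The gap between the 2–3 tps figure and the theoretical 7 tps falls straight out of the block parameters. A back-of-the-envelope sketch, where the average transaction sizes are assumptions for illustration rather than measured values:

```python
BLOCK_SIZE = 1_000_000   # bytes: the 1MB cap
BLOCK_INTERVAL = 600     # seconds: one block roughly every 10 minutes

def throughput_tps(avg_tx_bytes: int) -> float:
    """Transactions per second a chain of full blocks can sustain."""
    return BLOCK_SIZE / avg_tx_bytes / BLOCK_INTERVAL

print(round(throughput_tps(250), 1))  # ~6.7 tps: small, simple transactions
print(round(throughput_tps(600), 1))  # ~2.8 tps: larger, more typical transactions
```

The theoretical ceiling assumes every transaction is near-minimal in size; real traffic mixes in multi-input and multi-output transactions, which is why observed throughput sits at the bottom of the range.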

So how did we end up in this gridlocked mess where consensus is so hard to achieve?

If we go right back to the beginning, there was no blocksize limit set at all, and Bitcoin was operating just fine without one. But Satoshi Nakamoto clearly wanted to keep the early implementation at a small scale, and thus, on Thursday July 15, 2010, committed the following code:

static const unsigned int MAX_BLOCK_SIZE = 1000000;

This provided Bitcoin with a safeguard against spam and dust transactions: if anyone were to flood the system with useless transactions, blocks would hit the limit and fees would be enforced.
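The rule that constant backs is simple to state. Here is an illustrative Python rendering of it, not the actual C++ validation code, which lives in Bitcoin's block-checking path:

```python
MAX_BLOCK_SIZE = 1_000_000  # bytes, the value from the July 2010 commit

def block_size_ok(serialized_block: bytes) -> bool:
    # Any block whose serialized size exceeds the cap is invalid, so a
    # spammer flooding the network with dust cannot grow blocks without
    # bound; once blocks fill up, transactions must compete via fees.
    return len(serialized_block) <= MAX_BLOCK_SIZE
```

Note that the check is a hard consensus rule: a node that receives an oversized block rejects it outright, regardless of the fees it contains.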

Although Satoshi didn’t include anything in the changelogs concerning this line of code, he did, however, leave some information in the forums. Notably:

“The dust spam limit is a first try at intentionally trying to prevent overly small micropayments like that.” – August 4, 2010.

“We can phase in a change later if we get closer to needing it.”

“It can be phased in, like: if (blocknumber > 115000) maxblocksize = largerlimit”

There is little doubt that Satoshi intended this artificial limit as a temporary spam-prevention measure.
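Satoshi's suggested phase-in amounts to a height-based schedule. A minimal sketch of the idea follows; the trigger height comes from his quote, but the larger limit value here is a hypothetical placeholder, since he never specified one:

```python
BASE_LIMIT = 1_000_000    # the original 1MB cap
LARGER_LIMIT = 2_000_000  # hypothetical value: Satoshi never named a figure

def max_block_size(block_number: int) -> int:
    # Nodes upgraded well in advance all flip to the new limit at the same
    # block height, avoiding a contentious flag-day split.
    if block_number > 115_000:
        return LARGER_LIMIT
    return BASE_LIMIT
```

The point of scheduling the change by block height, rather than by software release, is that every upgraded node changes rules at exactly the same moment in chain history.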

Of course, the loudest chants for small blocks come from the Blockstream Core group. As mentioned, Segwit provides only a very mild increase, and Core refuse to raise the blocksize in the interim.

The main argument against increasing this 1MB limit to anything substantial is that it will threaten Bitcoin’s decentralized state.

But research strongly suggests that Bitcoin’s network can handle much more than the current restrictive 1MB cap.

The BTCSIM Bitcoin simulator by Javed Khan and Michalis Kargakis, showed that a 32MB blocksize could successfully hold 167,000 transactions, which translated to 270 tps. A single machine acting as a full node took approximately 10 minutes to verify and process a 32MB block. And this simulation was done in 2014.

A much more elaborate study was done at Cornell University in 2016, however. The Cornell study recommended a 4MB blocksize as achievable without affecting decentralization, and stated that a 4MB blocksize would yield a capacity of 27 transactions per second. That is roughly ten times the throughput Bitcoin actually achieves today, even though the blocksize is only quadrupled, because the tenfold figure compares a theoretical maximum against actual observed throughput.
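The two multipliers are consistent with each other once you separate theoretical from actual throughput. A quick check, assuming small (~250-byte) transactions and ~2.7 tps of real traffic today; both figures are illustrative assumptions:

```python
BLOCK_INTERVAL = 600  # seconds
AVG_TX_BYTES = 250    # assumption: small, simple transactions

theoretical_1mb = 1_000_000 / AVG_TX_BYTES / BLOCK_INTERVAL  # ~6.7 tps
theoretical_4mb = 4_000_000 / AVG_TX_BYTES / BLOCK_INTERVAL  # ~26.7 tps
actual_today = 2.7  # assumed tps observed under the 1MB cap

print(round(theoretical_4mb / theoretical_1mb))  # 4: limit only quadruples
print(round(theoretical_4mb / actual_today))     # 10: vs actual throughput
```

So quadrupling the cap quadruples the theoretical ceiling, and the headline "ten times" emerges only when that ceiling is set against today's congested reality.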

So if the evidence overwhelmingly shows that on-chain scaling is not a bad thing, and that it will only improve Bitcoin’s utility by allowing more transactions for lower fees, then why not undertake this simple change, which is literally a matter of a few lines of code?

If you seek an answer by asking “small blockers” directly, you may find yourself going around in circles, as the answers tend to constantly shift the goal posts. But goal posts only shift when an argument lacks substance. The answer lies in hidden agendas.

By not allowing even an interim 2 or 4MB blocksize increase while we await other scaling solutions, Core are effectively sending the message that 1MB is the right max_block_size number for now. The rationale here defies logic: the “1000000” (1MB) figure that Satoshi put into the code reflects no real decision point, but rather a simple round-number limit that was never supposed to be reached. Except that we did reach it, and as a result many users are now paying well over 1 USD per transaction.

This wasn’t Satoshi’s plan.

The idea that 1MB is a magical number we shouldn’t change unless we get Segwit is holding the Bitcoin community hostage. And as if to pre-empt anyone holding Blockstream Core to this charge, Core dev Luke-jr is even on record claiming that the 1MB limit is too high… On what basis? Apparently, Core wants everyone paying well over 5 USD per transaction.

So the only logical conclusion one can reach by analysing Core’s behaviour is that they want to implement Segwit sooner rather than later, and at any cost, despite the fact that there are cleaner malleability fixes out there, and better second-layer-ready clients.

Take Wladimir van der Laan’s quote from the dev mailing list in May 2015:

“A mounting fee pressure, resulting in a true fee market where transactions compete to get into blocks, results in urgency to develop decentralized off-chain solutions. I’m afraid increasing the blocksize will kick this can down the road and let people (and the large Bitcoin companies) relax, until it’s again time for a block chain increase, and then they’ll rally Gavin again, never resulting in a smart, sustainable solution but eternal awkward discussions like this”.

This intentional measure of keeping the blocksize low to achieve a desired purpose is known as “the strategy of degradation”. The strategy is commonly used by those seeking a revolution among a people, community, or nation. (I will discuss this specific detail next week.)

Not only is Wladimir, above, attacking the intellect of every Bitcoin user; he also suggests that developers cannot create proper scaling solutions unless they are pressured to do so.

Such assumptions will do nothing but further cripple Bitcoin’s growth. While other crypto-currencies feature dynamic or simply larger block sizes, Bitcoin remains at an illogical 1MB.

Let’s never forget that Blockstream has raised millions of dollars in venture capital funding, most of which has come from established banking institutions. The very institutions which Bitcoin is liberating users from. After all, where there is money, there is power, and greed. The ‘strategy of degradation’ is just one method that is employed by those seeking such power.
