Vitalik Buterin and Peter Todd Go Head to Head in the Crypto Culture Wars

Tesla and Edison, Hayek and Keynes, Jobs and Gates. Like protons and electrons, great inventors tend to come in opposing pairs, their rivalries usually rooted in different visions of future developments or, as in this case, in different fundamental assumptions.

A Twitter debate between Vitalik Buterin, ethereum’s inventor, and Peter Todd, a Bitcoin Core developer, concerns what can fairly be called the fundamental technical argument of small blockers. They argue we cannot scale on-chain, because we cannot have secure light-client nodes, because we cannot construct what are called fraud proofs.

The counter-argument goes back to Satoshi Nakamoto himself, who told Mike Hearn:

“Eventually when we have client-only implementations, the block chain size won’t matter much. Until then, while all users still have to download the entire block chain to start, it’s nice if we can keep it down to a reasonable size.”

Nakamoto provided Hearn with the code for implementing light-client nodes, which Hearn did implement. They are now known as SPV wallets, light wallets, or usually just bitcoin/ethereum wallets.

Gregory Maxwell and Todd have argued that these are not real light clients: they have lower security than full nodes, the argument goes, because you can’t construct fraud proofs.

The recent announcement of Plasma, which uses fraud proofs, gave Todd the opportunity to restate that argument, opening a “debate” with Buterin after Todd stated: “Sounds like I need to do a writeup on why fraud proofs don’t work…”

We don’t know what has kept him from doing that write-up, but Buterin replied by linking to a detailed document which explains how fraud proofs can work.

“Suppose that an attacker makes a block which is invalid in some way (for example, the post-state root is wrong, or some transaction is invalid or mal-formatted),” Vitalik says. “Then, it is possible to create a “fraud proof” that contains the transaction and some Merkle tree data from the state and use it to convince any light client, as well as the blockchain itself, that the block is invalid.”

Of course, once the client-only node knows that a block is invalid, it can reject it and continue working as normal with valid blocks, as if nothing happened.
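A minimal sketch of the mechanism in Python may help. The hashing and tree layout here are illustrative choices, not ethereum’s actual encoding: a light client holding only a block’s Merkle root can be convinced, by a short branch of sibling hashes, that a particular invalid transaction really is inside the block.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute a Merkle root by pairwise hashing (odd node duplicated)."""
    level = [sha256(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def merkle_branch(leaves, index):
    """Collect the sibling hashes proving leaves[index] is in the tree."""
    level = [sha256(l) for l in leaves]
    branch = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        branch.append((level[sib], sib < index))  # (hash, sibling-is-left?)
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        index //= 2
    return branch

def verify_branch(leaf, branch, root):
    """A light client checks the branch against the root it already holds."""
    h = sha256(leaf)
    for sib, sib_is_left in branch:
        h = sha256(sib + h) if sib_is_left else sha256(h + sib)
    return h == root

# The fraud prover points at the offending transaction and supplies
# the branch; the light client needs nothing but the block's root.
txs = [b"tx-a", b"tx-b", b"bad-tx", b"tx-d"]
root = merkle_root(txs)
proof = merkle_branch(txs, 2)
assert verify_branch(b"bad-tx", proof, root)      # the bad tx is in the block
assert not verify_branch(b"tx-a", proof, root)    # a forged claim fails
```

The proof is logarithmic in the block size, which is what makes it cheap enough for a client-only node to check.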

However, there is an attack vector whereby a miner “creates a (possibly valid, possibly invalid) block but does not publish 100% of the data.” As not publishing data is not a “uniquely attributable fault,” a node can only raise the alarm in saying there is missing data. But the attacker can then publish the missing data, with the rest of the nodes that were not paying attention left to wonder whether it was a false alarm or data was really missing.

The solution is erasure codes, which “allow a piece of data M chunks long to be expanded into a piece of data N chunks long (“chunks” can be of arbitrary size), such that any M of the N chunks can be used to recover the original data.”

That is combined with random sampling: the light client downloads “twenty random chunks of a block… and only accept[s] the block if all twenty requests get a valid response.”
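The arithmetic behind that choice is simple: if hiding any of the original data forces the attacker to withhold at least half of the encoded chunks, each uniformly sampled chunk is unavailable with probability at least one half, so twenty samples catch the attack with overwhelming odds. A sketch, assuming independent uniform samples:

```python
# Probability that a data-withholding attack survives k random samples,
# given that at most `available_fraction` of the coded chunks can be
# published without letting honest nodes reconstruct everything.
def fool_probability(k_samples: int, available_fraction: float = 0.5) -> float:
    """Chance that all k sampled chunks happen to be among the published ones."""
    return available_fraction ** k_samples

p = fool_probability(20)
assert p == 0.5 ** 20   # about 9.5e-7: under one in a million clients fooled
```

So the twenty-request figure is not arbitrary; it buys roughly a one-in-a-million chance of accepting a block whose data is actually being withheld.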

So the attacker would need to do so much work to fool your node that it’s hardly worth it. They still can, so a full node remains more secure, but security isn’t black and white. Even a full node can be fooled by a 51% attack. If the resources required to fool a client-only node reach comparable levels, then you could say it is pretty secure.

Todd dismisses the above, stating: “Erasure coding has been known for ages as an attempted solution to the fraud proof problem… It’s also been known for ages that it doesn’t work, as large-scale financial fraud can be hidden in arbitrarily small amounts of data.”

“The whole point of erasure coding is that it means any unavailability in the original requires 50% unavailability in the encoded data,” Vitalik replies, adding that it is combined with random sampling.

What follows is interesting and might show two very different fundamental assumptions. Todd says:

“That’s not magic: you increase the size of the data, in exchange for being able to lose parts of it. But min data needed ~same as before. That’s why you need that silly honest minority assumption to have sufficient overlap of lite client requests to ensure sufficient coverage. But that assumption is ultimately no different from assuming you have a set of honest peers that collectively audit the whole chain.”

The latter part is the reason why client-only bitcoin wallets are so widespread. They work, pretty securely, because you can connect to a set of honest nodes and be pretty sure your money is safe.

There are attack vectors. You could be targeted by an attacker spinning up many dishonest nodes that connect to you specifically, but that requires an incredible level of effort, which is why there is no known case of such an attack in practice.

However, in Todd’s world view, pretty safe is not sufficient. In the most striking statement of more than two years of scalability debate, Todd says:

“The 67% honest side is crazy: you have to have a system that’s robust even if a majority are dishonest. Note how tree chains is trivial if you can make crazy assumptions like a majority are honest…”

Todd’s argument, therefore, is valid, but only in a system that requires no assumption whatsoever of honest behavior. As he himself says, he couldn’t make Treechains, or really anything, work in such a system.

As far as bitcoin is concerned, thankfully for us, it does not require such an assumption. It has long been conceded, by Nakamoto himself, that the system would not work if we did not assume that 51% are honest.

As it happens, bitcoin has a near decade-long history of proving that it works without Todd’s assumptions and with Nakamoto’s assumption. As do light clients, as do 0-confirmed transactions, and as probably do fraud proofs.

As such, it appears the two sides have been arguing past each other for the last two to three years: one concerning the actual bitcoin, the other concerning some hypothetical bitcoin that does not exist and probably cannot with our current level of knowledge.

Todd himself admits as much by conceding he couldn’t create anything that works under his own assumptions. That is why Nakamoto is held as a genius while Todd has earned himself the nickname of toddler, further shown in action later in the “debate,” where Todd publicly says:

“To be clear, at a high level it’s 100% a personal attack: I’m showing that Vitalik isn’t competent.”

Buterin has successfully implemented a highly valuable project that has taken the world by storm and fired the imagination of many. Todd, in contrast, has nothing whatsoever to show for himself, and with such flawed assumptions as shown above that is not very surprising.

He used to be the lead developer of an altcoin that has fallen into such obscurity we can’t even name it from memory. Nor can we really recall what Treechains was about.

His most notable success is a protocol bug called Replace-by-Fee (RBF), which re-introduces double spending in bitcoin even though the whole point of the entire system is to prevent double spending.

Besides that “achievement,” we can say he has zero to his name and finally we understand why. Instead of focusing on the real world, Todd appears to be focusing on a fictional project that does not have a fundamental assumption of honesty, a project that does not exist and might perhaps never exist.

Let’s further illustrate his line of thinking, to give further context to his public admission that he is engaging in “100% a personal attack.” He asks why proof of work in bitcoin has a two-week delay in adjusting mining difficulty, and why even then the adjustment is clamped to a factor of four, so that difficulty can fall to no less than 25% of its previous value in one step.
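For reference, the retarget rule Todd is asking about can be sketched as follows. This is a simplification of the reference implementation’s logic, working on a plain difficulty number rather than compact targets: every 2016 blocks the expected timespan is two weeks, and the measured timespan is clamped to a factor of four in either direction.

```python
# Sketch of bitcoin's difficulty retarget rule: every 2016 blocks,
# scale difficulty by (expected timespan / actual timespan), with the
# actual timespan clamped to a factor of four either way.
TARGET_TIMESPAN = 14 * 24 * 60 * 60   # two weeks, in seconds

def retarget(old_difficulty: float, actual_timespan: int) -> float:
    """New difficulty after a 2016-block interval of actual_timespan seconds."""
    clamped = min(max(actual_timespan, TARGET_TIMESPAN // 4),
                  TARGET_TIMESPAN * 4)
    # Blocks came faster than expected -> difficulty rises, and vice versa.
    return old_difficulty * TARGET_TIMESPAN / clamped

# Even if hashrate collapses and the interval takes months, difficulty
# drops to at most a quarter of its previous value in one step:
assert retarget(100.0, 10 * TARGET_TIMESPAN) == 25.0
# And even a burst of hashrate raises it by at most four times:
assert retarget(100.0, TARGET_TIMESPAN // 10) == 400.0
```

The slowness and the clamp are deliberate dampening, which is exactly what makes the minority-chain attack below so expensive to sustain.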

Buterin provides a number of reasons, but Todd wants a specific reason regarding a specific attack he has in mind. That attack vector is described by Kevin Loaec, a coder who calls himself a “bitcoin maximalist.” He says:

“I prevent your node to see the main chain, and feed you my minority chain. I can chose which (valid, of course) tx you see. I can also create txs to you that I do not broadcast to main chain (or already double spent), and I can replay your txs on main chain.”

That’s a targeted attack that would work only if your node has been offline for two months, you automatically act on six confirmations, and of course you never check what your client is doing, or a block explorer, or otherwise fail to become aware of this new chain before acting on it.

It may also work if you’re just starting a node, but the other side of the assumption goes unstated: for you to be targeted, the attacker needs to know, or assume, many things, such as that you are starting a node or have been offline for two months.

Moreover, considering the significant sums the attacker is investing in mining this minority chain, he also needs to be pretty sure the attack will succeed, and that if it does succeed the reward is greater than from simply mining honestly, which appears very unlikely considering the many ifs in play.

However, he could succeed. In a fictional universe, it is possible. Therefore, if we ignore the many assumptions regarding the fictional attacker and whether it would make any sense considering the costs of such attack, things like fraud proofs or 0-confirmed transactions or on-chain scaling do not work.

Eppur si muove, and yet it moves. Years of experience have shown light clients to be safe. Years of data have shown 0-confirmed transactions to be safe. Not perfect, but better than anything else, and certainly better than the current system with its charge-backs and plentiful fraud.

And far better, in any event, than anything Todd has created in his quixotic tilting at windmills in fictional worlds that conveniently ignore objective reality and how things actually work.
