
Vitalik Buterin Responds to the Ethereum Blockchain Size Concerns

Ethereum’s blockchain has become the subject of heated discussion because of its rapidly growing size, with many industry observers noting that this may cause issues such as difficulties in archiving or synchronizing. Vitalik Buterin, the co-founder of Ethereum, pushed back against the criticism of the blockchain’s size, dismissing the concerns as “severely uninformed.”

The Root of the Debate

Hackernoon reported on May 24 that the growing size of the Ethereum blockchain would result in a shrunken, centralized network, with storage and bandwidth requirements exceeding the hardware capacity of the average network participant. This analysis and report sparked the controversial debate about the blockchain.

The development and release of dApps, smart contracts, and thousands of ERC-20-based initial coin offerings have driven rapid growth of the Ethereum blockchain. Because of this, many analysts predicted that the coin may crash soon if a new solution to the network propagation problem is not found.

It was in this context that Hackernoon released its analysis, arguing that Ethereum would have to find a solution to prevent a problem that would cause the network to “race BCash to both of their deaths.”

The analysts argued that the latest flood of dApps has a negative impact on the blockchain, and that implementing a block size cap would increase fees. If the functioning of decentralized applications is prevented, the result would be to invalidate the very reason for the Ethereum network’s existence.

Vitalik Buterin’s Response

The co-founder quickly responded to the flawed arguments, dismissing the analysis as “severely uninformed.” He stated that the Ethereum blockchain already has a limited block size in the form of the gas limit, which has been in place for over six months. He also called the argument in the Hackernoon report “highly fallacious,” stating that focusing on the size of an archive node is irrelevant, as a much smaller data directory can be achieved by running a self-pruning Parity node or by resynchronizing once a year.
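Buterin’s point that the gas limit already caps block size can be illustrated with a back-of-the-envelope calculation. The sketch below is not from the article; the gas limit, block time, and calldata gas cost are illustrative assumptions roughly matching the network circa 2018, and the result is only an upper bound on chain growth, not a measured figure.

```python
# Rough upper bound on Ethereum chain growth implied by the block gas limit.
# All constants below are assumptions for illustration (circa 2018):
GAS_LIMIT = 8_000_000   # assumed block gas limit
BLOCK_TIME_S = 15       # assumed average block time in seconds
GAS_PER_BYTE = 68       # assumed gas cost per non-zero byte of transaction data

blocks_per_day = 86_400 // BLOCK_TIME_S
# If every block were filled entirely with transaction data:
max_bytes_per_block = GAS_LIMIT // GAS_PER_BYTE
daily_growth_mb = blocks_per_day * max_bytes_per_block / 1_000_000

print(f"blocks per day: {blocks_per_day}")
print(f"max payload per block: ~{max_bytes_per_block} bytes")
print(f"upper-bound daily chain growth: ~{daily_growth_mb:.0f} MB")
```

Under these assumptions the gas limit bounds growth to a few hundred megabytes per day even in the worst case, which is the sense in which the chain’s size is already capped rather than unbounded.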