I find this a really interesting point to discuss on the “ethical” side of forks.
The fork is now delayed because this change introduces reentrancy into contracts, and some affected contracts have already been found. I think the main reason for the delay is that most smart-contract writers have assumed that send/transfer guards against reentrancy, while in fact this is not a feature of them: it just happens to be impossible on the current network. Because most developers were “educated” this way we can a…
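For readers unfamiliar with the pattern in question, here is a minimal, hypothetical sketch (the contract and its names are mine, for illustration only) of the common idiom that treats transfer()'s fixed 2300 gas stipend as an implicit reentrancy guard:

```solidity
pragma solidity ^0.5.0;

// Hypothetical Vault illustrating the assumption described above: many
// contracts rely on transfer() forwarding only 2300 gas, which has (so
// far) been too little for the recipient to re-enter and write storage.
// That is an accident of current gas costs, not a documented guarantee.
contract Vault {
    mapping(address => uint256) public balances;

    function deposit() public payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw() public {
        uint256 amount = balances[msg.sender];
        // State is zeroed *after* the external call; this ordering is
        // only "safe" while the 2300 stipend cannot cover a re-entrant
        // SSTORE -- exactly the assumption a repricing fork can break.
        msg.sender.transfer(amount);
        balances[msg.sender] = 0;
    }
}
```

Under EIP-1283's cheaper SSTORE, a re-entrant write can fit inside the stipend, which is why contracts following this pattern were suddenly at risk.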

This might have belonged here better.

My main point of this post is discussing what we find OK and what we don’t. I can create a contract which changes behavior if a previously unassigned opcode becomes assigned at a fork. Do we now mark all these opcodes as INVALID because a single contract suddenly changes behavior? Probably not. But what if 10% of all Ether is held in it? Can I hold the network hostage against forks this way?
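As a concrete, hypothetical sketch of such a fork-sensitive contract (all names are mine): 0x1c was an unassigned opcode before Constantinople and becomes SHR after it, so a contract can probe it at runtime and branch on the fork.

```solidity
pragma solidity ^0.5.0;

// Hypothetical sketch of the hostage contract described above. The probe
// contract's runtime code is PUSH1 1, PUSH1 1, SHR (0x1c), STOP. Before
// Constantinople, 0x1c is an unassigned (invalid) opcode, so calling the
// probe fails; after the fork it is SHR, so the call succeeds.
contract OpcodeProbe {
    address public probe;

    constructor() public {
        // Init code: CODECOPY the 6 runtime bytes (at offset 0x0c) and
        // RETURN them, followed by the runtime bytes themselves.
        bytes memory initCode = hex"6006600c60003960066000f3600160011c00";
        address p;
        assembly {
            p := create(0, add(initCode, 0x20), mload(initCode))
        }
        probe = p;
    }

    // Behavior flips at the fork boundary with no code change here.
    // Gas is capped so a pre-fork INVALID doesn't burn the full budget.
    function forkHasActivated() public view returns (bool ok) {
        (ok, ) = probe.staticcall.gas(10000)("");
    }
}
```

Any contract can then call forkHasActivated() and do something entirely different on each side of the fork.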

Constantinople is delayed because possibly many contracts are affected. But what is the minimum amount of impact a change must have before we delay a fork?

Consider the PaymentSharer example provided by ChainSecurity.
But instead of using it directly, let’s create a new GeneralProxy pointing to the initially deployed PaymentSharer instead of redeploying it. The proxy will reuse its logic (EVM_v1), but with its own storage (EVM_v2) with cheap SSTORE.

Moreover, a more complex dispatching Proxy can redirect to different contracts deployed to different EVM versions. In which EVM version should the Proxy operate?
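To make the idea concrete, here is a minimal, hypothetical sketch of such a proxy (names are mine; the EVM-versioning semantics are speculative, but the delegatecall mechanics are standard): the proxy executes the already-deployed PaymentSharer's logic against its own storage.

```solidity
pragma solidity ^0.5.0;

// Hypothetical GeneralProxy sketch: delegatecall reuses the code of the
// already-deployed PaymentSharer, but all storage reads and writes happen
// in the proxy's own storage -- which, under the versioning idea above,
// could be priced by a different (post-fork) SSTORE schedule.
contract GeneralProxy {
    // Caveat: this variable occupies slot 0 and would collide with the
    // target's own slot 0 in a production proxy.
    address public target;

    constructor(address _target) public {
        target = _target;
    }

    function() external payable {
        (bool ok, bytes memory ret) = target.delegatecall(msg.data);
        require(ok);
        // A production proxy would also return `ret` to the caller via
        // assembly; omitted for brevity.
        ret;
    }
}
```

The ambiguity is exactly the one raised above: the code was deployed under one version's rules, but the storage it writes belongs to a contract deployed under another's.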

Maybe we will face similar challenges using libraries.

I like the idea of EVM versioning, but it requires careful design through all the edge cases.

It’s hard to talk about immutability in a vacuum. While I look forward to Eth 2.0 being a fresh start, eventually that too will become bogged down with technical debt. There was some talk maybe a year ago about a multi-tiered system to satisfy both the risk-tolerant and the risk-averse (whether it’s different rules for different shards or something else). Of course this brings additional complexity, of which there’s already no shortage.

I still find it profoundly stupid to consider gas cost invariant (and judging by how little code this pricing change broke, maybe most developers agree?). Hardware and expenses associated with hardware change every year and as a result so do the relative costs of memory vs CPU vs storage usage. Maybe language tools can do more to prevent us from relying on gas cost for program behavior. I wish information about gas was completely inaccessible to contracts so that they would be unable to branch on it. I don’t want my program doing different things based on how much power it’s getting from the wall. It should either have enough gas to complete or not. Ideally gas costs should be market-driven in real time and I hope there’s a way to get there eventually.
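A hedged, hypothetical example of the branching being criticized here (nothing in it comes from a real contract):

```solidity
pragma solidity ^0.5.0;

// Hypothetical illustration of gas introspection: this contract's
// observable behavior depends on gasleft(), i.e. on "how much power
// it's getting from the wall". Any repricing fork can silently flip
// which branch executes for a given transaction gas limit.
contract GasBrancher {
    event TookCheapPath();
    event TookExpensivePath();

    function doWork() public {
        if (gasleft() > 50000) {  // 50000 is an arbitrary threshold
            emit TookExpensivePath();
        } else {
            emit TookCheapPath();
        }
    }
}
```

If gasleft() were inaccessible, as suggested above, this contract could not be written: it would simply complete or run out of gas.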

I was hoping that this year Ethereum would scale 10x in terms of ops/s. It seems increasingly unlikely given how seriously we treat de facto invariants such as gas cost and how every time we fork/upgrade it’s like we’re defusing a nuclear weapon. Like everyone else, I want to have my cake and eat it too. Maybe this means focusing on Layer 2.

The backwards compatibility of x86 is a helpful example, and I’m thankful that @jpitts brought it up. I’ve heard @gcolvin speak about this before as well. But lest we compare ourselves too closely to Intel, I just want to point out a glaring difference: ours is an adversarial environment where the attacker can see, and execute, code on our “machine” at will. For this reason I think we should adopt a different set of principles and priorities in our design, and safety should be an even higher priority for us.

CPUs live in an adversarial environment as well. Bugs in their chips can break an unknowable number of programs and open an unknowable number of security holes. So adversaries are always looking for bugs. And as @jpitts and @rajeevgopalakrishna point out, Intel takes backwards compatibility seriously. “We put the backwards in backwards-compatible.” The architecture of the original Intel hand calculator is still visible in their current chips, and the code for it still runs.

Whether gas should be immutable shouldn’t be a difficult question. That hand calculator had performance limits that are far below current chips. Should current chips be purposely hobbled to match it?

Just to be clear, I’m not suggesting it be invariant, just that, if we lower the gas cost of an opcode, we do it by introducing a new, cheaper version of the opcode. Or we use engine versioning, as discussed here (I like @arachnid’s proposal)–they achieve the same thing wrt gas pricing. Or maybe we need to think outside the box more and introduce multiple tiers, as you suggest–these could be shards, or they could even be at layer two. There’s something elegant about the idea of shards running different engine versions, since it could provide an economic incentive (cheaper gas) for developers to migrate contracts from older shards to newer ones. This is a step towards gas costs being market-driven as you suggest.

I’m not suggesting it be invariant, just that, if we lower the gas cost of an opcode, we do it by introducing a new, cheaper version of the opcode.

I don’t think the problem is that lowering (or changing) costs is dangerous per se. The EIP-1283 bug involved subtle assumptions about specific gas costs that were commonly used for a particular purpose. I actually don’t expect there are very many of those, and lots of other programs could be written whose behavior would change if certain gas costs got lower with no complaints at all–they would just be able to do more of what they do before running out of gas. Which is the whole idea of gas.

So, adding the complexity and working out all the edge cases of a new versioning, tiering, or context-passing system, or of offering up whole new sets of replacement opcodes (e.g. all of the arithmetic opcodes), before we can change the gas cost of opcodes that are overpriced? That all sounds like jumping out of the frying pan and into the fire. We need to do some sort of versioning at some point, but not so that programs can learn which gas price regime they are running under. I think it just needs to be made clear that gas prices are subject to change without notice.

I think it just needs to be made clear that gas prices are subject to change without notice.

I totally agree. Also, there’s historic precedent for this, so people/code should not make assumptions about particular gas costs. Doing so means that the dev has gone off the trail, like doing some EVM experimentation with assembly.

If, OTOH, we find that Solidity has some implicit assumptions about gas costs, then we should try to respect them (IIRC, there were some early assumptions about the gas costs of using the IDENTITY precompile, which we had to tread around very carefully when we changed how the 63/64ths rule worked).

Right, but where do we draw the line going forward? Are we comfortable changing gas costs? Then why weren’t we comfortable doing it in this case and how will it be different next time? How do we communicate this to developers and make sure they factor this in so that future changes don’t break “invariants” that they should not be relying on?

Agree with the rationale expressed by @gcolvin & @holiman and the questions/suggestions from @lrettig & @fubuloubu. We cannot prevent the creativity of developers (if inline-assembly is supported by a language, it is fair game) but only anticipate them, and hence establish well documented guard-rails on invariants/assumptions and any guarantees on backward-compatibility/interoperability going forward. This is going to be even more critical with all the upcoming changes, such as ewasm.

Regarding breaking invariants, here’s another example. Ever since Devcon in Mexico, I’ve been trying to raise awareness of the fact that CREATE2 will break an invariant: a contract A which has code C at one point in time might have code D at another point in time. (Back then, it was another EIP, but with the same effect.)

This is something that @AlexeyAkhunov also found out independently, despite the discussion having taken place in various places. Do contract developers know about this? It’s really hard to say. Despite all the discussions, I still believe that a lot of devs aren’t aware of it. It might not make any difference in 99.9% of all cases, but OTOH it might make all the difference in the other 0.1%.
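For devs who haven’t seen it spelled out, here is a hedged sketch of how the invariant breaks (contract and names are hypothetical; requires a Constantinople-enabled EVM): a CREATE2 address commits only to the deployer, the salt, and the *init* code, and init code may return different runtime code on different runs.

```solidity
pragma solidity ^0.5.0;

// Hypothetical sketch: the CREATE2 address depends only on
// (deployer, salt, hash of init code). If the init code reads mutable
// external state to choose which runtime code to return, then after a
// selfdestruct the same address can be redeployed with different code.
contract Deployer {
    function deploy(bytes memory initCode, bytes32 salt)
        public
        returns (address a)
    {
        assembly {
            a := create2(0, add(initCode, 0x20), mload(initCode), salt)
        }
        require(a != address(0), "create2 failed");
    }
    // Sequence: deploy(initCode, salt) -> address X holds code C;
    // X selfdestructs; the external state the init code reads is
    // flipped; deploy(initCode, salt) again -> the same address X
    // now holds code D.
}
```

In other words, “code at address X” is no longer a stable fact a counterparty can cache.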

My take is that we need to produce an ‘Audit’, based on a common template, which focuses on things like this. That audit should be performed by people who are both EVM nerds (and I don’t mean that derogatorily; I count myself as one) and who also know contract development. The audit should be commissioned as soon as an EIP is accepted, and when done it should be stored in the EIP repository and published far and wide.

Are we comfortable changing gas costs? Then why weren’t we comfortable doing it in this case and how will it be different next time?

We have to be comfortable changing gas costs–they are parameters meant to be adjusted to match the cost of operations on current hardware. We were uncomfortable because we had not anticipated that the precise gas stipend of 2300 would combine with assumptions about permanent gas costs to create reentrancy locks. And yes, I suppose we should worry about every other useful trick one could pull off that way. A precise “gas fees are subject to change” in the Yellow Paper would help. As would close scrutiny of any other bare numbers in the protocol that might be relied upon.