If I loop over a mapping of structs, the gas cost is very low. However, when I try to test its limits, the EVM often causes Remix to crash while creating the edge cases, because I'm creating the new structs in blocks of 50. It does not crash on the loop where I read data and do some algebra, however, and in practice new structs would be added one at a time.

For example, extrapolating from a mapping with 200 structs, I find I could basically run a loop through 7,000 items. However, given that Remix is often crashing, and that the loop takes a long time to run (seemingly more than proportional to the gas used), I wonder whether in practice I should lower my estimate of the maximum number of struct members by some percentage.

1 Answer

Remix IDE is not authoritative. If it crashes, that's a failure of the simulation. In the end, the network's block gasLimit is the hard stop. That value is voted on by miners and has been known to drop during periods of high congestion, so be cautious about cutting it too close.

Another thing that jumps out is the storage I/O cost. It all looks like a classic approach without much regard for the paradigm changes in Ethereum. You need to economize on storage and transaction complexity and be mindful of gas costs.

Instead of iterating over the structure, consider tallying that total as you go. This has desirable effects:

The cost of updating it is borne by the users who are updating something.

The work is amortized. That is, each transaction does a little work to keep the value current. That work is O(1), so the gas cost per transaction is scale-invariant (the same regardless of dataset size).
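A minimal sketch of the running-tally pattern described above (contract, struct, and function names are hypothetical, assuming Solidity ^0.8):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract RunningTally {
    struct Customer {
        uint balance;
        bool exists;
    }

    mapping(uint => Customer) public customers;

    // Maintained incrementally on every write, so no loop is ever
    // needed to compute the sum. O(1) per update.
    uint public totalBalance;

    function setBalance(uint id, uint newBalance) public {
        Customer storage c = customers[id];
        // Adjust the running total by the delta instead of re-summing.
        totalBalance = totalBalance - c.balance + newBalance;
        c.balance = newBalance;
        c.exists = true;
    }
}
```

The user who changes a balance pays the small extra cost of updating `totalBalance`, and reading the total is a single SLOAD no matter how many customers exist.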

Yeah, loops are bad, but I was more curious about whether accessing such a struct, using the extreme example of a loop reading from n items, is O(n^2).
– Eric Falkenstein Nov 19 '19 at 15:10

No, it doesn't compound. The mapping zeroes in on a "slot" in storage known by a unique key. More than likely, it's just Remix jamming up. I get that as well on monster contracts. It doesn't mean the contract is broken. It's a stability issue in the browser, etc.
– Rob Hitchens Nov 20 '19 at 1:19
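To illustrate the point about keyed slots: a read by key is a constant-cost lookup, because the key is hashed to locate the storage slot directly (a hypothetical sketch, Solidity ^0.8):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract KeyedRead {
    struct Customer { uint balance; }
    mapping(uint => Customer) private customers;

    // The cost of this read does not depend on how many customers
    // exist: no iteration, just one hashed slot lookup per field.
    function balanceOf(uint id) external view returns (uint) {
        return customers[id].balance;
    }
}
```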

I really appreciate your comments! Reading your 'Getting Loopy' post, you recommend to not 'Search for a record of interest in a list of unlimited size.' Yet, if I am not looping, it seems my mapping can accumulate unlimited size without hurting efficiency. That is, if all my interactions are piecemeal, say pulling the struct data for customer 73, it doesn't matter if I have 100 or 1e5 other customers.
– Eric Falkenstein Nov 21 '19 at 17:30

That's the whole idea. You make it so the size of the dataset is irrelevant. In fact, that is what you must do in smart contracts.
– Rob Hitchens Nov 21 '19 at 19:34

So then, in what way is it bad to 'search for a record of interest in a list of unlimited size'? Only if you are looping or doing something similar?
– Eric Falkenstein Nov 21 '19 at 23:11