The criterion benchmarking package is pretty nice to use in general, but it has a property that sticks in my craw: ensuring that pure code actually gets evaluated on every iteration through the benchmarking loop is awkward and fragile.
My current scheme is to increment an Int on every iteration and pass it to the function being benchmarked. I document the need for the function to somehow use this Int, lest the application be let-floated out to the top level, where it will be evaluated just once and our measurements will become nonsensical.
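
To make the trick concrete, here is a minimal self-contained sketch of the idea. The names (timeLoop, fib) and the timing code are mine, purely illustrative; this is not criterion's actual implementation or API, just the counter-passing scheme in isolation:

import System.CPUTime (getCPUTime)

-- Naive Fibonacci, standing in for the pure code under test.
fib :: Int -> Integer
fib n | n < 2     = fromIntegral n
      | otherwise = fib (n - 1) + fib (n - 2)

-- Apply the function once per iteration, passing the loop counter
-- as its argument and forcing the result to WHNF with seq.  Because
-- the argument changes on each iteration, GHC should not be able to
-- let-float the application out of the loop.
timeLoop :: Int -> (Int -> a) -> IO Double
timeLoop iters f = do
  start <- getCPUTime
  let go n | n >= iters = return ()
           | otherwise  = f n `seq` go (n + 1)
  go 0
  end <- getCPUTime
  -- getCPUTime reports picoseconds; convert to seconds.
  return (fromIntegral (end - start) / 1e12)

-- The benchmarked function must actually consume its Int argument;
-- here the counter perturbs fib's input.  A function that ignores
-- the counter can still be floated, which is exactly the fragility
-- complained about above.
main :: IO ()
main = do
  elapsed <- timeLoop 100 (\n -> fib (20 + n `rem` 2))
  putStrLn ("elapsed: " ++ show elapsed ++ "s")

The fragility is visible in the sketch: nothing in the types forces the lambda to use n, so correctness rests entirely on documentation and caller discipline.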
I wonder if anyone else has any clever ideas about approaches that might be less intrusive in the API.