Easybench is designed for benchmarks with a running time in the range 1 ns < x < 1 ms; results
may be unreliable if your benchmark is very quick or very slow. It's inspired by criterion, but
doesn't do as much sophisticated analysis (no outlier detection, no HTML output).

An iteration is a single execution of your code. A sample is a measurement, during which your
code may be run many times. In other words: taking a sample means performing some number of
iterations and measuring the total time.

The first sample we take performs only 1 iteration, but as we continue taking samples we increase
the number of iterations exponentially. We stop when a global time limit is reached (currently 1
second).
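To make that concrete, here's a rough sketch of the sampling strategy. This is illustrative only,
not easybench's actual code; the doubling growth factor and the bookkeeping details are assumptions:

```rust
use std::time::{Duration, Instant};

/// Illustrative sketch: take samples with exponentially growing iteration
/// counts until a global time budget runs out.
fn take_samples<F: FnMut()>(mut run_iteration: F) -> Vec<(usize, Duration)> {
    let budget = Duration::from_secs(1); // global time limit
    let start = Instant::now();
    let mut iters = 1;                   // the first sample performs 1 iteration
    let mut samples = Vec::new();
    while start.elapsed() < budget {
        let sample_start = Instant::now();
        for _ in 0..iters {
            run_iteration();
        }
        samples.push((iters, sample_start.elapsed())); // (iterations, total time)
        iters *= 2;                      // grow the iteration count exponentially
    }
    samples
}
```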

If a benchmark must mutate some state while running, then before taking a sample we prepare n
copies of the initial state, where n is the number of iterations in that sample.
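For example, with bench_env (assumed here to take a Clone-able initial environment and a closure
that receives a mutable reference to a fresh copy of it), a stateful benchmark looks roughly like
this:

```rust
use easybench::bench_env;

fn main() {
    // Each iteration receives its own pre-cloned copy of the Vec, so pushes
    // don't accumulate from one iteration to the next.
    println!("push: {}", bench_env(vec![0u64; 1024], |v| v.push(42)));
}
```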

Once we have the data, we perform OLS linear regression to find out how the sample time varies with
the number of iterations in the sample. The gradient of the regression line tells us how long it
takes to perform a single iteration of the benchmark. The R² value is a measure of how much noise
there is in the data.
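The fit itself is just ordinary least squares on (iterations, total time) pairs. Here's a
self-contained sketch of the idea (illustrative only; easybench's own implementation may differ
in detail):

```rust
/// Ordinary least-squares fit of total sample time `y` against iteration count `x`.
/// The slope estimates time-per-iteration; the intercept absorbs per-sample overhead.
/// Returns (slope, r_squared).
fn ols(samples: &[(f64, f64)]) -> (f64, f64) {
    let n = samples.len() as f64;
    let mean_x = samples.iter().map(|&(x, _)| x).sum::<f64>() / n;
    let mean_y = samples.iter().map(|&(_, y)| y).sum::<f64>() / n;
    let sxy: f64 = samples.iter().map(|&(x, y)| (x - mean_x) * (y - mean_y)).sum();
    let sxx: f64 = samples.iter().map(|&(x, _)| (x - mean_x).powi(2)).sum();
    let slope = sxy / sxx;
    let intercept = mean_y - slope * mean_x;
    // R² measures how much of the variance in the data the line explains.
    let ss_tot: f64 = samples.iter().map(|&(_, y)| (y - mean_y).powi(2)).sum();
    let ss_res: f64 = samples
        .iter()
        .map(|&(x, y)| (y - (intercept + slope * x)).powi(2))
        .sum();
    (slope, 1.0 - ss_res / ss_tot)
}
```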

TL;DR: Compile with --release; the overhead is likely to be within the noise of your
benchmark.

Any work which easybench does once-per-sample is ignored (this is the purpose of the linear
regression technique described above). However, work which is done once-per-iteration will be
counted in the final times.

In the case of bench_env, we also do a lookup into a big vector in order to get the
environment for that iteration.

If you compile your program unoptimised, there may be additional overhead.

The cost of the above operations depends on the details of your benchmark; namely: (1) how large is
the return value? and (2) does the benchmark evict the environment vector from the CPU cache? In
practice, these criteria are only satisfied by longer-running benchmarks, which makes these effects
hard to measure.

If you have concerns about the results you're seeing, please take a look at the inner loop of
bench_env. The whole library clocks in at under 100 lines of code, so it's pretty easy
to read.

Each benchmark collects data for 1 second. This means that in order to collect a statistically
significant amount of data, your code should run much faster than this.

When inspecting the results, make sure things look statistically significant. In particular:

Make sure the number of samples is big enough. More than 100 is probably OK.

Make sure the R² isn't suspiciously low. It's easy to achieve a high R² value when the number of
samples is small, so unfortunately the definition of "suspiciously low" depends on how many
samples were taken. As a rule of thumb, expect values greater than 0.99.

Oh, fib_2, why do you lie? The answer is: fib(500) is pure, and its return value is immediately
thrown away, so the optimiser replaces the call with a no-op (which clocks in at 0 ns).

What about the other two? fib_1 looks very similar, with one exception: the closure which we're
benchmarking returns the result of the fib(500) call. When it runs your code, easybench takes the
return value and tricks the optimiser into thinking it's going to use it for something, before
throwing it away. This is why fib_1 is safe from having code accidentally eliminated.

In the case of fib_3, we actually do use the return value: on each iteration we take the result of
fib(500) and store it in the iteration's environment. This has the desired effect, but looks a
bit weird.
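For reference, the three benchmarks discussed above might look something like this. This is a
reconstruction, not the exact original code; the iterative fib is just a stand-in:

```rust
use easybench::{bench, bench_env};

// Stand-in implementation: iterative so that fib(500) is cheap, with wrapping
// arithmetic because the value itself doesn't matter to the benchmark.
fn fib(n: u64) -> u64 {
    let (mut a, mut b) = (0u64, 1u64);
    for _ in 0..n {
        let next = a.wrapping_add(b);
        a = b;
        b = next;
    }
    a
}

fn main() {
    // fib_1: the closure returns the result, so easybench black-boxes it.
    println!("fib_1: {}", bench(|| fib(500)));
    // fib_2: the result is discarded inside the closure; the optimiser may
    // remove the call entirely.
    println!("fib_2: {}", bench(|| { fib(500); }));
    // fib_3: the result is stored in the iteration's environment.
    println!("fib_3: {}", bench_env(0u64, |x| { *x = fib(500); }));
}
```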

The function which easybench uses to trick the optimiser (black_box) is stolen from bencher,
which states:

NOTE: We don't have a proper black box in stable Rust. This is a workaround implementation, that
may have a too big performance overhead, depending on operation, or it may fail to properly avoid
having code optimized out. It is good enough that it is used by default.
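On stable Rust that workaround typically boils down to a volatile read the optimiser can't see
through. A sketch along the lines of bencher's approach (not necessarily byte-for-byte what
easybench ships):

```rust
use std::{mem, ptr};

/// Pretend to use `dummy` in an opaque way, so the optimiser can't prove the
/// value is dead and delete the code that produced it.
pub fn black_box<T>(dummy: T) -> T {
    unsafe {
        let ret = ptr::read_volatile(&dummy);
        mem::forget(dummy);
        ret
    }
}
```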