I have a project where I need to measure the time between two inputs. Typical intervals are around 4000 microseconds, going as low as 200 microseconds. I wrote two versions of the code, both of which work fine, but the question is: which one is more accurate? One uses a tight loop of repeated if statements so I get the maximum polling frequency; the other uses interrupts.

BTW, both use the internal pull-up resistors, so the logic is inverted (an active input reads LOW).
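For reference, a minimal sketch of the polling version, assuming an Arduino-style environment; the pin numbers (2 and 3) and variable names are illustrative, not from the original code:

```cpp
const uint8_t startPin = 2;
const uint8_t stopPin  = 3;

void setup() {
  Serial.begin(115200);
  pinMode(startPin, INPUT_PULLUP);  // internal pull-up: idle HIGH
  pinMode(stopPin,  INPUT_PULLUP);  // so an active input reads LOW
}

void loop() {
  // Busy-wait on each input for maximum polling frequency.
  while (digitalRead(startPin) == HIGH) {}  // wait for first input (LOW = active)
  unsigned long t0 = micros();
  while (digitalRead(stopPin) == HIGH) {}   // wait for second input
  unsigned long dt = micros() - t0;         // unsigned subtraction handles micros() rollover
  Serial.println(dt);                       // elapsed time in microseconds
}
```

Note that `digitalRead()` itself takes a few microseconds per call on an 8-bit AVR, which bounds the polling resolution.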

Interrupts stop everything the moment the condition is met, so it gets handled immediately. With a loop, you have to wait until execution rolls back around to check the condition, by which time the input may or may not still be in the state it was in when it changed.
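To make that concrete, here is a sketch of the interrupt version, assuming an Uno-class board where pins 2 and 3 support external interrupts; pin numbers and handler names are illustrative:

```cpp
const uint8_t startPin = 2;
const uint8_t stopPin  = 3;

volatile unsigned long t0   = 0;
volatile unsigned long dt   = 0;
volatile bool          done = false;

void onStart() { t0 = micros(); }       // timestamp captured in the ISR itself

void onStop() {
  dt = micros() - t0;
  done = true;
}

void setup() {
  Serial.begin(115200);
  pinMode(startPin, INPUT_PULLUP);
  pinMode(stopPin,  INPUT_PULLUP);
  // FALLING edge because the pull-ups invert the logic (active = LOW).
  attachInterrupt(digitalPinToInterrupt(startPin), onStart, FALLING);
  attachInterrupt(digitalPinToInterrupt(stopPin),  onStop,  FALLING);
}

void loop() {
  if (done) {
    noInterrupts();
    unsigned long elapsed = dt;  // copy the multi-byte shared value atomically
    done = false;
    interrupts();
    Serial.println(elapsed);
  }
}
```

Because the timestamp is taken inside the ISR, the latency between the edge and the `micros()` call is a roughly constant few microseconds of interrupt entry overhead, rather than varying with wherever the polling loop happened to be.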