It looks like Don Kinzer is off to a good start with it (in hpticks()): http://www.arduino.cc/cgi-bin/yabb2/YaBB.pl?num=1226257074/1#1

though with the divisions it is a bit more computing than I'm proposing, which at its core could be expressed as: ((m << 8) + t) << 2

I wanted to keep it light so you can grab a new microseconds timestamp from an interrupt without much concern; it overflows every 128 seconds internally instead of computing the seconds portion as well, but just adding timer0_seconds * 1000000 will get you there.

But I figure being able to track any number of signals at low microsecond accuracy with a period of up to just over 2 minutes is pretty useful.

I agree with dcb that there seems to be a great deal of interest in built-in microsecond and "macrosecond" timing. Thanks for the thought you've put into this!

I'm a little uncomfortable with having values that overflow on non-word boundaries (like the 128 seconds proposed above); it makes it difficult to compute time deltas. Remember the trouble we had because 0011's millis() didn't overflow on a word boundary? I think it's important to be able to write delta expressions like:

while (micros() - start < 20) // wait 20 microseconds

without worrying about how the overflow will affect the calculation.

David, I like the way the current wiring.c implementation handles SIG_OVERFLOW0. I don't see that we need to change it to build a micros() function that overflows at the 32-bit boundary. This should work at both 16 and 8 MHz without changing wiring.c at all:
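A minimal sketch of such a function, assuming the existing timer0_overflow_count from wiring.c and the standard /64 Timer0 prescaler (the cli()-protected atomic read is reduced to parameters here so the arithmetic can be checked off-target; the names and signature are mine, not the posted code):

```c
#include <stdint.h>

// Sketch only: on the AVR, overflow_count and tcnt0 would be read as an
// atomic pair (save SREG, cli(), read timer0_overflow_count and TCNT0,
// restore SREG). cycles_per_us is 16 at 16 MHz, 8 at 8 MHz.
static uint32_t micros_sketch(uint32_t overflow_count, uint8_t tcnt0,
                              uint32_t cycles_per_us)
{
    uint32_t ticks = (overflow_count << 8) + tcnt0;  // 256 ticks per overflow
    return ticks * (64UL / cycles_per_us);           // Timer0 prescaler is /64
}
```

Because the result is an unsigned 32-bit value, it wraps at the 2^32 boundary by construction, so unsigned delta arithmetic stays well behaved at both 16 and 8 MHz.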

but it would solve many problems with "macrosecond" timekeeping. With this approach, anyone can track the total micro/milliseconds elapsed without having to periodically call some kind of library refresh function every 49 days or 4 years or whatever; the overflow would be measured in centuries.

David, to my thinking, date/time functions like hour(), day(), etc. don't belong in the wiring.c "kernel". Tracking "clock-based" time seems to be a different problem than tracking "elapsed" time. While nearly every Arduino project depends on the latter -- millis() and delay(), etc. -- the need for "clock" time seems more limited. In my opinion, this should be placed in a library -- like mem's DateTime!

The code dcb suggested for a microseconds function has the same problem that my first proposed code for hpticks() had: if the timer overflows between the time the cli instruction is executed and when the TCNT0 register is read, the result will be off by 256 timer ticks.

This problem can be resolved by checking for the timer overflow flag being set and the value of TCNT0 being 0, indicating that the timer just rolled over. The difficulty with implementing this is that the name of the register containing TOV0 differs depending on which AVR is being used. A suggested solution, which addresses the register name difference, is shown below.
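A sketch of that check, with the registers passed in as plain values so the logic can be exercised off-target. On the AVR, tifr would be TIFR0 on newer parts or TIFR on older ones (the register name is the only difference), and tov0_mask would be _BV(TOV0):

```c
#include <stdint.h>

// Sketch: if the overflow flag is pending and TCNT0 reads 0, the timer
// rolled over after cli() but before the count was read, so the overflow
// tally is one short -- bump it by hand before using it.
static uint32_t corrected_overflow_count(uint32_t m, uint8_t tcnt0,
                                         uint8_t tifr, uint8_t tov0_mask)
{
    if ((tifr & tov0_mask) && tcnt0 == 0)
        m++;
    return m;
}
```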

The CPU speed dependency can be removed by replacing the return value computation with that shown below. This has the limitation, however, of working correctly only when F_CPU is an integral multiple of 10^6.
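Presumably the computation is along these lines (the F_CPU value here is illustrative): one Timer0 tick is 64 CPU cycles, so the microseconds-per-tick factor is 64 / (F_CPU / 10^6), which only comes out right when F_CPU is a multiple of 10^6.

```c
#include <stdint.h>

#define F_CPU 16000000UL  // illustrative; normally supplied by the build

// 64 cycles per Timer0 tick; F_CPU/1000000 cycles per microsecond,
// so 4 us per tick at 16 MHz, 8 us per tick at 8 MHz.
static uint32_t ticks_to_micros(uint32_t ticks)
{
    return ticks * (64UL / (F_CPU / 1000000UL));
}
```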

Doesn't it also rely on F_CPU being divisible into 64 million? I bring this up only because there has been some discussion of supporting processors at 20MHz and it seems like this might cause a problem if F_CPU were 20000000.

Quite right. I overlooked that aspect of it. In my implementation of a higher precision timing function (see the link in my previous post), the function returns Timer0 ticks. This mitigates the 20MHz problem or, at least, defers it until the conversion to microseconds is done later.

The Arduino-like device that I'm testing runs at 20MHz and my hpticks() function works correctly on it (the second attempt, at least). One issue to consider is that the suggested implementation of a microseconds function has a resolution of F_CPU / 64 since it is based on Timer0 ticks and Timer0 is clocked at 1/64th of the CPU frequency. Although a microseconds() function may be more aesthetically pleasing, the implementation essentially "wastes" a portion of the 32-bit value range. A function that returns Timer0 ticks will have the same resolution as a microseconds() function but will have a larger useful range. Moreover, the range will be constant irrespective of the CPU speed.

For measuring elapsed time, you can still think in terms of microseconds but convert the desired number of microseconds to Timer0 ticks (with either rounding or truncation as needed) before comparing it to the difference between two readings.
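For example, a sketch of that conversion assuming a 20 MHz part and truncation (the helper name and F_CPU value are illustrative):

```c
#include <stdint.h>

#define F_CPU 20000000UL  // illustrative 20 MHz clock

// Convert a desired interval in microseconds to Timer0 ticks
// (Timer0 runs at F_CPU / 64). Truncates the fractional tick:
// at 20 MHz, 1000 us works out to 312.5 ticks and truncates to 312.
static uint32_t us_to_ticks(uint32_t us)
{
    return (us * (F_CPU / 1000000UL)) / 64UL;
}
```

An elapsed-time wait would then compare tick deltas directly, e.g. spin while the difference between two hpticks()-style readings is below us_to_ticks(1000).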

I would like to include a micros() function in the core if you think it's accurate / precise enough to be useful (particularly at 16 and 8 MHz). Can anyone run through an analysis of the resolution you could get from Don's latest function?

The second(), minute(), hour(), etc. functions are not as high a priority.

I don't think we need to worry about programs that run for more than a week or two. Beyond that, it's okay if you have to do some extra work to make things reliable (e.g. call a function every few days).

So given the 1-to-1 relation of micros and the sum of changes to micros, and their agreement with millis and timer2, I'm fairly confident that this thing works pretty well. If there were noise from TCNT0 or elsewhere, I would expect it to show up when I subtract the last micros() from the current micros() and accumulate those differences, but it does not seem to be there.

Do note it rolls over at ~260 seconds at 16 MHz. That isn't so bad, considering that within that range you could measure a laser's round trip to the moon (about 3 seconds) or time a sonic echo from 27 miles away.

I do want to do some performance tests with the above changes and the wiring.c changes, though; if you wind up polling microseconds, then you need a fast microseconds function. But we definitely have some good stuff to work with here.

dcb: thanks for doing these calculations. One other thing that would be useful to check is the running total of the difference between the micros delta and 500,000 (i.e. how much does the change in micros differ from the expected change). The running total of micros deltas just says that the values are always increasing, but not necessarily that they do so by the right amount each time. Also, I'm wondering what the minimum increment (i.e. best resolution) you can get from the micros() function is. Is it on the order of 1 us, or more like 10? 100?

Also, I'm worried that if the values returned from micros() don't overflow on a regular data size boundary (e.g. 2^16 or 2^32), they will be difficult to work with. What if we keep a running count of the micros() inside the timer0 overflow interrupt as we do with millis now? Or can we somehow truncate to, say, an unsigned int so that the overflow works out properly?

"Also, I'm wondering what the minimum increment (i.e. best resolution) you can get from the micros() function is."

Even ignoring the amount of time the micros() function call itself takes, the resolution would be capped at 4 us (8 us on an 8 MHz processor) for the simple reason that a single tick of TCNT0 takes 64 * 1/16 us = 4 us.

The last column was the sum of deltas and the first column is the millis() from timer2; after 168 seconds they are still spot-on to the first 6 digits. The sum of deltas has not wandered off and is tracking accurately at the millisecond level, which implies that micros() itself is accurate as well.

"What the minimum increment" -- best case is 4 us @ 16 MHz, but that needs testing.

"if the values returned from micros() don't overflow on a regular data size boundary (e.g. 2^16 or 2^32), they will be difficult to work with."

Understandable concern. I think there should be a companion elapsedMicros(ulong start, ulong end) function in any event, to detect the overflow and do the right thing; so it may be OK if the return value has a more arbitrary overflow point, as long as it is predictable.
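A sketch of such a companion function: as long as the counter wraps at a power-of-two boundary (2^32 here), plain unsigned subtraction already "does the right thing" across a single wrap, so the body is a one-liner.

```c
#include <stdint.h>

// Sketch of the companion function suggested above, taking the earlier
// and later readings. Modulo-2^32 unsigned arithmetic makes the
// subtraction come out right even when the counter wrapped once
// between the two readings.
static uint32_t elapsedMicros(uint32_t start, uint32_t end)
{
    return end - start;
}
```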

"What if keep a running count of the micros() inside the timer0 overflow as we do with millis now?"

I'm struggling to see the implementation there.

But I think this problem can be handled nicely by keeping track of the number of calls to timer0ovf and not touching that count when we update millis. Then Don's function from a few posts earlier will work; it is 5 times faster than the one in the test harness by my measure, and will overflow every 45 days or so.
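A host-testable sketch of that arrangement under the stated assumptions (16 MHz, /64 prescaler; all names here are mine, not Don's actual code):

```c
#include <stdint.h>

// Sketch: a dedicated overflow tally bumped on every Timer0 overflow and
// never reset by the millis() bookkeeping. fake_tcnt0 stands in for the
// TCNT0 register so the arithmetic can be exercised off-target.
static volatile uint32_t timer0_ovf_count = 0;
static volatile uint8_t fake_tcnt0 = 0;

static void timer0_ovf_handler(void)   // body of the SIG_OVERFLOW0 ISR
{
    timer0_ovf_count++;                // one count per 256 Timer0 ticks
    // ...the existing millis() bookkeeping would continue here, untouched...
}

static uint32_t micros_from_counter(void)
{
    // On the AVR, the pair below would be read with interrupts disabled.
    return ((timer0_ovf_count << 8) + fake_tcnt0) << 2;  // 4 us/tick @ 16 MHz
}
```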