Accurate timers with an AVR

An awful lot of microcontroller projects use timers to repeat an action every few minutes, hours, or days. While these timers can be as accurate as a cheap digital wristwatch, there are times when you need a microcontroller’s timer to measure exactly, losing no more than a few milliseconds a day. It’s not very hard to get a timer to this level of accuracy, as [Karl] shows us in a tutorial.

The problem with keeping time on a microcontroller comes down to the crystal, clock frequency, and hardware prescalers of your chip of choice. [Karl] started his project with an ATmega168 and a 20 MHz crystal, with the prescaler set at 256. This works out to 78.125 interrupts per second, but without floating point arithmetic the fractional interrupt is dropped: counting 78 interrupts per second means one second for the microcontroller is only 0.9984 seconds to you and me.

[Karl]’s solution to this problem was to have the ATMega count out 78 interrupts per second for seven seconds, then count out 79 interrupts for one second. It’s not terribly complicated, and now [Karl]’s timers are as accurate as the crystal used for the ‘168’s clock.
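A minimal sketch of that interleave, assuming a compare-match interrupt is already firing 78.125 times per second (the names here are illustrative, not [Karl]’s actual code):

```c
#include <stdint.h>

/* 78/79 interleave: seven seconds of 78 interrupts plus one second of
   79 interrupts = 625 interrupts per 8 seconds, i.e. 78.125 per second
   on average. Names are illustrative. */
static uint8_t interrupt_count = 0;
static uint8_t second_in_cycle = 0;   /* 0..7 within the 8-second cycle */
volatile uint32_t seconds = 0;

/* Call this from the timer's compare-match ISR. */
void tick(void)
{
    /* Seconds 0..6 take 78 interrupts; second 7 takes 79. */
    uint8_t limit = (second_in_cycle == 7) ? 79 : 78;
    if (++interrupt_count >= limit) {
        interrupt_count = 0;
        seconds++;
        second_in_cycle = (uint8_t)((second_in_cycle + 1) & 7);
    }
}
```

Over any 8-second cycle this consumes exactly 78 × 7 + 79 = 625 interrupts, which is 78.125 per second with no long-term drift.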

Awesome! How does this compare with the Arduino millis() or the time library? Or is this what the article means by ‘accurate as a cheap wrist watch’? I’m interested as I’m building a clock around an Arduino, and debating the cost-effectiveness of an RTC module.

Perfect solution: it produces an even output and requires no processing time from the chip bouncing back and forth on how much it counts every second. [Karl] simply didn’t look/think hard enough for such a solution.


Been there, done that. Temperature changes gain or lose a few seconds over time; I’ve pretty much given up on accurate timing without an RTC on any micro. You can get close with stuff like this, but I’d recommend leaving a way to easily change the trim values without a reprogram.

I know MSP430s with an A/D converter (which is most of them) have an internal temp sensor. Do AVRs? You can get the tempco of most crystals from the manufacturer and compute the correction on the fly. And if you wanted to be really fancy, you could allow external updates of the ‘correct’ time to discipline the oscillator with a PLL.

Actually there are several errors here, both in the article and the original piece. Karl states: “There is no combination of prescalers and reload value that provides an accurate one millisecond interval.” This is wrong, because he could have used a prescaler of 1, which gives 20,000,000 timer ticks per second, or exactly 20,000 per millisecond. Of course this doesn’t say anything about the accuracy of the crystal; if I am correct, the 32 kHz clock crystals are supposed to have better accuracy. It’s possible to run the micro on a 20 MHz crystal and use a second (32 kHz) crystal just for timekeeping.
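The commenter’s point can be checked mechanically: scan the prescaler options for one where a whole number of ticks fits in a millisecond and in a 16-bit timer. A quick host-side sketch (plain C, run on a PC rather than the AVR):

```c
#include <stdint.h>

/* For a 20 MHz clock, find prescaler settings that give an exact 1 ms
   period from an integer reload value that fits a 16-bit timer. */
#define F_CPU_HZ 20000000UL

/* Returns the exact ticks-per-millisecond for a prescaler, or 0 if the
   divided clock is not a whole number of ticks per ms, or the reload
   value would not fit in 16 bits. */
uint32_t exact_ticks_per_ms(uint32_t prescaler)
{
    uint32_t ticks_per_sec = F_CPU_HZ / prescaler;
    if (ticks_per_sec % 1000 != 0) return 0;      /* not exact */
    uint32_t ticks_per_ms = ticks_per_sec / 1000;
    if (ticks_per_ms > 65536) return 0;           /* won't fit a 16-bit timer */
    return ticks_per_ms;
}
```

Prescalers 1 and 8 give exactly 20,000 and 2,500 ticks per millisecond, while 64, 256, and 1024 do not divide evenly; with prescaler 1, a 16-bit timer in CTC mode with a compare value of 19999 (20,000 counts including zero) gives an exact millisecond.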

Unfortunately, unlike MSP430s, I think most AVRs only have 8-bit timer/comparators. On the other hand, you could compare on 200 for a 10-microsecond counter and then use 10 µs intervals as the basic tick of your timer.

Crystal accuracy is measured in parts per million (ppm) over the whole temperature range. Usually they are in the 200 ppm range, or you can get 100 ppm parts for higher accuracy. 200 ppm means you will lose up to 200 seconds every 1,000,000 seconds, which works out to about 17 seconds a day in the worst case.
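The conversion is a simple proportion; a one-liner makes the worst case concrete (integer microseconds, to stay out of floating point):

```c
#include <stdint.h>

/* Worst-case drift for a crystal rated in parts per million.
   One ppm of a second is a microsecond, so over 86400 seconds a day
   the drift is simply 86400 * ppm microseconds. */
int64_t worst_case_drift_us_per_day(int64_t ppm)
{
    return 86400LL * ppm;
}
```

So a 200 ppm part can be off by 17.28 seconds a day, and a 100 ppm part by 8.64 seconds.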

I’ve done tests at a stable temperature which gave me a 40 ms loss per day at 20 °C with a 100 ppm crystal. But real-world conditions can vary.

I’m running ntpd on a few computers in a non-climate-controlled location, and I don’t see anywhere near 200 ppm variation over about 5 °C to 30 °C, which is probably narrower than the range you’re talking about. This is using the ~3.5 MHz ACPI timecounter, so a cheap crystal. Actually, it’s closer to “a few” ppm over that range. Now, I *do* see a 30-60 ppm offset in the center frequency. The spec you’re looking at is probably the rated accuracy over the entire temperature range, plus the frequency offset where the tempco is zero, plus things like stray capacitance. If frequency stability over a wide temperature range is important, you should consider adding temperature compensation (check with the manufacturer for the tempco curve), a TCXO, or an OCXO.

TL;DR: The bigger problem is probably the center frequency of your crystal.

The solution to the 78.125 (or 78,125, depending on which side of the pond you’re on) interrupt problem is quite simple.

If you attempt to count seconds, yes, you’ll have to deal with the extra 1/8th of a tick each second. Instead, just count the raw number of ticks, and display the time by multiplying the tick count by 1000 and dividing by 78125. The 1/8th-of-a-tick problem will still affect the calculated time, but by no more than 1/8th of a tick, and it won’t accumulate second by second, so your timepiece does not drift.
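A sketch of that approach, assuming the 78125 ticks-per-second rate from the article (64-bit math keeps ticks × 1000 from overflowing):

```c
#include <stdint.h>

/* Count raw timer interrupts and only convert to wall time on display.
   At 20 MHz with a /256 prescaler there are 78125 ticks per second, so
   milliseconds = ticks * 1000 / 78125. The truncation error is at most
   one tick (12.8 us) and never accumulates. */
#define TICKS_PER_SECOND 78125ULL

volatile uint64_t gTicks = 0;   /* incremented in the timer ISR */

uint64_t elapsed_ms(uint64_t ticks)
{
    return ticks * 1000ULL / TICKS_PER_SECOND;
}
```

After exactly one day of ticks the display is exactly 86,400,000 ms; no rounding error carries over from one second to the next.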

Of course, the accuracy of the 20 MHz clock becomes a problem. If you want 1-second accuracy over a year, you need the frequency accurate to within 1 part in 31-million-plus, and THAT’s no trivial matter.

The AC mains frequency varies a relatively large amount (for time-keeping) depending on load. Actually, the grid suppliers (public electric companies) use the variation in the mains frequency to determine grid load, and try to correct for it. IIRC there has been some debate about whether the grid suppliers should intentionally run the grid at a higher frequency when there is low load to compensate for lower frequency during peak demand, so the error averages out. Today, in the 1st world at least, grid suppliers compare their mains frequency to a GPS reference. IIRC (again) a company called “Arbiter” supplies many of the GPS clock references to grid suppliers. Where I live (in the 3rd world), the electric company’s line frequency is horrible, all over the map.

IMHO it’s almost always best to run a timer in free-running mode and set a timer match interrupt at the correct intervals. That way you can get 16-bit (or even 32-bit) accuracy by calculating with fractional arithmetic, with an arbitrary clock frequency and prescaler. Consider a 20 MHz clock with the prescaler at 256: there are 78.125 ticks per millisecond, or (obviously) 20,000 ticks per 256 milliseconds. Thus, if you have a 16-bit value, gNextOCR, then on each interrupt:
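(The code snippet itself seems to have been lost in transcription. A reconstruction consistent with the description, using the commenter’s variable names but my guess at the body, might be:)

```c
#include <stdint.h>

/* Reconstruction, not the commenter's original code. gNextOCR is 8.8
   fixed point: the high byte is the timer compare value, the low byte
   is the accumulated fraction. 20000/256 = 78.125 ticks per ms. */
volatile uint16_t gNextOCR = 0;
uint16_t gMilliSecondPeriod = 20000;   /* 78.125 ticks/ms in 8.8 fixed point */

/* Called from the compare-match ISR; the return value is written to OCRA. */
uint8_t next_ocra(void)
{
    gNextOCR += gMilliSecondPeriod;    /* 16-bit wraparound keeps the fraction */
    return (uint8_t)(gNextOCR >> 8);   /* integer part -> next compare value */
}
```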

will correctly (and easily) give the next 8-bit OCRA value with the fractional tick taken into consideration. Adjusting gMilliSecondPeriod by ±1 will adjust the period by 1/65536, or 1.3 s per day, better than the adjustment with a standard 32 kHz crystal. 32-bit variables give a correspondingly finer adjustment.

The same accuracy can be achieved with, e.g., a 1 MHz clock and a prescaler set to 64, giving 15.625 ticks per millisecond; here gMilliSecondPeriod would be 4000 by default, and the same code above gives an OCRA update with *exactly* the same level of accuracy. The key is to make simple fractional arithmetic do the work for you: you can use virtually any clock rate and prescaler and achieve the same overall accuracy and adjustment, though of course you can’t have millisecond interrupts if your combination gives a timer frequency below 1 kHz.
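The 1 MHz / prescaler-64 claim is easy to confirm with a host-side simulation of the accumulator scheme described above (the names here are illustrative; this is not AVR code):

```c
#include <stdint.h>

/* Simulate the 1 MHz / prescaler-64 case: 15.625 ticks per millisecond,
   i.e. a period of 4000 in 8.8 fixed point. */
static uint16_t accum = 0;

/* One millisecond interrupt: returns the tick interval just scheduled. */
uint8_t step_4000(void)
{
    uint16_t before = accum;
    accum += 4000;
    /* The interval is the change in the integer (high-byte) part, mod 256. */
    return (uint8_t)((uint8_t)(accum >> 8) - (uint8_t)(before >> 8));
}
```

Each step schedules 15 or 16 ticks, and the average over any long run is exactly 15.625 ticks per millisecond, just as in the 20 MHz case.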

The first post has a method that is conceptually easy (think of it as Bresenham’s line algorithm for time).

Everyone else sits here arguing about commas as decimal places rather than pointing out that 78.125 DOES NOT NEED FLOATING POINT MATHS to add up.

POINT 125
What about some simple fixed point?
No need to go running like a bull in a china shop straight for the floating point unit.

2nd

NOW, if we are really all about HACK-a-day, why are we just suggesting straight away “buy some company’s ready-made, accurate but proprietary RTC”?

Why doesn’t someone suggest that, instead of trimming in software, trimming one of the load caps on the crystal could make it better than the 20 ppm base at STP?

If you really wanted to be fancy, use a varactor under software control via a D/A converter. Then the CPU could measure the temp and try to trim out that pesky 1 ppm per degree Celsius drift of the crystal.

Use an oven to keep the crystal temperature constant. If the project is mains-powered, you could attach the crystal to the transformer, which should stay at a constant temperature if the current draw is reasonably constant.

There’s a coincidence: I just finished reading “Crystal Oscillator Design and Temperature Compensation” by Marvin Frerking a couple of days ago.

His favorite technique is to do a calibration run over the whole range of temperatures to learn how much offset you get at each temperature. You store those in an array, add a thermometer to your circuit, then use the microcontroller to control a varactor that pulls the crystal’s frequency enough to cancel the thermal effects. IIRC he said a well-tuned unit could get the short-term drift down to 10^-8.
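A sketch of that lookup-and-correct idea; everything here (table values, names) is invented for illustration, not taken from Frerking:

```c
#include <stdint.h>

/* Table-based temperature compensation: offsets (in ppm, scaled x100 to
   stay in integers) calibrated at a few known temperatures, interpolated
   at runtime. The result would drive whatever pulls the crystal (a
   varactor via a DAC) or a software trim. All values are made up. */
#define N_CAL 5
static const int16_t cal_temp_c[N_CAL]   = { -10,   5,  25,  45,  60 };
static const int16_t cal_ppm_x100[N_CAL] = { -350, -90,   0, -80, -300 };

/* Linear interpolation between calibration points; clamps at the ends. */
int16_t offset_ppm_x100(int16_t temp_c)
{
    if (temp_c <= cal_temp_c[0]) return cal_ppm_x100[0];
    for (int i = 1; i < N_CAL; i++) {
        if (temp_c <= cal_temp_c[i]) {
            int32_t dt = cal_temp_c[i] - cal_temp_c[i - 1];
            int32_t dp = cal_ppm_x100[i] - cal_ppm_x100[i - 1];
            return (int16_t)(cal_ppm_x100[i - 1]
                             + dp * (temp_c - cal_temp_c[i - 1]) / dt);
        }
    }
    return cal_ppm_x100[N_CAL - 1];
}
```

The parabola-like shape of the table mimics the turnover behavior of an AT-cut crystal: zero offset near room temperature, falling off toward either extreme.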
