If an enabled interrupt occurs while the MCU is in a sleep mode, the
MCU wakes up. The MCU is then halted for four cycles in addition to the
start-up time, executes the interrupt routine, and resumes execution from
the instruction following SLEEP. The contents of the Register File and
SRAM are unaltered when the device wakes up from sleep. If a reset occurs
during sleep mode, the MCU wakes up and executes from the Reset Vector.

At least for fast pin changes and repeated sleeps, this is demonstrably not
the case. The test program below can be used to verify this on a standard
Arduino Uno Rev3. I compiled it with avr-gcc 4.8.1. The generated assembly
contains no surprises, but I've included it for completeness. Insert a
wire in GND on the header and another on Digital 10 (aka PB2) and tap them
together for a bit to provide bounces to trigger the problem.

Interestingly, the first trap in the test program is never hit, which means
that the ISR does always run correctly the first time.

Here's what I suspect is going on:

Referring to Fig. 17.1, we see that the pin change interrupt hardware uses
an XOR of the input pin signal with a delay-blocked copy of itself. In this
diagram these signals are referred to as PCINT(0) and pin_sync, respectively.
The first time through this works fine. It also looks like it would work fine
whenever the clk signal has been supplied continually for a while, since then
pin_sync assumes its expected value. The problem is that the sleep presumably
disables propagation of clk into this circuit. At any rate, for the deep
sleep modes the clk is completely stopped, so it cannot possibly be available.
It is therefore possible for a pin change that occurs when a 'stale' value is
stored in pin_sync to fail to trigger the XOR output when the clk is reapplied.
The four cycle MCU halt mentioned in the above datasheet section is probably
intended to allow the pin change signal to propagate through the circuit
of Fig. 17-1, thereby causing the ISR to fire at startup. But it's quite
possible that that propagation never gets started in the first place.

This assumes that the hardware that performs the wake-up is in fact independent
of the circuitry shown in Fig. 17.1. I believe this must be the case,
because wake-ups are provided even from deep sleep modes where (according
to the datasheet) no clocks are running.

This is basically a more complicated version of the problem that the
datasheet describes in section 17 for level-triggered interrupts (that they
can wake the processor but not trigger an ISR if they aren't held long enough).
The datasheet should document the related pin change interrupt issue as well.
It would probably be best to document the exceptions to the statement made in
the current section 14.2 at that location, or at least mention there that they exist.

First, you need to make your sleeping "bulletproof". There is a race when going to sleep and the wakeup source triggering. Hmmm--but IME the symptoms are sleeping with the fishes forever. IIRC, your symptoms are awakening but not ISR?

The short answer is to put an SEI right before the SLEEP. I'll try to find the thread with more info.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

First, you need to make your sleeping "bulletproof". There is a race when going to sleep and the wakeup source triggering. Hmmm--but IME the symptoms are sleeping with the fishes forever. IIRC, your symptoms are awakening but not ISR?

Yes, that's correct. The program I show is a simplification in which I never disable interrupts, in order to better isolate the particular problem I report. In the real life program where this problem arose, I do disable interrupts and sei() immediately before sleep() as documented in AVR libc manual and in the threads you mention, and I haven't ever gotten a failure-to-wake. When running the simplified test program it's always possible to twiddle the wires some more to get a wake-up. The unexpected thing (datasheet bug I think) is that you don't always get an ISR execution on wake, I think for the reasons I describe above (basically statefulness and clock dependence in the circuit of datasheet Fig. 17.1).

Well, if you are going to post a test program that isn't bulletproof, I'd have a hard time to continue further with testing and so forth. Probably just me.

As I mentioned in your other thread, I use the pin-change just to wake up and do nothing in the ISR but disable sleep. Then, I go through my normal debounce mechanism. So even with a false hit, it will be a few tens of milliseconds before going back to sleep.

Have you looked at your triggering "noise" on a 'scope? What are the characteristics? Nothing beyond Absolute Maximum Ratings with static or other noise spikes?

As you are going back to sleep within a few microseconds, and aren't clearing the PCIFR, my only guess is a race between the noise and the sleep. I forget whether wakeup needs an edge on the flag or not.

Well, if you are going to post a test program that isn't bulletproof, I'd have a hard time to continue further with testing and so forth. Probably just me.

I'm not clear what you want me to do to make it more bulletproof. Twiddling the I-bit seems to me to be a confounding factor, i.e. less bulletproof, not more, but in any case what I originally posted in the other thread does do that.

Quote:

As I mentioned in your other thread, I use the pin-change just to wake up and do nothing in the ISR but disable sleep. Then, I go through my normal debounce mechanism. So even with a false hit, it will be a few tens of milliseconds before going back to sleep.

I can work around it one way or another, but I'd like to know for sure what's actually going on here. With current behavior it's not possible to reliably differentiate between WDT wakes and pin change wakes in my original application, which is annoying and will mean slightly worse performance for the product.

Quote:

Have you looked at your triggering "noise" on a 'scope? What are the characteristics? Nothing beyond Absolute Maximum Ratings with static or other noise spikes?

I haven't, and you're right that I should. I'll report back. It's the Arduino's own GND though, so there shouldn't be anything there. If anything I'd expect a reset, which I wouldn't expect to put me in the trap where got_pbpci isn't set after wake.

Quote:

As you are going back to sleep within a few microseconds, and aren't clearing the PCIFR, my only guess is a race between the noise and the sleep. I forget whether wakeup needs an edge on the flag or not.

How can wake possibly depend on PCIFR, when PCIFR needs many clk cycles to get set and wake has to happen from modes where the datasheet says no clks are running (section 14.6)?

The wake must happen some other way, and the PCIFR is supposed to always correspond, but it does not.

If I am reading the datasheet correctly for the PC Interrupt schematic, a logic level change on a pin enabled for PCI will wake the cpu, but the new logic level must be held for at least one clock cycle to latch the change and cause an interrupt to be generated. In Idle mode, the min time would be 50ns at 20 MHz, longer for slower clock speeds, but could be much longer at deeper sleep modes as it would have to be held until a clock was available. App notes 1200/1201 describe the interrupt requirements, but are still vague on the details of the timing.

If I am reading the datasheet correctly for the PC Interrupt schematic, a logic level change on a pin enabled for PCI will wake the cpu, but the new logic level must be held for at least one clock cycle to latch the change and cause an interrupt to be generated.

That's supposedly only for level interrupts:

If a level triggered interrupt is used for wake-up from Power-down, the required level must be held long enough for the MCU to complete the wake-up to trigger the level interrupt. If the level disappears before the end of the Start-up Time, the MCU will still wake up, but no interrupt will be generated. "External Interrupts" on page 71. The start-up time is defined by the SUT and CKSEL Fuses as described in "System Clock and Clock Options" on page 26.

Pin change interrupts on PCINT23...0 are detected asynchronously. This implies that these interrupts can be used for waking the part also from sleep modes other than Idle mode.

The schematic for pin change interrupts doesn't make clear how asynchronous detection is achieved, and only shows timing w.r.t. the I/O clock. Since in power-down mode the I/O clock isn't running, this schematic doesn't tell the whole story. An enabled pin change event triggers a wakeup (through a mechanism not completely described by that schematic), which then sets the system and other clocks into motion. On an UNO, the CKSEL/SUT fuses select a start-up of 16K cycles from power-down. That's about 1 ms on the 16 MHz UNO. Presumably the I/O clock doesn't start until after that.

If bounce on the pin-change-enabled pin which caused the wakeup results in that I/O returning to its pre-wakeup state, the logic which normally results in setting of PCIFn will not be activated, since pcint_in_(0) will never present a pulse. That pulse is a result of the interaction between the signal, the two flip-flops, the XOR gate, and the I/O clock. If, due to bounce, PCINT(0) is never seen to change from its pre-wake-event state while the I/O clock is running, only the wake will occur, not the interrupt, since PCIF won't be set. At least, at first.

On the above UNO example, the CPU will resume from the sleep instruction 4 clock cycles after the wakeup. Those 4 clock cycles are precisely the number of cycles required by the PCINT logic to set PCIFn. I'd hazard a guess that this isn't a coincidence.

As per the previously posted .lss, the first instruction after wakeup will disable interrupts globally. Any setting of PCIFn after that will not result in the execution of the ISR. There are then 11 more cycles before got_pbpci is queried. If it tests as 0 (as would be the case if the interrupt did not fire, i.e. PCIFn did not get set during the 4 clock cycles between the end of the 16K wakeup period and the start of the CPU clock), then there will be another 7 cycles before PCIF0 is queried. So there is an 18 cycle window during which it's possible for PCIFn to get set by the now active pin change logic. 18 cycles represents 1.125 us on the UNO. That doesn't seem like much compared to the 1000 us wakeup time, but the ratio doesn't necessarily imply a statistical likelihood of seeing this behaviour only 0.1125% of the time. The exact behaviour to expect would depend upon the nature of the bounce on the signal waking the device.

I'd gather that using a faster startup time might improve matters, giving a bouncing signal less time to fall back to the pre-wakeup state before the synchroniser gets a hold of it. Using the 8 MHz internal RC would give you a 6 clock wakeup period in all cases. At 8 MHz, that's only 750 ns. I'd suspect that most any physically bouncing signal would not act that quickly.

EDIT: Hmmm... I'd conflated the OP's situation using idle mode with the referenced threads which used power down mode. The above analysis is perhaps incomplete, since the I/O clock continues to run in idle mode.

"Experience is what enables you to recognise a mistake the second time you make it."

EDIT: Hmmm... I'd conflated the OP's situation using idle mode with the referenced threads which used power down mode. The above analysis is perhaps incomplete, since the I/O clock continues to run in idle mode.

It runs to save start-up time, but I'm not sure that means it's necessarily routed to the pin change hardware in idle mode. It doesn't clock instructions in idle, probably other subsystems are cut off as well. I haven't noticed any docs on this question.

If the problem is indeed some stale value being latched in the start of the pipeline of 17.1 due to the clock dropping out, it should be possible to suppress it by somehow flushing that pipeline before going to sleep, thereby restoring the start-up state in which things do always seem to work as advertised. To do this, one would have to ensure that no new bounce input propagated into the circuit during the flush. I guess this might be done by reconfiguring the pin as output driving low (to avoid damage, given the GND value of the input wire) and _Nop() a bit, but I haven't tried it yet because it seems sort of wrong even as a test and is certainly not a deployable approach.

The only consequence of idle mode is that the cpu clock is halted. All other clock domains continue to run.

And there is no 'pipeline'. The synchroniser is simply two flip-flops arranged in series such that they turn an edge into a pulse.

I still suspect that this is all due to signal bounce i.e. noise on the I/O pin used to wake. With that suspicion in mind, I've run your test code using a signal generator as the source connected to PB2. Running at 300 Hz, there are 600 edges every second. It's been running since last night. 17 million edges and wake-ups. Not a single missed interrupt.

While it would be nice if noise could be handled more gracefully, the solution is clear: better hardware design. Simply hooking up a tact switch to a naked I/O pin is a bad idea for lots of other reasons. A cap and resistor will likely solve your problem.

Another effective means of preventing this problem using software only is to properly debounce the button press. Even something as simple as adding a delay in the pin-change interrupt is sufficient:

With this I have been unable to reproduce the problem. The delay could be added between the sleep and the cli, but then all interrupt sources would experience that delay. If your goal is to differentiate between wakeups due to WDT vs button presses, better to put the delay in the pin change interrupt to avoid spending more time awake than necessary. However, proper hardware design i.e. signal conditioning on your pin change input is a better idea.

I wonder if there's a table missing from the datasheet? In the one for the 1284P we have this...

"This forum helps those that help themselves."

"How have you proved that your chip is running at xxMHz?" - Me

"If you think you need floating point to solve the problem then you don't understand the problem. If you really do need floating point then you have a problem you do not understand." - Heater's ex-boss

I'm pretty sure that figure is quoted in the 328 datasheet, too. Can't check here. However that's not the issue per se. The complaint is that this lower limit doesn't seem to apply to the wake-up mechanism. Any edge will initiate a wake-up. This makes sense since the 50ns is derived from the max clock speed of 20MHz. Running at 1MHz would result in a minimum pulse width requirement of 1000ns. But the wakeup mechanism seems to be purely edge triggered. It would have to be in order to wake from sleep modes where the I/O clock is stopped.

Yes I know, and that's how I have tested his code. While this will likely result in significantly more noise and bounce than most buttons or switches, it does not preclude the problem with 'real' switch hardware. Some can be quite terrible w.r.t. bounce.

But the wakeup mechanism seems to be purely edge triggered. It would have to be in order to wake from sleep modes where the I/O clock is stopped.

But the logic diagram, although conceptual, clearly shows the use of a clock signal. I suspect that it uses a version of CLKi/o which doesn't get stopped.

In power-down and power-save modes, the oscillator is stopped (unless, of course, you are using an external clock). It would not otherwise be possible to get down to 100nA. So where does this special version of CLKI/O come from?

In power-down and power-save modes, the oscillator is stopped (unless, of course, you are using an external clock). It would not otherwise be possible to get down to 100nA. So where does this special version of CLKI/O come from?

Good point.

It does rather suggest that there is a lot more going on than is apparent in the timing diagram.

Whilst this statement is about the external interrupts, I'd be surprised if this doesn't apply to pin change as well. Any logic attached to a pin is going to have propagation delays. Whilst 50 ns is an age in modern logic, we are talking about a uC here and not a GP logic chip.

It's been a long day and I'm too tired to think any more but I wonder if any of these are relevant?

Quote:

1.5. Important Points to be Noted when using External Interrupts

1. If a level triggered interrupt is used for waking up the device from Power-down, the required level must be held long enough for the MCU to complete the wake-up, to trigger the level interrupt. If the level disappears before the end of the start-up time, the MCU will still wake up, but no interrupt will be generated.
2. Both INT7:0 and pin change interrupts can be used to wake up the device from any sleep modes. But INT7:4 should be configured to sense level interrupt to wake up the device from sleep mode other than idle mode.
3. If enabled, a level triggered interrupt will generate an interrupt request as long as the pin is held low.
4. When changing the ISCn bit, an interrupt can occur. Therefore, it is recommended to first disable INTn by clearing its Interrupt Enable bit in the EIMSK Register.
5. Before enabling an interrupt, it is recommended to clear the flag bit of the corresponding interrupt, because when the flag bit is set, the interrupt will be triggered the moment we enable the interrupt.
6. If enabled, interrupts will be triggered even when the pins are configured as outputs. This provides a way of generating a software interrupt.
7. If a logic high level ("one") is present on an asynchronous external interrupt pin configured as "Interrupt on Rising Edge, Falling Edge, or Any Logic Change on Pin" while the external interrupt is not enabled, the corresponding External Interrupt Flag will be set when resuming from the power-down, power-save, and standby sleep modes.
8. Once the CPU enters the ISR, the global interrupt enable bit (I-bit) in SREG will be cleared so that all other interrupts are disabled. In order to use nested interrupts, the I-bit has to be set by software when the CPU enters an ISR.

Slightly modified from the code in the OP's other thread. Specifically:

PCIFRn is cleared before enabling interrupts and sleeping

100 ms delay in the pin change ISR

PB5 is not toggled forever

To elicit the problematic behaviour, remove the delay from the ISR.

Whilst this statement is about the external interrupts, I'd be surprised if this doesn't apply to pin change as well.

I'd agree, but this is in reference to triggering an interrupt, not a wakeup, and is linked to the I/O clock speed. 50 ns is the length of one cycle at 20 MHz. I'd hazard a guess that the real minimum is not an absolute 50 ns, but whatever the length of one cycle is at the system clock speed. That conclusion is evident from the schematic and timing diagram in post #11. But again, this cannot be a limit for wakeup events. With no clock running, wakeup cannot be contingent upon the same 1/F_OSC limit.

I lack the equipment to capture a bouncing signal with sufficient resolution to really see what's going on. However, if anyone else does, I might suggest capturing three channels:

It is, but it's also the sort of propagation delay you'd see on a CMOS gate running on 5V, or the minimum clock pulse width on a CMOS flip-flop at 5V.

It is, but it's also the sort of propagation delay you'd see on a CMOS gate running on 5V, or the minimum clock pulse width on a CMOS flip-flop at 5V.

I'm no expert on CMOS logic, but I'd guess this not to be the case. I'd expect propagation delays in the low single-digit nanoseconds, or sub-nanosecond i.e. picosecond range. Were it really 50 ns, how on earth is a CMOS AVR core meant to operate at 20 MHz? That's a propagation delay as long as one full CPU cycle of the system clock. Google searches for 'typical cmos propagation delay' show figures around 100 picoseconds or lower.

got another AVR family sample nearby to try it on?

At the moment, no. The only other model I have on hand is t85, but not here at home. I likely won't be at my shop for a few days at least.

I.e., is the problem just in this model/die silicon, or inherent in AVR8 pin-change handling?

My bet is that it's inherent.

At least for fast pin changes and repeated sleeps, this is demonstrably not the case.

How fast is fast?

What is the shortest pulse that you are testing?

From section 13 of the datasheet: (ATmega48A/PA/88A/PA/168A/PA/328/P 8271G–AVR–02/2013)

Note: Note that if a level triggered interrupt is used for wake-up from Power-down, the required level must be held long
enough for the MCU to complete the wake-up to trigger the level interrupt. If the level disappears before the end of the
Start-up Time, the MCU will still wake up, but no interrupt will be generated.

Were we talking about discrete CMOS logic chips in a modern process then I'd agree with you. But we're talking about a 20MHz processor made in a relatively old (=slow) process.

The datasheet is very light on details, but look at the analog comparator: a 500 ns propagation delay! The datasheet is littered with delays in the tens of nanoseconds because the chip needs to be low power, which means a larger chip process to keep leakage low, which in turn means a slower chip with slower edges.

Some newer chips have logic you can get at. Sadly there's no numbers in the 817 datasheet for the configurable logic cells but there are for the cells in the newer PIC16s. The typical delay from input to output for a non-clocked signal is in the order of 20ns+ for a chip clocked at 32MHz maximum which executes at 8MIPS.

Without a lot more information from Atmel this is going to be one of those 'features' that we never really understand.

Again, I'm not an expert in CMOS or any other process, but I'm struggling to understand how a core built with synchronous logic can run at X Hz when the inherent propagation delay is 1/X seconds. What's more, the AVR core can be clocked even higher. Quite a bit higher, in fact. The limiting factor is flash access, not core logic. How can synchronous logic be made to work reliably when signal propagation is as long or longer than the system clock's period?

I agree that the timing diagram is incomplete, but only inasmuch as it omits details of the wakeup mechanism. It seems complete w.r.t. the interrupt mechanism.

I ran another test with the above test code but at 62.5 kHz system clock speed. I then fed 2 us pulses into PB2. This is 40x longer than the presumed 50 ns lower limit suggested by most AVR datasheets. It is, however, only one eighth of a cycle at 62.5 kHz, which is 16 us. Without exception, the 2000 ns pulse results in a wakeup, but fails to trigger the ISR. This suggests to me that the 50 ns lower limit quoted in the datasheet is a consequence not of an inherent propagation delay, but of the one-cycle limit imposed by the synchroniser, and that the quoted figure is derived from the device's maximum spec'd clock speed. However, I acknowledge that this is not conclusive proof. It does, at least, appear to be a shortcoming of the datasheet, although not the one represented in the OP.

Again, I'm not an expert in CMOS or any other process, but I'm struggling to understand how a core built with synchronous logic can run at X Hz when the inherent propagation delay is 1/X seconds. What's more, the AVR core can be clocked even higher. Quite a bit higher, in fact. The limiting factor is flash access, not core logic. How can synchronous logic be made to work reliably when signal propagation is as long or longer than the system clock's period? I agree that the timing diagram is incomplete, but only inasmuch as it omits details of the wakeup mechanism. It seems complete w.r.t. the interrupt mechanism.

The fact that 17-1 fires pcint_in_(0) at the same instant the input goes high is pretty weird, especially given the presence of two flops before that point.

Not at all.

Note the XOR gate, and the fact that one of its inputs is taken directly from PCINT(0).

Remember:

The synchroniser is simply two flip-flops arranged in series such that they turn an edge into a pulse.

... and:

That pulse is a result of the interaction between the signal, the two flip-flops, the XOR gate, and the I/O clock.

The purpose of the first two flip-flops (apart from coping with metastability) is to provide a delayed copy of the input signal.

Since an XOR gate's truth table is:

0 XOR 0 = 0
0 XOR 1 = 1
1 XOR 0 = 1
1 XOR 1 = 0

... the output of the XOR will be '1' only for the duration of time that the raw input signal differs from the delayed input signal. The effect is that the output of the XOR gate is a pulse whose rising edge is synchronous with the edge presented on the input.

However, the input signal must remain stable for at least one full cycle, or it may be missed. The pulse generated by the XOR gate will never be longer than the delay provided by the first two flip flops, but it might be shorter. It >>will<< be if the input pulse is shorter. That shorter pulse coming out of the XOR will also be 'missed' by the downstream flip flops, since it can only come when those flip flops are already latched by clk. That's what my last test at 62.5 kHz demonstrates.

Including the very first pulse (i.e. ISR never fires even once)? I ask because I think the code you're referring to here is only testing the flag that gets cleared again each time.

Yes. You can try it yourself.

The code I'm referring to is the code I most recently posted, and it is directly derived from the code in your other thread.

It is an interesting test, but it is somewhat contrived. No real app would do what it does. I think it has been helpful in characterising the behaviour of the AVR during pin change wakeups, but it has not in my view revealed any flaws in design. The only thing I might add to the datasheet would be a clarification of the 50 ns lower limit to correctly quote 'one cycle at F_OSC' instead, and to include pin change interrupts (likely all external interrupts) in the caveat currently reserved only for level interrupts i.e. a wakeup signal must persist until the wakeup procedure is complete and the cpu resumes execution. This requirement is evident in the partial schematics and timing diagrams provided, if not in the text.

There is only one issue at work here, which is that the wakeup mechanism is edge triggered and fully asynchronous, but the interrupt/flag mechanism is not. The wakeup mechanism >>must<< be so, because it works in the absence of any clock. The interrupt/flag mechanism is pulse triggered and not fully asynchronous. It >>must<< be so, as it is expressed within a synchronous design. Hence the need for a synchroniser.

These facts are not an impediment to your goal. You wish to differentiate between a pin-change interrupt and a WDT wakeup, to quickly decide what action should be taken and resume sleeping if you determine that the wakeup was not due to a button press. That can be achieved by setting a flag in WDT_vect and testing it after wakeup. If the flag is not set, the wakeup occurred for some other reason. Provided you have only one other wakeup source i.e. the pin change interrupt, then that >>must<< be the source of the wakeup, regardless of whether or not the corresponding interrupt ran. Once a pin change has been identified as a wakeup source, your app can take the appropriate action to debounce and process that event.

The fact that 17-1 fires pcint_in_(0) at the same instant the input goes high is pretty weird, especially given the presence of two flops before that point.

Not at all.

Note the XOR gate, and the fact that one of its inputs is taken directly from PCINT(0).

Remember:

The synchroniser is simply two flip-flops arranged in series such that they turn an edge into a pulse.

... and:

That pulse is a result of the interaction between the signal, the two flip-flops, the XOR gate, and the I/O clock.

The weird thing about 17.1 is precisely the fact that it doesn't use the clock at all in making a determination about the input value, and then mixes the output with other logic (PCMSK). It looks like the (incorrect) greedy-path synchronizer of 'Fourteen Ways to Fool Your Synchronizer'.

Quote:

The purpose of the first two flip-flops (apart from coping with metastability) is to provide a delayed copy of the input signal.

If the timing diagram is correct and 17.1 copes with metastability at all, I don't think it does it in the first two flops. See Figs. 8 and 9 of

The arbitrated output is never driven in the same clock cycle as the input. In general, arbitrators trade increased minimum time of operation for increased settling-confidence within that time window.

Quote:

However, the input signal must remain stable for at least one full cycle, or it may be missed. The pulse generated by the XOR gate will never be longer than the delay provided by the first two flip flops, but it might be shorter. It >>will<< be if the input pulse is shorter. That shorter pulse coming out of the XOR will also be 'missed' by the downstream flip flops, since it can only come when those flip flops are already latched by clk. That's what my last test at 62.5 kHz demonstrates.

Including the very first pulse (i.e. ISR never fires even once)? I ask because I think the code you're referring to here is only testing the flag that gets cleared again each time.

Yes. You can try it yourself.

I'm not set up for it but I'll take your word, thx very much.

Quote:

There is only one issue at work here, which is that the wakeup mechanism is edge triggered and fully asynchronous, but the interrupt/flag mechanism is not. The wakeup mechanism >>must<< be so, because it works in the absence of any clock. The interrupt/flag mechanism is pulse triggered and not fully asynchronous. It >>must<< be so, as it is expressed within a synchronous design. Hence the need for a synchroniser.

Like I said, I don't see why. The wake-up signal fired (somehow) and so far as I understand there's no reason in principle that the ISR mechanism shouldn't be able to be made aware of this. The chip just doesn't do it. It's a bug with respect to the datasheet, and should be fixed at one end or the other. I agree with your assessment that fixing the datasheets is the way, including both the ones that wrongly give a fixed time and the ones that don't mention the issue at all.

Quote:

These facts are not an impediment to your goal. You wish to differentiate between a pin-change interrupt and a WDT wakeup, to quickly decide what action should be taken and resume sleeping if you determine that the wakeup was not due to a button press. That can be achieved by setting a flag in WDT_vect and testing it after wakeup. If the flag is not set, the wakeup occurred for some other reason. Provided you have only one other wakeup source i.e. the pin change interrupt, then that >>must<< be the source of the wakeup, regardless of whether or not the corresponding interrupt ran. Once a pin change has been identified as a wakeup source, your app can take the appropriate action to debounce and process that event.

Good idea and probably would work, but given this issue I'm going to be paranoid and never try to deduce anything at all about wake-up sources based on ISR actions.

I agree with Joey's take on the situation. The sleep logic is probably a set/reset flip-flop. Once triggered, the CPU starts up and then the interrupt logic comes into play. There are two mechanisms at play and you fall into the crack between them. I think you'll find other chips will work similarly.

Perhaps, but I think it's a rather specious argument. Maybe some >>other<< interrupt mechanism could do it that way, but not the one depicted. I'm struggling to understand why it matters so much? This all seems to be tilting at windmills. The only consequence of the 'problem' is a wakeup with no associated interrupt flag. Solutions exist right now, without the need to redesign the silicon. Proper signal conditioning in hardware (which any good circuit design should do anyway), and proper handling in software. The only offence here is a minor omission in the datasheet. To be honest, of the datasheet omissions we've all seen this is a rather benign one.

Quote:

I'm going to be paranoid and never try to deduce anything at all about wake-up sources based on ISR actions.

Suit yourself, but I like to use all the crayons in the box. The only potential issue here is with external interrupts. The software fix I suggested involves the WDT interrupt, which is internally generated. It is not subject to the same issue.

And by the way, it's not interrupts per se which are in question, but the setting (or not) of the interrupt flags which govern them.


Quote:

Perhaps, but I think it's a rather specious argument. Maybe some >>other<< interrupt mechanism could do it that way, but not the one depicted. I'm struggling to understand why it matters so much? This all seems to be tilting at windmills. The only consequence of the 'problem' is a wakeup with no associated interrupt flag. Solutions exist right now, without the need to redesign the silicon. Proper signal conditioning in hardware (which any good circuit design should do anyway), and proper handling in software. The only offence here is a minor omission in the datasheet. To be honest, of the datasheet omissions we've all seen this is a rather benign one.

Well, we're repeating ourselves, and don't actually disagree much about what should be done. Perhaps I'm a nut for caring, but I like to think of uCs as tiny perfect little things. In this case the pin in question is exposed directly to arbitrary client hardware that I don't control. Probably it should have got better filtering or something, but it's too late for that in this design; it will just have to come with more demanding requirements for the way the pin is used.

I emailed Atmel to see what they say, so it can go in the errata thread maybe. Thanks again for all your help.

Quote:

These facts are not an impediment to your goal. You wish to differentiate between a pin-change interrupt and a WDT wakeup, to quickly decide what action should be taken and resume sleeping if you determine that the wakeup was not due to a button press. That can be achieved by setting a flag in WDT_vect and testing it after wakeup. If the flag is not set, the wakeup occurred for some other reason. Provided you have only one other wakeup source i.e. the pin change interrupt, then that >>must<< be the source of the wakeup, regardless of whether or not the corresponding interrupt ran. Once a pin change has been identified as a wakeup source, your app can take the appropriate action to debounce and process that event.

Incidentally, this approach is less useful than it sounds, because when the flag *is* set in WDT there's still no guarantee that a simultaneous pin change hasn't occurred. So for at least half the cases all the diagnosis has to be done entirely outside the ISRs anyway.

Quote:

Incidentally, this approach is less useful than it sounds, because when the flag *is* set in WDT there's still no guarantee that a simultaneous pin change hasn't occurred. So for at least half the cases all the diagnosis has to be done entirely outside the ISRs anyway.

Even if a simultaneous pin change has occurred, once you go back to sleep from the WDT wakeup, the device will wake up immediately again as a result of the pending pin change flag.

And >>all<< the diagnosis is done entirely outside the ISR. That is usually appropriate. The ISR (in both the case of the WDT and pin change) should be used only to take note of the event, not to process it. You >>may<< wish to do some processing in an ISR if you need to do timely processing of that event. If the input is a button or other bouncing signal (limit switch, etc), you certainly don't need to be timely. If you're saying that the incoming signal doesn't bounce per se but could have analog signal components or frequency components higher than allowed by the AVR hardware (as mentioned, the minimum pulse width which can be reliably detected is 50 ns [or more likely 1/F_OSC]), then it would seem that is a consequence of your (or your client's) hardware design. You've exceeded the specifications of the device. You may require external signal conditioning in order to reliably detect these.

If it's just bouncing a physical switch or the like, then something as simple as a busy loop will suffice:
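A minimal sketch of such a busy loop (not the original poster's code; the names and the STABLE_SAMPLES threshold are illustrative, and the pin read is abstracted behind a callback so the logic can be exercised off-target):

```c
#include <stdbool.h>
#include <stdint.h>

/* Busy-loop debounce sketch: keep sampling until the pin has held the
 * same level for STABLE_SAMPLES consecutive reads. On the AVR the
 * callback would be something like (PINB & _BV(PB2)) != 0, with a
 * _delay_us() between samples to set the sampling period. */

#define STABLE_SAMPLES 16

static bool debounce(bool (*read_pin)(void))
{
    bool level = read_pin();
    uint8_t stable = 1;

    while (stable < STABLE_SAMPLES) {
        bool sample = read_pin();
        if (sample == level) {
            stable++;
        } else {              /* still bouncing: restart the count */
            level = sample;
            stable = 1;
        }
    }
    return level;             /* settled level of the pin */
}
```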

You could be more sophisticated about it if you wanted to perform useful work while you waited for the debouncing to complete, or if you wanted to do some housekeeping with every WDT wakeup as well, or perhaps only after every X WDT wakeups. Or perhaps you want to save a bit more power and prescale the system clock to /256 during the busy loop. Those are exercises left to the reader.

