Securing interrupt-driven timers

In order to ensure that the read of a timer/interrupt-driven clock counter variable doesn’t get corrupted, instead of disabling ALL interrupts, I decided to simply shut off the timer, do the read, and then turn it back on. This should keep the clock counter variable safe while allowing other interrupts to jump in and do their thing during the clock read. Is this insane?

In a word (and in my opinion): yes.

The clock read itself takes less time than an interrupt service routine. Disabling interrupts doesn’t ignore them completely; it just defers them until they are re-enabled. If you turn off the timer, on the other hand, you lose time: your counter variable runs slow, and the more often you read it, the slower it gets.

Thanks for helping. I panic easily.

That’s the kicker right there for me. I’ll change it to disable all interrupts.

Also, I currently have the interrupt handler for the clock written as a function that is called from the interrupt service routine every time it runs. This function checks whether the timer triggered the interrupt and, if so, increments the clock. Would it be significantly better to hack out a macro (to put the code directly in the ISR) for the check/increment, or is it fine as it is? Also, no other interrupts can occur until the ISR returns, right?
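A minimal sketch of the arrangement I mean, with made-up names (`timer_flag` standing in for the real hardware interrupt-flag bit, modeled here as a plain variable so the logic can be tried on its own):

```c
#include <stdint.h>

/* Hypothetical stand-ins for hardware state; on the real part these
   would be SFR bits set by the timer and cleared by software. */
volatile uint8_t  timer_flag = 0;   /* set when the timer rolls over */
volatile uint32_t sysclock   = 0;   /* the clock counter variable    */

/* Called from the shared ISR on every interrupt, whatever the source. */
void clock_handler(void)
{
    if (timer_flag) {      /* did the timer cause this interrupt? */
        timer_flag = 0;    /* acknowledge the interrupt           */
        sysclock++;        /* one more tick                       */
    }
}
```

If the source wasn’t the timer, the function falls straight through and the ISR goes on to check its other sources.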

An 8-bit timer takes one read, so it can’t be affected by the timer continuing to run; the read is atomic.

A 16-bit timer (e.g. Timer1) set in 16-bit mode allows a read of TMR1L to cause TMR1H to be latched for you, so you can read the low/high bytes and maintain relational integrity. However, the code can still be interrupted between the two reads, and that interrupt could be for the same timer we are reading. The interrupt could end up reading or resetting the timer and leaving a new value in the TMR1H latch register as a result; we return from the interrupt handler and complete the read of TMR1H, and the low/high byte reads end up skewed (but not very often, which makes this difficult to debug). To guarantee that this does not happen, you need to disable low-priority interrupts during the read-timer-counter instruction sequence. Disabling the interrupts just adds to the interrupt latency (which isn’t predictable anyway, because a high-priority interrupt can jump in front of a low-priority interrupt).

Disable_LoPriorityINT;  // no timer interrupt can slip in between the reads
save_lo = TMR1L;        // reading TMR1L latches the high byte into TMR1H
save_hi = TMR1H;        // read the latched high byte
Enable_LoPriorityINT;

This also applies to user-created monotonically increasing sysclock variables. For example, suppose a 32-bit sysclock variable holds 0x00.10.FF.FF and we read it high byte to low byte: we read 0x00, 0x10, then get interrupted by the timer interrupt, which adds 1 to the sysclock, making it 0x00.11.00.00; we resume and read 0x00, 0x00. The value we end up processing, 0x00.10.00.00 (we had already read the upper two bytes), is invalid and could generate bad results later on, depending on what we’re using it for. Reading low to high has a similar issue: we could end up with 0x00.11.FF.FF in the above example. The only way to prevent this is to protect reads by disabling interrupts. Any variable that is used by an interrupt routine is volatile and, as the name suggests, can change in value while we’re attempting to use it (this applies to multi-byte variables only).
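Putting that together, the protected read looks like the TMR1 sequence above. A sketch, with the disable/enable macros and the interrupt-enable state modeled as plain C so it stands alone (on the real part these would be the GIEL bit or equivalent):

```c
#include <stdint.h>

volatile uint32_t sysclock;   /* incremented by the timer interrupt */

/* Hypothetical stand-ins for the real interrupt enable/disable;
   the flag is modeled so the sequence can be checked in isolation. */
static int lo_int_enabled = 1;
#define Disable_LoPriorityINT() (lo_int_enabled = 0)
#define Enable_LoPriorityINT()  (lo_int_enabled = 1)

uint32_t read_sysclock(void)
{
    uint32_t copy;
    Disable_LoPriorityINT();  /* no tick can land mid-copy          */
    copy = sysclock;          /* compiler copies this byte by byte
                                 on an 8-bit core, hence the guard  */
    Enable_LoPriorityINT();
    return copy;
}
```

The window with interrupts off is just the four byte moves, so the added latency is small and bounded.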

One thing I’m using to eliminate some of these problems is to create interval_timer# variables, since interval timing is what I use the sysclock for a lot; for example, I need a 48 ms timer to prevent using a sensor again, due to duty-cycle requirements. These interval timers have code at interrupt time of:

if (timer_interval#) --timer_interval#; // if non-zero, decrement

The timers will run down to zero and stop. My code sets the time period I want in the interval_timer# variable and then checks at appropriate times to see if the timer has been decremented down to zero by the interrupt routine. (This is in addition to the sysclock). For example, in the fast loop, code is testing one interval timer to determine if I can activate the sensor again.
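Both halves of that scheme, sketched with made-up names (the array size and the 48-tick hold-off are just the examples from above; on real hardware `tick_intervals` would be called from the timer ISR):

```c
#include <stdint.h>

#define NUM_INTERVALS   5
#define SENSOR_HOLDOFF  0   /* index of the sensor duty-cycle timer */

/* Single-byte timers: reads from the main loop are atomic. */
volatile uint8_t interval_timer[NUM_INTERVALS];

/* Interrupt side: called once per tick; non-zero timers run
   down to zero and stop there. */
void tick_intervals(void)
{
    for (uint8_t i = 0; i < NUM_INTERVALS; i++)
        if (interval_timer[i])
            --interval_timer[i];
}

/* Main-loop side: start a 48-tick hold-off, then poll for expiry. */
void start_sensor_holdoff(void) { interval_timer[SENSOR_HOLDOFF] = 48; }
int  sensor_ready(void)         { return interval_timer[SENSOR_HOLDOFF] == 0; }
```

The fast loop just calls `sensor_ready()`; no interrupt masking is needed because each timer is one byte.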

I have only come across uses for about 5 interval timers, so it is not a lot of additional overhead within the interrupt handler. I set the intervals up to use only a single byte so reads are atomic, but since these counters only count down and I only test them against zero, reading from high byte to low byte should let me use multi-byte counters too, without the disable-interrupt method of synchronization.
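For a general multi-byte counter (one that counts up, or one whose exact value matters), another way to avoid disabling interrupts is the double-read loop: read the high byte, then the low byte, then the high byte again, and retry if the high byte moved. A sketch with made-up variable names, assuming the counter is monotonic so an unchanged high byte implies a consistent pair:

```c
#include <stdint.h>

/* Hypothetical split counter; the ISR increments lo and carries into hi. */
volatile uint8_t sysclock_hi, sysclock_lo;

uint16_t read_clock16(void)
{
    uint8_t h1, l, h2;
    do {
        h1 = sysclock_hi;   /* first high-byte sample              */
        l  = sysclock_lo;
        h2 = sysclock_hi;   /* if this differs, a carry landed in
                               between, so sample again            */
    } while (h1 != h2);
    return ((uint16_t)h1 << 8) | l;
}
```

The cost is an occasional retry instead of added interrupt latency.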