Please, keep in mind that I am not a programmer. Please keep explanations as simple as possible, if you can, as that would really help me out. The fewer semicolons there are, the better off we all will be.
The folks on my team are curious if anyone has some sage advice to offer about how many encoders – and more importantly, how many interrupts – the current robot controller is capable of deciphering.
We have two encoders mounted to our drive train and would like to add a third to a mechanism that moves about as fast as the drive train. Are we going to blow something up?
Put simply, the programming mentor on our team gets nervous if we try to use more than 2 encoders. I'm not sure of their settings, however. For a bot that requires a lot of interrupt-free calculations to operate smoothly, I'll take his advice.
Now for some technical/theoretical:
In college I took an embedded processors class. In one project we programmed our PIC (the same line as the FRC bot's) and tried to trash it with interrupts. The idea was to keep a TRIAC-driven lightbulb lit at 75% or more of its maximum intensity while using two rheostats to generate interrupts that drove a calculated LED display. This was fun, and it showed the practical balance between keeping the LEDs lit and keeping the lightbulb lit. Our conclusions were (from memory):
Maximum # of interrupts per second ≈ 20% × (# of clock cycles per second − 6% overhead) / (# of binary instructions per interrupt)
Breaking it down, the # of binary instructions per interrupt we used averaged out to just over 200. Note that a single line of C code (one or two variables and an operand) can compile into a surprisingly large number of binary instructions.
I got stuck with an 8-bit, 8 MHz processor, and in our scenario I could do roughly one interrupt every half-millisecond – or 2000 per second – and still keep both the lightbulb and the LEDs lit. YMMV though.
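To make that arithmetic concrete, here is a rough sketch of the rule of thumb in plain C. The divide-by-4 step assumes the usual PIC ratio of one instruction per four clock cycles; the 6% overhead and 200-instruction figures are just the numbers from my class project, so treat the whole thing as a ballpark, not gospel.

    /* Rough interrupt-budget estimate from the rule of thumb above.
       Assumptions: one instruction per 4 clock cycles (typical PIC),
       ~6% of cycles lost to overhead, ~200 instructions per ISR,
       and interrupts allowed to eat only ~20% of the processor. */
    #include <stdio.h>

    int main(void)
    {
        double clock_hz        = 8000000.0;            /* 8 MHz clock          */
        double instr_per_sec   = clock_hz / 4.0;       /* PIC: Fosc/4          */
        double usable_instr    = instr_per_sec * 0.94; /* minus ~6% overhead   */
        double instr_per_isr   = 200.0;                /* measured average     */
        double budget_fraction = 0.20;                 /* leave 80% for main   */

        double max_interrupts = budget_fraction * usable_instr / instr_per_isr;
        printf("max interrupts/sec: %.0f\n", max_interrupts); /* ~1880, i.e. roughly 2000 */
        return 0;
    }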
We used three encoders last year (two for traction and another for lift), generating at most 4 thousand counts per second and we never had a problem (not even close). We did use a potentiometer, but read it using IFI’s slow Get_Analog_Value, not Kevin’s high sampling rate, high resolution ADC code.
You can use high count rates if your code is smart; for example (a short sketch follows this list):
- never use floating point calculations
- don't send printfs when you don't need them (that is, only use them for debugging in the pits, never on the field)
- optimize your interrupt handling routine (ISR) to be as short and quick as possible
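As a concrete example of that last point, here is a minimal sketch (not our actual code; the names and the interrupt-masking macros are made up) of the usual pattern: the ISR only counts, and the heavy work happens later in the normal loop.

    /* Sketch: keep the ISR tiny, defer everything else to the main loop. */
    #define DISABLE_INTERRUPTS()   /* e.g. clear the global interrupt enable bit (assumption) */
    #define ENABLE_INTERRUPTS()    /* e.g. set it again */

    static volatile unsigned int enc_ticks = 0;   /* written only by the ISR */

    void Encoder_ISR(void)        /* call this from your interrupt handler */
    {
        enc_ticks++;              /* count the edge and get out */
    }

    void Slow_Loop_Task(void)     /* made-up name for your normal loop code */
    {
        unsigned int ticks;

        DISABLE_INTERRUPTS();     /* briefly mask interrupts to copy the counter */
        ticks = enc_ticks;
        enc_ticks = 0;
        ENABLE_INTERRUPTS();

        (void)ticks;              /* PID math, rate calcs, occasional printf go here */
    }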
This year we’ll be using two low count home-made encoders and a gyro (sampled at least 6400 times a second) and I’m not a bit worried.
We have used the gyro and high count encoders (3200 Hz and 4000 counts per second) in practice robots and, although the code was never tweaked, we did have some occasional red-light-of-death blinking.
I've run a robot with up to 10,000 counts per second based on the IFI standard default code and it wasn't pushing the limits of the RC; however, it really depends on what else is going on. That top speed was the free-wheeling count when the robot was up on blocks. Competition usage that year probably maxed out at 5000 interrupts per second.
The interrupt handler was minimal, maybe three or four assignments and gone.
Whatever you do, you'll need to measure the fast-loop time to know how saturated the PIC is.
With Kevin's 2008 code and C18 3.10, and the gyro code running at 6400 Hz, we measured 3% utilization (800 us to run the teleop function every 26.2 ms).
Serial data really took a long time; it seemed to take around 100 us per character.
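For anyone wanting to reproduce that measurement, the idea is just to snapshot a free-running timer at the top and bottom of the loop body. A rough sketch follows; get_time_us() is a stand-in for however you read a microsecond timer in your setup.

    /* Sketch: measure how long the user code takes each 26.2 ms loop. */
    extern unsigned long get_time_us(void);   /* stand-in: read a free-running us timer */

    void timed_loop_body(void)
    {
        unsigned long start = get_time_us();

        /* ... the normal teleop / fast-loop work goes here ... */

        unsigned long elapsed = get_time_us() - start;          /* e.g. ~800 us        */
        unsigned long percent = (elapsed * 100UL) / 26200UL;    /* of the 26.2 ms loop */

        (void)percent;   /* report it occasionally, not every loop (printf is slow) */
    }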
I’ve had a robot with 6 encoders working fine (we later changed the design and used fewer in the final design). The main thing is to count the number of interrupts per second. Typically the limit is about 4000-5000 interrupts per second. This, of course, depends on how much you do in each interrupt service routine. Don’t forget to also count interrupts for each character printed, and for any timers.
A robot with 6-inch wheels, running at 12 ft/s with 256-count encoders in a 1:1 ratio with the wheels, will generate about 2000 interrupts per second per encoder. Three of those is probably too much. The easiest solution would probably be to use a lower-count encoder, or to use an encoder divider like the one from Bane Bots.
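If you want to sanity-check your own drivetrain, the arithmetic is just speed divided by wheel circumference, times counts per revolution. A quick sketch with the numbers above:

    /* Counts per second = (speed / circumference) * counts_per_rev */
    #include <stdio.h>

    int main(void)
    {
        double wheel_diameter_ft = 0.5;      /* 6-inch wheel            */
        double speed_ft_per_s    = 12.0;     /* top speed               */
        double counts_per_rev    = 256.0;    /* encoder, 1:1 with wheel */

        double circumference = 3.14159 * wheel_diameter_ft;     /* ~1.57 ft */
        double rev_per_s     = speed_ft_per_s / circumference;  /* ~7.6     */
        double counts_per_s  = rev_per_s * counts_per_rev;      /* ~1950    */

        printf("about %.0f counts per second per encoder\n", counts_per_s);
        return 0;
    }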
Ok, this is not exactly in the case of a robot, but it shows how far/fast an 18F8722 processor can go.
At work, we made a crab-drive AGV to load parts into various production machines automatically. We used stepper motors to drive the robot due to their inability to run away and their ability to be controlled with Swiss-watch precision. While running, the 18F output two completely separate motor command pulse trains to two motors.
The ISR for each side included a table lookup (acceleration table) and some bit shifting, but made great use of the CCP registers (you won’t believe how much more you can do with those things once the chip isn’t hindered by IFI’s hardware configuration of it in the robot controller).
Anyhow, both of these ISRs run at up to 12,500 steps/second (that's 25 kHz of interrupts together) and keep the robot in communication with the plant coordination computer (there are multiple AGVs) over Zigbee at the same time. Oh yeah, and the Local_Keypad_Service() loop, which runs every 26.2 ms (yes, I did that for old time's sake) and just watches for the overflow flag on the free-running timer feeding the CCPs, runs at the same time. Believe me… any glitches would be instantly noticed with these stepper motors; running at these high speeds, they stall if you have the tiniest hiccup in your pulse timing.
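For the curious, the ISR logic was roughly like the sketch below. It is simplified from memory, with made-up names, and the actual CCP compare-register write is reduced to a single stand-in call.

    /* Sketch: stepper ISR driven by an acceleration table.
       Each table entry is the delay (in timer ticks) until the next step;
       shorter delays mean faster stepping. */
    #define ACCEL_STEPS 128

    static const unsigned int accel_table[ACCEL_STEPS] = { 0 };  /* precomputed ramp of delays */
    static volatile unsigned int accel_index = 0;
    static volatile long position = 0;

    extern void pulse_step_pin(void);                       /* toggle the STEP output     */
    extern void schedule_next_compare(unsigned int ticks);  /* stand-in for the CCP write */

    void Stepper_ISR(void)       /* fires on each CCP compare match */
    {
        pulse_step_pin();        /* issue one step */
        position++;

        if (accel_index < ACCEL_STEPS - 1)
            accel_index++;       /* walk up the ramp until at cruise speed */

        schedule_next_compare(accel_table[accel_index]);   /* shorter delay = faster */
    }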
As long as you’re smart about your code… you can make these things fly.
By the way, does the third thing you’re encoding go around infinitely many times or is it limited? You can get multi-turn potentiometers.
-q
p.s. If you have huge problems with high frequencies on your encoder interrupts, you can always get a microchip development board for one of their new 16-bit or 32-bit core processors which have built in programmable quadrature decoder modules that run up to several MHz/channel… and you don’t have to do a thing with them until you want a number. I believe they are DMA-compatible modules, so you can make the register (essentially) as big as you like.
At 20,000 interrupts per second, you would need an interrupt code path length of 500 instruction cycles or less (the processor is capable of 10,000,000 instruction cycles/second). In almost all cases with the robot, the interrupt code path should be well under 500 cycles, leaving upwards of half of the cycles for non-interrupt code.
Typically, in MPLAB/IFI default code the interrupt code paths I've measured are <250 cycles total as long as context save/restore is managed well, and closer to 500 for WPILib-based projects, mostly due to the larger context save/restores needed for accepting anyone's service routine. Probably the longest code path I've seen is in ADC interrupt handlers, because of the array indexing and averaging that isn't needed for most other device interrupt routines, but even there the average is <300 instructions (~110 cycles per sample and a max of about 1800 cycles for processing 16 channels at sample-set boundaries, for an average around 215-220 cycles).
Which means, with tuning, you'd easily be able to process in excess of 40,000 interrupts per second before you'd start maxing out the processor.
The downside is that as the processor runs more and more at interrupt level, there is less and less time at user level, and you can get the red light of death because the user code isn't exchanging data in a timely manner with the master processor. You can get around this issue by doing the Getdata/Putdata at interrupt level, tied to the system clock, for example.
There are many things to consider when asking how many interrupts is too many. It takes some time for the processor to get into and out of interrupt context. The PIC is pretty fast, 5 us or so I think. Then, when you get into interrupt context, it takes some time to save the registers and restore them. This maybe adds another 5 us, probably less. So if it takes 10 us per interrupt and you generate 100,000 per second, you are hosed. One assumes we want to do something in the ISR, so let's say you do 40 us of work in the ISR. Now you have 50 us total per interrupt, so if you generate 20,000 per second you are hosed.
So consider how much time you spend in interrupt context and how many interrupts are generated per unit time. Then you have other considerations. How much of the total bandwidth can you spare? You can't miss the deadlines sending messages back to the OI, so add up the amount of time you spend in your slow routine, plus the fast routine, plus the maximum amount of time in ISRs during any 23 ms period. If the WORST CASE numbers add up to more than 23 ms, you have to change something. Make sense?
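To put that budgeting in code form, here is a toy worst-case check; all the numbers in it are purely illustrative, not measurements from any particular robot.

    /* Worst-case budget check: everything must fit in one loop period. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long loop_period_us = 23000;  /* the deadline you design to   */
        unsigned long slow_loop_us   = 4000;   /* illustrative worst case      */
        unsigned long fast_loop_us   = 2000;   /* illustrative worst case      */
        unsigned long per_isr_us     = 50;     /* entry + save/restore + work  */
        unsigned long isrs_per_loop  = 115;    /* e.g. 5000/sec * 0.023 s      */

        unsigned long total = slow_loop_us + fast_loop_us + per_isr_us * isrs_per_loop;

        printf("worst case %lu us of %lu us -> %s\n", total, loop_period_us,
               total < loop_period_us ? "OK" : "change something");
        return 0;
    }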
We have no troubles with encoders generating 4,000 interrupts per second and the serial port another 1200 or so per second. The only really complicated stuff going on in our normal code are a couple integer PID calculations.
Quick question for this thread: does a single quadrature encoder count as one IRQ or two? My quick calculations show that a wheel speed of 8 ft/sec with a 256 pulse/rev encoder yields about 1000 interrupts per second, or is it really 2000 interrupts per second because the encoder has two channels? Adding a second encoder can then push the processor to 2000/4000 interrupts per second. So, if it's 2 IRQs per encoder, the processor will be operating at nearly 4000 interrupts per second? :eek:
Quad encoders represent one interrupt: the phase B line is polled during the interrupt caused by a phase A transition. Phase B is used to determine direction but shouldn't be tied to another interrupt line.
It can be implemented either way. Personally, I usually put an interrupt on only one channel and use that to count steps, and look at the other channel to determine the direction.
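In code, that approach looks something like this sketch (made-up names; READ_PHASE_B() stands in for whatever digital-input read your framework provides):

    /* Sketch: interrupt on phase A only, poll phase B for direction. */
    #define READ_PHASE_B()  0   /* stand-in for reading the phase B input pin */

    static volatile long encoder_count = 0;

    void Encoder_A_ISR(void)    /* fires on a phase A rising edge */
    {
        if (READ_PHASE_B())
            encoder_count--;    /* B leads A: one direction  */
        else
            encoder_count++;    /* A leads B: the other way  */
    }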
The number of interrupts you want to generate is dependent on the accuracy you design for. Any quadrature encoder you choose can be quadrupled in accuracy by discriminating programming.
An encoder rated to return 128 counts per revolution (cpr) assumes the use of an interrupt only on “output A,” and triggering an interrupt only on one edge, rising or falling, of output A, e.g., Digital Inputs 1 & 2.
You can double the effective resolution (a 128cpr encoder will give 256cpr) by alternating the interrupt trigger within the ISR to tick on both the rising and the falling edges of output A, a la Digital Inputs 3-6.
Further, you can quadruple your effective resolution (a 128cpr encoder will produce 512cpr) by also interrupting on “Output B” and triggering on both rising and falling edges.
BTW: Never use the combination of interrupting on both A & B, but only on one edge (rising or falling), since “ticks” would then vary in distance rather than be of constant length.
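If you do go for full 4x decoding, the cheap way to handle it in the ISR (or in a fast poll) is the standard 16-entry transition table indexed by the previous and current A/B states. A sketch, with READ_A()/READ_B() as stand-ins for your input reads:

    /* Sketch: 4x quadrature decoding with a 16-entry transition table.
       Call quad_update() whenever either channel changes. */
    #define READ_A() 0   /* stand-in for reading output A */
    #define READ_B() 0   /* stand-in for reading output B */

    static const signed char quad_table[16] = {
         0, +1, -1,  0,
        -1,  0,  0, +1,
        +1,  0,  0, -1,
         0, -1, +1,  0
    };

    static volatile long quad_count = 0;
    static unsigned char prev_state = 0;

    void quad_update(void)
    {
        unsigned char state = (unsigned char)((READ_A() << 1) | READ_B());
        quad_count += quad_table[(prev_state << 2) | state];   /* +1, -1, or 0 */
        prev_state = state;
    }

The table entries are ±1 for valid transitions and 0 for no-change or illegal double transitions, which also makes the count tolerant of the odd missed edge.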
True. I forgot that PORTB<4:7> aren’t edge triggered, just change triggered.
This thread is rapidly disappearing into the tech rat-hole that the original poster requested it not go down. So, backing up a couple of levels: I've run IFI default code with ~8500 interrupts/sec without any issue. The code should be able to handle upwards of 20,000 interrupts/sec based upon the typical interrupt handlers I've seen. But…
There are tech gotchas that start coming into play above 10,000 interrupts per second with the standard IFI robot code framework. But those are best left for another thread. Maybe we should start a separate tech thread on the subject.