Apologies for the length of this post. I have trimmed it significantly from its original form, and it is still much larger than I would like.
Quote:
Originally Posted by BradAMiller
...The past years high speed sensor-interrupt logic that required precise coding, hand optimization and lots of bugs has been replaced with dedicated hardware (FPGA).
I'm not certain what you intended to say here. Doesn't any hardware control task require "precise coding"? There wasn't any "hand optimization" in the interrupt code I wrote, nor was such optimization even an option given the C18 compiler most teams were using. I'm pretty sure you didn't mean that "lots of bugs" are a necessary part of interrupt logic. I'm also wondering how the PIC interrupt logic doesn't count as "dedicated hardware" just because it's on-chip.
In short...huh?
Quote:
When the library wants the number of ticks on a 1000 pulse/revolution optical encoder it just asks the FPGA for the value. Another example is A/D sampling that used to be done with tight loops waiting for the conversions to finish. Now sampling across 16 channels is done in hardware.
You're apparently mixing a couple of different things here, and neither of them seems relevant. Using Kevin Watson's encoder support library for the IFI control system, when the program wants the number of ticks it just reads the value. The default code does indeed busy-wait on A/D conversions, but lots of teams replaced that library with one that uses interrupts -- in effect, sampling across as many channels as desired "in hardware".
Is your point that the FPGA makes it possible to support much higher rates of encoder ticks or A/D samples? A faster CPU would be able to do that itself, without FPGA assistance, using interrupts. So speed alone doesn't seem like a compelling argument for eschewing interrupts.
Quote:
...And all of this without requiring any of that high speed interrupt processing that's been so troublesome in the past. And this is just the counters...This is to help reduce many of the real-time bugs that have been at the root of so many issues in our programs in the past.
It sounds like you have had a very bad experience with interrupts. That's probably not because they were interrupts per se, but more likely the result of badly designed or poorly implemented service routines.
I can honestly say that I have never seen any problems in FRC robot code that I have traced to a "real-time bug". The one I thought I saw (in someone else's code) was eventually determined to be an EEPROM access contention issue, having nothing to do with interrupts, and was experimentally solved by adding a "real-time" feature (i.e. a semaphore) to the program. On the contrary, the big interrupt-related issue I have seen many teams run across is due to the IFI default library's not using "real-time" code for its PWM generator.
Just how are we going to be able to implement PID control -- or even simple speedometers -- without using "real-time" features anyway? That's a question I asked some time ago when it was made clear that the cRIO doesn't have interrupts, and it was never satisfactorily answered.
Quote:
The objects that represent each of the sensors are dynamically allocated. We have no way of knowing how many encoders, motors, or other things a team will put on a robot.
The reasoning behind this statement eludes me. What am I missing? How is the amount of hardware relevant? Does dynamic allocation (i.e. at run time) address it any better than static allocation (i.e. at compile time)?