I know timing is a tough subject, but I’m trying to figure out why some of my assumptions about real-time timing versus normal LabVIEW timing seem to be wrong. I read and re-read the paper from 358, which pointed to some alternative methods but didn’t really help. We are trying to get more repeatable and faster control while keeping CPU usage down.
OK, first off, we are having some issues with CPU usage on our robot. With normal waits, we were hovering between 80 and 100% CPU usage. We turned off debugging and enabled inlining on some subVIs that were called a lot, and our CPU usage dropped to ~50%, with a spike to 100% every second or so.
Our fastest loop is our driving code in Periodic Tasks. It has a 50 ms wait and contains the standard PID VI and an unaltered Cartesian holonomic drive VI. We use CAN with a 2CAN, and we found that shorter waits only hurt our CPU usage. We still get occasional warnings about CAN Jaguar timeouts.
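In case it helps, here’s roughly what that loop does, written out as a plain-Java sketch rather than our actual LabVIEW block diagram (the gyro/PID/drive names are just placeholders):

```java
// Rough sketch of our current drive loop. The helper calls are stubs; the
// point is the structure: do the control work, then a plain 50 ms wait,
// so the real period is 50 ms plus however long the work took.
public class DriveLoopSketch {
    public static void main(String[] args) throws InterruptedException {
        final long periodMs = 50;                 // our current loop wait
        while (true) {
            double heading = readGyro();          // placeholder sensor read
            double correction = pid(heading);     // standard PID step
            sendHolonomicDrive(correction);       // goes out over CAN via the 2CAN
            Thread.sleep(periodMs);               // plain "Wait (ms)" style delay
        }
    }
    static double readGyro() { return 0.0; }
    static double pid(double error) { return 0.0; }
    static void sendHolonomicDrive(double output) { }
}
```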
OK, here’s where the assumptions come in. I thought the real-time palette used timing circuits on the FPGA, essentially offloading the timing from the cRIO CPU, so we tried substituting the real-time waits for the standard LabVIEW waits. Our CPU usage jumped back to 80-100%. We took those out and tried Wait Until Next ms Multiple (the metronome from the standard timing palette), and that made the robot very laggy, pegging the CPU between 95 and 100%.
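For reference, here’s the difference I understand between that metronome and the plain wait in the sketch above, again as a plain-Java approximation (the boundary math is my own, not NI’s implementation):

```java
// Sketch of "wait until next multiple" style timing: instead of sleeping a
// fixed 50 ms after the work, the loop sleeps until the millisecond clock
// reaches the next multiple of 50, so iterations land on an absolute 50 ms
// grid no matter how long the work took.
public class MetronomeSketch {
    public static void main(String[] args) throws InterruptedException {
        final long periodMs = 50;
        while (true) {
            doControlWork();                                   // PID + drive output, etc.
            long now = System.currentTimeMillis();
            long nextBoundary = ((now / periodMs) + 1) * periodMs;
            Thread.sleep(nextBoundary - now);                  // wake on the next 50 ms multiple
        }
    }
    static void doControlWork() { }
}
```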
We are considering putting the PID and our gyro, which can be read much faster than 20 Hz, into a faster loop away from anything that has to talk over CAN. If we do that, would we benefit from using the real-time timing? Would there be a reason to use the real-time PID VI over the standard one?
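To make the split concrete, this is the kind of structure I mean, with made-up rates and placeholder calls (our real code would be parallel LabVIEW loops, not threads):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the split we're considering: a fast loop reads the gyro and runs
// PID, a slower loop pushes the latest output over CAN, so the bus traffic
// stays at the 50 ms rate while the control math runs faster.
public class SplitLoopsSketch {
    // Latest PID output, shared between the two loops.
    private static final AtomicLong latestOutputBits =
            new AtomicLong(Double.doubleToLongBits(0.0));

    public static void main(String[] args) {
        new Thread(() -> {                 // fast loop: gyro + PID, no CAN traffic
            while (true) {
                double out = pid(readGyro());
                latestOutputBits.set(Double.doubleToLongBits(out));
                sleep(10);                 // e.g. 10 ms; rate is a guess
            }
        }).start();
        new Thread(() -> {                 // slow loop: the only one touching CAN
            while (true) {
                sendOverCan(Double.longBitsToDouble(latestOutputBits.get()));
                sleep(50);                 // keep CAN at the 50 ms rate that worked
            }
        }).start();
    }
    static double readGyro() { return 0.0; }
    static double pid(double error) { return 0.0; }
    static void sendOverCan(double output) { }
    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```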
Does the real-time palette use the cRIO’s FPGA as I assumed, or does it just increase the workload for the cRIO CPU? Does what we are seeing make sense? Should we just avoid thinking about the real-time palette of VIs?
I know that I ask lots of questions here. Hopefully what shakes out can help many others.