This thread is a continuation of a discussion which began here.
I’ll post a couple of excerpts (these excerpts are not replies to each other, just pertinent info):
The periodic rate can either be controlled by a fixed value or follow the driver station. If it is set to follow the driver station, which is the default, then the period will vary because the Driver Station is a Windows app. If you set the period to a specific value, the cRIO will maintain a very precise periodic rate. (Hugh Meyer)
At competition the DS packets are still sent by the DS. FMS only orders the DS to change modes, but the DS has to send that command to the robot.
Any slow down in the DS PC will delay packets.
One of the checks for slow or lost packets is the CPU utilization on the DS.
(If in the Classmate Driver acct. use: CTRL-Shift-Esc -> Performance)
A saturated netbook can be a cause of lost and delayed packets. I see that pretty often. (Mark McLeod)
We use C++. There is a configuration variable that sets the periodic rate. If it is zero then the rate will follow the driver station. If it is not zero then the value entered will be the periodic rate. I tell our programmers to ALWAYS set the rate. We have not settled on a value for this year yet. (Hugh Meyer)
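For C++ teams, a minimal sketch of what "always set the rate" might look like, assuming the IterativeRobot template and its SetPeriod() call (which also comes up later in this thread) — an illustration, not Hugh's actual code:

```cpp
#include "WPILib.h"

class MyRobot : public IterativeRobot
{
public:
    MyRobot()
    {
        // 0.0 (the default) means "follow the driver station packets";
        // a nonzero value makes the cRIO run the periodic methods on its own timer.
        SetPeriod(0.02);   // fixed 20 ms period, independent of DS packet jitter
    }

    void TeleopPeriodic()
    {
        // drive/arm code here now runs at a consistent rate
    }
};

START_ROBOT_CLASS(MyRobot);
```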
What does your team do? In LabVIEW, would the equivalent be to leave TeleOp completely blank and put all your code in Periodic Tasks (or would that break something)? Has anyone done this in Java?
For LabVIEW, putting everything in Periodic Tasks and processing at a faster rate wouldn’t change anything. We’d just be reprocessing the same old packets until a new one came along in its own good time.
Teleop is just a slave task. It gets called every time there is a new packet available to process. If the packets came faster, then Teleop would just get called faster.
The point wasn’t to process at a faster rate, it was to process at a more consistent rate. Hugh says his team has had success doing this. I’m wondering if it wouldn’t be better to stay with the event-based processing, and read the elapsed time since the previous execution cycle, and make sure to use that elapsed time wherever appropriate instead of assuming a fixed elapsed time of 20ms.
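In C++ terms, the idea would be something like the following rough sketch (assuming WPILib's GetClock() as the seconds timestamp — any monotonic clock would do, and LabVIEW's Tick Count serves the same purpose):

```cpp
#include "WPILib.h"

class MyRobot : public IterativeRobot
{
    double m_lastTime;   // timestamp of the previous Teleop call
    double m_target;     // example: a ramped setpoint we integrate over time

public:
    MyRobot() : m_lastTime(0.0), m_target(0.0) {}

    void TeleopInit()
    {
        m_lastTime = GetClock();
    }

    void TeleopPeriodic()
    {
        double now = GetClock();
        double dt = now - m_lastTime;   // actual elapsed time, not an assumed 20 ms
        m_lastTime = now;

        double ratePerSec = 0.5;        // example rate commanded by the driver
        m_target += ratePerSec * dt;    // use the measured dt wherever time matters
    }
};

START_ROBOT_CLASS(MyRobot);
```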
Oh, I’d never use the nominal Teleop 20 ms as a timing device. I’d only do that if the event/action doesn’t make sense without a new driver order anyway. The 20 ms certainly isn’t guaranteed, and for many teams isn’t even close.
Anything I want to do based on a period, I do in Periodic Tasks.
If it’s a time critical task, then I’ll perform calculations based on a system time check.
If it’s a really critical task, then I’ll use a Timed Structure loop.
As you say, doing a system time check and calculation would work in Teleop, but the response action is still going to be occurring at wacky time intervals.
You’ve seen this Timing is Everything whitepaper before. I suppose I should add a watch on Teleop under different programming/cRIO/DS PC conditions that all contribute to really sloppy times.
We recently ran into this issue as well. This is our first year using Jaguars on the CAN bus, and apparently a CAN bus command takes quite some time for a round trip. We are using 7 CANJaguars this year, and we have implemented our own cooperative multi-task robot loop instead of using either the SimpleRobot or IterativeRobot templates. But it is based on the IterativeRobot class, so we are also using the “20 msec” default DS loop time. The problem with this is that each loop involves updating each Jaguar, and updating a Jaguar means not only setting the speed (the Set() method) but possibly also calling GetPosition() or GetSpeed() if the Jaguar is involved in some sort of PID control. So the number of CAN messages per loop can be substantial, and we found that our loop time consistently exceeds 50 msec.

So we decided to do a SetPeriod of 100 msec; effectively, we are hammering the CAN bus less frequently. We are going to add some more diagnostic code to collect information such as the average number of CAN messages sent per loop and the average time spent in each loop sending CAN commands.

We are also experimenting with a bandwidth-limiting GetSpeed() method for the CANJaguar class. This basically checks the timestamp of the last speed query and, if the elapsed time since then is shorter than a threshold, returns the last speed value instead of sending another CAN command to get a new one. We probably don’t have time this year, but after the season ends we would like to experiment with an alternate CANJaguar class that is more parallel to the PWM system: no matter how often Set() is called, it just updates a variable remembering the new speed, and a separate thread periodically sends the updates to the Jaguars. This would limit the CAN traffic to a predictable level. Any comments on this approach?
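Roughly, the bandwidth-limited GetSpeed() idea looks like this; the class and method names below are placeholders for illustration, not the real WPILib CANJaguar API:

```cpp
// Placeholder stand-in for whatever object actually performs the CAN round trip.
class SpeedSource
{
public:
    virtual ~SpeedSource() {}
    virtual double ReadSpeedOverCAN() = 0;   // expensive: one CAN message + reply
};

// Wraps a speed source and limits how often a real CAN query is made.
class ThrottledSpeed
{
public:
    ThrottledSpeed(SpeedSource *source, double minIntervalSec)
        : m_source(source), m_minInterval(minIntervalSec),
          m_lastQueryTime(-1.0e9), m_lastSpeed(0.0) {}

    // 'now' is the current time in seconds from whatever clock the robot uses.
    double GetSpeed(double now)
    {
        if (now - m_lastQueryTime >= m_minInterval)
        {
            m_lastSpeed = m_source->ReadSpeedOverCAN();   // fresh reading
            m_lastQueryTime = now;
        }
        return m_lastSpeed;                               // otherwise cached value
    }

private:
    SpeedSource *m_source;
    double m_minInterval;     // e.g. 0.05 -> at most 20 CAN queries per second
    double m_lastQueryTime;
    double m_lastSpeed;
};
```

The “parallel to PWM” variant would be the mirror image on the output side: Set() would only store the new speed in a member variable, and a separate periodic task would flush the latest value for each Jaguar onto the CAN bus at a fixed rate.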
Would updating a PID with a fast changing process variable be an example of an action that makes sense without a new driver order?
As you say, doing a system time check and calculation would work in Teleop, but the response action is still going to be occurring at wacky time intervals.
What are the pros and cons of responding to new DS commands immediately (wacky time intervals) vs having constant time intervals and therefore some delay in responding to DS commands? For FRC, are there any situations where it makes a big difference?
Yes, great resource, thanks for making that available, should be mandatory reading for all FRC programmers.
What’s the downside of “Wait Until Next Multiple”? When would you ever want to use “Wait” instead?
I apologize if I’ve asked this question before, but I can’t find the answer in my notes: how does LabVIEW implement parallel tasks on the cRIO? Does it use VxWorks to assign each parallel task to its own thread, or does LabVIEW have its own built-in preemptive time-slicing scheduler?