2010 FRC LabVIEW framework architecture questions

1) What does the “execute” case of Teleop do if it overruns?

  • ignore (drop) new DS packet and keep executing the current iteration?

  • queue the new packet for immediate execution as soon as the current iteration finishes?

  • abort and re-start with new packet data?

  • re-enter?

  • something else?

Is an error message given?

2) same questions, but for Autonomous Iterative

3) Are the “Periodic Tasks” preemptive and rate monotonic? I.e., if a 100ms task takes 60ms to run, that does not interfere with a 40ms task, right? How are overruns handled in the Periodic Tasks? Can a Periodic Task be faster than 20ms? If so, I assume it can interrupt the “Teleop execute” code, right?

~

I’ve marked my responses with **s.

1) What does the “execute” case of Teleop do if it overruns?

  • ignore (drop) new DS packet and keep executing the current iteration?
    ** By default, this is the behavior: the new packet is dropped, and once the iteration finishes, the code waits for the next packet to arrive.

Is an error message given?
** Not unless the code runs long enough to trigger a watchdog error.

2) same questions, but for Autonomous Iterative
** Same answer; in fact, the same code schedules both.
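
In rough C++ terms (LabVIEW is graphical, so this is only an analogy, and every name below is a made-up stand-in rather than a real framework call), that drop-on-overrun behavior amounts to:

    #include <chrono>
    #include <thread>

    struct DsPacket { int sequence = 0; };

    // Stand-in for the framework's packet wait: blocks until the next
    // DS packet (~20ms period) and returns only the newest one,
    // silently discarding any that arrived in the meantime.
    DsPacket waitForNextPacket() {
        std::this_thread::sleep_for(std::chrono::milliseconds(20));
        static int seq = 0;
        return DsPacket{++seq};
    }

    // Stand-in for the user's Teleop "execute" case.
    void runTeleopIteration(const DsPacket&) { /* drive code here */ }

    int main() {
        for (;;) {
            // If runTeleopIteration() overruns, the next call simply
            // returns the newest packet; anything that arrived during
            // the overrun is dropped, with no error unless a watchdog
            // fires.
            DsPacket packet = waitForNextPacket();
            runTeleopIteration(packet);
        }
    }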

3) Are the “Periodic Tasks” preemptive and rate monotonic? I.e., if a 100ms task takes 60ms to run, that does not interfere with a 40ms task, right? How are overruns handled in the Periodic Tasks? Can a Periodic Task be faster than 20ms? If so, I assume it can interrupt the “Teleop execute” code, right?

** Since the loops are scheduled onto a pool of OS threads, that is most likely the implementation. The normal loops will have the same scheduling behavior as same-priority threads within the OS. Likewise, since the framework defaults to having the periodic, vision, and teleop tasks run at the same priority, their scheduling is the same as that of same-priority OS threads.

Greg McKaskle

Ouch. Is there any way that the user could add their own code to check for an overrun? E.g., does the code that decides to drop the packet reside somewhere in the LabVIEW framework (where the user could test for this occurrence), or is it buried in the RTOS (where the framework never sees it)? Alternatively, is there a system clock in the cRIO that could be read at the beginning and end of the user code in the Teleop execute case?

~

Hi Greg,

Take a look at this post:

What would be the proper way to set this up? Set up a “Periodic Task” running at 5ms (200Hz) to grab accelerometer data? I wonder how much code the VI would generate for this seemingly simple task. This task would obviously have to preempt Teleop execute in order to run properly.

I would think that the 5ms task would contain a ring buffer to hold the 10 samples, and that’s all it would do (to keep it short).

Then, a 20ms task would grab a copy of the ring buffer and do the averaging and other calculations. Instead of a separate 20ms Periodic Task, should this be coded directly in Teleop execute?

What special precautions does the LabVIEW programmer need to take, to assure that the 5ms task is not writing data to the ring buffer while the 20ms task is attempting to read it? Or is that all handled automagically by the LabVIEW compiler and the RTOS?

~

Ouch. Is there any way that the user could add their own code to check for an overrun? E.g., does the code that decides to drop the packet reside somewhere in the LabVIEW framework (where the user could test for this occurrence), or is it buried in the RTOS (where the framework never sees it)? Alternatively, is there a system clock in the cRIO that could be read at the beginning and end of the user code in the Teleop execute case?

Absolutely. You can read the millisecond clock simply by dropping the Tick Count block. It is used for timing the Vision code and elsewhere in the framework. You can read it at the beginning and end of Teleop, or you can read it on each entry, etc. You can also make use of Match Info.Elapsed Seconds, which has millisecond resolution. It comes from the same clock, and you can use it at the beginning of Teleop to identify a missed packet. I’ve made notes about how each of the choices listed would impact the robot if it were the chosen method.
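
As a concrete sketch of that check in C++ terms (millis() stands in for the Tick Count block, and teleopExecute() for the user code; both are illustrative placeholders, not actual framework calls):

    #include <chrono>
    #include <cstdint>
    #include <cstdio>

    // Stand-in for the Tick Count block: milliseconds from a steady clock.
    uint32_t millis() {
        using namespace std::chrono;
        return static_cast<uint32_t>(duration_cast<milliseconds>(
            steady_clock::now().time_since_epoch()).count());
    }

    void teleopExecute() { /* user teleop code goes here */ }

    int main() {
        const uint32_t start = millis();
        teleopExecute();
        const uint32_t elapsed = millis() - start;
        if (elapsed > 20) {  // longer than one 20ms DS packet period
            std::printf("Teleop overrun: %u ms\n",
                        static_cast<unsigned>(elapsed));
        }
        return 0;
    }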

Greg McKaskle

What would be the proper way to set this up? Set up a “Periodic Task” running at 5ms (200Hz) to grab accelerometer data? I wonder how much code the VI would generate for this seemingly simple task. This task would obviously have to preempt Teleop execute in order to run properly.

I would think that the 5ms task would contain a ring buffer to hold the 10 samples, and that’s all it would do (to keep it short).

Then, a 20ms task would grab a copy of the ring buffer and do the averaging and other calculations. Instead of a separate 20ms Periodic Task, should this be coded directly in Teleop execute?

What special precautions does the LabVIEW programmer need to take, to assure that the 5ms task is not writing data to the ring buffer while the 20ms task is attempting to read it? Or is that all handled automagically by the LabVIEW compiler and the RTOS?

The size of the loop to read the accelerometer data is easily measured, as is the time it takes. I suspect it will take between one and two ms to run, and, as with the camera tasks, it will not have too much trouble preempting the teleop code and vice versa. Since the timing of the accelerometer samples may be important, it may be a good idea to use a timed loop, and perhaps to elevate the priority of the loop to cut down on jitter.
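
Sketched in C++ for illustration (LabVIEW diagrams can’t be shown in text, and readAccelerometer() is a made-up placeholder, not a real FRC call), the two-loop arrangement described above looks like:

    #include <array>
    #include <cstddef>
    #include <numeric>

    // Placeholder for a single accelerometer read.
    double readAccelerometer() { return 0.0; }

    constexpr std::size_t kSamples = 10;
    std::array<double, kSamples> ring{};  // ring buffer of the last 10 samples
    std::size_t head = 0;

    // The 5ms task: store one sample and advance the index, nothing more.
    void sampleTask5ms() {
        ring[head] = readAccelerometer();
        head = (head + 1) % kSamples;
    }

    // The 20ms task: snapshot the buffer, then average at leisure. In
    // C/C++ these two tasks would need a mutex around 'ring' (see the
    // discussion below); in LabVIEW that protection is supplied for you.
    double averageTask20ms() {
        const std::array<double, kSamples> snapshot = ring;
        return std::accumulate(snapshot.begin(), snapshot.end(), 0.0) / kSamples;
    }

    int main() {
        for (int i = 0; i < 10; ++i) sampleTask5ms();  // pretend 50ms passed
        return static_cast<int>(averageTask20ms());
    }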

If you really want to read the accelerometer with the best timing, you can set up a DMA channel to implement the ring buffer. You can then run your acquisition on multiple channels at 5kHz or so. The RT loop can then do the processing on the buffer.

To the other question, value updates and data transfers are atomic, and that includes arrays, clusters, etc. I wouldn’t necessarily say it is automagic, but it is certainly nice, especially when you like correct answers.

Greg McKaskle

If arrays and clusters are atomic, wouldn’t that mean that if a 5ms task wants to write a single element to a large array while a 100ms task is grabbing a local copy of that same array, the 5ms task has to wait for the 100ms task to finish copying the entire array?

~

I don’t understand the “vice versa” part. Are you saying that the teleop code can preempt the 5ms periodic task that reads the accelerometer? Under what conditions would this be necessary, and how would that affect the 5ms task’s ability to complete on schedule?

~

If arrays and clusters are atomic, wouldn’t that mean that if a 5ms task wants to write a single element to a large array while a 100ms task is grabbing a local copy of that same array, the 5ms task has to wait for the 100ms task to finish copying the entire array?

Yes. In a parallel access, each task will acquire a mutual exclusion object, or mutex, and until it successfully gains the mutex, it cannot touch the data. This could be because someone else is writing the data, or because someone else is reading the data it wants to write. This is the same mutex you’d better put around your accesses in C/C++ code or you’ll corrupt data. LV just does it for you.
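
Made explicit in C++ terms, the situation looks like this (a sketch of the protection LabVIEW applies implicitly, not actual LabVIEW internals):

    #include <array>
    #include <cstddef>
    #include <mutex>

    std::array<double, 1000> shared{};  // the large shared array
    std::mutex guard;                   // LabVIEW takes this lock for you

    // The 5ms task: writes one element, but must still wait for the lock.
    void writeElement(std::size_t i, double v) {
        std::lock_guard<std::mutex> lock(guard);
        shared[i] = v;
    }

    // The 100ms task: copies the whole array while holding the same
    // lock, so the one-element writer above blocks until it finishes.
    std::array<double, 1000> copyAll() {
        std::lock_guard<std::mutex> lock(guard);
        return shared;
    }

    int main() {
        writeElement(0, 9.8);
        const auto copy = copyAll();
        return static_cast<int>(copy[0]);
    }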

I don’t understand the “vice versa” part. Are you saying that the teleop code can preempt the 5ms periodic task that reads the accelerometer? Under what conditions would this be necessary, and how would that affect the 5ms task’s ability to complete on schedule?

If the scheduler determines that is the best thing to do given the priorities and resource needs, the scheduler can halt any loop or any code in any task and schedule another. A good example of where a short task may be suspended, even in favor of a lower-priority task, is when the high-priority task blocks waiting for asynchronous data, or blocks waiting for something like an event or timer. LV-generated code may contain strategically placed yields so that the scheduler is encouraged to run, but except for mutexing for data correctness, it doesn’t interfere with the scheduler’s job of allocating CPU and other resources.
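
A toy C++ illustration of that last point (std::thread carries no real RTOS priority the way cRIO tasks do, so this shows only how blocking yields the CPU to other work):

    #include <atomic>
    #include <chrono>
    #include <thread>

    std::atomic<bool> running{true};

    // A fast, nominally high-priority loop. Each sleep_for() is a
    // blocking wait; while it blocks, the scheduler is free to run
    // other tasks -- including lower-priority ones.
    void fastSamplerLoop() {
        while (running) {
            // ... take a sample ...
            std::this_thread::sleep_for(std::chrono::milliseconds(5));
        }
    }

    // A slower loop that gets the CPU whenever the fast loop is blocked.
    void teleopStyleLoop() {
        while (running) {
            // ... process joystick data, set motors ...
            std::this_thread::sleep_for(std::chrono::milliseconds(20));
        }
    }

    int main() {
        std::thread fast(fastSamplerLoop);
        std::thread slow(teleopStyleLoop);
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        running = false;
        fast.join();
        slow.join();
        return 0;
    }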

Note that the timed loop structure is a special case in all of this, since internally it implements its own scheduling and determines what to expose to the system scheduler.

Greg McKaskle