Default behavior for thread overrun

*What happens by default in each of the three supported languages if a thread overruns (is not finished when next scheduled)?*

1. it is allowed to finish and the thread is added to the scheduling queue to begin execution as soon as the existing instance completes
2. it is allowed to finish and the new scheduling event is dropped
3. it is terminated and the new event is started
4. it is re-entered

Is a unique error counter incremented for each of these possibilities?

If (1), how deep is the queue?

If (4), is there a limit to the re-entry level?

LabVIEW has its own worker queue system, so I’m not sure how work is scheduled or where scheduling points are placed.

With the other two languages, I believe (pretty certain for C++, slightly less so for Java) that threads are mapped onto VxWorks threads (called tasks). From what I’ve read, VxWorks defaults to pure priority scheduling with round robin turned off, so the highest priority task that is ready to run at any given point is allowed to do so. So I guess I’m not sure what you mean by “overrun.”

Since a task is ready to run unless it is suspended, blocked on some resource, or delayed (by a sleep() call, etc.), its ready state only changes on a system call or a hardware interrupt (which includes the hardware timer used for task sleeps). System calls can only be triggered by the currently running task, and control shifts back to the kernel so that it may execute the call. On a hardware interrupt, the processor shifts into an interrupt handler, which is part of the OS kernel, on the next instruction cycle.

Once the processor control has shifted to the kernel, the kernel saves off the register file state from the user task. At this point, it can either perform its necessary tasks and switch back to the same user task, or it can switch to a different task if the operation has caused a higher priority task to be ready to run. When the kernel switches back to a user task, it restores its register state and resumes from the point in the program where it had previously been.

So I guess the answer is (1)? If you let “allowed to finish” include “complete the currently executing instruction” in the case of a hardware interrupt event.

Disclaimer: this assumes that VxWorks and the PowerPC processor work roughly as taught in my embedded systems class… Otherwise, forgive my naivete

The behavior depends on what element of the language we are discussing. The regular framework really doesn’t expose the realtime features of LV. I think the best answer to your question regarding LV is to explain what the timed loop can do and what it does by default.

In LV RT, the periodic code is placed into a timed loop with the priority, period, and policy inputs either wired or configured. The loop waits until the start deadline, schedules the code, and makes available much of the desired data about the execution.

The left photo shows all loop parameters. The inside left edge of the loop has been grown to expose all of the data that the loop computes and provides to the code regarding how it is running and how the previous iteration ran. Most interesting is the Boolean that identifies if the last iteration finished late. The other values will allow the task to log or potentially adjust its operation. Additionally, the inner right edge of the loop allows the code to modify the next loop iteration’s priority, rate, or policy.

The policy has a number of values, shown in the center photo. The default policy is to Discard missed and Maintain phase. This is equivalent to number 2 in your list.

The third dialog shows other settings, with the policy controlled by the checkboxes in the lower right.

To accomplish #1 in your list, change the settings to not discard. #3 is accomplished by the code checking its own progress; there is no built-in termination, as that isn’t worth the pain it causes, especially in a dataflow language. #4 could be accomplished with a different pattern where you pre-allocate waiting instances and trigger them yourself from your timed loop. Loop iterations never overlap unless it is a parallel For Loop.

Is a unique error counter incremented for each of these possibilities?
If (1), how deep is the queue?
If (4), is there a limit to the re-entry level?

It isn’t reported as an error, but as a Boolean data value along with timing info.
The LV execution queue size is BIG, like ~two billion big.
You determine the limit if you choose to use this pattern.

Greg McKaskle

Clipboard 3.png
Clipboard 2.png
Clipboard 1.png



I’m far from a VxWorks expert, but if the above is true, then in my experience, if you wanted something like a task monitor in C++ that tracked overruns and decided what to do when a task overran, you’d have to implement it yourself.

Can you explain what you mean by task overrun? The way I would interpret that doesn’t have any meaning in the context of a preemptive scheduler like VxWorks uses.

If you’re looking for a way for multiple tasks of the same priority to share the processor (in C++), the docs say you can enable round-robin scheduling using the kernelTimeSlice() function (declared in kernelLib), which accepts the round-robin time-slice period as a parameter. Tasks still only run if they have the highest priority that is ready to run (as before), but after enabling round-robin, if you have multiple tasks at that priority, each task will be allowed to execute, in turn, until either its time slice has elapsed or the task blocks. You can call taskDelay(NO_WAIT) if you want to place the current task at the end of the run queue without any delay.
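For what it’s worth, here’s a minimal sketch of what that might look like (untested; busyWorker and startRoundRobinDemo are names I made up, and this assumes the classic kernelLib/taskLib API on the cRIO’s VxWorks):

```cpp
#include <vxWorks.h>
#include <taskLib.h>    /* taskSpawn, taskDelay */
#include <kernelLib.h>  /* kernelTimeSlice */
#include <sysLib.h>     /* sysClkRateGet */
#include <stdio.h>

void busyWorker(int id)
{
    for (;;)
    {
        printf("task %d got the CPU\n", id);
        /* Optional: taskDelay(NO_WAIT) would move this task to the
         * back of the ready queue for its priority without sleeping. */
    }
}

void startRoundRobinDemo(void)
{
    /* Give each equal-priority task one clock tick per slice
     * (sysClkRateGet() ticks == one second). */
    kernelTimeSlice(1);

    /* Two tasks at the same priority now alternate automatically. */
    taskSpawn("tWorkerA", 100, 0, 8192, (FUNCPTR)busyWorker,
              1, 0, 0, 0, 0, 0, 0, 0, 0, 0);
    taskSpawn("tWorkerB", 100, 0, 8192, (FUNCPTR)busyWorker,
              2, 0, 0, 0, 0, 0, 0, 0, 0, 0);
}
```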

Let’s say you have a periodic task which is supposed to run once every 10ms, but on the Nth iteration it does not complete in 10ms. That’s what I meant by an overrun. Various things could cause this, for example the code waiting for a resource, or being preempted too often by higher priority tasks, or taking an unusual execution path.

Ah, then from what I know, Todd is correct. Which would make any of your four original options possible, assuming you use round-robin scheduling. For (1), the queue size would depend on your implementation and available memory, and since VxWorks does not appear to define a hard maximum number of tasks, (4) would be limited by memory.

I would expect that a knowledgeable programmer could make the code behave like any one of those options. I was wondering what the “default” behavior is, for folks who use the framework and don’t do anything too far out of the ordinary (like changing/tuning the scheduling algorithm).

If I had a requirement or an expectation* that a task should never overrun, I’d add runtime code to monitor and report it (a rough sketch follows the note below). I’d leave this code in place for all in-house testing and practice, and probably for competition too.

  • If there’s an overrun where I completely expected there would never be one, I’d want to know it. It could point to a coding or design error, or even a fundamental misunderstanding of how the system works.
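Something like this rough sketch is what I have in mind (untested; doPeriodicWork and periodicWithOverrunCheck are placeholder names, and it assumes the system clock rate is at least 100 Hz so a 10 ms period is a whole number of ticks):

```cpp
#include <vxWorks.h>
#include <taskLib.h>   /* taskDelay */
#include <tickLib.h>   /* tickGet */
#include <sysLib.h>    /* sysClkRateGet */
#include <stdio.h>

static void doPeriodicWork(void)
{
    /* Placeholder: replace with the real 10 ms job. */
}

void periodicWithOverrunCheck(void)
{
    const ULONG periodTicks = sysClkRateGet() / 100;  /* ~10 ms */
    ULONG overruns = 0;
    ULONG nextDeadline = tickGet() + periodTicks;

    for (;;)
    {
        doPeriodicWork();

        ULONG now = tickGet();
        if (now > nextDeadline)
        {
            /* This iteration overran: count it, report it, and
             * resynchronize rather than trying to catch up. */
            ++overruns;
            printf("overrun #%lu: %lu ticks late\n",
                   overruns, (ULONG)(now - nextDeadline));
            nextDeadline = now + periodTicks;
        }
        else
        {
            taskDelay((int)(nextDeadline - now));  /* sleep to deadline */
            nextDeadline += periodTicks;
        }
    }
}
```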

I had wanted to try this for next year (we had some issues where our robot was hung up waiting for an input), but I didn’t know how effectively I could have code monitor other code without things running in parallel. I’m no expert on our control system, but I thought that only one thread runs at a time.

In my workplace I’ve worked with VxWorks solutions where a task is spawned to act as a watchdog of other tasks, both for reporting purposes during development, and sometimes to ensure that a fault is raised in critical subsystems where the operator must be made aware of an equipment failure as soon as possible. This is usually done with some relatively trivial mutex trickery and a registry for how often every task should complete.
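A stripped-down sketch of that pattern might look like the following (identifiers are mine, not any standard API, and untested): each monitored task registers how often it must check in and stamps a heartbeat each period, while a high-priority watchdog task scans the registry.

```cpp
#include <vxWorks.h>
#include <taskLib.h>
#include <tickLib.h>
#include <semLib.h>    /* semMCreate, semTake, semGive */
#include <stdio.h>

#define MAX_MONITORED 8

struct Heartbeat
{
    const char *name;
    ULONG       maxGapTicks;   /* longest allowed gap between check-ins */
    ULONG       lastStamp;     /* tick count at the last check-in */
};

static Heartbeat registry[MAX_MONITORED];
static int       nRegistered = 0;
static SEM_ID    regMutex;

/* Call once at startup, then spawn watchdogTask at a high priority. */
void watchdogInit(void)
{
    regMutex = semMCreate(SEM_Q_PRIORITY);
}

/* Each monitored task registers itself once... */
int watchdogRegister(const char *name, ULONG maxGapTicks)
{
    semTake(regMutex, WAIT_FOREVER);
    int id = nRegistered++;
    registry[id].name        = name;
    registry[id].maxGapTicks = maxGapTicks;
    registry[id].lastStamp   = tickGet();
    semGive(regMutex);
    return id;
}

/* ...and stamps its heartbeat once per completed period. */
void watchdogCheckIn(int id)
{
    registry[id].lastStamp = tickGet();
}

void watchdogTask(void)
{
    for (;;)
    {
        ULONG now = tickGet();
        semTake(regMutex, WAIT_FOREVER);
        for (int i = 0; i < nRegistered; ++i)
            if (now - registry[i].lastStamp > registry[i].maxGapTicks)
                printf("WATCHDOG: task %s missed its deadline\n",
                       registry[i].name);
        semGive(regMutex);
        taskDelay(10);   /* check every 10 clock ticks */
    }
}
```

A 10 ms task would register with a gap of, say, two periods’ worth of ticks, and call watchdogCheckIn() at the end of every iteration.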

As mentioned before, the VxWorks scheduler by default (the only iteration of it I’ve worked with, though I’m sure there are others) is a strictly priority-driven system, where if task 1 has a higher priority than task 2 and is not asleep, then task 1 is running and task 2 is waiting, with no concept of ‘high level’ scheduling.

I think Java has some more advanced scheduling implementations in its libraries, but I’m not familiar with any of them beyond basic threads, and they may not all be supported on our hardware.

LabVIEW, as Greg mentioned, is its own whole separate bear, but I think that a system like you’re talking about could be implemented in any of the three languages, with varying amounts of effort.

Java looks like it uses the standard java.util.Timer/TimerTask. According to the docs, a Timer executes all of its tasks sequentially on a single background thread, so a task that takes excessive time to complete delays the execution of subsequent tasks, which may then “bunch up” and execute in rapid succession when the offending task finally completes.

The C++ version of WPILib includes the Notifier class, which seems to fill the periodic-task niche. The code looks like it follows (1); however, the task queue is shared globally among all instances of Notifier, so not only does a long-running task block successive instances of itself, it will block other periodic tasks as well. As you say, a knowledgeable programmer could modify this for better behavior with a long-running task by spawning a new task in the handler to complete the periodic job.
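A rough sketch of that idea (untested, and assuming the cRIO-era Notifier constructor takes a plain function pointer plus a void* parameter; startPeriodicJob, onTick, and longRunningJob are names I made up). Rather than calling taskSpawn on every event, this variation pre-spawns one worker and just signals it from the handler:

```cpp
#include "Notifier.h"   /* WPILib (cRIO-era) */
#include <semLib.h>
#include <taskLib.h>

static SEM_ID wakeSem;

/* Pre-spawned worker: blocks until the Notifier signals it. */
static void longRunningJob(void)
{
    for (;;)
    {
        semTake(wakeSem, WAIT_FOREVER);
        /* ...potentially slow periodic work goes here... */
    }
}

/* Notifier callback: returns immediately, so it never clogs the
 * shared Notifier queue. With a binary semaphore, ticks that arrive
 * while the job is still running coalesce into a single wakeup. */
static void onTick(void *param)
{
    semGive(wakeSem);
}

void startPeriodicJob(void)
{
    wakeSem = semBCreate(SEM_Q_FIFO, SEM_EMPTY);
    taskSpawn("tJob", 120, 0, 16384, (FUNCPTR)longRunningJob,
              0, 0, 0, 0, 0, 0, 0, 0, 0, 0);

    static Notifier notifier(onTick, NULL);
    notifier.StartPeriodic(0.01);   /* fire every 10 ms */
}
```

Note that coalescing missed ticks into one pending wakeup is roughly option (2) behavior for the slow job itself, while the Notifier queue stays unblocked for everything else.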

Yes, only one thread runs at a time. But you don’t need a multicore processor to do task monitoring.

Check out this thread:

http://www.chiefdelphi.com/forums/showthread.php?p=1181110#post1181110