# Wait Function

We were wondering how the wait function works (the clock timer in the periodic task). Initially we thought that the clock time is the minimum time the while loop will wait before executing the next cycle: it will be either this time or the actual time it takes to execute all the code in the while loop, whichever is longer. However, using the Elapsed Time VI, we see that even though we set the wait to 10 ms, the time shown by Elapsed Time is less than this (see the screenshot; it shows 2 ms).

Can anyone clarify what is going on here? Thanks.

Your definition of how things work is correct. The problem is with noise in the measurement.

The loop iteration in LV finishes when its contents finish. Thus it is gated by the slowest path through the loop. Placing a Wait (ms) in the loop, in parallel, is a reasonable way to declare that the loop should take at least 10 ms to execute, and it will typically result in a ~10 ms (~100 Hz) loop provided nothing else slows it down.
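LabVIEW diagrams don't paste well as text, so here is a rough Python sketch of the same scheduling idea (the helper name and the thread-pool stand-in are mine, not anything in LV): the iteration finishes only when both the parallel wait and the loop's work are done, so the period is whichever of the two is slower.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_loop(iterations, wait_s, work_s):
    """Simulate a while loop whose body contains a parallel Wait (ms):
    each iteration finishes only when BOTH the wait and the work are
    done, so the period is max(wait_s, work_s)."""
    periods = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        for _ in range(iterations):
            start = time.perf_counter()
            wait = pool.submit(time.sleep, wait_s)   # the Wait (ms) node
            work = pool.submit(time.sleep, work_s)   # stand-in for the loop's code
            wait.result()
            work.result()
            periods.append(time.perf_counter() - start)
    return sum(periods) / len(periods)

print(run_loop(20, 0.010, 0.002))   # work faster than the wait: period near the 10 ms wait
print(run_loop(20, 0.010, 0.025))   # work slower than the wait: period near the 25 ms work
```

The same max(wait, work) behavior is why adding a Wait (ms) gives you a floor on the loop period, not an exact period.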

Looking more closely at the loop iteration, there are a number of parallel tasks inside the loop …

1. False wired to loop continue
2. 10 wired to wait ms
3. nothing wired to the Elapsed Time
4. DIO Get
5. DIO Get
6. DIO Get

Then there is a bit of code dependent upon the outputs of the DIO Gets and the global update.

I labeled the initial tasks 1 through 6, but that order isn’t fixed, or at least it isn’t guaranteed to be. One explanation of your measurement is that on subsequent iterations the loop contents are scheduled in different orders. On one iteration, Elapsed Time is called last, and on the next it is called first. On average, I suspect the timing is fine, but looking at a small sample, such as a single elapsed-time reading, you’ll see some variation, and apparently quite a bit of it.

Also, you are using an early version of the Elapsed Time VI that returns a single measurement, with no statistical data to improve or represent the quality of the time measurement. I apologize for putting it out there like that.

In general, LV users don’t worry that much about the parallelism of small things like this, but it can introduce noise in your timing. Since timing can be very important, LabVIEW Real-Time introduced a new loop structure with a more precise definition of loop time. It is in the subpalette at the upper right and is called the Timed Loop. It is more complex to use, but far more capable and precise about timing. The terminals on the inside of the loop even include measurements of when the loop was supposed to start versus when it did start, when it was supposed to end versus when it did end, etc. The LV framework doesn’t actually use any RT (real-time) elements, in order to keep things simple.
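As a rough text sketch of what the Timed Loop adds (Python stand-ins of mine, not the real RT scheduler): each iteration gets an absolute deadline, so timing error does not accumulate, and the loop can report how late each iteration actually finished relative to its deadline, much like those inside terminals do.

```python
import time

def timed_loop(period_s, iterations, body):
    # Each iteration has an absolute deadline (next_deadline), so timing
    # error does not accumulate across iterations, and we can report
    # actual-vs-expected finish times the way the Timed Loop's terminals do.
    next_deadline = time.perf_counter() + period_s
    lateness = []
    for i in range(iterations):
        body(i)                                       # the loop's contents
        remaining = next_deadline - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)                     # wait out the rest of the period
        lateness.append(time.perf_counter() - next_deadline)
        next_deadline += period_s                     # schedule is absolute, not relative
    return lateness

lateness = timed_loop(0.010, 10, lambda i: time.sleep(0.002))
print([round(l * 1000, 2) for l in lateness])         # ms each iteration finished late
```

If the body overruns the period, `remaining` goes negative and the lateness values show it, which is the information you would use to decide whether to regain the schedule or let it slip.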

We mentioned in another forum thread that we’ll be doing a presentation in St. Louis about the architecture, and this includes improvements to it. Perhaps next year, we should offer a real-time version of the framework?

To verify that this explanation matches what you are seeing, run the code; you should see a blur of numbers that are typically closer to 10. You can add statistics to the Elapsed Time VI. You can put the code inside the loop in a sequence structure and call Elapsed Time before or after the contents of the loop. You can also modify Elapsed Time to show a graph of the delays.

Feel free to post additional results if things seem fishy.

Here are some examples to help illuminate what Wait really does and how it and its cousins (Wait Until Next ms Multiple and the Timed Loop) affect loop timing. Some of the descriptions are probably oversimplified.
TimingIsEverything
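To put the distinction in text form, here is a hedged Python sketch (my function names, not LV's) of a sequential Wait versus Wait Until Next ms Multiple: a relative wait adds to the work time, while a multiple-aligned wait keeps the loop on the timebase as long as the work fits within the period.

```python
import time

def loop_with_relative_wait(period_s, work_s, n):
    # Work, then Wait (ms) wired in sequence: the wait starts only
    # after the work, so the real period is work_s + period_s.
    start = time.perf_counter()
    for _ in range(n):
        time.sleep(work_s)        # the loop's contents
        time.sleep(period_s)      # relative Wait (ms)
    return (time.perf_counter() - start) / n

def loop_with_next_multiple(period_s, work_s, n):
    # Work, then sleep until the next multiple of the period on the
    # clock, like Wait Until Next ms Multiple: the period stays close
    # to period_s as long as the work fits inside it.
    start = time.perf_counter()
    for _ in range(n):
        time.sleep(work_s)
        elapsed = time.perf_counter() - start
        time.sleep(period_s - (elapsed % period_s))
    return (time.perf_counter() - start) / n

print(loop_with_relative_wait(0.010, 0.003, 20))   # near 0.013, not 0.010
print(loop_with_next_multiple(0.010, 0.003, 20))   # near 0.010
```

This is why a sequenced Wait drifts while Wait Until Next ms Multiple stays synchronized to the millisecond clock.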

Great article. And great title.

Would this be an example?

A single Timed Loop set for 1 ms and a higher priority will lock out your driver controls or make the response very erratic, because the Teleop loop will never get time on the CPU.

I assume the above is true only if the code in the 1ms loop takes some significant percentage of 1ms to execute? In other words, the context switching overhead is nowhere near 1ms, is it?


It is nowhere near 1 ms, and the overhead of the loop is relatively small. Of course, it very much depends on what you put in it, and on whether you configure it to try to regain the schedule or to slip it.

If you want to look at it under a microscope, you can modify those sample loops to call into the RT Trace Toolkit, which will let you see all the details of the scheduling. I’ve attached one of the example log screenshots. I would attach a typical 20 ms Teleop execution, but again, no cRIO at home.

None of your posts really says why a loop with a 10 ms delay would run at 500 Hz consistently, just why it may be noisy. I think the original poster meant that the 2 ms reading was pretty consistent.

The problem may be that you didn’t wire a string to the “name” input of the VI, and you’re calling it twice within the same VI. Not wiring a name is fine in Teleop, where there’s only one loop, but you probably want to wire it in Periodic Tasks, where there may be multiple loops you want to measure.
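To make that concrete, here is a small Python sketch of the failure mode (the `elapsed_time` helper is my stand-in for the VI's name-keyed state, not its real implementation): two call sites sharing one name each measure the time since the *other* call site, so a 10 ms loop with 2 ms of code between the call sites reads about 2 ms.

```python
import time

_last = {}  # per-name state, like the VI keying its stored timestamp by name

def elapsed_time(name):
    # Seconds since the previous call with the SAME name; first call returns 0.
    now = time.perf_counter()
    dt = now - _last.get(name, now)
    _last[name] = now
    return dt

readings = []
for _ in range(10):
    elapsed_time("loop")                    # first call site (name unwired)
    time.sleep(0.002)                       # ~2 ms of code between the call sites
    readings.append(elapsed_time("loop"))   # second call site sees only the 2 ms
    time.sleep(0.008)                       # the rest of the ~10 ms loop
print(round(sum(readings) / len(readings) * 1000, 1))  # roughly 2, not 10
```

Wiring distinct name strings to the two call sites would make each one measure the full loop period again.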

Doh. That would explain a lot. I normally wire the string, but the VI was built so that when the input is unwired, it uses the VI name. Are you calling Elapsed Time multiple times with the same string, or unwired, from within the same VI? Because honestly, I wasn’t liking my explanation; I wouldn’t expect more than about 2 ms of jitter.