I haven't had the time to play around with some of my timing ideas ... this being the final week and all.
I'd like to ask some of the more advanced LabVIEW developers and cRIO gurus for some advice and/or rules of thumb regarding the elapsed time for some common tasks.
Our team's software has most of its "decision making" in the teleop/auton enabled VIs, running at the frequency of the DS packets ... 20 msec/cycle.
Underneath all that "main-while-loop" stuff, we've got our other primary VIs:
One reads a command global and sets the drive-motor speeds.
One reads all the nav sensors (accelerometers, gyro, encoders) and writes to a global.
One reads a command global and sets the camera-gimbal servos.
One handles vision.
In the teleop/auton modes, we look at the Nav data and the joystick inputs, and set the commands to the motor-globals.
In each of my free-running VIs (other than vision), I have a WAIT = 10 msec so that they'll run at most twice per DS cycle. But I'd like to replace that arbitrary number with one based on the expected response times for these sorts of actions.
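For what it's worth, here's a rough text-mode sketch (Python, since I can't paste LabVIEW diagrams here) of the measurement I have in mind: wrap the loop body with a timer, the way you'd drop a Tick Count pair around a while loop, and see what the actual period works out to. The `timed_loop` name and the dummy body are just placeholders, not anything from our real code.

```python
import time

def timed_loop(body, wait_s, n_iters):
    """Run `body` n_iters times with a fixed wait, and return the
    average measured period per iteration in seconds. This stands in
    for putting Tick Count terminals around a LabVIEW while loop."""
    start = time.perf_counter()
    for _ in range(n_iters):
        body()                # e.g. read sensors, write to a global
        time.sleep(wait_s)    # the WAIT at the bottom of the loop
    return (time.perf_counter() - start) / n_iters

# Measure how long a (placeholder) loop body really takes with a
# 10 msec wait; the result should come back a bit above 0.010 s.
period = timed_loop(lambda: None, wait_s=0.010, n_iters=20)
```

If the measured period comes out well above the WAIT value, that would tell me the body itself is the bottleneck and the WAIT is doing nothing; if it's right at the WAIT, the body is cheap and the WAIT is what sets the rate.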
I wish I'd had the time during our recent build sessions to play around with some timers and find out for myself, but alas, today my hands were on power tools much more than the keyboard.
Do y'all put these WAITs into your standard parallel VIs? Or do you let them free-run as fast as they can? If/when you let them free-run, what sorts of elapsed times would you expect from them?
Thanks!