Quote:
Originally Posted by Tom Line
The PID is not what eats the CPU. The timed loop in LabVIEW actually forces the loop to a certain timing, rather than just waiting or sleeping it. To simplify it somewhat, if you set a timed loop to 5ms, all the other tasks will take a back seat to that single loop running every 5ms. It really hurts CPU.
I don't know if c++ has an equivalent.
The WPI Lib PID code in PIDController.cpp (as of version 3487) opens a Notifier object with a default period of 50ms. It does, as you describe, force the loop to a fixed timing; the reason is that this design avoids having to calculate uneven time slices within the PID algorithm. The result is good from a functionality standpoint, but it has the same context-switching cost, especially if one wishes to use a shorter period. In C++ the programmer does not have to go in this direction; they can write their own PID
http://www.termstech.com/files/Archi...Controller.zip and avoid context switching altogether. I had proposed this solution for the WPI library, but there was not enough expressed interest in going in that direction. Really, when you think about it, a machine can only execute one instruction at a time per core; once you add tasks/threads, the context-switch overhead can really eat up CPU time, and it doesn't have to be that way. I have spent several years on this issue, documented here:
http://www.termstech.com/articles/PID_Kalman.html
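
For illustration, here is a minimal sketch of the idea, assuming you call it from the existing main/periodic loop. This is not the WPILib PIDController and not the linked TERMS code; it just shows a PID calculation that measures the actual elapsed time each pass, so uneven time slices are handled without spawning a separate timed task:

```cpp
// Minimal synchronous PID sketch: runs inline in the caller's loop,
// so no Notifier task (and no extra context switch) is needed.
#include <chrono>

class SimplePID {
public:
    SimplePID(double p, double i, double d) : kP(p), kI(i), kD(d) {}

    // Call once per pass of the main loop; dt is measured, not assumed,
    // so uneven time slices do not corrupt the integral/derivative terms.
    double Calculate(double setpoint, double measurement) {
        auto now = std::chrono::steady_clock::now();
        double dt = std::chrono::duration<double>(now - lastTime).count();
        lastTime = now;
        if (dt <= 0.0) dt = 1e-3;  // guard against a zero interval on the first call

        double error = setpoint - measurement;
        integral += error * dt;
        double derivative = (error - lastError) / dt;
        lastError = error;

        return kP * error + kI * integral + kD * derivative;
    }

private:
    double kP, kI, kD;
    double integral = 0.0;
    double lastError = 0.0;
    std::chrono::steady_clock::time_point lastTime = std::chrono::steady_clock::now();
};

// Usage (hypothetical gains and signal names), inside the existing loop:
//   SimplePID pid{0.5, 0.0, 0.05};
//   double output = pid.Calculate(targetSpeed, encoderSpeed);
```

The trade-off is that the update rate is whatever the main loop gives you, rather than a guaranteed fixed period, but you avoid the scheduling overhead of a dedicated timed task.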