Quote:
Originally Posted by simpsonboy77
I'm not certain about LabVIEW's implementation, but in C++ you can ask the system clock for the time since power-up, with a resolution of 1 ms. Before you do your dt calculation, figure out how much time has actually passed by reading the current system time and subtracting the time from the previous reading. I think this is much better than any loop where you just assert how long each iteration takes, because it accounts for anything that changes the CPU load, such as vision code or the garbage collector.
new_time = current_sys_time
do PID calculations with dt = new_time - old_time
old_time = new_time
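In C++ terms, the pattern simpsonboy77 describes maps naturally onto std::chrono's monotonic steady_clock rather than a raw "time since power-up" call. A minimal sketch, where run_pid() is just a placeholder for whatever your controller actually does:

#include <chrono>

int main() {
    using clock = std::chrono::steady_clock;  // monotonic, unaffected by wall-clock adjustments

    auto old_time = clock::now();
    while (true) {                            // the control loop
        auto new_time = clock::now();
        // Elapsed time since the previous iteration, in seconds.
        double dt = std::chrono::duration<double>(new_time - old_time).count();
        // run_pid(dt);  // do the PID calculations with the measured dt
        old_time = new_time;
    }
}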
The same thing exists in LabVIEW: the 'Tick Count' block, with the same 1 ms resolution.
I *believe* there's a higher-resolution Tick Count in the RT section that can measure from other clock sources. But I haven't ever needed better than 1 ms.
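LabVIEW's Tick Count is a block-diagram node rather than text code, but the same bookkeeping with a free-running millisecond counter looks like this in C++. tick_count_ms() below is a hypothetical stand-in for that node, not an actual LabVIEW or WPILib call; the point is that unsigned subtraction still gives the correct elapsed time even when the 32-bit counter wraps around:

#include <chrono>
#include <cstdint>

// Hypothetical stand-in for a free-running 1 ms tick counter that wraps at 2^32.
static uint32_t tick_count_ms() {
    using namespace std::chrono;
    return static_cast<uint32_t>(
        duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count());
}

int main() {
    uint32_t old_ticks = tick_count_ms();
    while (true) {
        uint32_t new_ticks = tick_count_ms();
        uint32_t elapsed_ms = new_ticks - old_ticks;  // unsigned math handles rollover correctly
        double dt = elapsed_ms / 1000.0;              // seconds, for the PID math
        // run_pid(dt);
        old_ticks = new_ticks;
    }
}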
In an ideal realtime world there is no garbage collector; all of the RAM is statically allocated in blocks up front and never dynamically allocated. In an ideal realtime world, people also don't try to run vision on the same processor as a hard RT task that is already using the majority of the CPU, alongside control loops that aren't actually RT, and then expect those non-RT control loops to have reasonably consistent timing.
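For what "statically allocated" looks like in practice: reserve every buffer the loop will ever need once, up front, and only read and write that memory inside the loop. The sizes and names here are purely illustrative:

#include <array>
#include <cstddef>

// All storage the control task will ever need, allocated once for the program's lifetime.
struct ControlState {
    std::array<double, 256> samples{};  // fixed-size buffer, no heap growth
    std::size_t count = 0;
};

void control_step(ControlState& state, double reading) {
    // Inside the loop: no new/delete, no containers that might reallocate,
    // just writes into memory that already exists.
    if (state.count < state.samples.size()) {
        state.samples[state.count++] = reading;
    }
}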