We are seeing lots of delays in the execution of our teleop code. Using an elapsed-time VI, we know we are getting anywhere from the expected 50 ms up to spikes of 1800 ms. The average is probably about 300 ms.
We also know that the teleop code itself runs pretty fast, just a few ms.
We also know that when we turn off vision processing from Robot Main, the average drops to about 55 ms.
We have tried all of the suggestions we could find here on what to change with the camera settings (image size, FPS, and compression).
We have all the latest updates to the cRIO and DS.
I’m pretty sure the delay must be in the DS calling the teleop loop, but I’m unsure how to fix the problem.
We experienced this delay in Kansas City. We had never seen it before reaching the field there. The delay would even persist AFTER our matches, while testing tethered in the pit or on the practice field. Re-imaging the cRIO would solve it, but I believe it returned at least once more and required the same fix. One theory offered to us was that the field shut-down of the cRIO left the memory full of vision data that was never cleared and somehow caused this delay. I don’t know if this is correct, but abandoning our vision entirely did seem to prevent it from coming back. We would like to solve the problem while still getting our vision back.
Between this and the widespread comm problems we and others have experienced on the field, it would appear that negotiating the pitfalls in the control and communication system is as important a part of the challenge this year as the game itself. You can’t always prevent them even with thorough testing, but make sure you get to the field on Thursday so you can check your system with the FMS, and make sure you use proper power-up procedures at your matches.
By any chance are you using CAN? Our software, written in C++, was seeing large lags and lack of responsiveness due to errors the Jaguars were occasionally returning up the CAN bus, through the 2CAN, which were then displayed by printfs. When we patched the WPI code to ignore those errors, the lag went away. I can’t remember the error the Jaguar was returning, but it was one we didn’t care about, since we send the commands to them every time through the loop.
I don’t know if a similar thing is happening in LabVIEW. If you are using PWM in either environment, ignore all this.
The symptoms don’t sound identical, but the first thing I’d do is to turn off the error logging on the cRIO. As mentioned, C++ and Java teams have in some conditions been seeing delays due to excessive numbers of errors causing lots of printf/logging. Similarly, the LV code has globals for sending uncaught errors to the driver station and logging them.
To turn off the global, search for a post from Doug Norman and follow the directions. Also, if you’d rather not turn off vision, it is possible to speed up the dashboard vision quite a bit by building your own custom one from the template. One useful performance enhancement is to right-click on the image control >> Visible >> Image Information. This hides the textual information below the display. A combination of this and the chart was causing expensive draws on the Classmate. This was pretty easy to locate using the performance tools.
As for the vision memory not clearing, I’d like to hear the explanation for that which doesn’t use Star Trek technobabble. In short, the vision adds some overhead, but can be used just fine on a controller provided everything else is pretty well behaved. I’d encourage you to do your own debugging and not buy into explanations which require RAM to misbehave in a very special way, just on your robot.
I saw something about a separate loop, but I’m not sure what that was referring to. If you have a never-ending while loop running somewhere, try inserting a short delay inside it so that it doesn’t monopolize the cRIO’s CPU.