Hello Chief Delphi community,
So through testing, I have found that the 3-step circle-finding script that I made in Vision Assistant and implemented in the vision processing VI is causing major lag in the other robot systems.
When driving (or operating any other feature), the motors will sporadically continue driving according to the last command for 1-3 seconds after the input from the controller has stopped. Specifically, we would pulse the joystick that drives the drive motors; every few pulses, the motors would continue to drive for 1-3 seconds after the pulse finished.
I then put the circle-finding script in a disable structure and retried the test: NO LAG. So based on this testing, there is something about the circle processing script I’m using, or its implementation, that is causing this lag.
Does anybody have any suggestions to fix this problem? Is there something glaringly obvious I am doing wrong that is causing it?
Attached to this post are the vision processing VI and the circle-finding script (in its own VI called by vision processing): findCirclePink.vi (13.6 KB)
This sounds like the vision loop is starving other processes of cRIO resources. Use a millisecond wait in the vision loop, specifically in the enabled case. (I typically use between 1 and 50 ms, but that depends on the complexity of the vision code.)
PS: After posting, I actually looked at your VI. I am sure it is consuming most of the cRIO’s processor and starving other loops.
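LabVIEW block diagrams are graphical, so I can’t paste the actual wiring, but here is a rough Java-style sketch of the loop structure I mean. The grabImage() and findCircles() names are just placeholders for whatever acquisition and IMAQ calls your VI actually makes:

```java
// Illustration only: mirrors the suggested loop structure, not the actual VI.
public class VisionLoopSketch {
    static volatile boolean visionEnabled = true; // placeholder for the vision enable flag

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            if (visionEnabled) {
                Object img = grabImage();   // placeholder for the image acquisition
                findCircles(img);           // placeholder for the 3-step circle script
                Thread.sleep(25);           // the added wait, in series with the work,
                                            // so the loop yields the CPU to the drive code
            } else {
                Thread.sleep(100);          // the wait already present in the disabled case
            }
        }
    }

    static Object grabImage()             { return new Object(); }
    static void   findCircles(Object img) { /* expensive pixel march goes here */ }
}
```

The key point is that the wait is in series with the processing in the enabled case, not only in the disabled case.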
There should already be a 100 ms wait timer in the vision processing VI loop. Back when I put it in, it helped a little with the lag, but it is still a major problem. Is there any particular reason why you think it is starving other processes? Is it the actual steps in my script?
I suspect it is that your script takes quite a while to execute, and putting a delay in parallel will not help if the call is into a DLL marching through pixels. Have you timed it? You may be able to lower the priority of the vision code so that the teleop code will postpone the vision stuff. You may also need a more CPU-efficient way to process the image.
In the code that you posted, the delay was only in the (vision) disabled case. When vision is enabled, that loop is free-running. Most of the advanced vision VIs use external code to do their work, but most are in-line processes (the subVI waits for the processing to complete before returning), so any wait in sequence with the code will slow down that loop and free up the processor. In my opinion, the vision loop should have a lower priority than actually driving the robot. I didn’t mention setting VI priorities last time to keep things simple. Setting the vision VI to a lower priority may help, but other processes may still be impacted (other low-priority processes that you may want to use reliably). I would typically use both a wait primitive AND priority settings to achieve the desired balance.
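If it helps to picture the combination of a lower priority plus a wait, here is a loose Java-threads analogy (all names are invented for illustration); in LabVIEW itself this corresponds to the VI’s priority setting in VI Properties plus a Wait (ms) primitive in series with the processing:

```java
// Loose analogy only: a low-priority vision thread that also sleeps each iteration,
// so the drive code gets the processor whenever it needs it.
public class VisionPrioritySketch {
    public static void main(String[] args) {
        Thread visionThread = new Thread(() -> {
            while (true) {
                try {
                    processOneImage();   // placeholder for the circle-finding work
                    Thread.sleep(25);    // explicit wait in series with the work
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        visionThread.setPriority(Thread.MIN_PRIORITY); // lower priority than driving
        visionThread.setDaemon(true);
        visionThread.start();
    }

    static void processOneImage() { /* expensive vision processing goes here */ }
}
```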
I would also take a second look at your image processing, although I didn’t see anything too resource intensive there. Maybe you could try using a lower resolution image. While making some videos, we found that even the lowest resolution setting worked fairly well, while greatly reducing processing.
PS: Thanks Greg for reminding me about priorities, and the great march through the pixels.
The wait for next image will return immediately if the time to process is longer than the time to acquire a new image. The first thing I’d do is add timers to the loop to see how often it is running and for how long. I think we’ll find that the loop never pauses because there is always another image to process.
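As a text-language sketch of that instrumentation (in LabVIEW you would wire Tick Count (ms) before and after the processing and subtract), something like this reports both how often the loop runs and how long each pass takes; processImage() is a placeholder:

```java
// Sketch of loop timing: prints the loop period and the per-iteration work time.
public class LoopTimingSketch {
    public static void main(String[] args) {
        long lastStart = System.currentTimeMillis();
        while (true) {
            long start = System.currentTimeMillis();
            processImage();                       // placeholder for the vision work
            long end = System.currentTimeMillis();
            System.out.println("period = " + (start - lastStart)
                    + " ms, work = " + (end - start) + " ms");
            lastStart = start;
        }
    }

    static void processImage() { /* expensive vision processing goes here */ }
}
```

If the work time is consistently near or above the camera’s frame period, the loop will indeed never pause on its own.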