We are experiencing some issues with our code. When vision is enabled, the entire system slows down, including all of our code. It happens in teleop and auton, and the problem still exists in the default code.
We put a function in to measure the time between repetitions of the main loop. When vision is off (but camera plugged in), the time is around 40 ms. We still get live image feed. Once we turn vision on, the lag jumps to between 200 and 400 ms.
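For anyone who wants to reproduce this measurement, here is a rough stand-in (in Python, since LabVIEW is graphical) for the timing probe described above: record the interval between successive iterations of a main loop. The function and numbers are illustrative, not our actual robot code.

```python
import time

# Hypothetical sketch of a main-loop period probe: time between
# successive iterations, in milliseconds.
def run_main_loop(iterations, work):
    periods_ms = []
    last = time.perf_counter()
    for _ in range(iterations):
        work()  # teleop/auton work would go here
        now = time.perf_counter()
        periods_ms.append((now - last) * 1000.0)
        last = now
    return periods_ms

# Simulate a loop whose body takes roughly 10 ms per pass.
periods_ms = run_main_loop(5, lambda: time.sleep(0.01))
avg_ms = sum(periods_ms) / len(periods_ms)
```

With vision off we would expect the average around 40 ms; the point of the probe is to compare that number against the same loop with vision on.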
We have tried disabling all the globals in the main loop (except enable vision), deleting the tracking chart, and running it in every mode, and on different computers. We are running LabVIEW. We have installed the recent LabVIEW updates, and the cRIO update.
Not sure, but this could be related to a general lag problem that we’ve been discussing here. Any help is greatly appreciated, as we’re encountering a similar problem (though we’re not sure if it’s directly related to vision).
(I’m on the same team as rath358, whose name is Max, btw.)
We figured out how to optimize it by changing the image to grayscale (U8). This reduced the cycle time of the while loop in Robot Main from 300 ms to about 180 ms. This is better, but still not very reasonable; getting down to at least 100 ms would be a better goal.
Changing the image resolution to 160x120 also significantly lowered the cycle time of the while loop. However, making the images this size is not beneficial at all, because the ellipses are hard to make out at that size.
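Some back-of-envelope arithmetic (assumed pixel formats, not measured data) shows why both changes help: per-frame work scales roughly with the number of bytes the pipeline has to touch.

```python
# Assumed formats: 3 bytes/pixel for color, 1 byte/pixel for U8 grayscale.
def frame_bytes(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel

color_320 = frame_bytes(320, 240, 3)  # 24-bit color at 320x240
gray_320 = frame_bytes(320, 240, 1)   # U8 grayscale at 320x240
gray_160 = frame_bytes(160, 120, 1)   # U8 grayscale at 160x120
```

Grayscale alone cuts the data by 3x, and halving each dimension cuts it by another 4x, which lines up with why the smaller image is so much faster but leaves too few pixels on each ellipse.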
So, while our robot’s electrical mounting is taking place (meaning I cannot test/debug the cycle time), I’m studying the Vision VI in the bottom Flat Sequence Structure. There is a comment inside its Camera: Update Status VI, written by the original programmers of the default code (not to be confused with the standard .lvlib library code). It says:
This subVI is reentrant so that it will work correctly if used in multiple locations. You may want to change the VI Property to disable reentrancy if debugging.
I don’t believe the Camera: Update Status VI actually needs to be used anywhere else in the code.
So, I’m planning on taking the shift register out and feeding the “Enable Vision” global directly into the simple case structures that were inside it. It’s one less step to go through, and could reduce the time spent in each cycle. I’m not sure if the savings will be significant, but we can still shave the cycle time down in as many places as possible to reach our less-than-ideal but realistic goal of 100 ms.
I think I might post this into the other thread you linked as well, pSYeNCe.
WPILib includes source for a reason, and tinkering with it is of course an OK thing to do, but if your goal is to speed things up, you may want to consider some other approaches. Wire length, tunnels, and even shift registers are not really performance problems in general.
First, you may want to build a simple test framework that would probably be based on one of the camera examples, or perhaps on the default code. Add some panel indicators or some code with outputs you can probe, and then run through different settings. You may want to guide your exploration a bit using the white paper on the NI.com/FIRST site.
One approach is to use the small camera settings, and simply make the vision work better by improving the contrast in the image and figuring out how to limit the false positives.
The other approach is to start with the medium image size and see what parameters allow you to simply get the images fast enough – hint, raise the compression to about 20 or 25.
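A quick sketch of why the compression hint matters (all numbers here are assumptions for illustration, not camera specs): a more heavily compressed JPEG frame is smaller, so the cRIO spends less time pulling each image off the camera before processing can even begin.

```python
# Hypothetical transfer-time estimate: frame size divided by link speed.
def acquire_ms(frame_kb, link_kb_per_s):
    return frame_kb / link_kb_per_s * 1000.0

# Assumed sizes for a medium image at low vs. high compression.
low_compression = acquire_ms(frame_kb=40.0, link_kb_per_s=1000.0)
high_compression = acquire_ms(frame_kb=12.0, link_kb_per_s=1000.0)
```

The trade-off is image quality: compression around 20-25 is the suggested balance where frames arrive fast enough without destroying the features the ellipse detector needs.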
I have done a little more testing today, but people have been busy putting the cRIO on the practice bot. A few thoughts/questions:
- When vision processing is disabled, the dashboard image does not update at all. Is this normal?
- Is the main loop, with teleop and autonomous code and such, supposed to run quickly while image processing is enabled?
Again, we are running the updated version of LabVIEW, and have done the recent cRIO imaging. I am running code via ethernet from a single laptop (not the Classmate, it is at a scrimmage). We are using the remote dashboard utility.
I am including a few screenshots. The first is with vision enabled and the local dashboard on. The second is with vision processing disabled and the dashboard running. The third is with vision enabled after I exited out of the dashboard, and the fourth is with vision disabled and the dashboard closed. The graphs are gathered using this snippet of code, except the seconds-to-respond, which is already in the default project. Please note that the time to process an image, bottom right, did not change with vision processing off.
My last post appears to have been eaten by the forum (due to attachments?), so here is a brief re-post.
A couple questions: Is the dashboard supposed to show live feed from the camera while vision is disabled? Ours doesn’t…
To recap quickly: I have been running from a single laptop, through Ethernet cables, to the cRIO, and using the local dashboard.
Has there been a dashboard update? How do I check what version I have?
I THINK that the main loop, with teleop code and such, is supposed to run fairly quickly, and since it is not related to the vision loop, should be mostly unaffected by having image processing on, as is suggested by the comment in the robot main block diagram. Is this correct?
We have tried compression (5, 20, 25) and taking out all of the teleop and autonomous code from a default project, but the main loop still takes around 200ms with vision processing on. The periodic tasks also appear affected, but not as much.
Could these issues have anything to do with the recent update?
Please post your project. Zip up your folder and attach if possible. I suspect the main loop is being slowed due to something else, but it is always easiest to debug code rather than code descriptions.
First of all, I would like to thank you very much for all the help that you have not only given me, but to just about every person with a software problem I see here, Greg. Your help and dedication are really appreciated.
I will upload a file tomorrow when I have access to the team computers, but it is worth noting that this problem is not specific to my code, and also appears in a freshly created default project. It appears to be similar to this and some of the problems mentioned here.
It seems to me that this issue is being seen by several different teams. I can zip up code with Max and Ana tomorrow, but from our student’s testing the lag issue is found with the default LabVIEW robot project. The only change that was made for testing was to add a teleop frame timer.
Since I don’t have a cRIO at home, I can only look at the code for now. The only thing that stands out is that the camera is set to only 4 fps. This shouldn’t cause lag, but will cause the vision loop to run slowly.
One thing that would be good to try is to go to the I/O tab on the DS, and set the configuration to Enhanced. Also, make sure that you do your timings with the minimal panels open.
Changing the driver station configuration to Enhanced does not appear to have an effect. Changing frames per second does not have any visible effect either. With only the Robot Main front panel open, the main loop is running at 200 ms with a target, 100 ms without one, and 40 ms without vision. Oh well.
The cRIO is going to slow down when it’s doing ellipse detection. Period. It is inherently a very processor-intensive operation, and the cRIO’s CPU isn’t that fast.
That said, if you’re worried about it hurting performance of your main loop, can you have it only process images when specified conditions are true? We were able to improve the responsiveness of our drivetrain and arm a lot just by only turning on the tracking system when it was specifically needed. (Of course, if you’re doing any sort of predictive tracking, this won’t work.)
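The gating idea above can be sketched in a few lines (hypothetical names, not the actual LabVIEW VIs): run the expensive tracking step only while some condition, such as the operator holding an aim button, is true.

```python
# Count how many times the expensive stage actually runs.
calls = {"track": 0}

def track(frame):
    # Stand-in for ellipse detection; just records that it was invoked.
    calls["track"] += 1
    return ("target", frame)

def main_loop_step(aiming, frame):
    # Skip vision entirely unless the aim condition is true.
    return track(frame) if aiming else None

# Four loop passes: vision runs only on the two "aiming" passes.
results = [main_loop_step(aiming, f)
           for f, aiming in enumerate([False, True, False, True])]
```

The drivetrain and arm code in the loop then pays the vision cost only on the passes where tracking is actually wanted.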
nathanww: Yes, we can shut off image processing. It is very easy to do so, but that is not the problem. The default code has vision running in parallel in order to prevent the main loop from slowing down, but it isn’t working anymore.
Last night, I spent part of my time trying to track down what is slowing down the main loop. Disabling the IMAQ Find Ellipses VI, for example, sped the main loop up some, but not to what it should be. Disabling other VIs within the circle-finding code had a similar effect. This leads me to conclude that the slowdown is not being caused by one particular thing, but by the overall load of all of the vision processing.
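The "disable one VI at a time" experiment can be done in a single run by timing every stage of the pipeline instead. A sketch (hypothetical stage names standing in for acquire/threshold/find-ellipses, with sleeps simulating their cost):

```python
import time

# Time each stage of a pipeline and report milliseconds per stage.
def profile_pipeline(stages, frame):
    timings = {}
    data = frame
    for name, fn in stages:
        start = time.perf_counter()
        data = fn(data)
        timings[name] = (time.perf_counter() - start) * 1000.0
    return data, timings

# Fake stages whose costs are simulated with sleeps.
stages = [
    ("acquire",       lambda d: (time.sleep(0.005), d)[1]),
    ("threshold",     lambda d: (time.sleep(0.002), d)[1]),
    ("find_ellipses", lambda d: (time.sleep(0.010), d)[1]),
]
result, timings = profile_pipeline(stages, frame="frame0")
```

If no single stage dominates the totals, that supports the conclusion above: the slowdown is the aggregate load, not one bad VI.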
Also, using a fresh default project, live video feed DID work with vision processing off, and the main loop code seemed to run at a normal rate.
Also, I got an error whenever I tried to run code. It was probably the laptop being funky, but I will post it when I get back to school.