Worried about cRIO CPU usage when using vision

Hello,
Our team is looking into using a camera to help us with shooting baskets, but we are worried that the cRIO might have trouble running the vision code alongside our normal programming. We usually have fairly complex programs with many PID loops, and we typically sit at about 80% CPU after we spend time optimizing the code. Running the basic drive code plus a basic vision code gives us about 95% usage, a large increase from the 50% we see with basic code alone. When we probe data or view images on a front panel, the cRIO's CPU usage jumps to about 100%, and after several seconds at 100% the robot disables itself. The same thing happens if we change PID loop variables or some vision settings while vision processing is running. For now we have switches that disable our drive/PID loops/vision processing, but that cannot be a permanent solution for us.

We would like to know if there is a way to lower our CPU usage. What are you guys seeing for CPU while using vision? We are also a little worried about our cRIO: sometimes when it runs newly deployed code, the CPU usage spikes to 100% and it loses connection. Should the cRIO be disabling/crashing when it is overworked?

For our vision code we are using a modified version of the included rectangle tracking. We have gone back to the original version of the rectangle code and our problems are much worse. What could we be doing wrong? Has anybody been able to send vision processing to a laptop on the robot? Currently we are at 95% running fairly optimized versions of drive/vision/a single PID loop. We have done research and have found that several teams have removed the safety checks for motors; how would we do that? Is there also a way to see which VIs are using the CPU the most?

We are currently running all of our code in Timed Loops, so the vision will always suffer first.

This way, since all of the important things (control loops mostly) MUST run at their given interval, the vision gets whatever is left. It also runs in a low priority thread.

The CPU will be pegged at 100%, but the important code should still run.
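The poster is on LabVIEW, where Timed Loops and their priority settings do this natively. For teams on the text-based side, here is a minimal sketch of the same idea in plain Java (the loop bodies, class name, and the 20 ms period are placeholders, not WPILib API; thread priorities are scheduler hints, not hard guarantees, but on a loaded CPU the control thread will be preferred):

```java
// Sketch: high-priority control loop plus low-priority vision loop.
public class PriorityDemo {
    public static void main(String[] args) {
        Thread control = new Thread(() -> {
            final long periodMs = 20; // e.g. a 50 Hz control loop (assumed)
            while (!Thread.currentThread().isInterrupted()) {
                long start = System.nanoTime();
                // ... read sensors, run PID, write motor outputs ...
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                try {
                    Thread.sleep(Math.max(0, periodMs - elapsedMs));
                } catch (InterruptedException e) {
                    return;
                }
            }
        }, "control");
        control.setPriority(Thread.MAX_PRIORITY); // must run on schedule

        Thread vision = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                // ... grab a frame and process it; the scheduler only
                // favors this thread when the control loop is idle ...
            }
        }, "vision");
        vision.setPriority(Thread.MIN_PRIORITY); // gets whatever CPU is left

        control.start();
        vision.start();
    }
}
```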

As for performance monitoring, here are your options:
-There is a block called “RT CPU usage” which tells you the CPU usage by thread priority. If you set priorities correctly, this will tell you what is using the most CPU time.
-There is a more precise performance monitoring tool in Tools->Profile, but you need to run a fairly controlled test to get good data: start the profiler, run the VI for a fairly short period of time, stop the VI, and stop the profiler. This tool reports the execution time of each VI, which can be helpful for finding which VIs are taking too long to execute.
-There used to be a nice System Monitor somewhere in 8.6, but I have not yet found it in 2011.

The System Monitor functionality is now reached by right-clicking on the RT Target in the Project window, then going to Utilities and choosing System Monitor.

This information is also on the Charts tab of the DS.

As for things using lots of CPU, vision has a tendency of doing that. One thing to consider is how often, or when, the vision code needs to run. Do you run it when you think your robot is at a scoring location, or all the time? Do you run it while driving in teleop, or only when a button is pressed? Do you need it to run at 320x240, or can you run it on a smaller image (perhaps when close)? Do you need to do the convex hull and other operations on the whole image, or do you know something from previous images that lets you look where you expect things rather than search the entire frame?
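None of those questions are LabVIEW-specific. Halving each dimension (320x240 -> 160x120) already cuts the pixel count to a quarter, and a region-of-interest search cuts it further. A rough sketch of the last two ideas in plain Java (java.awt only; VisionBudget, detectRectangle, the 40 px padding, and the button gating are hypothetical placeholders for your own code):

```java
import java.awt.Rectangle;
import java.awt.image.BufferedImage;

public class VisionBudget {
    // Hypothetical result of the previous frame's detection, in pixels.
    private Rectangle lastTarget = null;

    public void processFrame(BufferedImage frame, boolean aimButtonHeld) {
        // 1. Only run at all when the driver actually needs aiming help.
        if (!aimButtonHeld) {
            return;
        }

        // 2. If the last frame told us roughly where the target is,
        //    search only a padded region around it, not the whole image.
        BufferedImage searchArea = frame;
        if (lastTarget != null) {
            Rectangle roi = grow(lastTarget, 40).intersection(
                    new Rectangle(frame.getWidth(), frame.getHeight()));
            if (!roi.isEmpty()) {
                searchArea = frame.getSubimage(roi.x, roi.y, roi.width, roi.height);
            }
        }

        // 3. Hand the (smaller) image to the expensive detection step.
        //    Note: a hit found inside the ROI must be offset back to
        //    full-frame coordinates before being stored or used.
        lastTarget = detectRectangle(searchArea);
        // If the target was lost (null), this falls back to a
        // full-frame search on the next iteration.
    }

    private static Rectangle grow(Rectangle r, int pad) {
        return new Rectangle(r.x - pad, r.y - pad, r.width + 2 * pad, r.height + 2 * pad);
    }

    private static Rectangle detectRectangle(BufferedImage img) {
        return null; // placeholder for the actual rectangle-tracking code
    }
}
```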

And the profiler or your own instrumentation is the right way to find out how often code is running and how much CPU it uses. The cRIO should not crash when run at 100%, but 100% likely means that some things aren't running as fast as you asked them to. That will affect control performance, so you may want to look at how fast things are running and see whether you can slow some loops down or reprioritize them.
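If you want your own instrumentation rather than the profiler, the core of it is just timestamping each iteration and logging when a deadline slips. A minimal sketch in Java (the 20 ms period is an assumed example):

```java
// Sketch: detect when a periodic loop misses its deadline.
public class LoopWatch {
    public static void main(String[] args) throws InterruptedException {
        final long periodMs = 20; // assumed control period
        long next = System.nanoTime();
        while (true) {
            next += periodMs * 1_000_000; // deadline for this iteration
            // ... one iteration of the control code ...
            long late = System.nanoTime() - next;
            if (late > 0) {
                System.err.printf("overrun by %.1f ms%n", late / 1e6);
            } else {
                Thread.sleep(-late / 1_000_000); // wait out the remainder
            }
        }
    }
}
```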

Greg McKaskle

Isn’t there a way to offload the vision processing onto the driver station during a match?

Yes, but at the cost of increased latency: camera -> D-Link -> field AP -> DS computer -> field AP -> D-Link -> cRIO, with two wireless legs, instead of camera -> D-Link -> cRIO with no wireless legs.

As with most engineering problems, there are tradeoffs to be evaluated when choosing between various options.

Not completely sure, but can’t we have something else on the robot to co-process the image data?

This is true if you use the dashboard to handle the processing, but you can use any network-attached computer running SmartDashboard. If you have an extra 400 bucks and 2.5 lbs to spare, you could put a laptop on the robot, minimizing network latency.

When discussing using laptops for vision, here are a few things to keep in mind.

The camera and cRIO are Ethernet devices, and they are already on a network with the dashboard. A laptop on the robot would be on the network too. The networks are different, but the distinction is not as black and white as it may seem.

As discussed in other threads, you will gain yourself some additional CPU, but you will soon use that up too. Yesterday, I ran a team’s dashboard that pegged my Core 2 Duo laptop with a 640x480 image stream. I could only process 20 fps and they were sending 30, so unprocessed frames piled up at ten per second. This resulted in seconds of lag.
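One common way to keep that lag bounded, assuming you can modify the receiving side, is to let the consumer always take the newest frame and silently drop the backlog, so latency stays near one frame interval instead of growing by ten frames every second. A minimal sketch in Java (LatestFrame is a hypothetical helper, not part of any FRC library):

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch: bound latency by always processing the newest frame and
// silently discarding any frames that arrived while we were busy.
public class LatestFrame<T> {
    private final AtomicReference<T> latest = new AtomicReference<>();

    // Called by the network/camera thread for every incoming frame.
    public void publish(T frame) {
        latest.set(frame); // overwrites an unprocessed frame, if any
    }

    // Called by the processing thread; returns null if nothing new.
    public T take() {
        return latest.getAndSet(null);
    }
}
```

With a 30 fps producer and a 20 fps consumer, the consumer then always sees a frame at most about 33 ms old rather than one from the back of an ever-growing queue.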

Think about this as a budgeting exercise. Can you determine what is possible with the CPU resources in the cRIO? If you add more CPU, what will it be used for? What will it cost in $ and lbs?

There are certainly tasks that will benefit from adding a laptop, but it is not magic. It should really follow the same process as adding a motor or pneumatics.

Greg McKaskle

What if the laptop was only used for processing the image to find the distance to the target rectangle, but you did not stream anything back to your driver station?

The rectangle detection algorithms are the expensive part. I would advise trying to think of ways to locate the target that wouldn’t require OpenCV.

For example, I am treating the rectangle as two parallel lines. The distance to the hoop can be determined from the lines and your robot’s heading. In my case I’m using the Kinect, but that’s more to limit the amount of field calibration that needs to be done.
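Whatever detects the lines, the distance math underneath any "known size, apparent size" approach is the pinhole camera model. A sketch in Java with assumed numbers (the 18 in target height, 240 px image height, and 36 degree vertical field of view are stand-ins; measure your own camera and target):

```java
// Sketch: estimate distance from the apparent height of a target of
// known physical size, using the pinhole camera model.
public class RangeEstimate {
    // Assumed values -- replace with your own measurements.
    static final double TARGET_HEIGHT_IN = 18.0;   // real target height
    static final double IMAGE_HEIGHT_PX  = 240.0;  // e.g. a 320x240 stream
    static final double VERTICAL_FOV_DEG = 36.0;   // camera spec

    static double distanceInches(double targetHeightPx) {
        // Focal length in pixels, derived from the vertical field of view.
        double focalPx = (IMAGE_HEIGHT_PX / 2.0)
                / Math.tan(Math.toRadians(VERTICAL_FOV_DEG / 2.0));
        // Similar triangles: realHeight / distance = pixelHeight / focalPx.
        return TARGET_HEIGHT_IN * focalPx / targetHeightPx;
    }

    public static void main(String[] args) {
        // A target spanning 40 px works out to about 166 inches away
        // with the assumed constants above.
        System.out.printf("%.1f in%n", distanceInches(40.0));
    }
}
```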