Image processing

I have had a look through the forums on the image processing topic, trying to work out whether we should process images on or off the cRIO. For starters, I would be looking at processing the images on the driver station, probably on an upgraded laptop from the one that comes with the rookie kit. As a rookie team, we'd like to know: would the veteran teams recommend one direction over the other? My current thinking is that removing the image processing from the cRIO will make for simpler robot code. Any thoughts or suggestions would be appreciated.

The image processing is not really the hard part. The hard part is making the robot do what you want once you have the data from the processing.

If you process an image on the cRIO, you will want to do it once, then move the robot and shoot. Or maybe process again to verify, and then shoot. Doing 30 frames per second is not easy on the robot, and it isn't needed unless you are shooting while strafing and spinning.

If you process on the laptop, you have more CPU power and your approaches widen: you can use more edge detection, larger images, and less efficient code. Once you have the data, though, you have to get it back to the robot.

Whichever language you use, I'd first go through its tutorials and learn the problem. The solution, whether on or off the robot, is quite similar.

Greg McKaskle

Are you allowed to have image processing on the laptop? I thought there was a rule that only the robot can do image processing, or maybe that was for something else :confused:

Did you actually see the rule, or did someone tell you about it? I believe you were misinformed. There are many ways to do image processing.

Greg McKaskle

We are somewhat interested in this as well. More specifically, is there a white paper or reference that describes how to do processing on your driver station? What we don't understand is how to transmit the data from the driver station back to the cRIO once the driver station has processed the image. Any document that describes the nuts and bolts of that packet communication would be appreciated here.

You don’t need to send the entire image back to the cRIO, only what you learned from the processing. For example, if you did your vision processing and determined the target was on the left, you would simply tell the robot to turn left. Just like any other control.

The intent is for teams to share data between the robot and the dashboard (or another program) using SmartDashboard. I believe the 2013 white paper discusses this, but I'm not sure whether the demo code uses it. I know the LV code doesn't show how to transmit back, but it is quite easy this year.
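
As a rough sketch of that pattern in Java (the table key "targetOffset" and the steering gain are made up for illustration, and the exact NetworkTables method names have shifted between WPILib versions, so check yours):

```java
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class VisionLink {
    // Dashboard side: publish only the result of the processing,
    // e.g. how far off-center the target is, in degrees.
    public static void publish(double offsetDegrees) {
        NetworkTable.getTable("SmartDashboard").putNumber("targetOffset", offsetDegrees);
    }

    // Robot side: read that one number back and steer toward the target.
    // The robot never sees the image, only the processed result.
    public static double steerCommand() {
        double offset = NetworkTable.getTable("SmartDashboard")
                .getNumber("targetOffset", 0.0); // 0.0 until something is published
        return 0.02 * offset; // simple proportional turn
    }
}
```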

Greg McKaskle

In the past, vision processing has mostly been done entirely on the robot. Last year, doing the tracking on the driver station laptop became really popular. Our team did our tracking on the driver station last year, and this year we've made improvements and are tracking on the laptop again. It's driving the robot and working ten times better than when we tracked on the robot.
One thing we had trouble with was getting the "target data" array to the robot. We ended up using a DCP port (port 1130) to send the array to the cRIO. I have attached the two VIs we used (I downloaded them from another thread). You'll use "DCP Send.vi" in the dashboard program and "DCP Receive.vi" on the robot.
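
For anyone doing the same thing in Java rather than LabVIEW, the sending side is just a plain datagram socket. This is a sketch, not the attached VIs; the robot IP and the payload layout here are made up for illustration:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class TargetSender {
    public static void main(String[] args) throws Exception {
        // Pack the target data however your robot code expects it;
        // a comma-separated string is the easiest to debug.
        byte[] payload = "1,12.5,-3.2".getBytes("US-ASCII");

        // 10.TE.AM.2 is the usual cRIO address; substitute your team's.
        InetAddress robot = InetAddress.getByName("10.0.0.2");
        DatagramSocket socket = new DatagramSocket();
        socket.send(new DatagramPacket(payload, payload.length, robot, 1130));
        socket.close();
    }
}
```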

I hope this helps!

UDP Receive.vi (6.67 KB)
UDP Send.vi (6.86 KB)


Ok, neat-o. I wondered what SmartDashboard was. I haven't dug into it yet, but I understand the concept: it sounds like it's a gateway. That should make it nice. I'll read up on SmartDashboard. Thanks, Greg.

I think you accidentally merged TCP and UDP there. The files you attached are fine for UDP communication, though.

The SmartDashboard communication library (and the underlying NetworkTables support) makes doing it that way unnecessary, but the VIs will still work.
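
For completeness, the robot-side UDP receive in Java is about as short. Again, just a sketch, assuming the comma-separated payload from the sender sketch above:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class TargetReceiver {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(1130); // same port as the sender
        byte[] buffer = new byte[256];
        DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
        socket.receive(packet); // blocks until a packet arrives
        String data = new String(packet.getData(), 0, packet.getLength(), "US-ASCII");
        String[] fields = data.split(",");
        System.out.println("Got target data: " + data + " (" + fields.length + " fields)");
        socket.close();
    }
}
```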

Team 341 (Miss Daisy) released their image processing code from last year as a paper: http://www.chiefdelphi.com/media/papers/2676

It ran as a SmartDashboard extension, used OpenCV to process the images, and sent the information back to the robot using NetworkTables.
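
Their paper has the details, but the rough shape of such a pipeline in OpenCV's Java bindings looks something like this (a sketch, not Team 341's actual code; the HSV thresholds and minimum blob size are placeholders you'd tune for your camera and lighting):

```java
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

public class TargetFinder {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

    // Returns bounding boxes of bright-green blobs (retroreflective tape
    // lit by the camera's LED ring). Threshold values are placeholders.
    public static List<Rect> findTargets(Mat bgrFrame) {
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);

        Mat mask = new Mat();
        Core.inRange(hsv, new Scalar(40, 100, 100), new Scalar(80, 255, 255), mask);

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        List<Rect> targets = new ArrayList<>();
        for (MatOfPoint c : contours) {
            Rect box = Imgproc.boundingRect(c);
            if (box.area() > 500) { // ignore small noise blobs
                targets.add(box);
            }
        }
        return targets; // publish the box positions, not the image
    }
}
```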