Camera Targeting

We started playing around with our camera. After a bit of configuring we got the hardware set up and ran the example VI ‘Rectangular Target Processing’. We got the image to pop up (much to our joy) and after changing some settings we were able to lock on to the reflective tape. There was no noticeable lag and we were feeling pretty good. Of all the information given by the camera, we especially liked that it gave an accurate reading of the distance from the camera lens to the reflective tape and backboard.

We then decided to implement the example in some basic framework by really just copying and pasting. We wanted to see how the values changed and reacted if the camera was on a driving robot versus a stationary one. Once we got the example plugged in correctly, we ran the code. Unfortunately, there was a large delay, something close to 10 seconds. By contrast, there was no delay in the camera image feeding directly into the Driver Station.

We really only want the distance feature. Everything else is nice, but we wouldn’t use it in competition. We figure that if we remove all the unnecessary vision code, we can probably minimize or remove the lag. At first we started taking away small chunks and then checking to see if the distance feature was still working. This, understandably, didn’t go too well after a little while.

We were wondering if anyone knew the best way to cut everything back so we only get the distance reading. In addition to the lag issue, we don’t like having all this extra code that we aren’t using. Any thoughts? (particularly asking Mr. McKaskle…)

It sounds like you probably copied the PC-specific code to the cRIO and are running the MJPEG code that is wrapped up in a single icon.

There is a tutorial on incorporating vision. It basically tells you to open the RT-specific version of the example, try it out, then open the Vision Processing VI and perform a Save As to move the files to your robot project. This technique will avoid having a project that points to old stuff and will use the RT-specific way of getting images from the camera.

Post back if you still have issues.

Greg McKaskle

Try using the rectangular vision example found under the FRC examples. It gives you the x and y values along with the distance.

I may be mistaken here (I messed with vision but have had to backburner it to solve other problems), but I believe the bulk of the image processing is in two functions: the convex hull operation and the edge detection.

To my understanding, the edge detection is what finds the numbers that the distance, X/Y coordinate, etc. functions use, and those functions are little more than some quick math. So you can’t pull out edge detection.
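Just to illustrate the kind of quick math I mean, here is a rough Python sketch of one common way to turn an apparent target width into a distance. This is not the actual LabVIEW code; the field-of-view angle, image width, and target width below are made-up values for illustration.

```python
import math

IMAGE_WIDTH_PX = 320        # assumed width of the processed image, in pixels
HORIZONTAL_FOV_DEG = 47.0   # assumed horizontal field of view of the camera
TARGET_WIDTH_FT = 2.0       # assumed real-world width of the reflective-tape target

def estimate_distance(target_width_px):
    """Estimate distance (ft) from how wide the target appears in the image."""
    # At the target's range, the full image spans 2 * distance * tan(FOV/2) feet,
    # and the target covers target_width_px / IMAGE_WIDTH_PX of that span.
    # Solving that relation for distance gives:
    fov_rad = math.radians(HORIZONTAL_FOV_DEG)
    return (TARGET_WIDTH_FT * IMAGE_WIDTH_PX) / (
        2.0 * target_width_px * math.tan(fov_rad / 2.0))

if __name__ == "__main__":
    # e.g. a target that appears 80 pixels wide
    print(estimate_distance(80))
```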

The other thing to look at pulling out would be the convex hull operation. I believe edge detection works better with it, but it might work without it, so you might be able to pull that out.

I think all the other functions are just quick math; you could pull them out, but I don’t think it would do much to solve your problem.

The other possibility is to do the vision processing on the driver station and send data back to the robot via either TCP or UDP.
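If you go that route, something like the sketch below would do it. This is just an illustration assuming plain UDP and a small JSON payload; the port number and message format are made up, and only the distance (and maybe x/y) actually needs to cross the wire.

```python
import json
import socket

ROBOT_IP = "10.0.0.2"   # stand-in for the robot's actual 10.TE.AM.2 address
PORT = 1130             # any free port both ends agree on

# Driver-station side: send one small packet per processed frame.
def send_result(sock, distance_ft, x_px, y_px):
    payload = json.dumps({"dist": distance_ft, "x": x_px, "y": y_px}).encode()
    sock.sendto(payload, (ROBOT_IP, PORT))

# Robot side: drain whatever has arrived and keep only the newest packet,
# so the robot never acts on stale data.
def receive_latest(sock):
    sock.setblocking(False)
    latest = None
    try:
        while True:
            data, _addr = sock.recvfrom(512)
            latest = json.loads(data.decode())
    except BlockingIOError:
        pass
    return latest
```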

I hope this helps, good luck getting everything working!

Close, but let me restate it a little bit. The key operations are the threshold (color or brightness), the convex hull, and the particle analysis. The example does not do edge detection, though localized edge detection of the particle on the original monochrome image may improve depth information slightly and may provide more info about the location on the field.

These key operations also tend to be the expensive ones, along with the image decoding from JPEG. I believe that convex hull tends to be the most expensive operation, with threshold next and decoding next, but the cost of each is somewhat dependent on the data in the image and the parameters on the calls.
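To make the pipeline concrete, here is a rough sketch of those same three operations written with OpenCV in Python. It is not the LabVIEW example itself; the threshold limits and the particle size cutoff are made-up values, and it assumes OpenCV 4.x.

```python
import cv2

def find_targets(bgr_frame):
    # 1. Threshold on color/brightness: keep only the bright, green-ish pixels
    #    coming off the illuminated reflective tape. (Limits are illustrative.)
    mask = cv2.inRange(bgr_frame, (0, 180, 0), (255, 255, 255))

    # 2. Convex hull: close the hollow rectangle of tape into a solid blob.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x API
    hulls = [cv2.convexHull(c) for c in contours]

    # 3. Particle analysis: measure each blob and drop the small noise.
    targets = []
    for hull in hulls:
        if cv2.contourArea(hull) > 500:          # ignore tiny particles
            x, y, w, h = cv2.boundingRect(hull)
            targets.append({"x": x, "y": y, "width": w, "height": h})
    return targets
```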

Greg McKaskle

Okay, this makes sense. Instead of just copying the code in, we imported the VI. This reduced the lag from ten seconds to two or three. This is manageable, and we are pretty sure we found a way to reduce it a little bit more. Thank you (as usual), Mr. McKaskle!

Glad to have helped, but that is still lots of lag. You should have less than a quarter second. If you open a panel or probe, you will notice them being a bit choppy, and they will have more lag, but not several seconds. To send the panel or the processed-image view to the PC, the cRIO has to compress and transmit the image on each update, and this will slow it down. It will run faster with complex panels and probes closed. At that point, you can view simple displays such as the Elapsed Time subVI or the distance or target location and see if that is truly lagged by several seconds. Also, your dashboard should not be lagging by more than 150 ms or so.
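If it helps, the check I have in mind looks roughly like the pseudocode-style sketch below: time each pass of the loop yourself, with the displays closed, so you can tell real processing time apart from slowdown caused by shipping images back to the PC. The grab_frame and process_frame names are just placeholders for whatever your loop actually does.

```python
import time

def timed_loop(grab_frame, process_frame):
    while True:
        start = time.monotonic()
        frame = grab_frame()           # fetch and decode a frame from the camera
        result = process_frame(frame)  # threshold / convex hull / particle analysis
        elapsed_ms = (time.monotonic() - start) * 1000.0
        # With panels and probes closed, this should be well under 250 ms per frame.
        print(f"frame processed in {elapsed_ms:.1f} ms -> {result}")
```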

If this doesn’t match what you are seeing, let’s figure out what is going on. Perhaps post some code.

Greg McKaskle

All we did was “Insert VI” and then plopped the Rectangular Vision Processing example in. When I probed some things, everything seemed to be slowed down. To reduce the complex panels, we disabled the original image and processed image that showed up on the Front Panel. Unfortunately, once we did that we couldn’t tell from the display when a target was being tracked; we could only guess based on when it gave a distance. Judging by that, though, the lag seemed to go away to the point where it wasn’t noticeable.

We have never had camera lag on the Dashboard.

Okay, we did some more testing with a moving robot. The delay actually is noticeable. We think it is a little over a second.