Vision: Object Identification

First of all, I understand that this has been brought up a lot on this forum, and I apologize if I am being repetitive.

It seems that in all of the years I have participated in FIRST, a reliable camera tracking program that didn't slow anything down too much would be a metaphorical "golden gun". For instance, in 2009 it could have identified the pink and green markers, and in 2010 the circular goal markers. Now, with the introduction of the cRIO 2.0, this might be possible. I have read over all of the NI documents on vision tracking, and I have been our team's primary programmer for the past three years, but I still have no idea where to begin. Does anyone have any advice on how to get started, or some reading material besides the NI tutorials?

Thanks,

Patrick

BTW, we have the old Axis camera from '09. Is the new one significantly better?

The new camera is fixed focus, meaning it is never out of focus. It is a bit smaller, and it has some additional features that WPILib doesn’t take advantage of. Other than that, it is very similar in performance. In fact, I think that the older camera may have better color saturation than the newer one.

The cRIO-II is essentially the same speed as the older model, just smaller and cheaper. It does have more RAM, and the processor has more or better caching, but I have not seen dramatic differences in performance.

The reality is that vision processing is a valuable tool, but a hard problem domain. It works best when the scene being observed is highly controlled and/or predictable. It also benefits from high-resolution, high-speed cameras feeding into high-performance, dedicated processing. FRC has a basic camera, pretty limited processing power, and a very unfriendly field for vision – perhaps the floor could be covered in reflective diamond plate or Lexan as well :slight_smile: .

From your post, it isn't clear if your team has attempted to work through the NI examples and tutorials. Like other elements in the kit, they will not guarantee a golden gun, but they can be used to enhance your robot's performance for specific tasks. Plenty of YouTube videos show teams successfully using the camera and vision processing.

Please describe issues you have had in the past or ask specific questions, and I can help point you in the right direction.

Greg McKaskle

I did get a good vision algorithm working in a few hours for the dual-color flags, and it ran very fast on the cRIO. A co-worker and I got a working example last year as well, but we didn't have a team with a robot that could utilize it. We had a lot of tweaking to do, and it didn't run fast enough. Maybe this year's game will make the vision problem easier to solve.

/sourgrapes mode

Ever since 2007, people have asked for the lighted vision targets back. 2010 was a huge improvement with the white/black circles and was doable. In 2011, the gray retro-reflective tape placed on and around gray reflective aluminum was pretty unusable for most teams. Now we get dark gray reflective tape next to black, all placed on smoked Lexan with bright gray aluminum rectangles behind it.

I sometimes wonder if the FIRST GDC has a single controls/vision programmer left on the committee. :mad:

While I’m at it, someone tell the guy doing the prints that measurements for WOOD construction don’t go out to the second decimal place. XX.88 or XX.31 are not really valid dimensions for carpentry work…

/end sourgrapes mode

The tape is retro-reflective. Attach some bright LEDs to your robot to light them up in any color you want, and you can have some very bright colored rectangles that are easily seen by the camera at any angle. If you choose something other than white/gray/silver/black, like red or green, it's simple work to extract that color and analyze the rectangles that result.
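To make the "extract that color" step concrete, here is a minimal sketch of the idea. It uses OpenCV in Python purely as an illustration, not the NI/WPILib code discussed in this thread, and the HSV bounds are made-up values you would tune for your own LEDs and camera settings:

```python
# Hypothetical color-masking sketch (OpenCV 4.x), not the NI/WPILib code.
import cv2
import numpy as np

def find_target_rectangles(bgr_frame):
    """Return bounding rectangles of blobs matching the lit tape color."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)

    # Keep only pixels near the ring-light color; these bounds are
    # placeholders for a green light and must be tuned on real images.
    lower = np.array([50, 100, 100])
    upper = np.array([90, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)

    # Pull out connected blobs and their bounding rectangles.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) > 200]  # drop tiny noise blobs
```

The area cutoff is just one way to reject stray glints from the diamond plate and Lexan mentioned above; aspect-ratio checks on the resulting rectangles work well too.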

There should be a vision white paper on the NI site later today, and it is currently on FIRSTFORGE in the documents section. As mentioned, the key to the retro-reflectors is to have a reasonably good ring light on the robot. If you do that, you have your lit targets.

Please work through the discovery experience described in the white paper and let me know if you still feel the same way.

Greg McKaskle

Greg, which documents section? I have access; I looked in the 2012 Beta and Community projects and didn't see it.
EDIT: Never mind, it is on the NI site now, I did not hit refresh :-/

Thanks

BTW, we did not have issues in either Lunacy or Breakaway using vision, though the Java libraries were a bit behind C++, IIRC. Most of the time I saw people trying to use too high a resolution and too high a frame rate. 320x240 at 10 fps was plenty and did not bog things down.
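For reference, here is a sketch of requesting a low-resolution, low-rate stream straight from the camera. The URL uses Axis's documented VAPIX CGI parameters, but the IP address is a placeholder, and I'm illustrating with Python/OpenCV rather than WPILib:

```python
# Hypothetical example: pull 320x240 MJPEG at 10 fps from an Axis camera.
import cv2

CAMERA_URL = ("http://192.168.0.90"          # placeholder camera address
              "/axis-cgi/mjpg/video.cgi?resolution=320x240&fps=10")

cap = cv2.VideoCapture(CAMERA_URL)
while cap.isOpened():
    ok, frame = cap.read()      # one 320x240 frame, arriving at ~10 fps
    if not ok:
        break
    # ... run the vision processing on `frame` here ...
cap.release()
```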

I agree that it is better to run the camera resolution and framerate at the values needed to control the robot, and not much higher. The paper does a little analysis of the distance beyond which the targets can no longer be processed for a given resolution. It doesn't do anything with rate, but this year I suspect both target and robot will be stationary in most situations.

Greg McKaskle

What color of LEDs should one use? I believe IR was used in the vision example from last year.

The white paper last year used a string of indoor LED Christmas lights inserted into foam. They were red LEDs. This year, the paper points out some products from SuperBrightLEDs.com, which, while designed for cosmetic purposes, work quite well as ring lights. I've tested green, red, amber, and blue. All colors can be made to work, provided you set the white balance and exposure so that the reflected colors are very saturated.

If you choose to avoid the color masking and simply go with a brightness threshold to form the mask, it doesn’t matter what color you use, and you can even mix colors or use a white light. The LV example vision code for 2012 shows both color and brightness approaches. For brightness, it also sets the camera to disable color, receiving a monochrome image directly from the camera.
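In the same illustrative spirit (Python/OpenCV, not the LV example code), the brightness approach reduces to a single threshold; the cutoff value below is a placeholder to tune against your exposure settings:

```python
# Hypothetical brightness-threshold sketch, not the LV example code.
import cv2

def brightness_mask(frame, cutoff=200):
    """Binary mask of pixels bright enough to be the lit retro-tape."""
    # If the camera already sends monochrome, skip this conversion.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, cutoff, 255, cv2.THRESH_BINARY)
    return mask
```

The resulting mask feeds the same rectangle analysis as the color approach, which is why the light color stops mattering.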

Greg McKaskle