My auto aim system depends on specific lighting, how to fix?

I just recently finished the auto-aim system for our robot (written in Java). My problem is lighting: I wrote the system to capture the reflective-tape values, cull all other values, and produce a binary image. I expected everything to work perfectly when we brought the robot over to another school for testing, until I realized lighting is a huge factor in my code. The brighter lighting at the school caused my auto-aim system to fail completely, no matter where I was.

How can I remove the ‘lighting’ factor from my auto-aim function? I want the function to work the same regardless of the lighting in the room. Thanks!

The short answer is that you can’t. You can minimize the effect by overexposing your camera so that the view is quite dark all the time. Do that by shining a bright light into the camera while in its HTTP interface and setting it to hold those values.

That is why some regionals will give teams time on the field to calibrate to the lighting on the field. Unfortunately, you’ll find that even the lighting differences between the opposite ends of the field can create problems.

In my experience, different lighting at different locales is about 90% of the problems most teams experience with vision.

You can try adaptive rather than global thresholding.

http://homepages.inf.ed.ac.uk/rbf/HIPR2/adpthrsh.htm
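A minimal sketch of that idea in plain Java (arrays rather than any particular vision library; `adaptiveThreshold` and its window/offset parameters are names made up for this example): each pixel is compared against the mean of its own neighborhood instead of one global cutoff, so a lighting gradient across the image doesn’t push every pixel past a single fixed value.

```java
public class AdaptiveThreshold {
    /**
     * Binarize a grayscale image by comparing each pixel to the mean of
     * its (2*radius+1)^2 neighborhood plus a small offset c.
     * Pixels brighter than their local mean + c become 255, others 0.
     */
    public static int[][] adaptiveThreshold(int[][] gray, int radius, int c) {
        int h = gray.length, w = gray[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                long sum = 0;
                int count = 0;
                // Mean of the local window, clipped at the image border.
                for (int dy = -radius; dy <= radius; dy++) {
                    for (int dx = -radius; dx <= radius; dx++) {
                        int yy = y + dy, xx = x + dx;
                        if (yy >= 0 && yy < h && xx >= 0 && xx < w) {
                            sum += gray[yy][xx];
                            count++;
                        }
                    }
                }
                double localMean = (double) sum / count;
                out[y][x] = gray[y][x] > localMean + c ? 255 : 0;
            }
        }
        return out;
    }
}
```

A lit tape target then passes the test on both the dim and bright sides of the image, where a global threshold tuned in your shop would fail at the venue.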

Setting both the exposure and white balance to hold values should minimize variations between different lighting situations. It’s not something that can be eliminated entirely though.

Some random vision tips:

-You really need a light on the robot to see the retroreflective tape. It has to be fairly bright and/or focused, and you have to know its color (the exact color is not important)

-You would then calibrate the thresholds around the color of the light, and overexpose the image to adjust the auto-exposure, then set it to hold.

-Do you really need vision?

Hey, I had this issue last year and did two things. First, last year we had our vision system save images from the field so we could tune it between matches. Then this season we built a vision system that doesn’t rely on absolute thresholds.

The full source code is here:
https://github.com/KBotics/2809_vision2013

and I wrote an explanation on my website:
http://kevinhughes.ca/2013/02/21/frc-ultimate-ascent-vision-system/

Is it better to have a colored LED array, or is white sufficient?

Both will show up in the image just as well, but white is a very common color; fluorescent lights, sunlight, etc. will add noise that can’t be eliminated by thresholding. Vivid green, on the other hand, rarely appears with very high saturation values except when a green light bounces off a reflective surface.

+1000

As mentioned earlier, the camera defaults to use adaptive exposure and white balance. While useful for some situations, when you want to ignore most of the light except for what you are providing, I’d recommend calibrating the camera and turning off the adaptive options.

You can control this in code or from the web page. I’d recommend logging into the camera web page, setting the color to Fluorescent 1 or another fixed setting, and for the exposure, expose the camera to a bright light for a few seconds, then set it to hold. Feel free to look at other camera settings.

This should help a lot. Feel free to post photos of what the camera is taking with the ring light and we may be able to help more. Also, it is important to use multiple criteria for selecting the target. If you use a single measurement it will be far less tolerant of noise. Two measurement areas covered in the white paper are shape and color/brightness.

White light is much more common, so most of the work will fall to shape measurements. A colored LED ring won’t guarantee uniqueness either, so I wouldn’t eliminate the shape tests, but you will typically have far fewer candidates to consider.
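As a sketch of combining criteria in Java (`looksLikeTarget` and the numeric limits here are hypothetical; you would tune them to the actual target dimensions): score each candidate blob on more than one independent measurement, so a stray reflection has to pass every test rather than just one.

```java
public class TargetTest {
    /**
     * Accept a candidate blob only if it passes several independent shape
     * tests: a bounding-box aspect ratio near the target's known ratio, and
     * a plausible fill ratio (a hollow tape rectangle fills only part of its
     * bounding box; a solid glare blob usually fills nearly all of it).
     */
    public static boolean looksLikeTarget(int boxWidth, int boxHeight,
                                          int pixelCount) {
        double aspect = (double) boxWidth / boxHeight;
        double fill = (double) pixelCount / (boxWidth * boxHeight);
        boolean aspectOk = aspect > 1.2 && aspect < 2.0; // wide-ish target (made-up limits)
        boolean fillOk = fill > 0.15 && fill < 0.6;      // hollow rectangle (made-up limits)
        return aspectOk && fillOk;
    }
}
```

Each test alone is weak, but a blob that is simultaneously the right shape, the right fill, and the right color is very unlikely to be an overhead light.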

Greg McKaskle

Something else that might help is changing the color representation from Red-Green-Blue (or Blue-Green-Red, if you’re an OpenCV user) to Hue-Saturation-Value.

Thinking about it geometrically, RGB (or BGR) maps colors onto a cube, with the corner at the origin being black and the corner diagonally opposite it being white. HSV, on the other hand, maps colors onto a cylinder: hue (the type of color, i.e. red, green, blue) is the angle around the cylinder’s axis, saturation (how vivid the color is) is the distance from the axis, and value (how light or dark the color is, i.e. how close the pixel is to black) is the height. If you threshold on a narrow hue range but allow a wide value range, you get a representation of the color you’re looking for that is far less sensitive to lighting differences. Just be sure to convert back to your original color representation if later stages expect it, or you’ll run into problems there.
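Java’s standard library can do this conversion for you (`java.awt.Color.RGBtoHSB`, no OpenCV needed), so a sketch of a lighting-tolerant pixel test, narrow hue band, wide value band, might look like the following. The thresholds (0.4 saturation, 0.1 value) are illustrative numbers, not calibrated values:

```java
import java.awt.Color;

public class HsvThreshold {
    /**
     * Classify a pixel by hue (narrow band around the target color) and
     * saturation, while accepting almost any brightness (value) -- which is
     * exactly the component that lighting changes the most.
     * RGBtoHSB returns hue, saturation, brightness, each in [0, 1].
     */
    public static boolean isTargetColor(int r, int g, int b,
                                        float targetHue, float hueTol) {
        float[] hsb = Color.RGBtoHSB(r, g, b, null);
        float dh = Math.abs(hsb[0] - targetHue);
        dh = Math.min(dh, 1.0f - dh);    // hue is an angle, so it wraps around
        return dh < hueTol               // narrow hue band
            && hsb[1] > 0.4f             // reasonably saturated
            && hsb[2] > 0.1f;            // wide value range: almost any brightness
    }
}
```

With a green ring light, `targetHue` would be around 1/3 (green in RGBtoHSB’s 0–1 hue scale); the same pixel test then accepts both a dim and a brightly lit view of the tape.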

I hope this helps; I’d be more than willing to answer any questions you may have on top of this.

What we have learned from our vision experience is that it is a luxury. If you don’t really need it, you shouldn’t use it.

But on the other hand, if you do need it:

1: Calibrate all settings before the match during your time on the field. This includes white balance, exposure, and all the values previously mentioned.

2: Find your threshold. The broader it is, the more it recognizes. If you have a vivid green LED as mentioned earlier, you can use a reasonably large threshold ((I don’t know values off hand… sorry)); just note that the larger the threshold, the more it sees.

3: If you are having issues, you may wish to add some filtering. We found that many little specks appear in our binary image, so we made a size filter.
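A sketch of such a size filter in plain Java (`removeSmallBlobs` is a hypothetical name; a real pipeline would use its library’s particle-report or contour-area functions): flood-fill each white region of the binary image and blank out any region smaller than a minimum pixel count.

```java
import java.util.ArrayDeque;

public class SizeFilter {
    /**
     * Remove connected white (255) regions smaller than minArea pixels
     * from a binary image, in place. Uses a 4-connected BFS flood fill.
     */
    public static void removeSmallBlobs(int[][] bin, int minArea) {
        int h = bin.length, w = bin[0].length;
        boolean[][] seen = new boolean[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (bin[y][x] != 255 || seen[y][x]) continue;
                // Collect one connected component starting from (y, x).
                ArrayDeque<int[]> queue = new ArrayDeque<>();
                ArrayDeque<int[]> blob = new ArrayDeque<>();
                queue.add(new int[]{y, x});
                seen[y][x] = true;
                while (!queue.isEmpty()) {
                    int[] p = queue.poll();
                    blob.add(p);
                    int[][] nbrs = {{p[0] - 1, p[1]}, {p[0] + 1, p[1]},
                                    {p[0], p[1] - 1}, {p[0], p[1] + 1}};
                    for (int[] n : nbrs) {
                        if (n[0] >= 0 && n[0] < h && n[1] >= 0 && n[1] < w
                                && bin[n[0]][n[1]] == 255 && !seen[n[0]][n[1]]) {
                            seen[n[0]][n[1]] = true;
                            queue.add(n);
                        }
                    }
                }
                if (blob.size() < minArea) {     // too small: erase the speck
                    for (int[] p : blob) bin[p[0]][p[1]] = 0;
                }
            }
        }
    }
}
```

Run it on the binary image between thresholding and target selection; noise specks of a few pixels disappear while the tape blobs survive.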

PM me, or add me on Skype if you have one. My team and I would be more than happy to help with anything we can.

I use IR lights with the Kinect, 25 IR LEDs to be exact, then threshold. Works beautifully.