Vision - How to deal with really bright light in reflections

This year my team is considering using vision to aim the boulders at the high goals. If we do, this will be the first year we use vision to aim like this, although I’ve played around with vision in OpenCV before.

I’ve downloaded the sample images of the field and tried to build a pipeline in GRIP that can pick out the goals, with some success. However, I’m having trouble with some shots where the reflection off the tape is so bright it appears almost white. Unfortunately, these are the shots where the robot is facing the tape head-on, which seems like the most likely case when you’re actually trying to line up a shot.

I have a few ideas for things that can mitigate the problem, and I was wondering if anyone with more experience could weigh in:

  • Doing two thresholds, one for green and one for white, and taking the bitwise OR of the two. I’ve tried this and it works ok, but then it includes many white lights in the background as well. I was thinking it might be possible to filter these out later in the process using some other criteria. For example, since the goals are U-shaped they should have a relatively low area compared to their convex hulls.
  • Turning down the sensitivity of the camera. Would this fix it easily, or were the sample pictures already taken at the lowest sensitivity? What about using a different camera? Are the Microsoft or Axis cameras better at this?
  • Dimming the lights on the robot. Could the LEDs be dimmed using PWM, by feeding them a lower voltage, or just by covering half of them up?

It seems like this problem comes back every year, so I’m curious to hear how teams that frequently use vision have handled it.

Maybe use a different color of LED? I know the sample images were taken with green LEDs, so you would have to change your thresholds. You could also adjust the settings on your camera. I’d recommend getting your own retroreflective tape for testing in your build area. Be aware that lighting varies from competition to competition, so you may have to adjust your thresholds at each event.

Not sure how helpful my comment is going to be, but try switching color spaces from RGB to HSV. It gives you a different way to extract data from the image, and if you’re familiar with OpenCV it’s a pretty simple task.

My suspicion is that you will be able to drop your exposure much lower than in the samples provided. The whiteness, as you suspected, is likely due to overexposure. Also, the setup used to take the samples has two light rings rather than one, which is also likely contributing to the overexposure.

Also, test it in as many environments as you can: in the gym, outside, under stage lights, etc. Try to mimic the actual playing field (e.g. the diamond plate at the bottom of the player station can totally throw things off).

Finally, if you opt to use a different color LED, beware of the tower and castle light color. These may affect the reliability and performance of your vision processing significantly.

As others have indicated, underneath it all you are encountering saturation, and therefore probably blooming, on the sensor. There are a few ways to deal with this; all of the options below bring the intensity in that part of the image down to a more reasonable level.

Key point: since these are retroreflectors, they will reflect light back to the source most effectively. While your illumination is the dominant source in the environment, there are others.

So, potential approaches:

  1. Decrease illumination time or intensity, or the camera exposure time, as suggested.

  2. Decrease the image intensity by picking an illumination wavelength where the sensor has lower quantum efficiency, like purple, red, or infrared.

  3. Provide a filter for the input to the camera that matches your illumination color (a piece of green plastic in front of the lens, if you are using green illumination).

  4. Use a polarization filter like these: http://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=7081 (Full disclosure: I am a Thorlabs employee.) If you experiment with the retroreflective tape, you should find a combination of input and output polarization angles that cuts out a lot of the specular reflection coming off the tape and other surfaces. Set up a polarizer over the source in one direction and rotate another polarizer in front of your camera to find the optimal signal. Retroreflectors change the polarization of the light, so if you illuminate with one polarization, the signal coming back at you will have a different polarization. Specular reflections don’t affect polarization.

Thanks for all the replies. I wasn’t aware that the samples were taken using two light rings, that’s reassuring. Hopefully pretty soon I can get access to some actual lights and tape and see how it goes.

You might want to look at the tutorials from Roborealm from last year’s competition:
http://www.roborealm.com/FRC2015/index.php