Dealing with bright lighting and vision tracking at competitions

This weekend we competed at the North Star regional and will be progressing to Championship. Throughout the regional, however, we struggled to get vision tracking on the tower working: other lighting at the event reflected off the tower and conflicted with our green LEDs, despite tweaking of our HSV filter. Currently we run an HSV filter on the image from our camera and then compare the area of each particle found by NIVision to the area of its convex hull, to eliminate solid lights in the background. Is there anything we could do to improve our recognition, especially under the lighting at Championship?
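
For illustration, the particle-area vs. convex-hull-area comparison described above might look like the following stdlib-only sketch (this is not the team's NIVision code; the U-shaped test particle and its dimensions are made up). A solid background light fills its hull almost completely, while a U-shaped target outline leaves most of its hull empty:

```python
# Sketch: reject solid blobs by comparing filled area to convex hull area.

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(hull):
    """Shoelace formula."""
    n = len(hull)
    s = sum(hull[i][0] * hull[(i + 1) % n][1] -
            hull[(i + 1) % n][0] * hull[i][1] for i in range(n))
    return abs(s) / 2.0

def fill_ratio(particle_pixels):
    """Particle area (pixel count) divided by convex hull area.
    Near (or slightly above) 1.0 for solid blobs, much lower for a U."""
    hull = convex_hull(particle_pixels)
    area = polygon_area(hull)
    return len(particle_pixels) / area if area else 1.0

# Hypothetical U-shaped particle: two uprights plus a bottom bar inside
# a 20x14 bounding box, versus a solid blob of the same bounding box.
u_shape = [(x, y) for x in range(20) for y in range(14)
           if x < 2 or x >= 18 or y < 2]
solid = [(x, y) for x in range(20) for y in range(14)]
```

A threshold somewhere between the two ratios separates the target from solid lights; the exact cutoff would need tuning on real images.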

Your flaw is relying upon an HSV filter (and only an HSV filter) to identify your target. That’s like saying any orange car is a Dodge Charger. What are some ways you can verify that it is actually a target?

As I said in my post, we are also comparing the convex hull area to the area of the particle found by the filter. I also forgot to mention that we have tried checking how close the particle is to a trapezoid, along with checking the aspect ratio. Our main problem is that even with our LED ring, the lighting at the competition can overpower our LEDs, leaving no green light visible in our images.

Based on past experience, I recommend using purple. There’s too much green/red/blue at competitions and around towers.

Sounds like you need to lower the exposure and/or brightness on your camera, or put a filter in front of it (some teams have used sunglasses in a pinch).

Other than sunglasses, what would you recommend to use as a filter? Should it be a particular color?

A “neutral density filter” would be the correct thing to reduce light entering the camera without modifying hue.

If you have an image, that would go a long way towards being able to make a good suggestion.

The LV example uses aspect ratio, moment of inertia, area/convex hull area, and an X and Y profile mask. None of these are expensive to calculate, and all of them help compare the analytical values expected for the U shape against the values computed from your image.

Greg McKaskle
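
For illustration, the per-measurement scoring idea above might be sketched like this (the expected values, a 20:14 aspect ratio and a roughly one-third fill ratio for the U outline, are assumptions for the 2016 target, not numbers taken from the shipped LabVIEW example):

```python
# Sketch: turn each measurement into a 0-100 score against an ideal
# target value, then average the scores to rank candidate particles.

def ratio_score(measured, expected):
    """100 when measured equals expected, falling off linearly to 0."""
    if expected == 0:
        return 0.0
    return max(0.0, min(100.0,
               100.0 * (1.0 - abs(measured - expected) / expected)))

def score_particle(aspect_ratio, area_over_hull):
    # Assumed ideals: target opening wider than tall (20:14), and the
    # U outline filling only about a third of its convex hull.
    scores = {
        "aspect": ratio_score(aspect_ratio, 20.0 / 14.0),
        "fill":   ratio_score(area_over_hull, 1.0 / 3.0),
    }
    scores["total"] = sum(scores.values()) / len(scores)
    return scores
```

A particle matching both ideals scores 100; a square solid light scores poorly on both measures, so a simple cutoff on the total rejects it.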

Exactly why we’re using orange.

Team 303 had a similar issue at our competition last week. I’m not convinced it’s the best way to solve this problem, but we just added more and brighter green LEDs until they overpowered the reflection off the glass. In the end we had three of these rings, concentric with each other.

https://www.superbrightleds.com/moreinfo/led-headlight-accent-lights/led-angel-eye-headlight-accent-lights-cob/1135/

They really are super bright; it’s pretty crazy. We also had a paper cup with white gaff tape on the inside to direct the green light. I couldn’t find a good picture. Here is the best I could do:

Google Photos

Here are a few images from the field. Our two main issues were cases like these two perspectives, where lighting reflecting off the tower would merge with or overpower the reflection from the LEDs, or the LED reflection would not be different enough from the surrounding tower to be distinguishable by HSV values.

2016-04-07-105446.png
2016-04-07-051957.png


You can program an exposure value. How are you implementing your camera and vision processing?

It also happens to match your color scheme!

We use IR LEDs, an Axis camera, and a strip of developed film to cover the lens. It works great: all the camera sees is the IR reflection. Midwest had a banner running right behind the goals that could have washed out some other methods, but our camera couldn’t even see it.

I believe you are allowed access to the playing field on day zero for camera calibration. At least, that’s what we’re doing. We use the green light ring too, and I’ve seen camera calibration be the difference between silver tape and green tape in the image. Changing the camera settings for the field should make a huge difference if your image filtering works fine at your practice field/workshop/etc.

We use a green ring light and a single HSV filter, then use GRIP to find contours, filter out ones that are too small, and publish a report to a NetworkTable. The system has worked really well everywhere but Iowa, where the light was just too bright for our camera.

At Iowa, we were having similar problems. There were very large windows on either side of the field, and pointing a camera at the goals from our low robot was impossible without getting them into frame. Even at the minimum exposure settings on the Axis camera, the windows were always what it based its exposure on. We share a shop with 3130, who use the LifeCam, and they were having similar problems.

At North Star, we had no problems, and neither did 3130. We both used green ring lights, and they were bright enough to get a very good picture back from the camera. If you want, I can get you in touch with a member from 3130, as I’m not sure what they did with the LifeCam to set it up.

Thanks for posting the images.

The images you posted have virtually no green in them because the colors are washed out by the bright lights; the tape is essentially the same as the tower’s gray colors. Despite what CSI shows like to claim, you can’t fix everything by tapping computer keys or waving your hand in front of your face. Image processing is best done on images that are in focus, have good contrast, and don’t contain lots of extraneous detail. For color images, contrast also means good saturation (not washed out).

If you compare them to the pictures that are part of the example code, you’ll see that those are darker. This is affected by the exposure and brightness settings on the camera, and, as others have mentioned, you can also use a neutral density filter (a gray plastic sheet) to block some of the light, similar to sunglasses. This will increase the color saturation and allow the HSV filter to mask properly, without including the brick and mortar of the tower as part of the target.

Also, retroreflective material returns light emitted near the camera lens back to the lens. So if you have mounted something else, like a flashlight, near the camera, you are diluting the green and washing it out.

Once you have a high-contrast, saturated image, the task is far easier, and the rest of your code should work much better. If you need more help setting up your camera, post the camera setup code or a description of it.

Greg McKaskle
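
To make the "washed out" diagnosis above concrete, here is a small illustrative check (stdlib only, not from any FRC example; the sample pixel values are made up). An overexposed frame has bright but desaturated pixels, so a hue/saturation mask has nothing to grab onto:

```python
import colorsys

def mean_saturation(rgb_pixels):
    """rgb_pixels: iterable of (r, g, b) tuples with components in 0-255.
    Returns the average HSV saturation, 0.0 (gray) to 1.0 (pure color)."""
    sats = [colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)[1]
            for r, g, b in rgb_pixels]
    return sum(sats) / len(sats)

washed_out = [(230, 245, 235)] * 100    # bright, nearly gray "tape" pixels
well_exposed = [(20, 200, 40)] * 100    # dark frame, strongly green tape
```

The washed-out pixels have saturation near zero even though they look "greenish" to the eye, which is exactly why the HSV threshold fails on them; dropping the exposure pushes the tape pixels toward the saturated case.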

We ended up using two pieces of polarizing filter material. Adjust the angle between the filters until you reduce the intensity enough to see the LED color in the target.
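
The reason rotating one filter works as a variable dimmer is Malus's law; a quick sketch (assuming ideal filters and unpolarized light from the field, which is an idealization):

```python
import math

def transmitted_fraction(theta_degrees):
    """Fraction of unpolarized light passing two ideal polarizers whose
    axes differ by theta: the first passes half, the second passes
    cos^2(theta) of what remains."""
    theta = math.radians(theta_degrees)
    return 0.5 * math.cos(theta) ** 2
```

So aligned filters pass about 50%, and rotating toward 90 degrees sweeps the transmission smoothly down toward zero, which is why the angle adjustment gives fine control over brightness.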

This worked great on the practice field, but on the field we shot high or low. The horizontal tracking was perfect; our ranging was off.
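
One common ranging approach that shows this failure mode (illustrative only, not necessarily what this team ran, and every constant below is an assumption): estimate distance from the target's vertical angle in the image with a pinhole model. If the camera's mounting height or pitch shifts between the practice field and the real field, horizontal aim stays fine but the distance estimate drifts, matching the "shot high or low" symptom:

```python
import math

# Assumed geometry -- placeholders, not measured values.
CAMERA_HEIGHT_M = 0.3      # lens height above the floor
TARGET_HEIGHT_M = 2.16     # height of the target center
CAMERA_PITCH_DEG = 35.0    # upward tilt of the camera

def distance_from_pixel_angle(target_angle_deg):
    """target_angle_deg: target's angle above the image center, derived
    from its pixel offset and the camera's vertical field of view."""
    total = math.radians(CAMERA_PITCH_DEG + target_angle_deg)
    return (TARGET_HEIGHT_M - CAMERA_HEIGHT_M) / math.tan(total)
```

Because the result divides by the tangent of (pitch + measured angle), even a couple of degrees of pitch error changes the range noticeably while leaving left/right aim untouched; re-calibrating the pitch on the actual field is the usual fix.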

Greg

We found these instructions for tuning our Axis camera settings very useful.

Definitely change your camera settings. After playing around with our Axis camera settings using a model of the goal, our images went from something like what you have to an image where everything except the brightest sunlight was too dark to be anything but black, and the reflection off the tape was a distinct shade and color unique to our LEDs. You would have to re-adjust the HSV values in your program after doing this, but it provides an easy way to reduce the number of potential targets in your image.
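
For the re-tuning step, an illustrative stdlib-only HSV threshold (this is not GRIP or NIVision code, and the range values are made-up placeholders that would need tuning on the darker image):

```python
import colorsys

# Hypothetical window for green LED reflections after lowering exposure:
# hue near green, high saturation, moderately bright. All values 0.0-1.0.
GREEN_RANGE = {"h": (0.25, 0.45), "s": (0.5, 1.0), "v": (0.4, 1.0)}

def in_range(rgb, ranges=GREEN_RANGE):
    """True if an (r, g, b) pixel (0-255 components) falls inside the
    hue/saturation/value window."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    return all(lo <= x <= hi for x, (lo, hi) in
               zip((h, s, v), (ranges["h"], ranges["s"], ranges["v"])))
```

On a properly darkened image, saturated green tape pixels pass this test while washed-out bright pixels fail on the saturation bound, which is the "fewer potential targets" effect described above.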