I’ve been using a ring light mounted on our camera and can successfully run the Rectangular Target Processing example. I integrated it into our code and it works like a charm. However, yesterday I worked in a room with different lighting. The camera could still see the rectangle, but I had to re-calibrate (by clicking the color to process in the image) to get it to look for the correct rectangle.
What will happen at the regional competition? If the lighting is different, how will we be able to re-calibrate before getting on the field?
It’s not unheard of for teams to be granted a short period sometime before qualification matches begin where they can place their robot on the field and do vision calibration.
If you plan well, you can capture some camera images during your first practice match or two and use them off the field as calibration test images.
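One way to collect those test images is a small frame-dump helper that runs during a practice match. This is a hedged sketch, not part of the example code: `grab_frame` is a hypothetical callable you would wire to your own camera source (for instance, JPEG bytes fetched from an IP camera's snapshot URL), and the count/interval values are arbitrary.

```python
import time
from pathlib import Path

def save_practice_frames(grab_frame, out_dir, count=20, interval_s=0.5):
    """Grab `count` frames via the supplied callable and write each one
    to a timestamped .jpg file for later off-field calibration tests.

    grab_frame -- zero-argument callable returning one frame as bytes
                  (hypothetical; adapt to however your camera is read)
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    saved = []
    for i in range(count):
        data = grab_frame()
        # Millisecond timestamp plus index keeps filenames unique and sortable.
        name = out / f"frame_{int(time.time() * 1000)}_{i:03d}.jpg"
        name.write_bytes(data)
        saved.append(name)
        time.sleep(interval_s)
    return saved
```

Back at the pit you can replay these files through your vision code instead of a live camera, so calibration tweaks don't require field access.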
Check how your camera’s white balance is set. If it is set to Auto, the camera runs an algorithm that samples the image from time to time and tries to return a balanced image. For security or photography that is a fine feature, but it changes the actual colors returned to your vision code. You probably want to turn it off and stick with a fixed white balance setting, or use the Hold setting. Exposure, the amount of light the camera receives, also affects color, though to a lesser degree; you may need to calibrate and hold that as well.
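If you read the camera through OpenCV rather than the camera's web page, locking these settings can be scripted. The sketch below is an assumption, not the example's own code: the numeric IDs are OpenCV's videoio property constants (`CAP_PROP_AUTO_WB`, etc.), and the 0/1 values for the auto modes, the 4500 K white balance, and the exposure value are illustrative defaults that vary by driver and camera.

```python
# Numeric property IDs matching OpenCV's videoio constants, spelled out
# here so the sketch is self-contained; with cv2 installed you would use
# cv2.CAP_PROP_AUTO_WB and friends instead.
CAP_PROP_EXPOSURE = 15
CAP_PROP_AUTO_EXPOSURE = 21
CAP_PROP_AUTO_WB = 44
CAP_PROP_WB_TEMPERATURE = 45

def lock_camera_settings(cap, wb_temperature=4500, exposure=-6):
    """Disable auto white balance and auto exposure, then hold fixed values.

    cap -- any object with an OpenCV-style set(prop_id, value) method,
           e.g. a cv2.VideoCapture. Returns {prop_id: success_flag}.
    """
    settings = [
        (CAP_PROP_AUTO_WB, 0),                  # 0 = manual WB on many drivers
        (CAP_PROP_WB_TEMPERATURE, wb_temperature),
        (CAP_PROP_AUTO_EXPOSURE, 1),            # manual-exposure mode (driver-dependent)
        (CAP_PROP_EXPOSURE, exposure),
    ]
    return {prop: cap.set(prop, value) for prop, value in settings}
```

Running this once at robot startup means every match starts from the same camera state, so a calibration done in the pit stays valid on the field.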
The most important lesson here is that lighting is not constant: test in different locations and learn how the camera reacts. You will either find camera settings that stay consistent, or you will get good at calibrating quickly and accurately. Either skill will pay off at the event(s).
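To make that click-to-calibrate step fast and repeatable, it helps to turn the sampled pixel into a threshold window programmatically instead of eyeballing it. This is a generic sketch, not the Rectangular Target Processing example's internals: it assumes OpenCV-style HSV ranges (hue 0-179, saturation/value 0-255), and the tolerance defaults are arbitrary starting points you would tune per venue.

```python
def hsv_window(sample, tol=(10, 60, 60)):
    """Build (lower, upper) HSV threshold bounds around a sampled pixel.

    sample -- (h, s, v) of the clicked pixel, OpenCV ranges (h: 0-179)
    tol    -- half-width of the window per channel (assumed defaults)

    The result is suitable for an inRange-style mask. Note: hues near the
    red wrap-around (0/179) are simply clamped here; handling the wrap
    needs two ranges and is left out for brevity.
    """
    (h, s, v), (dh, ds, dv) = sample, tol
    lower = (max(h - dh, 0), max(s - ds, 0), max(v - dv, 0))
    upper = (min(h + dh, 179), min(s + ds, 255), min(v + dv, 255))
    return lower, upper
```

With something like this, recalibrating under new lighting is one click: sample the target's color, regenerate the window, and save the bounds so they can be reloaded at the next event.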