Curiosity question. Why do we only use green (and sometimes white) LED rings? Why not other colors? I get the intensity thing, but does it really make such a huge difference with cameras these days?
The other two RGB primaries (which you want to maximize for exposure purposes) are blue and red, which are the alliance colors and appear all over the field, so using them can heavily increase misdetections.
Don't know the exact validity of this, but I remember one of my mentors saying that green lights are also cheaper to make. However, this is probably not the reason FIRST teams use green lights.
From what I know, the reason people primarily use green LEDs is that the human eye has roughly twice as many green receptors as other colors. Cameras are modeled after this, meaning they have more green-sensitive pixels than other types. This means green is the easiest color for them to pick up.
We’ve seen no performance loss using blue to match our team’s primary color in our shop, though we’ve yet to use it on the field.
The reason frequently given is to avoid false positives from the abundance of red and blue on the field (bumpers and such), but if your only goal is to track the retroreflective tape, you should be setting your exposure low enough that this wouldn't matter anyway.
We’d like to experiment with infrared in the future to avoid having an overly bright light (to the human eye, anyway) altogether.
That is true, but that's more about detail. In this case, we care more about raw exposure and the image feed.
Our team uses an infrared LED ring for our vision processing. We got the idea from another local team who also uses infrared light for their vision processing.
Green would be a good choice to maximize SNR on silicon image sensors for two reasons. One is that the Bayer filter pattern means twice as many pixels receive green photons as either red or blue. The other is that silicon quantum efficiency peaks in the green spectrum.
If you haven’t noticed differences with other colors, try shortening your exposure time. You should be able to make it much shorter with a green LED and still maintain good SNR.
This was the reason that I had always assumed. If you look at how a CCD is laid out, it has twice as many green sensors as red or blue, to mimic how our eyes work. If you are trying to get visual data into a CCD, you would want to illuminate it with green, since it’s already designed to see that color best.
Most CCD cameras come with infrared filters, so you might need to take apart the camera before you get the results you want.
I do think this would be a good path to pursue.
Frankly, if you’re using a CCD on a robot, you’re both paying too much and about 8 years behind the times.
Modern active pixel CMOS image sensors often have the IR blocking built into the coverglass on the sensor package and sometimes have it built into the microlenses on the sensor itself. So, it’s generally not easy to remove.
@thatnameistaken You shouldn’t need an obnoxiously bright light to get FRC vision applications to work well. It’s more important to put the light in the right place and to adjust the exposure time correctly. If it’s really bothersome, just switch the light with an N-channel FET so it’s only on when you need it.
Aside from the stuff discussed above, red hue wraps around zero, so it spans two non-contiguous ranges in HSV color space; that's extra work for no real benefit.
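To illustrate: in OpenCV-style HSV (hue runs 0–179), red wraps around zero, so a red threshold needs two hue bands OR'd together, while green sits in one contiguous band. A minimal numpy sketch (the band edges are illustrative, not tuned values):

```python
import numpy as np

# OpenCV-style hue runs 0..179. "Red" wraps around zero, so a red
# threshold needs two ranges combined; green is one contiguous band.
def red_mask(hue):
    # hypothetical band edges; tune for your lighting
    return (hue <= 10) | (hue >= 170)

def green_mask(hue):
    return (hue >= 50) & (hue <= 70)

hues = np.array([0, 5, 60, 90, 175])
print(red_mask(hues))    # [ True  True False False  True]
print(green_mask(hues))  # [False False  True False False]
```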
@mrnoble I don’t know the HOW of this, but I think I can speak to the WHY of your question. The best explanation I heard came from team 1640 Sa-BOT-age explaining their vision processing steps. They use green LED rings because, as others stated, red and blue are alliance colors. When they take the image, they filter out the red and blue wavelengths, convert the image to monochrome (B/W), and contour around the bright areas (usually the reflective tape target being searched for). By focusing only on the green wavelengths, they reduce the possibility of a false positive around a red light, a blue light, or some aspect of the game field. The light isn’t just about illumination of the target, which is why white works but isn’t necessarily the best choice. I suppose you could use a non-visible wavelength (provided it’s allowed by the rules) and accomplish the same result by this method.
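A rough sketch of the pipeline 1640 described (keep only green, treat that as the monochrome intensity image, then binarize around the bright areas), run on a tiny synthetic frame; the pixel values and threshold here are made up for illustration:

```python
import numpy as np

# Hypothetical 4x4 RGB frame: one bright green "target" pixel and one
# bright red distractor (e.g. an alliance bumper).
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[1, 1] = (0, 255, 0)    # retroreflected green
frame[2, 3] = (255, 0, 0)    # red distractor

# Step 1: discard red/blue, keep the green plane.
green = frame[:, :, 1].astype(float)

# Step 2: the "monochrome" intensity image is now just the green plane.
# Step 3: binarize around bright areas (threshold is arbitrary here).
mask = green > 128

print(mask.sum())  # 1 — only the green target survives; the red distractor is gone
```

A real implementation would then contour the connected bright regions, but the color filtering above is where the false positives get rejected.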
But 1640’s explanation was the best I had heard and seconded by several other teams. Hope this helps.
Green gives far fewer false positives in an FRC environment. Imagine your robot driving to place a hatch on another robot because of their bumpers or an accidental reflection.
We justify the use of green by evaluating the stadium lights typically used by FIRST. The older incandescent lights impinge on the IR/red range. Less so, but still notable, is the bluish white from older LED fixtures still sometimes used. You could, in theory, use UV, but when we tried it (way before my time on the team, btw) it didn't work out particularly well.
I’ve attempted to convince my team to use near-IR light, since it’s outside the visible range and could potentially increase contrast without noticeable brightness. But really, the COTS Limelight solution, or other software that subtracts or highlights an actual COLOR in the acquired frame as a post-process, is the easier, more effective solution. IR isn’t a color; by definition it’s outside the visible spectral range, so it can’t be subtracted or highlighted out without other means, like blinking the light and subtracting frames.
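The blink-and-subtract idea mentioned at the end can be sketched like this: capture one frame with the illuminator on and one with it off, then difference them, so static scene content cancels and only the illuminated target remains. Synthetic frames and an arbitrary threshold, purely for illustration:

```python
import numpy as np

# Two tiny grayscale frames of the same static scene: one captured with
# the illuminator on, one with it off. Only the target pixel changes.
lit  = np.array([[10, 200, 10],
                 [10,  10, 10]], dtype=np.int16)
dark = np.array([[10,  20, 10],
                 [10,  10, 10]], dtype=np.int16)

# Frame subtraction: background cancels, illuminated target stands out.
diff = np.abs(lit - dark).astype(np.uint8)
target = diff > 50  # hypothetical threshold
print(target)
```

In practice you would need the camera and illuminator blink synchronized, which is the "other means" this trick requires.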
Personally, I really hate seeing bright green (or any color) high-power LEDs in my eyes, even from a distance. The robot across the pit from us with a LL is still too bright for me, let alone our own robot turning on in front of me. I doubt FIRST could do much to regulate “brightness,” as it’s somewhat subjective without expensive metering tools. I’ll just live with it.
Two benefits of using something in the visible range are you can tell whether the light is on for debugging or to use for signalling (see 254 in 2017) and you have more choice of cameras (since the vast majority of cheap cameras are designed for visible light).
Others have discussed why green in particular.
Regarding near-IR light increasing contrast… what makes you think that will happen? Unless you have a specialized sensor, you are likely to lose contrast and resolution, since a) your SNR will be lower due to lower QE in the IR spectrum, and b) light will mix into the different colors due to the lack of specificity of Bayer filters. That’s why color cameras have IR blocking: to keep the colors “true,” because IR light penetrates each of the color filters to some extent.
If there’s a team with some obnoxious high power LED out there, they likely don’t understand one of the following:
- How a retroreflector works. It returns light to the source with high specificity, so if you aren’t putting your camera inside the center of your ring, you need tons more light than if you align them correctly.
- How to set exposure time on the camera. A well aligned light ring means short exposure so the target lights green and everything else is dark. If the target is lighting up white in the image, then the exposure time and/or illumination intensity is too high.
- How to properly separate color planes and then threshold in the image processing algorithm. If you just grab the G pixels in the RGB image, you still carry around a lot of data from adjacent colors. Changing to the HSV color space and then selecting exactly the green that matches the illuminator really cuts down on spurious features in the image.
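As a concrete illustration of that last point, here is a minimal per-pixel version using Python's stdlib colorsys: convert to HSV and keep only a narrow hue band around the illuminator's green, with saturation and value gates. The band edges and cutoffs below are placeholder numbers you'd tune to your own illuminator:

```python
import colorsys

# Keep only pixels in a narrow hue band around ~120 deg (green), with
# enough saturation and brightness. All cutoffs are illustrative.
def is_target_green(r, g, b, lo=100/360, hi=140/360, min_s=0.5, min_v=0.5):
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return lo <= h <= hi and s > min_s and v >= min_v

print(is_target_green(0, 255, 0))      # pure green: True
print(is_target_green(200, 255, 200))  # washed-out near-white: False (low saturation)
print(is_target_green(0, 80, 0))       # dim green: False (low value)
```

Note how the saturation gate rejects the washed-out near-white pixel: that's exactly the overexposure case described above, where the target "lights up white" instead of green. A plain G-plane threshold would have accepted it.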
Long story short: Don’t get creative about spectrum, just use the right illumination with the right camera settings and everyone can be happy. This is really a matter of education, and the vision area of FRC is sorely lacking in folks who can explain the principles well.
The less scientific (but closer to what I think you’re asking) answer is that back in 2012, teams did a lot of testing and simply found green to work better. The recommendation that spread to teams was therefore to use green LED rings, and it more or less just stuck.