I’m just curious as to whether it’s possible to have your camera take a picture at the very start of auton. With that image, look at the LED lights and determine if they shine yellow or not. If they do, proceed to the shooting sequence. If not, wait 5 seconds and then go.
The reflective target will be easier to see only because the contrast with the surroundings will be higher. The yellow lights will be detectable, but your field of view has to be wider (if you are looking for the whole ring), or you need to think about how to filter the image for “only yellow” with some fudge room, and then detect the two horizontal lines made by the lights.
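As a rough illustration of that “only yellow with fudge room” idea, here is a minimal sketch in plain NumPy. The function name and tolerance value are invented for the example; it keeps pixels whose red and green channels are bright while blue stays comparatively dim:

```python
import numpy as np

def yellow_mask(rgb, tol=60):
    # Hypothetical "yellow with fudge room" filter: a pixel counts as
    # yellow when red and green are both bright and blue is dimmer
    # than each by at least `tol`.
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 255 - 2 * tol) & (g > 255 - 2 * tol) & (b < r - tol) & (b < g - tol)

# Tiny 1x3 test image: a yellow pixel, a white pixel, a blue pixel.
img = np.array([[[255, 255, 0], [255, 255, 255], [0, 0, 255]]], dtype=np.uint8)
print(yellow_mask(img))  # only the first pixel passes
```

Note the white pixel fails: its blue channel is just as bright as red and green, which is exactly the saturation problem discussed below.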
My overall experience with the vision system is that looking for individual colors can be extremely finicky, depending on the lighting on the field and whether your robot feels helpful on a given day. Also, it can be hard to correctly analyze an area of color that does not have a well-defined shape.
Bright green LEDs tend to be picked up well under most lighting circumstances, though - our team has successfully used a bright green ring light for the last couple of years. I would recommend using the dynamic vision targets with a bright light of some sort.
Since the LEDs are very bright and very small, the auto-exposure algorithms tend to overlook them and collect too much light in those areas. The result is that LEDs that look yellow to your eyes will appear white on the computer screen. You can compensate by lowering the brightness parameter or by manually calibrating the exposure. You may then be able to see the yellow LEDs with the camera, but at that point the rest of the image will be quite dark.
If you can set the exposure low enough, you can process the image based on the yellow that the camera detects. Another approach is to process it based on brightness and ignore color entirely.
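A minimal sketch of the brightness-only approach (the threshold value is just a placeholder; with the exposure turned way down, only the LEDs should survive the cut):

```python
import numpy as np

def bright_mask(rgb, thresh=200):
    # Brightness-only approach: ignore hue entirely and keep pixels
    # whose average channel value clears a threshold. With a low
    # exposure, the LEDs are about the only thing this bright.
    return rgb.mean(axis=-1) > thresh

# One washed-out LED pixel next to a dark-background pixel.
img = np.array([[[255, 255, 240], [40, 40, 40]]], dtype=np.uint8)
print(bright_mask(img))  # the LED pixel passes, the background doesn't
```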
As mentioned, this is certainly a valid way to look for the target. It doesn’t require a ring light, but since you don’t provide the light, you will likely find it to be less robust. But by all means, experiment.
Adding to this question: Does anyone have any concerns about variation in lighting affecting seeing the LEDs? I personally don’t think slight lighting variations should make much of a difference, but I am interested in other people’s thoughts on the matter.
What you really care about is what color the camera sees when it captures the LEDs, not what color the LEDs produce.
If you had a perfectly calibrated camera, knowing the LED wavelength would be all you’d need. But we don’t have perfect cameras or perfect eyes.
One of the first things the camera does is combine the component light values into a colored image assuming a specified white balance. This exists to adjust for different ambient lighting, such as outdoor sunlight versus indoor fluorescent, but it shifts every color in the image. The camera doesn’t know which pixels represent a light-emitting source such as an LED and which pixels are reflecting ambient light. The camera offers different white balance settings, including an auto setting that analyzes the image and estimates the most likely type of ambient lighting. But all of these shift the colors, breaking simple numeric comparison approaches.
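To see why a shifted white balance breaks fixed comparisons, consider a hypothetical preset that cuts red and boosts blue (the gain values here are invented for illustration). Applying its per-channel gains moves even the LED pixels away from the values you calibrated against:

```python
import numpy as np

# Hypothetical white-balance gains: cut red, leave green, boost blue.
# A real camera picks similar per-channel multipliers for you, and an
# auto setting changes them from frame to frame.
gains = np.array([0.5, 1.0, 1.5])

led_yellow = np.array([250.0, 240.0, 40.0])   # what you calibrated against
balanced = np.clip(led_yellow * gains, 0, 255)
print(balanced)  # no longer equals (250, 240, 40)
```

Any test of the form `pixel == (250, 240, 40)` now fails even though the LED itself never changed, which is why wide-open thresholds beat exact matches.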
I think the first challenge is to determine if you can even get the camera to see the colors. Bright light sources saturate the sensor, resulting in a white spot, not a colored spot. Perhaps you can simply use the monochrome spots as a template for image recognition?
I personally wouldn’t try to see the color specifically but rather the brightness. You could align your robot left or right of the center of the goal, but in a position where the camera can see both the left and right sides of the I. With some pretty simple brightness filters, you should be able to detect whether your side or the other side is hot.
If you are only using this to detect hotness, then I would also align the camera so the target is in the top portion of your view; that way you can filter out the lower section. Given you have a line for alignment, you may even want to filter out some of the area in the middle of the goal to avoid noise from the crowd, etc.
If you’re using RGB, you could use a custom threshold like the minimum of R, G, and B, or, if your camera sees the yellow, a minimum threshold on the average of R and G.
Then split your screen into three columns and calculate the average brightness of each. If you’re on the hot side, two of the three should show up as hot; if you’re on the cold side, only one of the three should.
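Sketched out with assumed names and a made-up threshold, the three-column check might look something like this on a grayscale frame:

```python
import numpy as np

def hot_side(frame, thresh=180):
    """Illustrative three-column check, not anyone's actual code.

    Split the grayscale frame into left/middle/right thirds and call a
    column "lit" if its mean brightness clears the threshold.
    Two lit columns => hot side; one lit column => cold side.
    """
    h, w = frame.shape
    thirds = [frame[:, i * w // 3:(i + 1) * w // 3] for i in range(3)]
    lit = [t.mean() > thresh for t in thirds]
    return "hot" if sum(lit) >= 2 else "cold"

# Toy frame: left and middle thirds bright, right third dark.
frame = np.full((30, 90), 40, dtype=np.uint8)
frame[:, :60] = 230
print(hot_side(frame))  # "hot"
```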
I also second rsisk’s statement. The lights you use really won’t make much of a difference, because you will need a pretty good filter in place that works in many different lighting environments to minimize the calibration necessary on the field (I’m guessing the practice field won’t have this lighting).
Also remember, you have a 50/50 chance of hitting the hot goal anyway, so it’s only worth ~2.5 points on average; take that into account when you prioritize your programming tasks.
Tightening up your color constraints too much is a mistake. Ambient lighting definitely will affect what your vision system sees.
In addition, don’t rely on just seeing one shape.
The most accurate vision systems will be the ones that combine the two approaches: look for the correct color using fairly wide-open constraints, look for shapes with fairly wide-open constraints, then check the shapes’ positions and verify their colors.
In other words, your vision system should check that what you see is around the right color, about the right shape, and has approximately the correct spatial relationship to the other target (vertical versus horizontal). This will give you a much higher chance of identifying the correct object and state.
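Here is a hedged sketch of that combined check. All field names, thresholds, and the aspect-ratio test are illustrative, not anyone’s actual pipeline; assume each candidate blob already carries a mean color, a bounding-box aspect ratio, and a centroid from an earlier detection pass:

```python
def plausible_target(blob, other_blob):
    """Combine wide-open color, shape, and spatial checks.

    Accept a candidate only when its color is roughly yellow, its
    bounding box is roughly a wide horizontal band, and it stacks
    vertically with the other candidate rather than sitting beside it.
    """
    r, g, b = blob["mean_rgb"]
    color_ok = r > 150 and g > 150 and b < 120        # wide-open color gate
    shape_ok = blob["aspect"] > 2.0                    # wide, short rectangle
    dx = abs(blob["cx"] - other_blob["cx"])
    dy = abs(blob["cy"] - other_blob["cy"])
    spatial_ok = dy > dx                               # stacked vertically
    return color_ok and shape_ok and spatial_ok

# Two stacked horizontal bands of roughly yellow pixels.
top = {"mean_rgb": (240, 230, 60), "aspect": 4.0, "cx": 100, "cy": 40}
bottom = {"mean_rgb": (235, 225, 55), "aspect": 4.0, "cx": 102, "cy": 90}
print(plausible_target(top, bottom))  # True
```

Each individual gate is deliberately loose; it’s the combination that drives the false-positive rate down.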