I’m a mentor for team 2338. I apologize for the somewhat lengthy post, but I figure this might help teams that are having camera issues, as I noticed a couple of teams (including us) were on the field for camera calibration this past weekend.
After countless hours of troubleshooting the camera code, I think we have found a potential bug in the camera software that is causing the green color detection to be flaky.
I think the issue resides in the code that takes an RGB image and converts it to HSL for color detection. At times, the resulting HSL image will output a red/pink hue value in the single digits (normal red/pink hue is supposed to be from 220 to 255, but the converted HSL image will indicate some red/pink pixels in the range of 1 to 5). This usually occurs when the target hue value is near the higher end, around 255. The only conclusion I can draw is that there is a bug in the conversion code where the red hue range is not handled properly. I believe this bug is causing pink detection to fail at times, and when pink fails, green naturally fails as well.
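To illustrate what we were seeing, here is a quick sketch in plain Python (not the actual NI conversion code, which I have not seen; just the standard RGB-to-HSL math) showing how two nearly identical pinks can land at opposite ends of a 0–255 hue scale:

```python
import colorsys

# Two visually similar pink/red pixels (R, G, B on a 0-255 scale).
pixels = {
    "pink leaning blue":   (255, 0, 30),
    "pink leaning orange": (255, 30, 0),
}

for name, (r, g, b) in pixels.items():
    # colorsys returns hue in 0.0-1.0; scale it to 0-255 to match
    # the range the camera dialogs use.
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    print(f"{name}: hue = {h * 255:.0f}")

# Prints:
#   pink leaning blue:   hue = 250
#   pink leaning orange: hue = 5
```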
Note that this was communicated to an NI rep at the Wisconsin Regional (named Chris, I believe).
Now on to camera calibration to reduce “washed out” targets.
Not to say that this will work all the time, but I think our camera was tracking much better after following this process. Note that a lot of this information was gathered from the likes of Greg and a couple of other helpful folks:
Start the camera by setting white balance and exposure to “auto”.
Perform a white balance calibration and set white balance to “hold current”:
This can be done by covering the lens for about 5 seconds and then placing a white sheet of paper in front of the camera after uncovering the lens. Wait for it to self-adjust to the correct level and then set white balance to “hold current”.
Perform an exposure adjustment:
This is where I tend to disagree with Greg on keeping auto exposure. We find it works much better if we fix the exposure. The reason is that our camera is aimed relatively high, so it catches some of the lamps that are used to light up the field. When that happens, the exposure time tends to drop, and it is at those times that we obtain a correct target exposure. If the camera is aimed at an area with relatively low back lighting, the camera will overexpose, causing a washed-out target.

To set the exposure, we aim the camera toward some lights to force the exposure time down, while at the same time placing a target in front of the camera to check the exposure. When you can clearly see the distinct colors of the target, with lights in the background and without much glare, set the exposure to “hold current”. Any additional glare can be reduced by setting a lower brightness level.

The beauty of fixing the exposure is that the field lighting is relatively uniform from the various angles, so a fixed exposure will enable target detection no matter what the background lighting conditions are.

Now check the HSL values for the two target colors and record them. Double-check for proper camera exposure by aiming the camera (with a target in view) toward other areas of the field, checking with and without back lighting.
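If it helps, here is a minimal Python sketch of that “record the HSL values” step, assuming you have a frame saved from the camera (the file name and region below are placeholders you would replace with your own):

```python
import colorsys
from PIL import Image

FRAME = "target_frame.png"  # placeholder: a frame grabbed from the camera
ROI = (120, 80, 160, 120)   # placeholder: left, upper, right, lower box
                            # covering one color patch of the target

img = Image.open(FRAME).convert("RGB").crop(ROI)

# Average H, S, L over the patch, scaled to 0-255 to match the
# ranges used in the camera dialogs.
h_sum = s_sum = l_sum = 0.0
pixels = list(img.getdata())
for r, g, b in pixels:
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    h_sum += h
    s_sum += s
    l_sum += l

n = len(pixels)
print(f"H ~ {h_sum / n * 255:.0f}  "
      f"S ~ {s_sum / n * 255:.0f}  "
      f"L ~ {l_sum / n * 255:.0f}")

# Caution: a plain average of hue is misleading for pink, which
# straddles the 255/0 wrap-around described above. For the pink
# patch, shift the hues first (see the threshold discussion below).
```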
After these adjustments, our camera was able to track the target properly. (Ignore the fact that we shot into our own alliance’s trailer in one of the matches…we were still troubleshooting our autonomous code :yikes: .) I forget which match it was, but in one of our later seeding matches this past weekend we were tracking properly and scored in autonomous (into the correct trailer this time).
That should be it…hope this will help the teams having camera issues.
If this is the case, the bug is in the color threshold, not necessarily the HSL conversion. Hue in HSL is defined to be a continuous cycle, much the same way that an angle measurement is continuous: 360 degrees is “adjacent to” 359 degrees, but it can also be said to be “adjacent to” 1 degree. There are two options that could be used to compensate for this: use a piecewise threshold, one piece for 220–255 and another for 0–5, or, before you threshold for pink, shift all the hue values by ~100 mod 255, i.e. Hue′ = (Hue + 100) mod 255. If you’re using LabVIEW, use the remainder output from the Quotient and Remainder function. Then just threshold for 65–105 or whichever range you need.
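Sketched in Python with NumPy (the real thresholding would of course be done in LabVIEW/NI Vision, and the band limits here are just the example numbers above), the two options look like this:

```python
import numpy as np

def pink_mask_piecewise(hue):
    """Option 1: two threshold pieces that meet across the wrap."""
    return ((hue >= 220) & (hue <= 255)) | (hue <= 5)

def pink_mask_shifted(hue):
    """Option 2: rotate hue so pink sits mid-range, then use one band.

    Note: uint16 math with % 256 wraps an 8-bit hue exactly; the
    "mod 255" above differs by one count, which doesn't matter here.
    """
    shifted = (hue.astype(np.uint16) + 100) % 256
    return (shifted >= 65) & (shifted <= 105)

# 'hue' would be the H plane of the HSL image as a uint8 array.
hue = np.array([2, 130, 250], dtype=np.uint8)
print(pink_mask_piecewise(hue))  # [ True False  True]
print(pink_mask_shifted(hue))    # [ True False  True]
```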
The wrap-around at the red end of the spectrum is annoying, and I don’t know of a good workaround except to do a color lookup on the original image. This per-pixel operation is of course not free, so to avoid it, I’ve generally selected a different white balance that will not wrap the pink color. From our testing, Fluorescent 1 seemed to be a good all-around choice, though as long as you are willing to adjust the other parameters, many of the other built-in settings and even a calibrated white balance will work fine.
As for fixed exposure, if it works for you, great. I have had limited access to fields, and when I’m on one I’m usually helping teams understand the lighting and camera settings, so I have left it set to auto, believing that the camera will then be able to handle a larger variation in lighting. Do you have images taken with fixed exposure that you’d be able to share? The ones that’d worry me are the big lighting differences you get due to field orientation and with and without the black curtain behind the scoring table.
Fixing the exposure will give a consistent exposure time, but not necessarily consistent brightness; that would depend on how consistent the lighting is. As I said, to me the proof is in the images. If you have images, or if I can get time on a field, I’ll try this out. Right now I’m comparing what I have tested against what I haven’t tested but someone else has, and I want more data.
On a related note, for those who have experimented with camera settings at events, how do you think the lack of stage lighting will affect camera calibration in the Georgia Dome?
I can only assume the top-target washout that teams have been experiencing will be basically eliminated.
The lights under the upper level act similarly to the stage lighting, but they may be less even. It also depends on whether the ceiling curtain is open or closed. Atlanta is another reason why I’m curious to compare fixed exposure to auto.
Unfortunately, we did not save any images to illustrate the differences.
As for the Georgia Dome, it would depend on what your camera sees. If there is a large variation in back lighting conditions based on where the camera is pointed, then I would recommend fixing the exposure. If the background is relatively the same in all directions, then I think auto exposure would work better.
I would argue that the variation in the background would be much greater than the variation in the light hitting the target. Yes, in some areas of the field you might have slight dimming, but it should not be nearly as bad as the exposure change due to a black background vs. a row of lights.
I would recommend that teams going to nationals hop on the field whenever possible to check things out and test the different settings.
Bummer about the lack of images. I have tons, but they are taken with auto and there is no info about the exposure.
I agree with you about the lighting on the field versus the background. I want to see the range difference when oriented in different directions on the field and when looking into the lights. I think that with the right calibration procedure, this may also produce good results. Hopefully better.
Because of the procedure, we still can’t make this the default, but we can document it as a good alternative. Specifically, the camera will often lose a few frames while it adjusts the exposure, and it would be nice to get those lost frames back.
If someone can capture images using the Camera Field Test utility, that would be great. I’ll see if an NI AE can grab them and share with everyone.