Quote:
Originally Posted by JohnBoucher
Did this testing period help anyone?
Who went onto the field and tested values and how different were they from what you had?
Any idea how this will play out in Atlanta?
We had no luck locking onto the vision target on Thursday of Boston regional. We were using the default values from Kevin Watson's camera code, which had worked fine for us last year and during build season this year.
Within a few minutes of being on the field, we realized the source of the problem: the venue for the Boston Regional is an ice hockey arena with electronic banners running around the rink. The banners sit at a height such that when the camera's line of sight extends past the green vision target, the banners are directly in view.
They were running FIRST-related messages on the banners, most of which had a bright white background. That background had the same green component as the vision target (much like you get with the fluorescent lights commonly found in, say, high school hallways).
Most of the time this resulted in very large bounding areas for the "tracked" object but low pixel counts, and therefore very low confidence values. That caused the camera-servo code to keep scanning the venue for a better match, which it never found, since it was bombarded with large-area false matches.
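The scanning behavior described above follows from how the camera scores a match. This is an illustrative sketch, not Kevin Watson's actual code: the CMUcam2 reports a confidence value derived from the matched pixel count relative to the bounding-box area, so a banner-sized box with only a sparse sprinkle of matching pixels scores poorly. The threshold value below is a made-up example.

```c
#include <assert.h>

/* Approximate CMUcam2-style confidence: ratio of matched pixels to
 * bounding-box area, scaled to 0-255.  A huge box with few matching
 * pixels yields a low score; a tight box full of matching pixels
 * yields a high one.  Illustrative only. */
static int track_confidence(int pixels, int box_width, int box_height)
{
    long area = (long)box_width * (long)box_height;
    if (area <= 0)
        return 0;
    long conf = ((long)pixels * 256L) / area;
    return conf > 255 ? 255 : (int)conf;
}

/* Hypothetical servo decision: keep scanning unless confidence is
 * high enough that we believe we see the real vision target.
 * The threshold 40 is an example value, not from our code. */
static int good_enough(int confidence)
{
    return confidence >= 40;
}
```

With these numbers, a tight 50x50 box with 2000 matching pixels scores around 204, while a banner-sized 160x120 box with 300 matching pixels scores around 4 and keeps the servo scanning.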
During one match, though, the camera locked completely onto a banner, and the robot drove down along the side of the rack trying to get "close enough" to score.
We corrected the problem by clamping down the tolerance of our YCbCr values (the per-channel min/max bounds) for the green light to +/- 10, I believe. This produced fewer pixel matches over the actual vision target, but far fewer false positives than before. Once this was in place, the servo tracking code had enough confidence to follow the actual green vision target.
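The tolerance fix amounts to deriving per-channel min/max bounds from calibrated center values. A minimal sketch, assuming each channel gets a symmetric window clamped to the camera's 0-255 range (the center values used in practice come from calibration; they are not shown here):

```c
#include <assert.h>

struct channel_bounds {
    int min;
    int max;
};

/* Build a min/max window of center +/- tol for one color channel,
 * clamped to the 0-255 range the camera expects. */
static struct channel_bounds clamp_tolerance(int center, int tol)
{
    struct channel_bounds b;
    b.min = center - tol;
    b.max = center + tol;
    if (b.min < 0)
        b.min = 0;
    if (b.max > 255)
        b.max = 255;
    return b;
}
```

Each of the three channels would get its own clamped window like this before the bounds are sent to the camera's track-color command; with tol = 10, a calibrated center of 90 becomes the window 80..100.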
This was possible only because we had the on-field time.
After getting it working, we helped out 2 other teams having similar problems. AFAIK, lowering the tolerance worked for them, too. We saw at least 4 other teams also taking advantage of this on-field opportunity.
For those who have trouble using the LabVIEW app for camera calibration, or simply don't have LabVIEW available, we used the CMUcam2GUI application from the creators of the CMUcam (google it) with good results.