Quote:
Originally Posted by Rich Kressly
Attached are the cam values we used from calibrating with Greg from NI at lunch on Thurs. to score in autonomous in DC...the lunchtime calibration on Thursday is what allowed us to consistently track and score on targets in auto.
I am going to expand on the comments from Team 1712 mentor Rich Kressly, Sean Lavery, and National Instruments' Greg McKaskle, as posted above.
Greg/NI was available during lunch on Thursday at the DC Regional to assist teams who wanted to calibrate their cameras and adjust the parameters in their software. We jumped at the opportunity to field-test our camera settings and vision VIs (LabVIEW). For Team 1712, that meant varying key settings in our implementation of the "two-color camera servo" example code. Read Greg's reply carefully and take a good look at his posted images and masks.
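For anyone reading along who isn't in LabVIEW, here is a rough Python/OpenCV sketch of what the two-color approach boils down to. The threshold numbers are invented placeholders, not our actual values, and the helper function is mine for illustration, not NI's example code:

Code:
# Sketch of the two-color target idea: threshold for green and pink,
# find the largest blob of each color, and require the two blobs to be
# roughly stacked before declaring a target. All ranges are placeholders.
import cv2

def find_two_color_target(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

    # Placeholder HSV ranges -- tune these under the real field lighting.
    green_mask = cv2.inRange(hsv, (45, 100, 80), (75, 255, 255))
    pink_mask = cv2.inRange(hsv, (150, 100, 80), (175, 255, 255))

    def biggest_blob(mask):
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        return cv2.boundingRect(max(contours, key=cv2.contourArea))

    g = biggest_blob(green_mask)
    p = biggest_blob(pink_mask)
    if g is None or p is None:
        return None  # one color not seen -- our exact Thursday symptom

    gx, gy, gw, gh = g
    px, py, pw, ph = p
    # The two color patches should line up roughly one above the other.
    if abs((gx + gw / 2) - (px + pw / 2)) < max(gw, pw):
        return (gx + gw / 2, (gy + gh / 2 + py + ph / 2) / 2)
    return None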
The field's lighting at DC was much brighter than the lighting back at Lower Merion High School, in the DC Regional pit areas, or on the practice field. For an extreme example of what the bright lights could do to a target as viewed by the camera and software, look at Greg's image labeled "Default West" - note the heavy front glare.
On a related point, the DC field's lighting consisted of two high-mounted banks of can lights aligned with the long sides of the Arena, aka the Crater. This could create bright sides on the pink/green target - side glare - leaving a "shadow" down the center. For a mask somewhat similar to what we were initially experiencing in DC, albeit a more extreme example, look at Greg's image labeled "Default SW Corner mask."
Prior to lunch on Thursday, Team 1712's first autonomous run indicated that the camera/two-color tracking software locked onto nothing, even when targets were directly in front of it. On analysis, green was never recognized; in essence, our hue settings were too high. With Greg's general guidance, and with sample pictures similar to those posted above, Team 1712's coding crew talked over our proposed adjustments: we lowered the brightness value, lowered the green hue's upper and lower range values, adjusted the lower red saturation value, and adjusted the servo speed/ranges. All of this took one busy lunch hour. Part of Team 1712 then spent the rest of Thursday watching the Dawgma Team's robot "Alice" home in over the next several matches as we tested and refined our approach. Tom Line, also in this thread, lists the key tuning steps that Team 1712 likewise used to refine "Alice's" vision.
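To make those adjustments concrete, here is the shape of what we changed, written out in Python rather than LabVIEW. Every number is invented for illustration, and the cam/tracker objects are hypothetical stand-ins for the camera's parameter interface and the tracking VI's controls:

Code:
# Illustration only -- these numbers are invented, not our DC values.
# What matters is the direction of each adjustment from that lunch hour.
camera_brightness = 50                      # lowered from a higher default
green_hue = {"low": 50, "high": 80}         # both bounds lowered so the
                                            # washed-out green still matched
pink_sat_low = 80                           # lower saturation bound adjusted
                                            # so glare didn't drop the pink
servo = {"step": 2, "min": 20, "max": 235}  # gentler, clamped servo sweep

def apply_tuning(cam, tracker):
    # cam and tracker are hypothetical wrappers standing in for the
    # camera's parameter interface and the two-color tracking loop.
    cam.set_brightness(camera_brightness)
    tracker.set_hue_range("green", green_hue["low"], green_hue["high"])
    tracker.set_saturation_min("pink", pink_sat_low)
    tracker.set_servo_limits(servo["min"], servo["max"], servo["step"])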
I can definitely agree with Greg and Rich on a couple of interesting issues. When "Alice's" camera stared at the black background behind the big-screen TV, we believe the camera's automatic adjustment shifted enough to void out one Thursday match. And what eventually worked for "Alice" in autonomous mode on Friday and Saturday might not apply elsewhere. Best advice -- grab hold of any available time to test on the real field. Using Greg's/NI's snapshots of targets and masks for ideas, and understanding how the brighter lights, the darker backgrounds, and the software settings were interacting and affecting "Alice" in DC, we developed better and better masks for "Alice" to use in tracking and scoring in autonomous mode.
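One practical takeaway from the black-background incident: if your camera lets you, lock its automatic adjustments once you've calibrated, so a dark scene can't silently re-tune the image out from under you. A generic sketch using OpenCV capture properties; whether each property actually takes effect depends on the camera and driver, so verify on your own hardware:

Code:
# Freeze the camera's automatic adjustments after calibrating so that
# staring at a dark background can't shift the image underneath you.
# Property support varies by camera/driver -- verify on real hardware.
import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)  # manual mode on many V4L2 drivers
cap.set(cv2.CAP_PROP_EXPOSURE, -6)         # fixed exposure (scale varies)
cap.set(cv2.CAP_PROP_AUTO_WB, 0)           # lock white balance if supported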