Hello! My name is Dalton and I’m the co-lead programmer for Grizzly Robotics. This is the first year since 2005 that our team has had vision tracking on our robot, and I’d like to share it with you guys. Our vision code is accurate, reliable, and flexible. It works with either a USB or an IP camera, tracks any number of targets based on color, size, and aspect ratio, and can switch what it tracks based on data sent to the program. Our team runs it on a Raspberry Pi 3 co-processor and we’ve seen ~96% accuracy.
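To give a feel for the target-switching idea, here’s a minimal sketch (the profile names and threshold values are made up for illustration, not our actual tuned numbers): a string sent over NetworkTables picks which target profile the filter loop uses.

```python
# Hypothetical sketch of switching tracked targets at runtime.
# Profile names and all threshold values are illustrative only.

TARGET_PROFILES = {
    "gear_peg": {"hsv_low": (55, 100, 80), "hsv_high": (95, 255, 255),
                 "aspect": 2.5, "min_area": 400},
    "boiler":   {"hsv_low": (60, 120, 100), "hsv_high": (90, 255, 255),
                 "aspect": 3.75, "min_area": 250},
}

def select_profile(requested, default="gear_peg"):
    """Return the profile for the requested target, falling back to a default
    so an unexpected value from the robot code can't crash the vision loop."""
    return TARGET_PROFILES.get(requested, TARGET_PROFILES[default])
```

In the real loop the `requested` string would come from the NetworkTable the robot writes to (e.g. something like `table.getString("targetMode", "gear_peg")`), and the chosen profile feeds the color and shape filters each frame.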
Note: This is on my personal repository, as the tower tracking code is untested. Based on my mathematics it should work, but as we programmers know, ‘should’ should never be trusted.
Here’s the code and I hope you guys make good use of it!
Reliability - Because our code relies only on NetworkTables and OpenCV, it runs multiple checks that determine whether a target is valid, compared to looser solutions. Even more can be added, such as advanced filtering based on backgrounds.
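As a rough sketch of what a chain of validity checks might look like, independent of OpenCV itself (the tolerances below are illustrative, not our tuned values): a candidate bounding box only counts as a target if it passes every test.

```python
def is_valid_target(w, h, area, target_aspect=2.5, aspect_tol=0.5,
                    min_area=300, min_fill=0.7):
    """Run a chain of sanity checks on a candidate bounding box.

    w, h : bounding-box width/height in pixels
    area : contour area in pixels (cv2.contourArea would supply this
           in a real OpenCV pipeline)
    All thresholds here are hypothetical example values.
    """
    if h == 0 or area < min_area:                   # too small to be real
        return False
    if abs(w / h - target_aspect) > aspect_tol:     # wrong shape
        return False
    if area / (w * h) < min_fill:                   # contour barely fills box
        return False
    return True
```

Each check rejects a different kind of false positive (glare specks fail the area check, arena lights fail the aspect check), which is where the reliability comes from.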
Flexibility - Since it’s written in pure Python with OpenCV, it can be used in numerous games and scenarios, even outside of FIRST! It can track any number of targets with many different filtering methods, such as color ranges, aspect ratio, advanced background filtration, etc.
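To show the color-range filtering on its own: OpenCV does this in a single `cv2.inRange` call, but the pure-Python version below spells out the logic, with a nested list standing in for an HSV image (the bounds are illustrative, not our actual values).

```python
def in_range(pixel, low, high):
    """True if every channel of an (H, S, V) pixel lies within [low, high]."""
    return all(lo <= p <= hi for p, lo, hi in zip(pixel, low, high))

def color_mask(image, low, high):
    """Build a binary mask (1 = pixel inside the color range) for a
    nested-list 'image' of HSV pixels, mimicking what cv2.inRange does."""
    return [[1 if in_range(px, low, high) else 0 for px in row]
            for row in image]
```

Swapping games is then mostly a matter of swapping the `low`/`high` bounds (and the shape thresholds), which is why the same code generalizes so easily.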
Stability - Since there is no need for a driver station to interact with the robot, our code starts at Pi boot-up. This allows the vision code to keep running in unpredictable situations, such as another robot’s autonomous pushing our robot, or our driver station losing communication during the match. As long as autonomous is enabled and the robot is turned on, the Python code will track the target.
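A sketch of the boot-resilience idea (the helper and its parameters are hypothetical, not taken from our code): since nobody is watching a console at boot, steps that can fail transiently, like opening the camera, should be retried instead of crashing the process.

```python
import time

def retry(action, attempts=5, delay=0.0):
    """Call action() until it returns a truthy result or attempts run out.
    In a real vision loop, action might try cv2.VideoCapture(0) and the
    delay would be nonzero to give the camera time to enumerate."""
    for _ in range(attempts):
        result = action()
        if result:
            return result
        time.sleep(delay)  # back off before trying again
    return None  # caller decides how to handle a camera that never appears
```

Wrapping the camera open (and the NetworkTables connection) in something like this is what lets the script survive being launched before the hardware is fully up.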