Hello everyone. As an offseason project, our team has been looking into robot detection, specifically with PhotonVision.
So far we’ve compiled a Google Colab notebook with a few edits and changes from open-source Jupyter Notebooks and Google Colabs online, but I’m wondering if anyone knows of a simpler way to do all this.
Meaning, is there any plugin or already-developed, proven software that we could implement? Or maybe some library that we're missing out on?
Thank you to anyone who replies to this thread. Highly appreciated.
Dataset Colab has 4805 images of bots, as well as some models (I’m not sure if you can use these models directly with PhotonVision, though - I don’t have any experience with PhotonVision). This could be a useful starting point.
A couple of alternative approaches have occurred to me:
(1) detect that there is some obstacle in view that is not a game piece or a field element; by elimination, it is likely another robot, and/or
(2) just look for bumpers. Robots come in many shapes, sizes, and colors, but bumpers are pretty constant, and if you see a bumper, whatever is on top of it is very likely a robot. (A rough sketch of the color-threshold idea is below.)
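Here is a minimal sketch of what "just look for bumpers" could look like with OpenCV in Python. The HSV ranges and the minimum area are placeholder guesses and would need tuning for your camera and lighting; this is not a PhotonVision pipeline, just an illustration of the color-threshold idea.

```python
import cv2
import numpy as np

def find_bumper_blobs(frame_bgr, min_area=500):
    """Return bounding boxes of red/blue regions large enough to be bumpers."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    # Blue bumpers: a single hue band (rough placeholder values).
    blue_mask = cv2.inRange(hsv, (100, 120, 60), (130, 255, 255))

    # Red bumpers: hue wraps around 0, so combine two bands.
    red_mask = cv2.inRange(hsv, (0, 120, 60), (10, 255, 255)) | \
               cv2.inRange(hsv, (170, 120, 60), (180, 255, 255))

    boxes = []
    for mask in (blue_mask, red_mask):
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) >= min_area:
                boxes.append(cv2.boundingRect(c))  # (x, y, w, h) in pixels
    return boxes
```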
Whenever the topic comes up I always side with the bumper detection approach. Seems like the most robust and trustworthy.
Regarding the first approach, regular cameras don't have a sense of depth, so defining "a nearby blob" might be difficult without also picking up random large, far-away objects.
Yeah, I also think
bumper detection > robot detection
because a bumper's height, shape, and size can be treated as roughly constant. That means you can use things like linear projection or solvePnP to estimate the positions of other robots fairly accurately.
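As a rough sketch of the solvePnP idea: if you can find the four corners of a bumper face in the image and assume a nominal bumper size (the dimensions below are placeholders, not rule values), OpenCV's solvePnP gives a camera-relative translation to that robot. The camera matrix and distortion coefficients have to come from your own calibration.

```python
import cv2
import numpy as np

# Assumed bumper face (placeholder dimensions): ~0.70 m wide, ~0.13 m tall.
# Corners in object coordinates (meters), ordered TL, TR, BR, BL, with z = 0.
BUMPER_W, BUMPER_H = 0.70, 0.13
OBJECT_POINTS = np.array([
    [0.0,      0.0,      0.0],
    [BUMPER_W, 0.0,      0.0],
    [BUMPER_W, BUMPER_H, 0.0],
    [0.0,      BUMPER_H, 0.0],
], dtype=np.float64)

def estimate_robot_translation(image_corners, camera_matrix, dist_coeffs):
    """image_corners: 4x2 pixel coordinates of the bumper face, same order."""
    ok, rvec, tvec = cv2.solvePnP(
        OBJECT_POINTS,
        np.asarray(image_corners, dtype=np.float64),
        camera_matrix,
        dist_coeffs,
    )
    if not ok:
        return None
    # tvec is the bumper corner origin expressed in the camera frame (meters).
    return tvec.ravel()
```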
I'm not involved with programming or with the part of my team that runs Dataset Colab, but I do know we don't run PhotonVision. So unless there's a team that uses both Dataset Colab and PhotonVision who could help, I'm not sure our team would be able to help you use them together.