This is a model based on Nvidia’s DetectNet Caffe model. The camera on the front of the robot takes pictures at 5-10 Hz; the model on the Jetson runs inference on each frame, determines where the box is, and sends an “angle to box” value to the roboRIO, which then turns the drivetrain to face the cube.
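For anyone curious how the “angle to box” value might be derived, here is a minimal sketch. The field-of-view and image-width constants are assumed values for illustration, not numbers from the original setup; the idea is just to map the bounding box’s horizontal center to an angle using the camera’s horizontal FOV.

```python
# Hypothetical sketch: convert a detection's bounding box into an
# "angle to box" value. FOV_DEG and IMAGE_WIDTH are assumed values.
FOV_DEG = 62.0      # assumed horizontal field of view of the camera
IMAGE_WIDTH = 640   # assumed frame width in pixels

def angle_to_box(x_min, x_max):
    """Horizontal angle (degrees) from image center to the box center.

    Negative means the box is left of center, positive means right.
    """
    box_center = (x_min + x_max) / 2.0
    # Offset from image center, normalized to [-0.5, 0.5]
    offset = (box_center - IMAGE_WIDTH / 2.0) / IMAGE_WIDTH
    return offset * FOV_DEG

# A box centered in the frame yields an angle of zero
print(angle_to_box(280, 360))  # -> 0.0
```

The resulting angle can be fed straight into a turn-to-angle controller on the drivetrain.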
If you don’t have it already, maybe look into adding some form of non-max suppression. That would get rid of the overlapping detection boxes you have in some of the examples - assuming that’s not intentional for some reason.
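In case it helps, a bare-bones greedy non-max suppression looks something like this (a sketch, not DetectNet’s own implementation): sort detections by confidence, keep the best one, and drop anything that overlaps it above an IoU threshold.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Return indices of boxes to keep, highest-scoring first.

    Greedily keeps the best remaining box and suppresses any box
    that overlaps it by more than iou_thresh.
    """
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

OpenCV also ships this as `cv2.dnn.NMSBoxes` if you would rather not roll your own.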
Glad to see people advancing computer vision this year … the game design doesn’t make it easy.
We had multiple detections until we started selecting only full cubes, even when one was behind another. That way the network didn’t learn to detect part of a cube. We also posted some additional info in this thread: Jetson TX1 in Python: How to maximize effieciency with camera framerate etc? This game really seems like a good time to try neural networks (fingers crossed).
Good luck to all the AIs.
I don’t actually need to handle overlapping boxes since I am just targeting the largest box in the image and aligning to it, but that is what I would do (and probably will do after the season) if I cared about multiple boxes in one frame.
Yes, but only for two matches. Our problem ended up being NetworkTables issues on the FMS, so we decided to remove our Jetson to cut down on weight, given that communication wasn’t working. I know that it works, though, and will fix it for our offseason bot.