Deep Learning Neural Network detects Power Cubes

Here are some pictures from a neural network model I trained with Caffe to detect Power Cubes for this year’s game:

https://i.imgur.com/1ETT2Bl.png https://i.imgur.com/3T2z1rr.png https://i.imgur.com/89fETol.png https://i.imgur.com/sDRI91F.png

The dataset I used for training consists of about 1,000 pictures and labels, with data collected by both my team and Team 100 (thanks for the data!).

I thought I would share this as I’m very happy to have it working and optimized, and I hope to use it at the upcoming Sacramento Regional and SVR the week after. See you all at competition!

Nicely done! What kind of neural network are you using? Does it process video or just still images?

This is a model based on Nvidia’s DetectNet Caffe model. The camera in front of the robot takes pictures at 5-10 Hz, and the model on the Jetson runs inference on each frame, determines where the box is, and sends an “angle to box” value to the roboRIO, which then turns the drivetrain to face the cube.
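For anyone curious how the “angle to box” part could look, here is a minimal Python sketch using pynetworktables. The field of view, image width, table name, and key name are my own placeholders, not necessarily what the actual robot uses:

```python
import math
from networktables import NetworkTables

# Placeholder camera parameters - measure your own camera's values
CAMERA_HFOV_DEG = 62.0   # horizontal field of view in degrees (assumption)
IMAGE_WIDTH_PX = 640     # frame width in pixels (assumption)

# Connect to the roboRIO's NetworkTables server (team-number IP is a placeholder)
NetworkTables.initialize(server="10.TE.AM.2")
vision_table = NetworkTables.getTable("vision")

def angle_to_box(box):
    """Convert a detection box (x1, y1, x2, y2) in pixels into a
    horizontal angle in degrees from the camera's optical axis."""
    box_center_x = (box[0] + box[2]) / 2.0
    offset_px = box_center_x - IMAGE_WIDTH_PX / 2.0
    # Pinhole-camera approximation: pixel offset -> angle
    focal_px = (IMAGE_WIDTH_PX / 2.0) / math.tan(math.radians(CAMERA_HFOV_DEG / 2.0))
    return math.degrees(math.atan2(offset_px, focal_px))

def publish_angle(box):
    # Key name "angle_to_cube" is illustrative; use whatever the roboRIO code expects
    vision_table.putNumber("angle_to_cube", angle_to_box(box))
```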

Very cool results.

If you don’t have it already, maybe look into adding some form of non-max suppression. That would get rid of the overlapping detection boxes you have in some of the examples - assuming that’s not intentional for some reason.
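For reference, a basic greedy NMS pass is only a few lines of NumPy. Something along these lines (a sketch, not the original poster’s code):

```python
import numpy as np

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop others that overlap it heavily.

    boxes:  (N, 4) array of [x1, y1, x2, y2]
    scores: (N,) array of confidences
    Returns the indices of the boxes to keep.
    """
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of box i with every remaining box
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Keep only the boxes that don't overlap box i too much
        order = order[1:][iou <= iou_threshold]
    return keep
```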

Glad to see people advancing computer vision this year … the game design doesn’t make it easy.

We have done something very similar to this as well! May I ask what programming language you guys used for the inference program on the Jetson?

Inference is done in Python.

Is this dataset available for others to look at?

We had multiple detections until we started labeling only full cubes, even when a cube was partially behind another, so the model didn’t learn to detect parts of a cube. We also posted some additional info in this thread: Jetson TX1 in Python: How to maximize efficiency with camera framerate etc? This game really seems like a good time to try neural networks (fingers crossed).
Good luck to all the AIs.

Yes it is! You can see the dataset here.

Looks pretty good so far. Are you planning on using NMS at all to help with the overlapping boxes? A simple way would be to iterate over all of the ROIs and merge those with significant overlap.
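A sketch of that merge approach, with illustrative names and threshold (not from anyone’s actual code): any two ROIs whose IoU exceeds the threshold get replaced by their bounding union, repeated until nothing overlaps significantly.

```python
def merge_overlapping(boxes, overlap_threshold=0.3):
    """Merge ROIs with significant overlap into their bounding union.

    boxes: list of [x1, y1, x2, y2]
    """
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        if inter == 0:
            return 0.0
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / float(area_a + area_b - inter)

    boxes = [list(b) for b in boxes]
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if iou(boxes[i], boxes[j]) > overlap_threshold:
                    # Replace the pair with the single box covering both
                    boxes[j] = [min(boxes[i][0], boxes[j][0]),
                                min(boxes[i][1], boxes[j][1]),
                                max(boxes[i][2], boxes[j][2]),
                                max(boxes[i][3], boxes[j][3])]
                    del boxes[i]
                    merged = True
                    break
            if merged:
                break
    return boxes
```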

I don’t actually need to take care of overlapping boxes since I am just targeting the largest box in the image and aligning to that, but that is what I would do (and probably will do after the season) if I cared about multiple boxes in one frame.
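For completeness, the “largest box wins” selection is essentially a one-liner; something like this (illustrative only, not the poster’s code):

```python
def pick_largest(boxes):
    """Return the detection box (x1, y1, x2, y2) with the largest pixel area."""
    return max(boxes, key=lambda b: (b[2] - b[0]) * (b[3] - b[1]))
```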

That’s a really exciting development. Great work on it so far; can’t wait to hear how it fares at the competition!

UPDATE:

Here is a video of it in action on the Jetson:

I added overlap handling like a few of you suggested, and it made the output look a lot smoother :slight_smile:

That looks really impressive!
Looking forward to seeing this in action in Sac.

Did you end up using this in competition?

I’m interested to see how this worked out when you had a bunch of cubes clumped together (say, the pyramid). Was it able to pick out individual cubes or did it recognize the whole blob as one?

With an appropriately designed intake, would this even matter?

If you don’t want to knock over the stack, it probably would.

Cool project! Do you plan to release source code in the near future? Also, how was tracking incorporated into your robot (how did you use it in competition)?

Yes, but only for 2 matches. Our problem ended up being NetworkTables issues on the FMS, so we decided to remove our Jetson to cut down on weight, given that communication wasn’t working. I know that it works, though, and will fix it for our off-season bot.