pic: A little programming exercise



As a teaching exercise, I had 1706’s vision team do the following task: track an undefined number of stacks of yellow totes. While this scenario is impossible in an actual match, it was a great teaching tool, and they learned a lot.

We have three vision programs this year:
One uses depth, which tracks every game piece except litter.
One uses IR, which produced the tracking shown in the picture.
One uses color to track the short side of the yellow totes (a rough sketch of that approach is below).
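
For the color tracker, here is a minimal sketch of the general approach, assuming OpenCV 4 in Python; the HSV thresholds and the minimum-area cutoff are placeholder guesses, not 1706’s tuned values:

# Rough sketch of color-based tote tracking (OpenCV 4).
# The HSV band and the minimum blob area are illustrative guesses.
import cv2
import numpy as np

def find_yellow_totes(bgr_frame):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    lower = np.array([20, 100, 100])   # lower bound of the yellow hue band
    upper = np.array([35, 255, 255])   # upper bound of the yellow hue band
    mask = cv2.inRange(hsv, lower, upper)
    # Open the mask to knock out speckle noise before finding blobs.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs large enough to plausibly be a tote face.
    return [c for c in contours if cv2.contourArea(c) > 1000]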

The code will be open-sourced soon.

What depth camera do you use?

It’s configured for the Microsoft Kinect but can easily be adjusted for any other depth camera, such as the Asus Xtion. The only thing that would have to change is the distance calculation, since it’s different for every depth camera you use.
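
The camera-specific part is small. For the Kinect, a widely circulated community calibration converts the 11-bit raw reading into meters; a sketch of that conversion (the constants come from public calibration work, not necessarily from 1706’s code):

# Rough sketch: Kinect 11-bit raw depth value to meters, using a
# common community approximation. An Xtion or other camera would
# swap in its own conversion here.
def kinect_raw_to_meters(raw):
    if raw >= 2047:   # the Kinect reports 2047 when it has no reading
        return None
    return 1.0 / (raw * -0.0030711016 + 3.3309495161)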

That’s going to be impressive once 20+ totes in stacks of varying sizes are all over the field! It should make navigating between scoring platforms a bit smoother at end-game.

That’s the idea, kind of. To my understanding it’s mainly going to be used for autonomous piece pickup and stacking, but it could be used for that too, sure. We have generated paths before based on what the depth map sees; then you could use the Poofs’ code to drive on that curve if you’re daring.
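
As a rough illustration of the path idea (this is not 1706’s actual code, and the coordinate conventions are assumptions): the depth program’s (distance, angle) detections can be turned into robot-relative waypoints that a trajectory follower like the Poofs’ could consume.

import math

# Rough sketch: convert (distance_m, angle_deg) detections from the
# depth program into robot-relative (x, y) waypoints, nearest first.
def detections_to_waypoints(detections):
    waypoints = [(0.0, 0.0)]          # start at the robot's position
    for dist, angle in detections:
        theta = math.radians(angle)
        x = dist * math.sin(theta)    # lateral offset from heading
        y = dist * math.cos(theta)    # forward distance along heading
        waypoints.append((x, y))
    return waypoints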

I think I accounted for everything. The depth program returns distance, the x rotation to the center of the object, how many degrees the tote is offset (if it is a tote), how many totes high the stack is, whether it is a yellow tote, gray tote, or green bin, and soon whether it has a bin on it. If there is something I didn’t account for, please let me know and I’ll see if I can incorporate it.
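
For a concrete picture of that output, here is a hypothetical structure with one field per item listed above; the names are mine, not 1706’s:

from dataclasses import dataclass
from typing import Optional

# Rough sketch of a per-object report from the depth program.
@dataclass
class TrackedObject:
    distance_m: float      # distance to the object
    angle_deg: float       # x rotation to the center of the object
    kind: str              # "yellow_tote", "gray_tote", or "green_bin"
    yaw_offset_deg: Optional[float] = None   # degrees the tote is offset, if a tote
    stack_height: Optional[int] = None       # how many totes high the stack is
    has_bin_on_top: Optional[bool] = None    # planned: bin-on-top detection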