987 Implementation of Kinect on robot

Personally, I would like to know how 987 implemented the Kinect on their robot. From what I’ve seen, it’s far superior to the traditional camera. I wanted to know how it was connected, where the image processing was done, etc. As an alumnus wanting to give back to the teams in my home city, I think this would be a cool summer project for some team members and myself.

Several other teams have inquired about this as well. We have already discussed putting together a tutorial/white paper on how we implemented it this year, so stay tuned…

Thanks! Big fan of your guys’ robot. Amazing job all around. I watched all the Curie eliminations, and you guys just kept putting on a show. Again, great job on the robot.

I dunno exactly how 987 implemented it, but at one time our team had a Kinect physically on our robot. The setup we had was the Kinect hooked up to an onboard Atom computer, with said computer running a third-party software package called RoboRealm. Our team was able to work with the creator of the software for this specific FRC season, and he helped us create a program to use the Kinect for distance and angle calculation. We were able to use the IR camera, plus a plethora of SuperBright LEDs, to figure distances to within a few inches.
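For anyone who wants to poke at this without RoboRealm, here’s a rough sketch of pulling a distance out of the Kinect’s depth image using the open-source libfreenect Python bindings. To be clear, this is not our actual RoboRealm pipeline, just a minimal illustration, and the raw-to-meters formula is a common community approximation, so calibrate against your own unit.

```python
# Minimal sketch (NOT our RoboRealm setup): grab the Kinect depth image on an
# onboard computer and convert the raw reading at one pixel into inches.
import freenect   # open-source libfreenect Python bindings
import numpy as np

def center_distance_inches():
    depth, _ = freenect.sync_get_depth()  # 480x640 array of 11-bit raw values
    raw = depth[240, 320]                 # sample the center pixel
    # Common community approximation for raw -> meters; calibrate per unit.
    meters = 0.1236 * np.tan(raw / 2842.5 + 1.1863)
    return meters * 39.37                 # meters -> inches
```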

GRT used a Kinect this year. Unfortunately, due to comm issues at SVR, we were only able to use it once. Afterwards we switched to manual control from an Axis camera.

I believe the Kinect was hooked up to an onboard PC that talked to the cRIO. We found that distance sensing was not a huge issue, since we had a set shooter speed that worked from the key. The Kinect handled panning our turret to aim.
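The panning math itself is pretty simple. Here’s a simplified sketch of the idea (not our actual code): turn the target’s horizontal offset in the Kinect’s image into a pan correction for the turret.

```python
# Illustrative only: convert the bright target blob's horizontal offset from
# image center into degrees the turret should pan. Assumes the Kinect's
# roughly 57-degree horizontal field of view and a 640-pixel-wide image.
import numpy as np

FOV_DEG = 57.0
WIDTH = 640

def pan_error_degrees(ir_image, threshold=200):
    """ir_image: 480x640 grayscale array from the Kinect's IR camera."""
    ys, xs = np.nonzero(ir_image > threshold)  # pixels on the reflective target
    if len(xs) == 0:
        return None                            # target not in view
    centroid_x = xs.mean()
    # Small-angle approximation: constant degrees per pixel across the image.
    return (centroid_x - WIDTH / 2) * (FOV_DEG / WIDTH)
```

Feed that error into a simple proportional loop on the turret motor and it will center the target.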

639 was the ‘other team’ at champs with a Kinect on our robot. We created a manual about how we used it; would you guys be interested in seeing it posted?

Please post! I think now that we’ve seen the power of the Kinect on a robot, teams will want to learn as much about it as possible.

I’ll see if I can get a manual or white paper put together about how we had ours working.

I talked to someone from 987, and they used a Pandaboard, a small onboard computer. The Kinect plugged into the Pandaboard, which did the image processing and sent the necessary data to the cRIO. I’ll be keeping an eye out for the details from them.
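987 hasn’t published how the Pandaboard talked to the cRIO, so this part is purely an assumption on my part, but a common approach is to stream the vision results over UDP each frame, something like:

```python
# Hypothetical coprocessor-to-cRIO link (987's actual protocol is unknown):
# send the latest range/heading as a small UDP packet every frame.
import socket
import struct

CRIO_IP = "10.9.87.2"  # cRIO address per the FRC 10.TE.AM.2 convention (team 987)
PORT = 1130            # arbitrary port; both ends just need to agree

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_target(range_inches, heading_degrees):
    # Two little-endian floats; the cRIO side unpacks the same layout.
    packet = struct.pack("<ff", range_inches, heading_degrees)
    sock.sendto(packet, (CRIO_IP, PORT))
```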

They had a second system set up in the pit. They used a Pandaboard on the robot for the processing, which my programmer had already been lobbying for us to buy. After 987 graciously spent a fair bit of time talking with him, we hope to get some vision implemented before an offseason event.

Wetzel

I’ll be curious to see how this was done. We talked about it and did a lot of research into how the Kinect works. It’s fascinating the way it projects a matrix of dots onto the surfaces in the room.

Here is an interesting article on it:
http://electronicdesign.com/content/topic/how-microsoft-s-primesense-based-kinect-really-works/catpath/embedded

If you have an old video camera with a night-shot mode on it, you can clearly see the dot patterns in a darkened room.
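The depth measurement itself is just triangulation on those dots: the IR camera sits a few centimeters from the projector, so each dot shifts sideways by an amount that depends on the distance of the surface it lands on. Roughly, with ballpark constants I’ve seen cited (not official specs):

```python
# Back-of-the-envelope structured-light triangulation. The focal length and
# baseline below are commonly cited ballpark figures, not official specs.
FOCAL_PX = 580.0    # assumed IR-camera focal length, in pixels
BASELINE_M = 0.075  # assumed projector-to-camera baseline, ~7.5 cm

def depth_from_disparity(disparity_px):
    """Depth in meters from the observed sideways shift of a dot, in pixels."""
    return FOCAL_PX * BASELINE_M / disparity_px
```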

The thing that worried us was the possibility that other robots would be doing the same thing on the field and the IR dots would interfere. From our understanding of how the Kinect works, two Kinects pointed at the same surfaces would mess each other up. This didn’t turn out to be much of a concern, because virtually no one we saw used a Kinect on the robot except 987 (and 639…sorry). But I would be curious to know if they had the same concerns or just assumed they’d be OK because they predicted no one else would pull it off.

We’ll definitely put something together to help teams who want to look into this. It’s going to take a little while because there were a lot of steps to getting it working. We encountered several pitfalls, but working with a “3D point cloud” from your sensor is pretty darn cool. We could accurately tell the range, heading, and “bank angle” of the backboard.
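As a teaser, here’s one standard way to get those three numbers out of a point cloud (a simplified sketch, not our exact code; the details will be in the write-up): fit a plane to the backboard points and the range, heading, and bank angle fall right out.

```python
# Illustrative pose extraction from a backboard point cloud via a plane fit.
import numpy as np

def backboard_pose(points):
    """points: Nx3 array of (x, y, z) backboard points in meters,
    camera frame: x right, y up, z forward."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value of the
    # centered cloud is the best-fit plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[2]
    range_m = np.linalg.norm(centroid)                          # distance
    heading = np.degrees(np.arctan2(centroid[0], centroid[2]))  # left/right
    bank = np.degrees(np.arctan2(normal[0], normal[2]))         # backboard skew
    return range_m, heading, bank
```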

One other thing: I think the work that 341 (Miss Daisy) did this year was the best in the world, and it didn’t need an extra computer on the robot! Definitely check out the information they already posted; it’s probably a better roadmap for success.

Our team is going to try to get the Kinect to work as a sensor during our offseason. It would be awesome to see how you guys made it work.

Our team also had a Kinect on-board, and like 2410, we used RoboRealm with an onboard computer. It works really well; tracking distance is accurate to within 1/4". The best thing about using the Kinect is the IR camera and built-in IR projector. We have just as powerful a light as some other teams, and we don’t get blinded when we look at the robot head-on :).

I’ll get our white paper submitted on CD-Media soon; another team member has it.

Here’s the white paper describing our usage of the Kinect on our robot with the RoboRealm software:

http://www.chiefdelphi.com/media/papers/2692

-Chris

But the light faces the opposing alliance when you shoot!

We tried to use the Kinect on our robot this year, with less-than-pleasing results. I believe we had issues with the power not always functioning properly, and sometimes the IR wouldn’t work. I don’t have all the details, but it was enough of a problem, along with a lot of comm issues associated with the onboard computer, that we went to an Axis camera for champs.