How are they going to use Kinect?

The last question on http://www.usfirst.org/roboticsprograms/frc/kinect

Q: Can I put the Kinect on my robot to detect other robots or field elements?

While the focus for Kinect in 2012 is at the operator level, as described above, there are no plans to prohibit teams from implementing the Kinect sensor on the robot.

I don’t know why, but this makes me a little worried. The Kinect already has enough trouble as it is; will it change the game this year for better or for worse? Will it be a determining factor in the success of teams this year? How do you think they’ll incorporate it into the game this year? My guess is something with the human players, but other than that they’ve got me stumped :confused:

I think it’s just a cool option for teams to use. Nothing required, and if people don’t want to do it, then it won’t ruin the game for them. Fingers crossed.

Kinect on the robot is not going to be easy to use. Interference is a concern.

Not to mention interfacing it with the cRIO… That could prove to be difficult

I think it may be a challenge making the Kinect talk to the bot… even the new cRIO II doesn’t have USB, IIRC (some of the non-FRC cRIOs have USB, though). Sure, you could route the two data wires to existing I/O, but even that could get hairy.

The kinect, for now, is an interface with the driver station. The software and hardware will be tested during Beta, and we will give you all the information we discover.

For now, it has specific requirements: a USB port, Windows 7, etc. The Killer Bees programmer posted fairly extensive information about it in the discussion thread.

I would like to politely suggest that the conversation about it should remain in that thread rather than starting another one. The thread is here:

Or, you can ask questions on the FIRST Robotics Beta Test Forum. It may take a while to get direct answers, as software is just being received by the teams and hardware receipt is still TBD.

It really needs to be a human player device. Do you really think a sensor designed to sit on a shelf can handle the shocks our FRC bots are subjected to?

No, the game itself won’t involve it. Like they said, they’re focusing on operator control. The idea of using the camera on the robot seems like an afterthought.

(Assuming everything works as it is supposed to…)

Team 341 will be demonstrating our 2011 robot controlled via Kinect during the lunch break at Ramp Riot on November 12. Expect to see something that looks a little like an air traffic controller trying to land a tube onto the scoring rack…

We will record as well as webcast the results.

More details to come.

So is it for human player control, or driver control? Also, it’ll be optional, if it’s for the drivers, right?

Neither I nor any other beta tester has any better an idea of what will be done with the Kinect in 2012 than you do at this point. I speculate that it will be used for a hybrid/auto mode, but that isn’t substantiated by anything other than intuition.

That sounds reasonable, rather than trying to control the bot for the whole match. I had originally figured it would be a human player thing when I saw the news.

I’m curious whether the Kinect itself, or external software processing what the Kinect exposes through its APIs, will give more accurate results if the people or objects in its field of view have special shapes or colors.

Would gluing a white roll of toilet paper onto a trash can lid painted black create an easily discerned target to track (in depth and color), better than a random human in front of a random background?

Would pink tennis balls stuck onto a person standing in front of a flat green bedsheet improve one’s ability to track the person’s motions?

Blake
PS: I’m sure that the answers to my questions exist somewhere out in the Internet information stew. My hunch is the CD folks reading this thread will do the research for me and supply a nice summary answer (plus a few red herrings of misinformation that I’ll have to detect and filter out).

However, instead of thinking of myself as lazy right now, I’ll choose to consider my questions good mentoring that inspires students to do research. :wink:

There are two pieces of software you can modify that bring the Kinect data to your bot. There will be a “server” running on the driver station PC. The server talks to the Microsoft Kinect SDK for access to the sensor data. There will be a default build of this which pumps back some softball type of data to the bot (right now, it uses your arms as virtual joysticks). All indicators point to the source code being bundled with this, so you can modify it as you wish. I haven’t had a chance to look at the source yet, so I can’t comment on how much of it is custom vs. how much leverages MS’s APIs (it’s bundled in an MSI, and seriously, who owns a Windows machine these days? :confused: )
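Since I haven’t read the source, take this as a sketch of the concept rather than what the server actually does: mapping an arm to a virtual joystick presumably boils down to taking the shoulder and hand joints from the skeleton and normalizing the arm’s elevation angle into a -1 to 1 axis. In Java for illustration (the real server is C#, and the joint layout here is assumed):

```java
// Hypothetical arm-to-axis mapping, NOT the actual KinectServer code.
// Assumes Kinect-style skeleton coordinates: x across the sensor's view,
// y vertical, both in meters.
public class ArmAxis {
    /** Normalizes an arm's elevation angle into a joystick-style axis. */
    public static double armToAxis(double shoulderX, double shoulderY,
                                   double handX, double handY) {
        // Angle of the arm above/below horizontal, in radians.
        double angle = Math.atan2(handY - shoulderY,
                                  Math.abs(handX - shoulderX));
        // Treat +/-45 degrees of elevation as full joystick deflection.
        double axis = angle / (Math.PI / 4.0);
        return Math.max(-1.0, Math.min(1.0, axis));
    }
}
```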

You can also process parts of this on the cRIO. Once again, there will be some kind of basic out-of-the-box experience, but you are free to flip the switches and turn the knobs. You can receive all the skeletal data and build all of your detectors locally on the bot, if you wish.
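As a toy example of an on-robot detector (the packet format here is entirely made up; the real format is whatever the default server sends, or whatever you modify it to send), suppose the server forwarded head and hand heights as comma-separated values:

```java
// Toy gesture detector that could run on the robot side, assuming a
// hypothetical "headY,leftHandY,rightHandY" text packet from the server.
public class HandsUpDetector {
    /** Returns true when both hands are above the head. */
    public static boolean handsUp(String packet) {
        String[] fields = packet.trim().split(",");
        double headY  = Double.parseDouble(fields[0]);
        double leftY  = Double.parseDouble(fields[1]);
        double rightY = Double.parseDouble(fields[2]);
        return leftY > headY && rightY > headY;
    }
}
```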

So to answer your question: if you invested the time, you could probably increase your rate of positive detections by doing something wacky. The SDK provided by Microsoft is free to download.

I ran the MSI installer under WINE on Ubuntu, and the source code is included in the install directory. It looks like it creates two C# projects (KinectServer and UDPDump). I can’t really tell what UDPDump does (it’s a single C# file, 36 lines long), and KinectServer looks like it wraps some MS code, does unit conversion, and sends the data over a socket.

UDPDump just prints everything it sees on port 1155
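A program like that is only a few lines in any language. Here’s a rough Java equivalent of what it apparently does (port 1155 is from the post above; the rest is just the obvious way to write it, not their actual code):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;

// Rough stand-in for UDPDump: bind UDP port 1155 and print every datagram.
public class UdpDump {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(1155);
        byte[] buf = new byte[2048];
        while (true) {
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet); // blocks until a datagram arrives
            System.out.println(packet.getLength() + " bytes from "
                    + packet.getAddress());
            System.out.println(new String(packet.getData(), 0,
                    packet.getLength()));
        }
    }
}
```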

Why not? That is what we do with the bridges now: take hardware designed to sit on a shelf and put it in the very demanding environment of an FRC robot.

We all know how well that works :rolleyes:

Won’t this make the rule about the coach not being able to control the robot a borderline call? I mean, they would have to be in the field of view in most setups.

That’s what I was thinking: kind of like how auto mode ran in 2008, with more human interaction.

http://www.usfirst.org/aboutus/pressroom/first-adds-kinect-for-2012
:wink: