I wanted to get some discussion going about the possibility of running the Kinect on the robot instead of on the DriverStation. I think this would open up some really cool possibilities for the robots on the field and off it.
So let's start with the obvious: the Kinect itself. It has a 640x480 RGB camera and a 640x480 depth camera, a tilt motor that can aim it up and down (about 27 degrees each way, so roughly 54 degrees of total travel), and internal accelerometers. The cameras have a field of view of 57 degrees horizontally by 43 degrees vertically.
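To show what that field of view actually buys us: combined with a depth reading, it's enough to place a pixel in real-world coordinates. Here's a minimal C++ sketch using a pinhole-camera model with focal lengths derived from those published angles. It assumes the depth value has already been calibrated to millimeters; the sensor's raw 11-bit output has not, and needs a conversion step first.

```cpp
#include <cmath>

// Rough pinhole-camera math for turning a Kinect depth pixel into a 3-D
// point, derived only from the published 57 x 43 degree field of view.
// Assumes depthMm is already in millimeters (the raw sensor output is an
// 11-bit value that needs calibration first -- see the OpenKinect wiki).
struct Point3D { double x, y, z; };

const int    WIDTH  = 640, HEIGHT = 480;
const double HFOV = 57.0 * M_PI / 180.0;  // horizontal field of view
const double VFOV = 43.0 * M_PI / 180.0;  // vertical field of view
// Focal lengths in pixels, from the FOV: f = (size/2) / tan(fov/2)
const double FX = (WIDTH  / 2.0) / tan(HFOV / 2.0);  // ~589 px
const double FY = (HEIGHT / 2.0) / tan(VFOV / 2.0);  // ~609 px

Point3D depthPixelToPoint(int u, int v, double depthMm) {
    Point3D p;
    p.z = depthMm;                            // forward distance
    p.x = (u - WIDTH  / 2.0) * depthMm / FX;  // right of camera center
    p.y = (v - HEIGHT / 2.0) * depthMm / FY;  // below camera center
    return p;
}
```

That's enough to get a range and bearing to anything the depth camera can see, which is exactly what autonomous code would want.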
This is a very cool piece of technology that I hope we can use to its full potential, and I feel like using it only as a control mechanism for the drivers just isn't right. Either FIRST isn't telling us everything (shocker) or this really just isn't that thought out. But let's ignore all that for a second.
First, is this even legal? Yes. The FAQ at http://www.usfirst.org/roboticsprograms/frc/kinect asks "Can I put the Kinect on my robot to detect other robots or field elements?" and answers: "While the focus for Kinect in 2012 is at the operator level, as described above, there are no plans to prohibit teams from implementing the Kinect sensor on the robot." It seems like they're just leaving it open for the teams that are smart enough to figure out how. And that's the problem.
My first question is how to get it connected properly on the robot. First we need power. The Kinect's stock cable actually splits into USB data plus a separate 12 V supply (the tilt motor draws more than USB's 5 V / 500 mA can provide), so we'd need to source 12 V on the robot, which shouldn't be a problem. Next is connectivity: we need a USB host device, unless anybody here wants to re-implement the USB protocol from scratch. And I'm wondering if any rule-savvy people here know what kinds of things we can put on the robot. I was thinking it would be best to put something like an Arduino (or a more capable embedded board) on the robot to handle all the image manipulation or point detection, and send just the results back to the DriverStation, either over the network or through a Digital Input. Does anybody know if that's legal?
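Here's a rough sketch of what the reporting side of that could look like on a Linux-capable coprocessor, just to make the idea concrete. The TargetPacket layout, the port number, and the 10.22.64.2 address are all made up for illustration; a team would use its own 10.TE.AM.x addressing and whatever packet format its robot code expects.

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cstring>

// Hypothetical coprocessor side: after on-board processing finds a
// target, send just the result (not the images) over UDP. The packet
// layout, port, and address below are examples, not a real spec.
struct TargetPacket {
    float distanceMm;   // range to target from the depth image
    float angleDeg;     // bearing to target, left/right of center
    int   valid;        // nonzero when a target was actually found
};

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in dest;
    memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port   = htons(1130);             // arbitrary example port
    inet_pton(AF_INET, "10.22.64.2", &dest.sin_addr);

    TargetPacket pkt = { 2450.0f, -3.5f, 1 };  // stand-in result
    sendto(sock, &pkt, sizeof(pkt), 0,
           (sockaddr*)&dest, sizeof(dest));
    close(sock);
    return 0;
}
```

The nice part of this split is that only a few bytes per frame ever cross the network, instead of a full video stream.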
We will most likely have to write the USB communication ourselves if we want to run on an embedded device. The protocol documentation at http://openkinect.org/wiki/Protocol_Documentation covers how to handle everything, and the OpenKinect project's libfreenect driver is an existing open-source implementation we could study or port. If anybody has experience with low-level USB and driver development, help would be appreciated.
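On any board that can run libusb we wouldn't have to start from raw packets. A minimal proof-of-concept that just opens the Kinect's camera device looks like this; the vendor/product IDs are the published ones (0x045E is Microsoft; 0x02AE is the Kinect camera, 0x02B0 the motor, 0x02AD the audio device). Actually pulling depth frames off the isochronous endpoints is a much bigger job, which is where the protocol docs come in.

```cpp
#include <libusb-1.0/libusb.h>
#include <cstdio>

// Proof-of-concept using libusb (which libfreenect itself builds on)
// rather than re-implementing USB from scratch: find and open the
// Kinect camera by its published vendor/product IDs.
int main() {
    libusb_context* ctx = NULL;
    if (libusb_init(&ctx) < 0) {
        fprintf(stderr, "libusb init failed\n");
        return 1;
    }
    libusb_device_handle* cam =
        libusb_open_device_with_vid_pid(ctx, 0x045E, 0x02AE);
    if (!cam) {
        fprintf(stderr, "Kinect camera not found (check power/cabling)\n");
        libusb_exit(ctx);
        return 1;
    }
    printf("Kinect camera opened; depth/RGB data comes over the\n"
           "isochronous endpoints described in the protocol docs.\n");
    libusb_close(cam);
    libusb_exit(ctx);
    return 0;
}
```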
And lastly, we need to be able to access this data in a timely fashion and react quickly. I don't know what's new this year with the DriverStation, but it would be cool if we could use the Kinect as our camera. I think this is legal as long as we don't modify the DriverStation code: the device would have to act as an IP camera that responds to the same commands as the current cameras, and it would also have to communicate whatever data points the autonomous code needs.
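As a sketch of the "pretend to be an IP camera" idea: the existing FRC cameras serve MJPEG over HTTP (multipart/x-mixed-replace), so a bare-bones server in that format might be enough for the dashboard, assuming it isn't picky about the exact URL and headers. getNextJpegFrame() below is a stub standing in for code that grabs a Kinect RGB frame and JPEG-compresses it (e.g. with libjpeg).

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>
#include <vector>

// Stub: replace with a real Kinect frame grab + JPEG compression.
std::vector<unsigned char> getNextJpegFrame() {
    return std::vector<unsigned char>();
}

int main() {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port        = htons(80);   // port 80 needs root; use whatever
    bind(srv, (sockaddr*)&addr, sizeof(addr));  // port the dashboard expects
    listen(srv, 1);

    int client = accept(srv, NULL, NULL);
    const char* header =
        "HTTP/1.0 200 OK\r\n"
        "Content-Type: multipart/x-mixed-replace; boundary=frame\r\n\r\n";
    write(client, header, strlen(header));

    // Serve one JPEG per loop iteration as a new multipart chunk.
    for (;;) {
        std::vector<unsigned char> jpeg = getNextJpegFrame();
        char part[128];
        int n = snprintf(part, sizeof(part),
                         "--frame\r\nContent-Type: image/jpeg\r\n"
                         "Content-Length: %zu\r\n\r\n", jpeg.size());
        write(client, part, n);
        if (!jpeg.empty())
            write(client, &jpeg[0], jpeg.size());
        write(client, "\r\n", 2);
    }
}
```

The autonomous data points could then ride alongside on a separate socket, like the UDP sketch earlier.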
I think this potential route was left open on purpose so we could create some cool stuff with it. If anybody has ideas, experience, or anything else they think might help, contributions would be greatly appreciated.
Happy New Year,
Luke Young
Programming Manager, Team 2264