As most of us may know, there have been many recent Kinect hacks, including spatial recognition, 3D mapping and, this just in, autonomous obstacle avoidance while in flight. It’s only $150, which is well within most teams’ budgets, and many of these hacks could have been applied to recent FIRST contests. What are your thoughts?
There was a thread about this not long ago that you should check out: http://www.chiefdelphi.com/forums/showthread.php?t=87480&highlight=kinect
As a roboticist I am extremely excited by the Kinect - it has finally broken down the cost barrier to (reasonably) high-resolution, high-fidelity ranging, as the videos we are starting to see on YouTube with quadrotors and the like show.
As a FIRSTer, I doubt that we’ll see a Kinect on a robot this year (there is a nontrivial interfacing problem that needs to be solved - probably by using a separate onboard PC that meets cost and power rules - before we can use one with the cRIO). Moreover, there haven’t been many FRC games where I could think of high-value uses for a Kinect. But I hope someone does it, and I hope it ends up being as cool as I imagine!
One possible use of a Kinect on a mobile robot: SLAM
Even used simplistically, it opens up a world of opportunities for proprioceptive and exteroceptive sensing. Imagine a game that had two same-sized objects of different colors, such as the balls in the 2008 Overdrive game. The Kinect could sense a round object, determine its color, and ascertain how far away it is.
This could be used to design “smarter” robots that employ a greater level of autonomy. Imagine driving a robot near a ball in 2008, pushing an “Acquire Ball” button on the operator interface, and having the robot automatically sense the right ball, drive up to it, and pick it up with your manipulator.
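To make that concrete, here’s a rough sketch of what the vision side of an “Acquire Ball” routine might look like, assuming the open-source libfreenect Python wrapper and OpenCV. The HSV color bounds and the raw-depth-to-distance conversion are placeholders you’d have to calibrate yourself:

```python
import cv2
import numpy as np
import freenect  # open-source libfreenect wrapper (an assumption -- any
                 # driver that hands you an RGB frame plus a depth map works)

def acquire_ball(lower_hsv, upper_hsv):
    """Find the largest blob of the target color and report its range.

    lower_hsv/upper_hsv bound the ball color (e.g. red vs. blue Overdrive
    balls); tune them under real field lighting.
    """
    rgb, _ = freenect.sync_get_video()
    depth, _ = freenect.sync_get_depth()  # raw 11-bit Kinect depth values

    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    # [-2] picks out the contour list across OpenCV versions
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None  # no ball of that color in view

    (x, y), radius = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
    raw_depth = int(depth[int(y), int(x)])  # still needs calibration to meters
    return x, y, radius, raw_depth

# Hypothetical HSV bounds for an orange ball; calibrate on the real game piece
ball = acquire_ball(np.array([5, 120, 120]), np.array([20, 255, 255]))
```

From there the drive code just has to steer to center the ball in the frame and close the distance, which is the easy part.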
And dollar for dollar, the Kinect is the same price as the Axis ethernet cameras we already use on the robots, but provides a lot more functionality. The only downside at the moment is interfacing, but a breakout board could be made to interface the Kinect with a cRIO-supported communication protocol (ethernet, serial, etc).
It won’t make much of an impact. I do not see why the Kinect is the hot new craze these days. Yes, it is the first of its kind to be marketed to the masses, but this community is not the everyday consumer. I believe that only the most dedicated teams would be able to pull off any imaging with the Kinect or any other such mechanism. Like I mentioned previously, I am personally staying away from the Kinect. Good luck to all the teams going the Kinect route, but I personally believe that is not the right way to go in the context of this competition.
Just remember to innovate, and not imitate.
If you really want to know why I think this, just PM me.
Using a Kinect for advanced perception would be extremely innovative. I’d almost say it’s a lock for winning awards at champs.
I agree with Adam. I just hope that those gigawatt rock’n’roll lights don’t interfere with it.
That is the reason I changed my strategy for image processing. If you read my posts from before the summer, I was suggesting projecting a known image onto the field and then processing the image taken with a camera. I knew 1) it would probably be illegal and 2) the super mega lights would probably mess with the image projection - something to do with the laser projection. That is when I “came up” with the rotating-cameras idea. I honestly think it would work just great as long as I get going with it. If it doesn’t work, then I can do it the stationary-camera way. Maybe I can even use 3 cameras… who knows. My mind is full of ideas; I just need to put them into effect.
EDIT: Quick question: is the Kinect just using an IR LED to light up the place like a security camera does, and using the intensity of the reflected photons to get an approximate range?
From what I understand (I haven’t had much time to look it up), yes. The Kinect uses a combination of IR and a camera. It has an infrared receiver that registers near-infrared light to get not only position, but also color and texture.
There are some sharp programmers out there. I’m sure they could figure out compatibility and all that programming lingo that I have no clue about. I just thought I’d ask everyone’s opinion, since my friend and I, both ex-FIRST team members, saw a video of someone who essentially strapped a Kinect to a helicopter and had it fly autonomously, and we thought it could be incredibly helpful with several things in FRC. I reckon there will be a maximum of 3 robots that implement the Kinect in this year’s competition.
I agree that it’s awesome; unfortunately, based on previous games, there has thus far been little point-value incentive in FIRST to do it. The autonomous tasks it could have solved were solved more easily with fewer resources. That doesn’t stop a team from doing it because it’s cool, however, or if the autonomous period turns out to be something pretty different from the past.
“Steal from the best, and invent the rest” - I’m not advocating just duplicating others ideas, but there is nothing wrong with looking at how someone else attacked a problem and using their technique/results to help you.
I think integrating a Kinect onto a FIRST robot would be highly innovative. I’ve yet to see anyone do it, and based on some of the hacks I’ve seen with the Kinect, it has the potential to be very useful. Considering the issues the vision system has had over the years since it was first introduced, it seems like a worthy endeavor to me.
-Brando
The Kinect uses a depth-finding approach called structured light. An IR laser is projected out as a pattern of dots (believed to be done with some type of diffraction grating). The scene is then viewed with an IR-sensitive camera sensor. The method believed to be used to compute depth from this image is a comparison with a stored reference pattern: because of the spacing between the projector and the camera, changes in depth will shift the dots in the image relative to the reference pattern.
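The geometry works out the same as a stereo pair: the dot shift (disparity) is inversely proportional to depth. A minimal sketch, using rough community estimates for the Kinect’s baseline and IR focal length (assumptions, not official specs):

```python
def depth_from_disparity(disparity_px, baseline_m=0.075, focal_px=580.0):
    """Classic triangulation: Z = f * b / d.

    disparity_px: how far a projected dot has shifted (in pixels) relative
    to where it sits in the stored reference pattern. The ~7.5 cm baseline
    and ~580 px IR focal length are rough community estimates.
    """
    if disparity_px <= 0:
        return float("inf")  # no shift: at or beyond the reference plane
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(10.0))  # a 10 px shift ~= 4.35 m away
```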
The RGB camera on the Kinect is separate from the depth system, and it is possible to stream data from one or the other or both. So far no method has been discovered to receive aligned RGB and depth images from the Kinect itself. All alignment being implemented is done on the computer side, most commonly by calibrating with an approach similar to the standard checkerboard approach for camera calibration.
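As a sketch of that computer-side alignment, assuming you’ve already run a checkerboard calibration to get the intrinsic matrices K_depth and K_rgb and the extrinsic rotation/translation (R, t) between the two sensors (none of which comes from the Kinect itself):

```python
import numpy as np

def register_depth_pixel(u, v, z, K_depth, K_rgb, R, t):
    """Map a depth pixel (u, v) at depth z (meters) to RGB image coordinates."""
    # Back-project the depth pixel into a 3D point in the depth camera frame
    p_depth = z * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])
    # Move the point into the RGB camera frame via the calibrated extrinsics
    p_rgb = R @ p_depth + t
    # Project into the RGB image plane and normalize
    uvw = K_rgb @ p_rgb
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

Run that over every depth pixel and you get an RGB value for each depth sample, which is what the “aligned” streams people are asking for really amount to.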