Microsoft Kinect as Guidance
Hey CD,
I want to preface this question by saying that I am not a programmer or an electrical engineer; I design drive trains and do PR. That said, I have a question about using a Microsoft Kinect as a robot's guidance system. I know the Kinect's distance sensor is an infrared dot matrix, so I was wondering whether it could be used for obstacle detection and avoidance on our robot. I think this capability would be useful for showing real-world applications to potential sponsors, and it would be a crowd pleaser at school events. Most modern industrial robotics is moving toward partial, if not full, automation, and an obstacle-avoidance routine would let our robot demonstrate this. If any programmers or mentors know anything about this, please share. P.S. I have also considered a scanning LiDAR.
Re: Microsoft Kinect as Guidance
Given sufficiently talented programmers, you could, but this task is probably about on par with computer vision in terms of difficulty.
Generally, the steps would be:
1. Read the data.
2. Convert it to x, y, z coordinates.
3. Figure out what to do with that data (which might be as simple as querying for points within a particular box).

Proficiency with 3D math would be very useful here, as would PCL (http://pointclouds.org/).
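Step 3 above can be sketched in a few lines. This is a minimal, hypothetical example (the function name, box dimensions, and coordinate convention are all illustrative, not from any particular library): given points already converted to metric x, y, z, keep only those inside an axis-aligned box in front of the robot.

```python
import numpy as np

def points_in_box(points, x_range=(-0.5, 0.5), y_range=(-0.5, 0.5),
                  z_range=(0.3, 1.5)):
    """points: (N, 3) array of x, y, z in meters (z = distance ahead).
    Returns the subset inside the box -- obstacle candidates."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    mask = ((x_range[0] <= x) & (x <= x_range[1]) &
            (y_range[0] <= y) & (y <= y_range[1]) &
            (z_range[0] <= z) & (z <= z_range[1]))
    return points[mask]

# Example: two points, one inside the box, one beyond it.
pts = np.array([[0.0, 0.0, 1.0],   # 1 m dead ahead -> obstacle
                [0.0, 0.0, 3.0]])  # 3 m away -> ignored
obstacles = points_in_box(pts)
```

If there are any points in the box, the robot slows or turns; that single boolean check is the simplest possible avoidance routine. PCL's CropBox filter does the same thing on real point clouds, much faster.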
Re: Microsoft Kinect as Guidance
I figured a discussion must already have occurred about using a camera (depth or otherwise) to assist the driver with autonomous object detection.
Here is what a search turned up: http://www.chiefdelphi.com/forums/sh...d.php?t=118444 — keep in mind this was back in 2013. It doesn't look like anything came of it, sadly; I was hoping there would be source code, but there isn't.

Also keep in mind that dealing with so many points (640x480) can get very computationally intensive very quickly, depending on what you are doing. I would imagine the best game for collision detection would have been 2014: a big, open, flat field where every object is either a game piece or another robot.

1. libfreenect seems like the way to go.
2. From a Google search, it appears there are only 2048 possible raw pixel values on the Kinect v1, meaning there are only 2048 possible depth values. The conversion, according to one website, is: (float)(1.0 / ((double)(depthValue) * -0.0030711016 + 3.3309495161)); That gives you the distance from your lens to that point in space. You then need to figure out the heading of that vector (think of the math for lining up with the high goal) and decompose it into x, y, and z.
3. ???
4. Profit.

I don't know what stage you are at or what knowledge you have on the matter. If you have any more questions, don't hesitate to ask.
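To make step 2 concrete, here is a sketch of the depth conversion and x/y/z decomposition, assuming libfreenect hands you raw 11-bit depth values (0-2047). The conversion constants are the ones quoted above; the Kinect v1 intrinsics (fx, fy, cx, cy) are commonly cited approximations, not calibrated values for any particular unit, so treat them as placeholders.

```python
# Approximate Kinect v1 camera intrinsics (assumed, not calibrated).
FX = FY = 594.21          # focal length in pixels
CX, CY = 339.5, 242.7     # principal point (image center-ish)

def raw_to_meters(raw):
    """Map a raw 11-bit depth value to meters (formula from above)."""
    return 1.0 / (raw * -0.0030711016 + 3.3309495161)

def pixel_to_xyz(u, v, raw):
    """Decompose one depth pixel (column u, row v) into camera-frame
    x, y, z via the pinhole model: x = (u - cx) * z / fx, etc."""
    z = raw_to_meters(raw)
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return x, y, z

# Example: a pixel near the image center at a mid-range raw reading
# comes out at roughly 1 m straight ahead.
x, y, z = pixel_to_xyz(320, 240, 750)
```

Running this over all 640x480 pixels gives you the point cloud for step 3; in practice you would vectorize it with NumPy (or let libfreenect/PCL do the registration) rather than loop per pixel.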