Quote:
Originally Posted by Jared341
The depth measurement (a time-of-flight LIDAR array cleverly built into the CMOS camera) is responsible for much of the capability of the Kinect
Did you find an explanation somewhere that led you to believe the Kinect uses IR time of flight? That approach is generally very expensive, though it has been done in the way you are suggesting in the SwissRanger . . .
http://www.acroname.com/robotics/par...0-10M-ETH.html
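To see why pure time of flight gets expensive, here is a rough back-of-the-envelope sketch in Python (the numbers and function name are mine, purely for illustration, not anything from the Kinect or SwissRanger). A ToF pixel has to time the round trip of the light, so centimeter-level depth resolution means resolving on the order of tens of picoseconds:

C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_time_s):
    # Depth is half the round-trip distance the light pulse travels.
    return C * round_trip_time_s / 2.0

# Resolving 1 cm of depth requires timing to roughly 2 * 0.01 / C, about 67 picoseconds,
# which is why per-pixel ToF imagers tend to be pricey.
print(tof_distance(33e-9))  # a 33 ns round trip corresponds to roughly 4.95 m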
However, given the price point of the Kinect, I am guessing they are using a triangulation approach similar to that described here:
http://en.wikipedia.org/wiki/3D_scanner#Triangulation
This approach is also often referred to as structured laser light ranging (try a Google search; there is a lot of research in this area), and a minimal sketch of the geometry is below.
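Just to make the triangulation idea concrete (the focal length, baseline, and function name here are made-up illustration values, not Kinect specifics): if the projector beam is offset from the camera's optical axis by a known baseline, the pixel offset of the imaged spot gives depth by similar triangles.

def triangulate_depth(pixel_offset, focal_length_px, baseline_m):
    # Assumes the projected beam runs parallel to the camera's optical axis,
    # offset by baseline_m; then depth = f * b / pixel_offset by similar triangles.
    return focal_length_px * baseline_m / pixel_offset

# Example: 600 px focal length, 7.5 cm baseline, spot imaged 30 px off center -> 1.5 m
print(triangulate_depth(30, 600, 0.075))

A structured-light system does essentially this for a whole projected pattern at once rather than a single spot.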
In either case, I believe the FRC rules would have to change to allow the device on the robot, because it uses an "exposed laser". This rule is likely in place for eye-safety reasons, but I would love for our team to be able to use a ranging device like this on the robot. It opens up all kinds of advanced opportunities for robot intelligence and would put students in the thick of developing cutting-edge software.