Quote:
Originally Posted by Jared341
The depth measurement (a time-of-flight LIDAR array cleverly built into the CMOS camera) is responsible for much of the capability of the Kinect (using depth to segment you from your surroundings is much faster and more robust than doing it with color/intensity via the RGB camera).
I'd like to see how the IR time-of-flight measurement holds up in noisy environments (like, say, outside or under stage lights). I'll have to get my hands on some hardware pretty soon.
Also, if the frame rate is decent enough, you may be able to spin this thing on a vertical axis for Velodyne-type readings. We did this with single-plane LIDARs (this project:
MIT CSAIL Autonomous Forklift), and the results were pretty good.
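Rough sketch of the stitching, for anyone who wants to try it: back-project each depth frame with approximate Kinect-class intrinsics (calibrate your own unit for real numbers), rotate by the pan angle at capture time, and pile everything into one cloud. The frame grabber and angle source are placeholders; this is just the geometry, not a working driver.

[code]
import numpy as np

# Rough Kinect-class intrinsics (focal lengths and principal point in pixels).
# These are assumed values -- calibrate your own sensor.
FX, FY, CX, CY = 580.0, 580.0, 320.0, 240.0

def depth_to_points(depth):
    """Back-project a depth image (meters) to camera-frame XYZ points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop pixels with no depth return

def rotate_about_vertical(points, angle_rad):
    """Rotate camera-frame points about the vertical (y) axis by the pan angle."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.array([[  c, 0.0,   s],
                  [0.0, 1.0, 0.0],
                  [ -s, 0.0,   c]])
    return points @ R.T

def build_sweep(frames_with_angles):
    """frames_with_angles: iterable of (depth_image, pan_angle_rad) pairs."""
    cloud = [rotate_about_vertical(depth_to_points(d), a)
             for d, a in frames_with_angles]
    return np.vstack(cloud)            # one merged 360-degree point cloud
[/code]

This is the same trick we used to merge the planar LIDAR sweeps on the forklift, just with a 2D depth frustum instead of a single scan line, so the cloud fills in much faster per revolution.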
The cool thing about the depth sensor here is that most CV algorithms (edge detectors, feature finders) should just work.
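For example, something like the snippet below (a quick sketch, assuming you can grab a depth frame as a numpy array via some placeholder get_depth_frame()): scale depth to 8-bit and feed it straight into stock OpenCV edge and corner detectors.

[code]
import cv2
import numpy as np

depth = get_depth_frame()          # HxW depth array (mm or m); placeholder grabber

# Scale depth into the 8-bit range the standard detectors expect.
depth8 = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

edges = cv2.Canny(depth8, 30, 90)  # depth discontinuities = object boundaries
corners = cv2.goodFeaturesToTrack(depth8, maxCorners=200,
                                  qualityLevel=0.01, minDistance=5)
[/code]

Depth edges are largely immune to texture and lighting, which is exactly why segmenting off this sensor beats doing it in RGB.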