Quote:
Originally Posted by mattiej
From what I understand (I haven't had much time to look it up), yes. The Kinect uses a combination of both IR and a camera. It's an infrared receiver that registers near-infrared light to get not only position but also color and texture.
The Kinect uses a depth-finding approach called structured light. An IR laser is projected out as a pattern of dots (believed to be produced by some type of diffraction grating), and the scene is then viewed with an IR-sensitive camera sensor. Depth is believed to be computed by comparing this image against a stored reference pattern: because of the spacing (baseline) between the projector and the camera, changes in depth shift the dots in the image relative to the reference pattern.
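To make the idea concrete, here is a minimal sketch of depth-from-dot-shift. The parameters (focal length, baseline, window sizes) are illustrative, not the Kinect's actual values, and the simple stereo relation z = f·b/d ignores the reference-plane offset the Kinect is believed to use:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Standard stereo triangulation: depth is inversely
    proportional to the observed pixel shift (disparity)."""
    return focal_px * baseline_m / disparity_px

def find_shift(ref_row, obs_row, x, window=9, max_shift=32):
    """Locate the reference-pattern patch around column x in the
    observed row by minimizing sum-of-squared-differences over
    candidate shifts (a crude 1-D pattern match)."""
    half = window // 2
    patch = ref_row[x - half : x + half + 1]
    best_shift, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        cand = obs_row[x + s - half : x + s + half + 1]
        if len(cand) != window:
            continue  # candidate window ran off the row
        cost = np.sum((patch - cand) ** 2)
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift

# Synthetic demo: a random 1-D "dot pattern" displaced by 5 px
# stands in for a dot shift caused by a depth change.
rng = np.random.default_rng(0)
ref = rng.random(200)
obs = np.roll(ref, 5)
shift = find_shift(ref, obs, x=100)            # recovers 5
depth = disparity_to_depth(abs(shift), focal_px=580.0, baseline_m=0.075)
```

The real device presumably does this matching in hardware over the whole image at once; the per-pixel principle is the same.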
The RGB camera on the Kinect is separate from the depth system, and it is possible to stream data from either one or both. So far no method has been discovered to receive pre-aligned RGB and depth images from the Kinect itself; all alignment is implemented on the computer side, most commonly after calibrating with an approach similar to the standard checkerboard method for camera calibration.
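Once the calibration has been done, mapping a depth pixel into the RGB image is a standard back-project / transform / re-project step. A sketch, with made-up intrinsics and a hypothetical 2.5 cm offset between the cameras (the actual values would come from the checkerboard calibration):

```python
import numpy as np

def register_depth_pixel(u, v, z, K_depth, R, t, K_rgb):
    """Map depth-image pixel (u, v) with depth z metres into
    RGB-image coordinates, given calibrated intrinsics K_depth,
    K_rgb and the rigid transform (R, t) between the cameras."""
    # Back-project to a 3-D point in the depth camera's frame.
    p_depth = z * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])
    # Move the point into the RGB camera's frame.
    p_rgb = R @ p_depth + t
    # Project with the RGB intrinsics and dehomogenize.
    uvw = K_rgb @ p_rgb
    return uvw[:2] / uvw[2]

# Illustrative numbers only: identical intrinsics for both cameras,
# identity rotation, 2.5 cm horizontal baseline.
K = np.array([[580.0,   0.0, 320.0],
              [  0.0, 580.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.025, 0.0, 0.0])
uv = register_depth_pixel(320, 240, 2.0, K, R, t, K)
```

Doing this per pixel on the host is exactly the "computer-side alignment" described above; nearer points shift more in the RGB image than farther ones, which is why a single fixed offset is not enough.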