I've been curious about this.
I have a Kinect at home, and a Raspberry Pi, and have been delayed on testing how fast a 700 MHz ARM processor with 512 MB of RAM can process the data and use it to make determinations.
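If you want to put a number on that, the first test could be as simple as timing how many depth frames per second the Pi can pull off the Kinect. Here's a minimal sketch, assuming the OpenKinect Python wrapper (freenect) is built and installed on the Pi; the frame count is arbitrary:

```python
import time
import freenect

N_FRAMES = 30  # arbitrary sample size

start = time.time()
for _ in range(N_FRAMES):
    # sync_get_depth() returns (depth_array, timestamp);
    # depth_array is a 640x480 numpy array of raw 11-bit readings
    depth, _ = freenect.sync_get_depth()
elapsed = time.time() - start

print("Grabbed %d depth frames in %.2fs (%.1f fps)"
      % (N_FRAMES, elapsed, N_FRAMES / elapsed))
```

That only measures grabbing frames, not doing anything with them, but it would tell you pretty quickly whether the Pi has headroom left over for the actual processing.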
I imagine that if you had one Pi capture and stream the video to a second Pi doing the determinations, it could work, something like the sketch below.
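A rough sketch of that two-Pi split, again assuming the freenect Python wrapper; the address and port are made up. Pi #1 grabs depth frames and ships them raw over TCP:

```python
import socket
import freenect

CRUNCHER = ("192.168.1.42", 5005)  # hypothetical address of the second Pi

sock = socket.create_connection(CRUNCHER)
try:
    while True:
        depth, _ = freenect.sync_get_depth()  # 640x480 numpy array, uint16
        sock.sendall(depth.tobytes())         # ~600 KB per frame, uncompressed
finally:
    sock.close()
```

And Pi #2 reassembles fixed-size frames and does the thinking:

```python
import socket
import numpy as np

FRAME_BYTES = 640 * 480 * 2  # one uint16 depth frame

srv = socket.socket()
srv.bind(("", 5005))
srv.listen(1)
conn, _ = srv.accept()

buf = b""
while True:
    # accumulate bytes until we have one complete frame
    while len(buf) < FRAME_BYTES:
        chunk = conn.recv(65536)
        if not chunk:
            raise SystemExit("sender closed the connection")
        buf += chunk
    frame, buf = buf[:FRAME_BYTES], buf[FRAME_BYTES:]
    depth = np.frombuffer(frame, dtype=np.uint16).reshape(480, 640)
    # ... do the actual determinations here ...
```

One thing to watch: at 30 fps that's roughly 18 MB/s of raw depth data, which is more than the Pi's 100 Mbit Ethernet can carry, so you'd probably end up downsampling, sending every Nth frame, or compressing.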
This is the resource I have been looking into.
http://openkinect.org/wiki/Main_Page
There would be some hand-compiling going on, I'm certain, so you'd want to be prepared for that. This may be an off-season project to see how feasible it is with the limited power of the Pi.
On to the Kinect:
The Kinect has a built-in IR projector (which I think we all know, I'm just reiterating here), an IR camera, and a color camera.
The sensor is designed for depth sensing and image recognition, so accuracy shouldn't be the issue; it could be close to a perfect tool for robotics.
The IR camera reads the projected IR dot pattern, and the Kinect translates the distortion of that pattern into depth of field/distance from the camera. You can get a rough idea of how far away an object is (although I think you'd still want to include a rangefinder if you wanted to gauge distance precisely on your bot). The depth data and the color camera work together for determinations and object recognition; there's a rough sketch of pulling a distance out of it below.
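Here's what reading that depth looks like in practice, same freenect wrapper assumption as above. The raw-value-to-meters conversion is a community-derived approximation that circulates on the OpenKinect wiki, not a calibrated measurement, which is part of why I'd still keep a rangefinder around:

```python
import math
import freenect

depth, _ = freenect.sync_get_depth()  # 640x480 array of raw 11-bit values
raw = int(depth[240, 320])            # reading at the center pixel

if raw < 2047:  # 2047 means "no reading" (too close, too far, or IR shadow)
    # approximate conversion from the OpenKinect community, not gospel
    meters = 0.1236 * math.tan(raw / 2842.5 + 1.1863)
    print("Center of view is roughly %.2f m away" % meters)
else:
    print("No depth reading at center")
```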
Just remember, it's more than likely going to tax the cycles on the Pi something fierce.
I hope this information helps!
Thanks,
D