The depth measurement (an infrared structured-light projector paired with an IR camera, cleverly packaged alongside the RGB CMOS sensor) is responsible for much of the Kinect's capability: using depth to segment you from your surroundings is much faster and more robust than doing it with color/intensity from the RGB camera alone - otherwise the Kinect would fail if you were wearing a white shirt in a white room, for example. A rough sketch of the idea is below.
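To make that concrete, here is a minimal sketch of depth-based foreground segmentation. It assumes the depth frame has already been converted to a 2-D array of millimeter readings (drivers such as libfreenect can provide something like this); the function name and threshold values are purely illustrative:

```python
import numpy as np

def segment_foreground(depth_mm, near_mm=500, far_mm=2000):
    """Return a boolean mask of pixels whose depth lies in [near_mm, far_mm].

    depth_mm: 2-D array of per-pixel depth in millimeters (0 = no reading).
    The thresholds here are arbitrary example values, not Kinect specifics.
    """
    valid = depth_mm > 0  # drop pixels where the sensor returned no depth
    return valid & (depth_mm >= near_mm) & (depth_mm <= far_mm)

# Toy example: a 4x4 "scene" where the center pixels are a person ~1.2 m away
# and everything else is a wall ~3 m away. Note that color never enters into
# it - a white shirt against a white wall separates just as cleanly.
frame = np.full((4, 4), 3000, dtype=np.uint16)
frame[1:3, 1:3] = 1200
print(segment_foreground(frame).astype(int))
```

A simple distance threshold like this is obviously cruder than what the Kinect's skeleton-tracking software actually does, but it shows why depth makes the segmentation problem so much easier than working from the RGB image.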
I, personally, am very excited by the commercialization of low-cost depth imagers (Kinect is the most conspicuous example, but several companies and universities around the world are making headway as well). Active depth sensors were a major reason many DARPA Grand and Urban Challenge vehicles succeeded - but sensors like the ubiquitous Velodyne HDL units used by many teams run about $75,000.
Sure, the Kinect's sensor doesn't come close to a Velodyne's range, accuracy, or resolution - but at $150, it would certainly be more than good enough on a FIRST robot!