Quote:
Originally Posted by synth3tk
As Vikesrock mentioned, this would be akin to buying a standard Logitech webcam and writing recognition software for that. There's nothing special per se about having the driver with the intent of using it the same way it's used for its purpose on the 360.
As for using this on the robot, that's an interesting idea...
The depth measurement (produced by an infrared structured-light projector paired with an IR camera, cleverly integrated alongside the RGB sensor) is responsible for much of the Kinect's capability. Using depth to segment you from your surroundings is much faster and more robust than doing it with color/intensity via the RGB camera - otherwise Kinect would fail if you were wearing a white shirt in a white room, for example.
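To illustrate why depth makes that segmentation so easy, here's a minimal sketch of depth thresholding. The function name, the depth values, and the zero-means-dropout convention are all my own assumptions for illustration - this isn't from any Kinect SDK:

```python
# Hypothetical sketch: segment a "player" from the background by depth
# alone. Depth values are in millimeters; 0 means no reading (dropout).

def segment_by_depth(depth, near_mm, far_mm):
    """Return a boolean mask: True where a pixel's depth falls within
    [near_mm, far_mm], i.e. the pixel likely belongs to the subject.
    Note this never looks at color - a white shirt against a white
    wall segments just as cleanly as any other."""
    return [
        [near_mm <= d <= far_mm for d in row]
        for row in depth
    ]

# Toy 3x4 depth image: a "person" at ~1.2 m in front of a wall at ~3 m.
depth = [
    [3000, 1200, 1210, 3000],
    [3000, 1190, 1205, 3000],
    [3000,    0, 1198, 3000],  # 0 = sensor dropout, treated as background
]

mask = segment_by_depth(depth, near_mm=800, far_mm=2000)
```

A real pipeline would follow this with connected-component labeling and some morphological cleanup, but the core separation really is this cheap once you have per-pixel depth.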
I, personally, am very excited by the commercialization of low-cost depth imagers (Kinect is the most conspicuous example, but several companies and universities around the world are making headway as well). Active depth sensors were a big part of why many DARPA Grand and Urban Challenge vehicles succeeded - but sensors like the ubiquitous Velodyne HDL units used by many teams run about $75,000.
Sure, Kinect's sensor doesn't offer nearly the range, accuracy, or resolution - but for $150, it certainly would be more than good enough on a FIRST robot!