#13   11-11-2010, 22:21
Tom Bottiglieri
Registered User
FRC #0254 (The Cheesy Poofs)
Team Role: Engineer
 
Join Date: Jan 2004
Rookie Year: 2003
Location: San Francisco, CA
Posts: 3,186
Re: M$ Kinect controlling robots!

Quote:
Originally Posted by Jared341
The depth measurement (a time-of-flight LIDAR array cleverly built into the CMOS camera) is responsible for much of the capability of the Kinect (using depth to segment you from your surroundings is much faster and more robust than doing it with color/intensity via the RGB camera).
I'd like to see how the IR time-of-flight measurement holds up in noisy environments (say, outside or under stage lights). I'll have to get my hands on some hardware pretty soon.

Also, if the frame rate is decent enough, you may be able to spin this thing on a vertical axis for Velodyne-type readings. We did this with single-plane LIDARs (on the MIT CSAIL Autonomous Forklift project), and the results were pretty good.
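To picture what that sweep would look like in software, here's a minimal sketch (not from the forklift project, just an illustration) of folding depth frames taken at known spin angles into one point cloud. get_points_camera_frame-style inputs and the (yaw, points) frame format are made-up placeholders for whatever driver ends up exposing the data.

# Hedged sketch: accumulate rotated depth scans into a single point cloud.
# The input "frames" is assumed to be an iterable of (yaw_radians, Nx3 array)
# pairs, one per depth frame captured during a sweep; this is not a real
# Kinect API, just a stand-in for the eventual driver output.
import numpy as np

def yaw_rotation(theta):
    # Rotation about the vertical (z) axis by theta radians.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def accumulate_sweep(frames):
    cloud = []
    for yaw, points in frames:
        # Rotate each frame's points into the fixed (world) frame,
        # then stack everything into one Velodyne-style cloud.
        cloud.append(points @ yaw_rotation(yaw).T)
    return np.vstack(cloud)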

The cool thing about the depth sensor here is that most CV algorithms (edge detectors, feature finders) should just work on the depth image.
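For instance, a minimal sketch of running a stock edge detector straight on the depth image, assuming OpenCV and a raw 2D depth array; the normalization step is only there to get the data into the 8-bit range Canny expects.

# Hedged sketch: treat the depth image like any grayscale image so standard
# CV routines apply. "depth" is assumed to be a 2D array of raw Kinect depth
# values; edges in it correspond to depth discontinuities (object outlines).
import cv2
import numpy as np

def depth_edges(depth, low=50, high=150):
    # Scale depth to 0-255 so Canny can operate on it as an 8-bit image.
    depth8 = cv2.normalize(depth.astype(np.float32), None, 0, 255,
                           cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.Canny(depth8, low, high)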