#25, 06-01-2012, 08:26
Jared Russell
Re: Running the Kinect on the Robot.

Quote:
Originally Posted by Joe Johnson
No not really. It returns the depth data, but not as an image. You can build an image out of the data, but there are a lot of reindeer games involved.
The OpenNI Linux driver and C/C++ wrappers can do this for you pretty painlessly.
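
For what it's worth, here's a minimal (untested) sketch of what that looks like with the OpenNI 1.x C++ wrapper, wrapping the raw depth frame in an OpenCV image with no copy. Class and function names are from the OpenNI 1.x API as I remember it, so treat them as approximate if your driver version differs.

// Minimal sketch (untested): grab one depth frame through the OpenNI 1.x
// C++ wrapper and view it as a 16-bit OpenCV image. Names are from the
// OpenNI 1.x API as I remember it and may differ with your driver version.
#include <XnCppWrapper.h>
#include <opencv2/opencv.hpp>

int main()
{
    xn::Context context;
    if (context.Init() != XN_STATUS_OK) return 1;

    xn::DepthGenerator depth;
    if (depth.Create(context) != XN_STATUS_OK) return 1;

    context.StartGeneratingAll();
    context.WaitOneUpdateAll(depth);   // block until a new depth frame arrives

    xn::DepthMetaData depthMD;
    depth.GetMetaData(depthMD);

    // Each pixel is a 16-bit depth value in millimeters; wrap the buffer
    // (no copy) as a single-channel 16-bit OpenCV image.
    cv::Mat depthImage(depthMD.YRes(), depthMD.XRes(), CV_16UC1,
                       (void*)depthMD.Data());

    context.Shutdown();
    return 0;
}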

Quote:
Originally Posted by Joe Johnson
By the way, I have been noodling on how would I find something interesting, say, I don't know, maybe the center of a ball of radius X and color Y.
As long as Y = "a distinct color not found/illegal on robots", you could probably do this pretty well without even using the Kinect's depth image. (OpenCV has built-in Hough circle routines, for example: http://www.youtube.com/watch?v=IeLeMBU4yJk). For added robustness, you could use the Kinect depth image simply to help select the range of radii to look for. I think you'd get equivalent performance - and much more efficient computation - with this method compared to 3D point cloud fitting.
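
To make that concrete, here's a rough, untested sketch of the color-threshold-plus-Hough-circle idea in OpenCV. The HSV bounds, Hough parameters, and the focal-length constant used to turn a depth reading into a pixel-radius range are all placeholders you'd tune for your camera and game piece, and the enum constant names vary a bit between OpenCV versions.

// Rough sketch (untested): find a ball of a known color with a color
// threshold plus cv::HoughCircles, optionally using a Kinect depth reading
// to bound the expected pixel radius. All thresholds, Hough parameters, and
// the focal-length constant below are placeholders to tune for your setup.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

std::vector<cv::Vec3f> findBall(const cv::Mat& bgr, double depthMeters)
{
    // Isolate the target color in HSV space (placeholder bounds for a
    // red-ish ball; pick bounds for whatever "color Y" actually is).
    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(0, 120, 70), cv::Scalar(10, 255, 255), mask);

    // Smooth the mask so the circle detector sees cleaner edges.
    cv::GaussianBlur(mask, mask, cv::Size(9, 9), 2.0, 2.0);

    // Use the depth reading to bound the radius we expect in pixels:
    // r_pixels ~= f * R_meters / depth. 525 px is a rough Kinect focal
    // length at 640x480 and 0.10 m is a made-up ball radius ("radius X").
    const double focalPx = 525.0, ballRadiusM = 0.10;
    int expected = (int)(focalPx * ballRadiusM / depthMeters);
    int minR = std::max(1, expected / 2);
    int maxR = expected * 2;

    std::vector<cv::Vec3f> circles;  // each result is (x, y, radius)
    cv::HoughCircles(mask, circles, cv::HOUGH_GRADIENT,
                     /*dp=*/2, /*minDist=*/mask.rows / 4,
                     /*param1=*/100, /*param2=*/40, minR, maxR);
    return circles;
}

Without a depth reading you'd just pass a wide radius range; the depth-derived bound mostly buys you fewer false positives and less wasted computation.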