Quote:
Originally Posted by Jared341
<snip>
As long as Y = "a distinct color not found/illegal on robots", you could probably do this pretty well without even using the Kinect's depth image. (OpenCV has built-in Hough circle routines, for example: http://www.youtube.com/watch?v=IeLeMBU4yJk).
For added robustness, you could use the Kinect depth image simply to help select the range of radii to look for. I think you'd get equivalent performance - and much more efficient computation - using this method than with 3D point cloud fitting.
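For reference, here is a minimal sketch of the approach Jared describes above (color threshold, then OpenCV's built-in Hough circle routine, with the radius bounds the depth image could otherwise supply). The HSV color range, radius bounds, and input file are placeholder assumptions, not anything from his post:

Code:
import cv2
import numpy as np

# One RGB frame from the Kinect's color camera (placeholder file name).
frame = cv2.imread("frame.png")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Keep only pixels near the ball's hypothetical distinct color Y (assumed HSV range).
mask = cv2.inRange(hsv, np.array([100, 120, 60]), np.array([130, 255, 255]))
gray = cv2.cvtColor(cv2.bitwise_and(frame, frame, mask=mask), cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)

# The depth image could be used to narrow minRadius/maxRadius; fixed guesses here.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=40,
                           param1=100, param2=30, minRadius=10, maxRadius=80)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(frame, (x, y), r, (0, 255, 0), 2)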
First, regarding "as long as Y = 'a distinct color not found/illegal on robots'": that is a pretty significant "as long as."
Second, regarding standard image processing: my experience with machine vision is that with controlled lighting, life is good; without it, life can be pretty crummy.
An FRC robotics field is a pretty lousy lighting environment -- it may be bright, it may be dim, there may be spotlights, there may be colored lighting, ...
There were teams in the Georgia Dome whose image processing algorithms ran fine during the day but had fits after dark (and vice versa). Are you willing to live with the possibility that your algorithm runs fine on your division field but goes wacky on Einstein? Maybe, but maybe not...
So... ...I think that the 3D points from the PrimeSense distance data are going to be more robust to ambient lighting conditions.
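For concreteness, a depth-only approach along these lines might look roughly like the sketch below: back-project the PrimeSense depth image to 3D points and fit a sphere to a region of interest. The intrinsics, ROI handling, and function names are my own assumptions, not anything tested on a field:

Code:
import numpy as np

# Rough Kinect depth-camera intrinsics (assumed values).
FX, FY, CX, CY = 594.0, 591.0, 320.0, 240.0

def depth_to_points(depth_m, roi):
    """Back-project a depth image (meters) to an Nx3 point cloud for a pixel ROI."""
    u0, v0, u1, v1 = roi
    vs, us = np.mgrid[v0:v1, u0:u1]
    z = depth_m[v0:v1, u0:u1]
    valid = z > 0
    z = z[valid]
    x = (us[valid] - CX) * z / FX
    y = (vs[valid] - CY) * z / FY
    return np.column_stack([x, y, z])

def fit_sphere(points):
    """Algebraic least-squares sphere fit; returns (center_xyz, radius)."""
    A = np.column_stack([2 * points, np.ones(len(points))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

Because this works on distance data rather than color, it is insensitive to the color and brightness of the ambient light, which is the point above.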
Joe J.