Re: Vision tracking: Did they get it right?
I've only looked at the C++ vision code so far, but I would imagine the LabVIEW code is similar.
I am going to test it tomorrow with the old robot, but there are a few things that worry me:
- While the ellipse detection only uses luminance data for detecting edges, it gets that data by allocating memory, mixing the image down into it, processing it, and then freeing the memory - every single frame. I don't know how efficient VxWorks' malloc is, but per-frame allocation seems like a rather bad idea; allocating the luminance buffer once and reusing it would avoid it entirely (see the first sketch after this list).
- From what I can tell, the ellipse detection works on the edges of ellipses - meaning that it will detect two ellipses, one around the inner edge and one around the outer edge of the black circle. While this is perfectly acceptable when one bases navigation on the centers of the circles, it has the potential to throw a wrench into distance algorithms (e.g. an inverse perspective transform). Some sort of algorithm will be needed to pick one of the edges (preferably the outer one) - the second sketch after this list shows one way that could work.
- The tracking algorithm samples the image only once, then bases all further turning on the gyro without sampling any more images. There are problems both with this approach and with its implementation. I won't elaborate on this point, as it probably deserves its own separate thread.
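To illustrate what I mean about the allocation, here is a rough sketch of keeping one luminance buffer around and filling it each frame instead of malloc'ing and freeing inside the loop. The types and function names are made up (this is not the actual WPILib/NI Vision API), and the packed-RGB pixel layout is an assumption:

    #include <vector>

    struct LuminancePlane {
        int width;
        int height;
        std::vector<unsigned char> pixels;  // allocated once, reused every frame

        LuminancePlane() : width(0), height(0) {}

        void EnsureSize(int w, int h) {
            if (w != width || h != height) {
                width = w;
                height = h;
                pixels.resize(static_cast<size_t>(w) * h);
            }
        }
    };

    // Mix a packed RGB frame (3 bytes per pixel, assumed layout) down to
    // luminance, writing into the preallocated plane.
    void ExtractLuminance(const unsigned char* rgb, int w, int h,
                          LuminancePlane& out) {
        out.EnsureSize(w, h);
        for (int i = 0; i < w * h; ++i) {
            const int r = rgb[3 * i + 0];
            const int g = rgb[3 * i + 1];
            const int b = rgb[3 * i + 2];
            // Integer approximation of Rec. 601 luma: Y = 0.299R + 0.587G + 0.114B
            out.pixels[i] = static_cast<unsigned char>(
                (299 * r + 587 * g + 114 * b) / 1000);
        }
    }

The buffer only grows when the resolution changes, so after the first frame there is no allocation at all inside the vision loop.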
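And here is a sketch of the outer-edge selection - again with made-up types, since I haven't worked out how to hook into the report the detector actually returns. It groups detections whose centers roughly coincide and keeps the larger of each pair:

    #include <cmath>
    #include <vector>

    // Stand-in for whatever the ellipse detector reports (hypothetical).
    struct Ellipse {
        double centerX;
        double centerY;
        double majorAxis;  // length of the major axis, in pixels
    };

    // Keep only the larger ellipse of any pair whose centers (nearly) coincide,
    // i.e. prefer the outer edge of each black circle.
    std::vector<Ellipse> KeepOuterEdges(const std::vector<Ellipse>& detected,
                                        double centerTolerance = 5.0) {
        std::vector<Ellipse> outer;
        for (size_t i = 0; i < detected.size(); ++i) {
            const Ellipse& e = detected[i];
            bool merged = false;
            for (size_t j = 0; j < outer.size(); ++j) {
                const double dx = e.centerX - outer[j].centerX;
                const double dy = e.centerY - outer[j].centerY;
                if (std::sqrt(dx * dx + dy * dy) < centerTolerance) {
                    if (e.majorAxis > outer[j].majorAxis) {
                        outer[j] = e;  // the outer edge is the bigger one
                    }
                    merged = true;
                    break;
                }
            }
            if (!merged) {
                outer.push_back(e);
            }
        }
        return outer;
    }

The tolerance would have to be tuned - the two edges won't land on exactly the same center once the target is viewed at an angle.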
I'm impressed that they created a decently working camera example for teams to start with, though it definitely is not a perfect solution. I have to wonder if they did this on purpose - after all, it would be no fun if everyone's robot ran the same code.