Re: Vision Assistance
The traditional method is to define ranges of color values that match your target, then use algorithms such as convex hull to pick out the objects. We did that last season with LabVIEW, and it worked pretty reliably. It's not perfect, however: it took a good 5 minutes to get the values zeroed in at each competition, and by the time our bot made it to the finals, either the Axis camera had re-calibrated itself or the lighting had changed just enough to render the color ranges ineffective.
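For reference, the color-range version boils down to something like this in OpenCV. The HSV bounds and the area cutoff here are placeholders, not our actual numbers - you'd still have to tune them at the event:

```python
import cv2
import numpy as np

# Placeholder HSV range for a green ring light on retroreflective tape;
# the real values depend on your camera and exposure settings.
LOWER_GREEN = np.array([50, 100, 100])
UPPER_GREEN = np.array([90, 255, 255])

def find_targets(frame):
    """Threshold by color, then take the convex hull of each remaining blob."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_GREEN, UPPER_GREEN)
    # OpenCV 4.x returns (contours, hierarchy); 3.x prepends the image.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Ignore tiny blobs of noise (100 px is an arbitrary cutoff).
    return [cv2.convexHull(c) for c in contours if cv2.contourArea(c) > 100]
```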
Over the off season, we bought a Raspberry Pi and a camera module to try an alternative algorithm I had come up with (using Python and OpenCV). Rather than using static color values to isolate the retroreflective tape, we tried taking two pictures - one with the tape lit and one unlit - and subtracting the images to get the target.
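The subtraction step itself is only a few lines. Roughly this, with the threshold value being a guess you'd tune rather than anything magic:

```python
import cv2

def diff_target(lit, unlit):
    """Subtract the unlit frame from the lit frame to isolate the glow."""
    diff = cv2.absdiff(lit, unlit)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    # Anything that changed between the two exposures survives the threshold,
    # which is ideally just the illuminated tape. 40 is an illustrative value.
    _, mask = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY)
    return mask
```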
When the camera was held rock-steady, the algorithm worked beautifully. The trouble came as soon as anything moved: an ever-so-slight shift in the image between the two pictures caused all sorts of chaos. If I were to do it again, I would probably do a hybrid of the two methods.
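I haven't actually built the hybrid yet, but what I have in mind is roughly this: keep the color range deliberately loose and AND it with the difference mask, so neither test has to be perfect on its own and a small shift between frames only costs a few edge pixels. The function name and numbers below are just illustrative:

```python
import cv2
import numpy as np

def hybrid_mask(lit, unlit,
                lower=np.array([50, 100, 100]),
                upper=np.array([90, 255, 255])):
    """Combine a loose color mask with a frame-difference mask."""
    hsv = cv2.cvtColor(lit, cv2.COLOR_BGR2HSV)
    color_mask = cv2.inRange(hsv, lower, upper)
    diff = cv2.cvtColor(cv2.absdiff(lit, unlit), cv2.COLOR_BGR2GRAY)
    _, diff_mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
    # A pixel only counts if it is both the right color and lit up by the ring.
    return cv2.bitwise_and(color_mask, diff_mask)
```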
What processes have you tried?