06-01-2015, 21:33
x86_4819
computer-whisperer
AKA: Christian Balcom
FRC #4819 (Flat Mountain Mechanics)
Team Role: Programmer
 
Join Date: Sep 2014
Rookie Year: 2013
Location: Shepherd MI
Posts: 92
Re: Vision Assistance

The traditional method is to define ranges of color values that match your target, then use algorithms such as convex hull to pick out the objects. We did that last season in LabVIEW, and it worked pretty reliably. It's not perfect, though: it took a good 5 minutes to get the values dialed in at each competition, and by the time our bot made it to the finals, either the Axis camera had re-calibrated itself or the lighting had changed just enough to render the color ranges ineffective.
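For anyone new to this, the color-range step boils down to a per-channel bounds check on an HSV image (in OpenCV that's `cv2.inRange`, followed by `cv2.findContours`/`cv2.convexHull`). Here's a minimal numpy-only sketch of the thresholding part; the HSV values are made-up examples, not our actual tuned numbers:

```python
import numpy as np

def in_range(hsv, lower, upper):
    """Mimic cv2.inRange: 255 where every channel falls inside
    [lower, upper], 0 elsewhere."""
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    inside = np.all((hsv >= lower) & (hsv <= upper), axis=-1)
    return (inside * 255).astype(np.uint8)

# Tiny 2x2 "image": two green-ish pixels, two others
hsv = np.array([[[60, 200, 200], [0, 0, 0]],
                [[62, 180, 220], [120, 50, 50]]], dtype=np.uint8)

# Example bounds for retroreflective tape lit by a green LED ring
mask = in_range(hsv, (50, 100, 100), (80, 255, 255))
print(mask)  # 255 at the two green-ish pixels, 0 elsewhere
```

The fragility I mentioned lives entirely in those six bound values: shift the lighting and pixels slide out of the box.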

Over the off-season, we bought a Raspberry Pi and camera module to try an alternative algorithm I had come up with (using Python and OpenCV). Rather than using static color values to isolate the retroreflective tape, we tried taking two pictures - one with the tape lit and one unlit - and subtracting the images to get the target.
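The core of that idea is just an absolute difference plus a threshold (in OpenCV, `cv2.absdiff` then `cv2.threshold`). A numpy sketch of what I mean - not our actual Pi code, and the threshold value is an arbitrary example:

```python
import numpy as np

def diff_mask(lit, unlit, thresh=40):
    """Absolute per-pixel difference between lit and unlit grayscale
    frames, thresholded so only what changed (ideally just the
    retroreflection) survives."""
    # Widen to int16 first so the subtraction can't wrap around uint8
    diff = np.abs(lit.astype(np.int16) - unlit.astype(np.int16))
    return ((diff > thresh) * 255).astype(np.uint8)

# One pixel brightens dramatically when the LED ring fires
lit = np.array([[200, 10], [10, 10]], dtype=np.uint8)
unlit = np.array([[20, 10], [10, 10]], dtype=np.uint8)

mask = diff_mask(lit, unlit)
print(mask)  # 255 only where the tape lit up
```

Note the int16 widening: subtracting uint8 arrays directly wraps around, which is exactly the kind of chaos you don't want on top of the motion problem.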

When the camera was held rock-steady, the algorithm worked beautifully. The trouble came as soon as anything moved: an ever-so-slight shift in the image between the two pictures caused all sorts of chaos. If I were to do it again, I would probably do a hybrid of the two methods.
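By "hybrid" I mean something like: use the lit/unlit difference once, while stationary, to auto-calibrate the color bounds, then run ordinary single-frame thresholding with those bounds while moving. A rough sketch of the calibration step (the padding values are guesses, not tested numbers):

```python
import numpy as np

def calibrate_range(hsv, mask, pad=(10, 40, 40)):
    """From pixels flagged by a lit/unlit difference mask, derive
    padded HSV bounds for ordinary single-frame thresholding later."""
    target = hsv[mask > 0].astype(np.int16)  # (n, 3) target pixels
    pad = np.asarray(pad)
    lower = np.clip(target.min(axis=0) - pad, 0, 255).astype(np.uint8)
    upper = np.clip(target.max(axis=0) + pad, 0, 255).astype(np.uint8)
    return lower, upper

# HSV frame plus the mask a difference pass would have produced
hsv = np.array([[[60, 200, 200], [0, 0, 0]],
                [[62, 180, 220], [120, 50, 50]]], dtype=np.uint8)
mask = np.array([[255, 0], [255, 0]], dtype=np.uint8)

lower, upper = calibrate_range(hsv, mask)
print(lower, upper)  # bounds hugging the flagged pixels, plus padding
```

That would replace the 5 minutes of hand-tuning with a one-shot calibration at the start of each match, while keeping the motion-tolerant single-frame tracking afterwards.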

What processes have you tried?
__________________
My Github Repositories