Quote:
Originally Posted by The Ginger
Hey CD, I was just wondering... how in the world does vision tracking work? My team attempted primitive vision tracking in the 2016 season (our second year) but with no success. I am not asking for your code, which everyone seems to cling to like the One Ring (however, I won't turn it down), but the theory and components that make it tick. What are the best cameras? Do you write code to recognize a specific pattern of pixels (which would blow my mind), or to pick up a specific voltage value that the camera uses as a quantification of the net light picked up by the camera's receiver? Our team did well in 2016 with a solid shooter; I can only imagine how it would have done with some assisted aiming. Thank you all, and good luck January 7th!
Disclaimer: I just design and oversee final assembly and am in no way a programmer; however, our programmers will be taking a look at this.
I'll do a quick outline for you; I'm working on something more in-depth about it, though. A rough code sketch follows the outline.
1) Acquire Image (most any camera will work)
2) Filter Image to just the target color (HSV filter)
3) Identify contours (findContours in OpenCV)
4) Eliminate extra contours (filter by aspect ratio or just size)
5) Find Centroid of contour and compute range and angle to target
6) Align robot
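
Here's a minimal sketch of steps 1-5 in Python with OpenCV, just to show the shape of the pipeline (not our actual code). It assumes OpenCV 4's two-value findContours return, and the HSV bounds, minimum area, aspect-ratio window, field of view, and target width are all placeholder numbers you'd tune for your own camera and target.

```python
import cv2
import numpy as np

# All of these values are placeholders -- tune them for your camera and target.
CAMERA_INDEX = 0
LOWER_HSV = np.array([50, 100, 100])   # e.g. retroreflective tape lit by a green LED ring
UPPER_HSV = np.array([90, 255, 255])
HORIZONTAL_FOV_DEG = 60.0              # camera's horizontal field of view
TARGET_WIDTH_IN = 20.0                 # real-world width of the vision target
MIN_AREA_PX = 100.0

cap = cv2.VideoCapture(CAMERA_INDEX)

while True:
    # 1) Acquire image
    ok, frame = cap.read()
    if not ok:
        break

    # 2) Filter to just the target color: convert BGR -> HSV, then threshold
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)

    # 3) Identify contours in the binary mask (OpenCV 4 returns two values)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # 4) Eliminate extra contours: keep blobs that are big enough and
    #    roughly match the target's aspect ratio (wider than tall here)
    candidates = []
    for c in contours:
        if cv2.contourArea(c) < MIN_AREA_PX:
            continue
        x, y, w, h = cv2.boundingRect(c)
        if 1.5 < w / float(h) < 4.0:   # placeholder aspect-ratio window
            candidates.append((c, w))
    if not candidates:
        continue

    # 5) Centroid of the largest surviving contour, then angle and range
    c, w = max(candidates, key=lambda t: cv2.contourArea(t[0]))
    m = cv2.moments(c)
    cx = m["m10"] / m["m00"]           # centroid x position in pixels

    frame_width = frame.shape[1]
    # Approximate focal length in pixels from the horizontal field of view
    focal_px = frame_width / (2.0 * np.tan(np.radians(HORIZONTAL_FOV_DEG / 2.0)))
    # Angle off the camera's centerline (negative = target is to the left)
    angle_deg = np.degrees(np.arctan((cx - frame_width / 2.0) / focal_px))
    # Pinhole-model range estimate: apparent width shrinks with distance
    range_in = TARGET_WIDTH_IN * focal_px / w

    print("angle: %.1f deg, range: %.0f in" % (angle_deg, range_in))

    # 6) Align robot: feed angle_deg into your drivetrain's turn controller

cap.release()
```

For step 6, you feed that angle into whatever closed-loop turning you already have on the drivetrain (even a simple proportional loop on angle_deg works as a starting point).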