Looking for something/someone to jump start us on how to take the retro reflective image of a quadrilateral we see from the camera into the distance and angles to the basket.
John Vriezen
Team 2530 Inconceivable
Mentor
Duluth
These are all for distance.
Method 1: once you identify which blob is actually the rectangle, you could do some empirical tests to map blob size in pixels to distance (a team I worked with in 2006 did this and it worked well, though that was a solid target rather than a rectangle with a hole in the middle).
Method 2: Since the height of the retro-reflective blob won’t actually change that much depending on your robot’s view of it (until you get very close to the baskets), empirically develop a function mapping blob height to distance.
Method 3: Take method 1 and 2, and average their results.
Method 4: Use knowledge of the properties of the camera lens, and do methods 1 and 2 mathematically instead of empirically. This is harder, but more accurate.
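As a starting point for Method 4, here is a minimal sketch of the pinhole-camera math: with a known target height and a focal length derived from the camera's field of view, blob height in pixels maps directly to distance. The target height, resolution, and FOV numbers below are assumptions for illustration; measure your own camera and target.

```python
import math

# Assumed values -- replace with measurements of your own camera and target.
TARGET_HEIGHT_IN = 18.0   # real height of the reflective rectangle, inches (assumption)
IMAGE_HEIGHT_PX = 480     # camera vertical resolution (assumption)
VERT_FOV_DEG = 37.0       # vertical field of view of the lens (assumption)

def focal_length_px(image_height_px, vert_fov_deg):
    """Focal length in pixels, computed from resolution and field of view."""
    return (image_height_px / 2.0) / math.tan(math.radians(vert_fov_deg) / 2.0)

def distance_from_blob_height(blob_height_px):
    """Distance to the target along the camera axis, in inches.

    Pinhole model: real_height / distance == pixel_height / focal_length.
    """
    f = focal_length_px(IMAGE_HEIGHT_PX, VERT_FOV_DEG)
    return TARGET_HEIGHT_IN * f / blob_height_px
```

The same relation applied to blob width gives Method 1's mapping; averaging the two is Method 3.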
There are lots of other ways, but those are the ones I can think of off the top of my head that I’m pretty sure will work. The hard part for each will be identifying which blob is the backboard, and making sure that a robot or other obstacle in front of you isn’t obscuring the blob or cutting it in two.
I posted something a few days ago that used the lengths of the opposite vertical sides to locate the robot on the field. The math is pretty simple, basically doing the same thing as the white paper for the vertical sides of the rect, and ultimately constructing a non-right triangle. You could use the law of cosines if you wanted to identify the angles within the triangle. You can also base it off of angles, I’m sure, but I suspect that will be less accurate due to lens distortion and coarse resolution.
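The triangle step above can be sketched as follows: the two distances (estimated from the apparent heights of the left and right vertical edges) and the known real width of the rectangle form the three sides, and the law of cosines recovers the angles. The 24-inch width is an assumed value for illustration.

```python
import math

TARGET_WIDTH_IN = 24.0  # assumed real width of the reflective rectangle, inches

def triangle_angles(d_left, d_right, width=TARGET_WIDTH_IN):
    """Angles of the (robot, left edge, right edge) triangle, in degrees.

    d_left / d_right are distances to the two vertical target edges;
    width is the real distance between those edges.  Law of cosines:
    c^2 = a^2 + b^2 - 2ab*cos(C), solved for each angle.
    """
    at_robot = math.degrees(math.acos(
        (d_left**2 + d_right**2 - width**2) / (2.0 * d_left * d_right)))
    at_left = math.degrees(math.acos(
        (d_left**2 + width**2 - d_right**2) / (2.0 * d_left * width)))
    at_right = 180.0 - at_robot - at_left
    return at_robot, at_left, at_right
```

When the two edge distances are equal, the robot is on the target's centerline; the asymmetry between them tells you how far off-axis you are.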
As mentioned in the other post, I highly encourage you to build a backboard, attach some large-print tape measures or yard sticks to it, and take images from various locations on the field. Perhaps mark the floor with tape and label the spots (A, B, C, etc.) so that you know the measurements. Keep notes of distance to target, location, etc. Then use Vision Assistant to work through some of the images by hand using the techniques listed above or from the white paper, and see how close you get. Once you get the hang of it, write code that runs over the same images, then move to the locations on the field with the live camera and see how well it works.
Finally, consider what you are trying to locate. Is it the center of the hoop, the center of the rectangle, or something else? Once you know where you are, you may need some more math to actually aim.
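For the aiming step, a hedged sketch of one piece of that extra math: converting the horizontal pixel offset of your chosen aim point into a turn angle. The resolution and horizontal FOV below are assumptions; substitute your camera's actual numbers.

```python
import math

IMAGE_WIDTH_PX = 640    # camera horizontal resolution (assumption)
HORIZ_FOV_DEG = 47.0    # horizontal field of view (assumption)

def aim_angle_deg(target_center_x_px):
    """Degrees to turn so the camera axis points at the target center.

    Positive result means the target is to the right of image center.
    Uses the same pinhole model: angle = atan(pixel_offset / focal_length).
    """
    f = (IMAGE_WIDTH_PX / 2.0) / math.tan(math.radians(HORIZ_FOV_DEG) / 2.0)
    offset_px = target_center_x_px - IMAGE_WIDTH_PX / 2.0
    return math.degrees(math.atan2(offset_px, f))
```

Note this aims the camera, not the shooter; if the camera is mounted off the shooter's axis you will need a further offset correction.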
The post from the other day is here … http://www.chiefdelphi.com/forums/showpost.php?p=1101535&postcount=18.
Greg McKaskle