CMUcam autonomous programming

Hello people!

Recently, my team and I have been looking at the vision system that FIRST provided for us, and we found that the tracking is somewhat buggy, but it works. I thought about a dead reckoning system where the camera would take a picture of the field during the first 48 ms, send the result to the controller, and match the detected coordinates against the picture. For example, if the camera takes a picture of the field and finds a vision tetra at position #2, the controller would look up the previously defined coordinates for tetra #2 and then use a predefined route to drive to the tetra, pick it up, and cap it.
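The lookup step could be as simple as a table mapping each tetra position number to a canned drive routine. Here is a minimal sketch in Python; the position numbers, motor values, and step durations are all invented for illustration (a real routine would run on the robot controller, not in Python):

```python
# Hypothetical routes: each is a list of (left_speed, right_speed, duration_ms)
# dead-reckoning steps. All values here are made up for illustration.
ROUTES = {
    1: [(127, 127, 1500), (100, -100, 400), (127, 127, 800)],
    2: [(127, 127, 2000), (-100, 100, 300), (127, 127, 600)],
    3: [(127, 127, 2500)],
}

def route_for_tetra(position):
    """Return the predefined drive routine for the tetra the camera found,
    or an empty route if the position was not recognized."""
    return ROUTES.get(position, [])

# e.g. the camera reports a vision tetra at position 2:
steps = route_for_tetra(2)
```

The autonomous loop would then just play back each step for its duration before moving to the next.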

Would anyone like to point me in the right direction to get started on this system? Thanks.

I have thought that this would probably be the easiest way to tackle autonomous as well. I have not cracked into the vision system yet, but the biggest problem I can foresee is that from either of the two side starting positions on the field, several tetra positions are very nearly in line. Taking the size of the tetras into account may help determine which one is farther away, but it could still be difficult.
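The size heuristic could be sketched like this: when two tetra positions fall nearly on the same line of sight, the nearer one should show up as a larger blob in the camera's tracking data. The blob areas below are invented pixel counts, not real CMUcam output:

```python
def pick_nearer(blobs):
    """Given candidate blobs as (position, pixel_area) pairs, assume the
    blob with the largest area is the nearest tetra."""
    return max(blobs, key=lambda b: b[1])[0]

# e.g. positions 2 and 5 look almost in line from a side starting spot,
# but position 2 returns a noticeably bigger blob, so we pick it:
nearest = pick_nearer([(2, 410), (5, 260)])
```

Whether this works in practice depends on how consistent the reported blob size is at field distances, which is worth testing early.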

My plan was to have the camera pan (or have the robot rotate) until the target was found. Then you would compare the camera's size and pan values against a stored table of values for each tetra position, picking the closest match. You would need a different table for each of the three starting positions the robot could have (the left and right positions might share a table and just mirror the tetra positions). Once the match has been selected, you would just run the corresponding dead reckoning code and be on your way.
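The table match described above could be sketched as a weighted nearest-neighbor lookup. Everything here is an assumption for illustration: the position numbers, the stored (size, pan) pairs, and the weights would all have to be measured and tuned on the real field:

```python
# Hypothetical table for the center starting position: expected
# (blob_size, pan_degrees) for each tetra position. Values are invented.
CENTER_TABLE = {
    1: (420, -35.0),
    2: (310, -10.0),
    3: (300, 12.0),
    4: (180, 30.0),
}

def match_tetra(size, pan, table, size_weight=1.0, pan_weight=10.0):
    """Return the tetra position whose stored (size, pan) pair is the
    closest weighted match to what the camera reported."""
    def score(item):
        stored_size, stored_pan = item[1]
        return (size_weight * abs(size - stored_size)
                + pan_weight * abs(pan - stored_pan))
    return min(table.items(), key=score)[0]

def mirror_table(table):
    """Build the mirrored table for the opposite side start by negating
    the pan angle of every entry."""
    return {pos: (s, -p) for pos, (s, p) in table.items()}

# e.g. the camera sees a blob of size 305 at pan 11 degrees:
found = match_tetra(305, 11.0, CENTER_TABLE)
```

The pan weight is larger than the size weight on the assumption that the pan angle is a more repeatable measurement than the blob size; the right balance would come out of on-field testing.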