Re: CMUcam autonomous programming
I have thought that this would probably be the easiest way to tackle autonomous as well. I haven't cracked into the vision system yet, but the biggest problem I can foresee is that, from either of the two side starting locations on the field, several tetra positions are very nearly in line with each other. Taking the apparent size of the tetras into account may help determine which one is farther away, but it could still be difficult.
My plan was to have the camera pan (or have the robot rotate) until the target is found. Then you would compare the camera's size and pan values against a stored table of expected values for each tetra position and pick the closest match. You would need a different table for each of the 3 starting positions the robot could have (the left and right positions might share a table and just mirror the tetra positions). Once a match has been selected, you would just run the corresponding dead reckoning code and get on your way.
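The table-matching idea above could be sketched roughly like this. All the pan angles, blob sizes, and tetra labels here are made-up placeholders, not real field measurements, and the distance weighting is just one possible way to combine the two readings:

```python
# Sketch of matching a camera reading against a stored table of
# expected (pan, size) values for each tetra position.
# Values are hypothetical, not actual 2005 field numbers.

# One table per starting position: (pan_degrees, blob_size_pixels, tetra_id)
TETRA_TABLE = [
    (-40,  55, "far-left"),
    (-15, 120, "near-left"),
    ( 10, 130, "center"),
    ( 35,  60, "far-right"),
]

def closest_tetra(pan, size, table=TETRA_TABLE, size_weight=0.5):
    """Return the tetra_id whose stored (pan, size) pair is nearest to
    the camera reading. size_weight scales pixel differences so they
    are roughly comparable to degrees of pan."""
    def score(entry):
        stored_pan, stored_size, _ = entry
        return abs(pan - stored_pan) + size_weight * abs(size - stored_size)
    return min(table, key=score)[2]

print(closest_tetra(12, 125))  # a reading near the stored "center" entry
```

For the mirrored left/right starting positions, you could reuse the same table and just negate the pan angles before looking up the match.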
__________________
Learn, edit, inspire: The FIRSTwiki.
Team 1257
2005 NYC Regional - 2nd seed, Xerox Creativity Award, Autodesk Visualization Award
2005 Chesapeake Regional - Engineering Inspiration Award
2004 Chesapeake Regional - Rookie Inspiration Award
2004 NJ Regional - Team Spirit Award