Quote:
Originally Posted by Robby Unruh
Did any other team not use encoders/ultrasonic for their autonomous?
My team only had line sensors, we're counting on timers to do the rest of the work for us (arm).
And if so, how well did the timers work out for your team? Do you think we would have enough time to implement encoders on our robot in the pits?
Our team is using 3 front-mounted light sensors with great success. No encoders, no ultrasonic, no timers.
We're using C++. Our autonomous program runs a state machine that reads the three light sensors and picks the next state from their values. Our algorithm is as follows:
Light sensor values (L/C/R) -> State to execute
000 -> move() - go forward until we see the line
010 -> move() - ideal, we're centered on the line
100 -> correctLeft() - steer left to reacquire the line
110 -> correctLeft() as well
001 -> correctRight()
011 -> correctRight()
101 -> at the fork; correctLeft() or correctRight() based on a switch on the robot
111 -> placeTube()
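The table above boils down to a pure decision function. Here's a minimal sketch of that idea (not our actual code; the state names and the forkLeft flag standing in for the physical switch are illustrative):

```cpp
#include <cassert>

// Each sensor boolean is true when that sensor sees the line.
enum class State { Move, CorrectLeft, CorrectRight, PlaceTube };

// forkLeft stands in for the physical switch that picks a branch at 101.
State nextState(bool left, bool center, bool right, bool forkLeft) {
    int code = (left << 2) | (center << 1) | (right ? 1 : 0);
    switch (code) {
        case 0b000:                              // lost the line: keep going
        case 0b010: return State::Move;          // centered on the line
        case 0b100:                              // line is off to our left
        case 0b110: return State::CorrectLeft;
        case 0b001:                              // line is off to our right
        case 0b011: return State::CorrectRight;
        case 0b101:                              // the fork: both outer sensors lit
            return forkLeft ? State::CorrectLeft : State::CorrectRight;
        default:    return State::PlaceTube;     // 111: we're at the rack
    }
}
```

The main loop would just call nextState() each tick and run the matching motor routine, so all the sensor logic lives in one easily testable place.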
After executing placeTube(), the program exits the state machine, backs up several feet, then turns on the spot (~180 degrees, though it's not exact since we don't have encoders or a gyro). Getting the tube on the rack generally takes about ten seconds, and we're only running the motors at 40% power for driving and 60% for turning.
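With no encoders or gyro, that retreat-and-turn is pure timed dead reckoning: calibrate how fast the robot moves and spins at a fixed power, then run the motors for distance/rate seconds. A sketch of that arithmetic (the rate numbers in the usage below are made-up calibration values, and the function names are hypothetical):

```cpp
// At a fixed drive power, a calibrated robot covers driveRateFps feet/sec.
double driveSeconds(double feet, double driveRateFps) {
    return feet / driveRateFps;
}

// At a fixed turn power, it spins turnRateDps degrees/sec. Because the rate
// is measured rather than sensed, the 180-degree turn is "close, not perfect".
double turnSeconds(double degrees, double turnRateDps) {
    return degrees / turnRateDps;
}
```

For example, a robot that spins at a measured 90 deg/s at 60% power would get turnSeconds(180.0, 90.0) = 2.0 seconds of turn time; any error in that measured rate shows up directly as heading error, which is exactly the imperfection described above.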