Time-Based Autonomous

Hello everyone,

Earlier this year, my team tried to create an autonomous program where the robot would drive forward for a certain amount of time before braking and turning to the lift peg. We already knew that we would have to compensate for the variable voltage, so we came up with this solution. However, even with that compensation, the program still never worked accurately. So, my main question is, how have other teams implemented time-based autonomous programs successfully?


Jonathan Daniel

P.S. We don’t plan on running a time-based autonomous as our main program, we would just like to have it as a backup in case our encoders fail.

We don’t. Time is a variable that can be broken by other events. We use wheel encoders to measure the distance our wheels have traveled. So we just tell the robot to move forward a certain amount of distance. Much more reliable than time based movement.

If you’re worried about encoders failing do what we did. In the beginning of the year, we took the average of both our wheel encoders to determine distance traveled. We had an issue of an encoder failing during one match which messed up our distance. We now only take the distance traveled from the encoder that has the highest value.
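The max-of-two-encoders idea above can be sketched in a few lines. This is a minimal illustration (class and method names are mine, not any team's actual code); the key insight is that a failed encoder typically reads near zero, so averaging drags the estimate down while taking the max ignores the dead sensor:

```java
// Sketch: prefer the larger of two encoder readings so a dead (zero-reading)
// encoder does not corrupt the distance estimate. Names are hypothetical.
public class EncoderDistance {
    // Averaging fails badly when one encoder dies and reads ~0.
    public static double averaged(double left, double right) {
        return (left + right) / 2.0;
    }

    // Taking the max tolerates one failed encoder.
    public static double maxOf(double left, double right) {
        return Math.max(left, right);
    }
}
```

Note the trade-off: the max also hides a legitimately slower side (e.g. one side slipping), so this works best for straight-line distance where both sides should agree.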

I was able to successfully implement a dead-reckoned auto that was effective in 2015 to lift a yellow tote into the correct zone, and a 10-point crossing auto in 2016. The side gear auto for 2017 was a different can of worms, and I cannot recommend dead reckoning for it. If you insist, make sure you have an accurate replica of the field and your bot. Then it’s just trial and error so you can learn the tendencies of your auto. Do your best to put measures in place to keep it consistent. Then, with a fair amount of luck, your auto will work!

Instead of relying on just time, we relied on other sensors (vision tracking, gyro, distance) to give our robot an idea of where it is. It worked pretty well in competition, and we actually had a side gear working with no encoders.

We’ve tried it, but it can’t be consistent. There are too many outside factors that can influence the robot’s movement, including different wheel wear, extra friction, and battery voltage. Compensation for battery voltage can help, but it can’t get the motor turning the exact same way every time.
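The battery-voltage compensation mentioned in this thread usually amounts to scaling the commanded power by the ratio of nominal to measured battery voltage, so the motor output roughly matches what it would be on a full battery. A minimal sketch (constants and names are illustrative, not any team's actual code):

```java
// Sketch of open-loop battery voltage compensation: scale commanded power
// by (nominal / measured) voltage. Names and constants are hypothetical.
public class VoltageCompensation {
    public static final double NOMINAL_VOLTS = 12.0;

    public static double compensate(double commandedPower, double batteryVolts) {
        double scaled = commandedPower * (NOMINAL_VOLTS / batteryVolts);
        // Clamp: a sagging battery cannot be compensated past full output,
        // which is exactly why this technique has a ceiling on accuracy.
        return Math.max(-1.0, Math.min(1.0, scaled));
    }
}
```

The clamp is the catch the post above alludes to: once the compensated command saturates at full output, the robot genuinely moves slower and no amount of scaling fixes it.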

If you don’t need to do something very precisely, then it works fine. For example - crossing a defense last year, or crossing the baseline this year. However for a precision task like side gear autos, you need some sort of feedback device / devices.

We have timeouts on all our movements. We also have a time-based autonomous which crosses the baseline - this is the fallback in case no auto is selected, or encoders are broken, etc.

It’s really easy to do timeouts with Command Based. I highly recommend it - it saved our autonomous routine multiple times.
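For anyone who hasn't used them: in the 2017-era Command-based framework, `setTimeout()` and `isTimedOut()` on a `Command` do this for you. The underlying logic is just "finished when the goal is met *or* the clock runs out," which can be sketched standalone (names here are mine, for illustration):

```java
// Minimal sketch of the timeout guard that WPILib's Command class provides
// via setTimeout()/isTimedOut(). Self-contained; names are illustrative.
public class TimeoutGuard {
    private final double startSeconds;
    private final double timeoutSeconds;

    public TimeoutGuard(double startSeconds, double timeoutSeconds) {
        this.startSeconds = startSeconds;
        this.timeoutSeconds = timeoutSeconds;
    }

    // A guarded command finishes when its goal is met OR the timeout expires,
    // so a broken sensor can never hang the rest of the auto routine.
    public boolean isFinished(double nowSeconds, boolean goalReached) {
        return goalReached || (nowSeconds - startSeconds) >= timeoutSeconds;
    }
}
```

The point is that the timeout turns a sensor failure from "autonomous hangs forever" into "this one step ends late and the rest still runs."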

Our team does a similar technique for autonomous. We have a fault detection and accommodation tree that allows the robot to use as many sensors as are functional. This year we would check whether both encoders were giving similar values, and we were able to ignore an encoder if it was far from the predicted value, almost like the feed-forward term in a PIDF loop. If neither encoder could be trusted, we would fall back to a time-based backup with battery voltage compensation. At the end of the loop it would check for vision targets and deliver the gear if it had a successful lock on them. If it did not get a correct target, it would dead reckon using timing and gyro values to get as close to the peg as possible, but not deliver the gear; it would hold it for delivery in teleop.

Overall dead reckoning should be a backup but with voltage compensation and careful calibration of the drive train it can have surprising accuracy and save an auto when sensors fail.
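The encoder cross-check described above can be sketched as a small decision function. This is my own illustration of the idea, not the poster's code; the tolerance and names are made up:

```java
// Sketch of a fault-accommodation check: trust an encoder only if it agrees
// with a feed-forward prediction of distance traveled; fall back to the
// prediction (time-based estimate) when neither can be trusted.
// Threshold and names are illustrative.
public class EncoderFaultCheck {
    public static final double TOLERANCE = 6.0; // inches, illustrative

    // Returns the best distance estimate given two encoders and a prediction.
    public static double bestEstimate(double left, double right, double predicted) {
        boolean leftOk  = Math.abs(left  - predicted) < TOLERANCE;
        boolean rightOk = Math.abs(right - predicted) < TOLERANCE;
        if (leftOk && rightOk) return (left + right) / 2.0; // both healthy: average
        if (leftOk)  return left;   // right encoder drifted: ignore it
        if (rightOk) return right;  // left encoder drifted: ignore it
        return predicted;           // neither trusted: dead-reckon fallback
    }
}
```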

Our encoders failed this year, so we mostly ran a time-based center auto, and The Blue Alliance listed us as having the highest autonomous points at our first event. So sometimes you can get lucky and have it be worth doing. Having encoders would have been preferable, though.

Team 74 also used encoders and vision for our autonomous modes this year. It’s harder than doing a time based auton, and a little more daunting (vision in particular) if your team doesn’t have experience with it, but it’s definitely worth checking out, possibly as an off-season project.

We implemented a timeout function during the turn portion of our auto routine. If the robot didn’t finish the turn (target +/- 0.2 degrees) within N seconds, then the turn routine would finish and the next portion of the auto routine (drive straight/kick gear) would begin. After kicking the gear, the robot would then travel down field - so timing out on the turn was important to us. Most of the time the lack of a finished turn would cause the gear to miss, but a couple of times it made it on.

The timeout was a good indication that the turning PID needed to be adjusted.

Having a ground intake was key to why we decided this was a good idea.

Time based autos can work for simple movements. For example, this year’s center-peg auto could be time based. If you were a passive gearbot you could just tell the robot to drive forward for the entire autonomous period. It’ll reach the airship and keep trying to drive into it; meanwhile, your pilot fishes the gear out. If you had an active gear handler you could still drive forward for 5 seconds, stop, release the gear, drive backward for 2 seconds, stop. Just choose a speed that guarantees you’ll hit the wall (and then some).
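The active-gear sequence above can be sketched as a simple timed state function. Durations and names here are hypothetical, for illustration; a real robot would evaluate this every loop against a match timer:

```java
// Sketch of the timed center-peg sequence described above (active gear
// handler). Durations and method names are illustrative, not tuned values.
public class TimedCenterAuto {
    // Returns forward drive power (-1..1) for a given time into autonomous.
    public static double drivePower(double tSeconds) {
        if (tSeconds < 5.0)  return 0.5;   // drive forward into the airship wall
        if (tSeconds < 6.0)  return 0.0;   // stop and release the gear
        if (tSeconds < 8.0)  return -0.5;  // back away from the peg
        return 0.0;                        // hold still for the rest of auto
    }

    // The gear release actuates only during the stopped window.
    public static boolean releaseGear(double tSeconds) {
        return tSeconds >= 5.0 && tSeconds < 6.0;
    }
}
```

Driving into the wall for the full window is the trick that makes this tolerant of voltage variation: arriving early just means pushing against the wall for a moment, which is harmless.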

You could do the same for 2016’s defense crossings, or 2015’s “reach the auto zone”. In other words, any autonomous mode where you basically have to drive straight forward.

Depends on how straight you need to be. I definitely saw teams have trouble this year with feedforward center-peg autos. Then again, I also saw teams who were successful with it, so it probably mostly depends on the robot.

What I’m curious about is how much improvement (if any) people see from using motion profiling compared to just using feed forward. Does anyone have data on that?

As a rookie team in 2016 we only had time-based auto, basically because we didn’t know how easy it would be to add an encoder / gyro to the bot.

It was… well… rough.

Especially in Stronghold, with the effect that traction (or lack thereof) had going over some defenses, it was a crapshoot how far we went.

The other problem we had is our robot that year developed a not insignificant pull to the left just before bag-n-tag. At one point we had compensated for it in auto by reducing right-side power by over 10%.

Two words sum up that experience: Never Again

If you’re a kit bot, you can grab encoders from AM fairly cheaply (get the cable to plug into the RoboRIO too), and the FC Gyro does more than an adequate job.

Learning a basic PID control system (both for left/right drive and gyro) will take some effort - but there are a lot of resources available.

I can’t stress it enough.

In 2015, aside from the turning, which used a PID with a gyroscope, we were able to dead reckon a 3 tote autonomous that worked every single elimination match, through the finals.

I would not recommend this at all, however.

In general, before you need to implement spline driving for more complex movements, there are two basic movements that can be done with encoders and a gyroscope. These can actually be very effective when implemented properly, but not as good as a spline drive. Those two basic movements are a drive straight and a turn.

The drive straight command is, of course, very effective for driving a set distance in a straight line. To determine the distance traveled, average the encoder ticks, but also check that your encoders are actually connected and change your autonomous program if any are missing. As a requirement, a good drive straight command will also hold a target gyroscope heading.
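The drive-straight update described above can be sketched in a few lines. This is a minimal illustration (gains, units, and names are made up, not tuned values): average the healthy encoders for distance, and apply a proportional gyro correction to hold the target heading.

```java
// Sketch of a drive-straight update: average the healthy encoders for
// distance, and add a proportional gyro correction to hold heading.
// Gain and names are illustrative, not tuned values.
public class DriveStraight {
    public static final double KP_HEADING = 0.02; // power per degree of error

    // Distance estimate that tolerates a missing encoder (reported as NaN).
    public static double distance(double left, double right) {
        if (Double.isNaN(left))  return right;
        if (Double.isNaN(right)) return left;
        return (left + right) / 2.0;
    }

    // Left/right powers: base power plus a correction toward targetHeading.
    public static double[] powers(double base, double headingDeg, double targetHeadingDeg) {
        double correction = KP_HEADING * (targetHeadingDeg - headingDeg);
        return new double[] { base + correction, base - correction };
    }
}
```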

The turn command is a bit more flexible. You can simply use a PID, but this can have varying results due to variable stiction on the carpet. You could also use the encoders to turn the left and right sides of the robot a certain distance, based on a kinematic model of the robot, to achieve a desired turn. Team 254 used this method this year to line up their robot to fire into the boiler. The Talon SRX’s Motion Magic control mode makes this process a breeze. If a turn is followed by a drive forward, make sure the desired heading for the drive forward is the angle you just turned to. That way, the robot actively stays lined up even while driving straight forward.
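The kinematic model behind the encoder-based turn is simple: for an in-place turn of θ radians, each wheel sweeps an arc of radius (track width / 2), so each side must travel (trackWidth / 2)·θ in opposite directions. A sketch (names illustrative):

```java
// Sketch of the differential-drive kinematics for an in-place turn: each
// side travels (trackWidth / 2) * theta, in opposite directions. The track
// width here should be the measured "effective wheelbase," which is usually
// wider than the physical one due to wheel scrub. Names are illustrative.
public class KinematicTurn {
    // Arc length each wheel must travel for an in-place turn of `degrees`.
    public static double wheelTravel(double trackWidthInches, double degrees) {
        double radians = Math.toRadians(degrees);
        return (trackWidthInches / 2.0) * radians;
    }
}
```

The output of this function is what you would hand to each side's closed-loop position target (e.g. a Motion Magic setpoint, after converting inches to encoder ticks), with opposite signs on the two sides.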

My team used an encoder and gyroscope based drive forward this year, which was very successful in autonomous. We used a simple turn PID, however, which had consistency problems. We still were able to achieve a roughly 50% success rate on a side gear autonomous, and at least 90% on the center gear autonomous.

If we were to use a better turning command, I believe the inconsistency on the side gear autonomous would be much improved, as the only irregular movement in the autonomous was the turn.

These are just general suggestions for turning/driving straight, which can be used to create very functional programs. However, the greatest success in autonomous generally comes from motion profiled splines, which are much more complex.

Good luck!


How did y’all compensate for the voltage? Did y’all multiply the speed or the time of travel by some ratio?

This was not our experience, so let me provide a bit of counterpoint:

We used the motion profiling on the Talon SRX combined with wheel paths generated by Jaci’s Pathfinder. We had to empirically measure a few values that couldn’t be accurately determined otherwise, namely the rolling diameter of the wheels (calculated by running a profile and measuring how far the robot actually travels) and the “effective wheelbase” of the robot (measured by turning in place with a different profile and measuring the angular change), but after properly tuning it was quite reliable at traveling to an arbitrary point on the field.
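The "effective wheelbase" measurement described above is just the turn kinematics run in reverse: spin in place, record how far the wheels traveled and how much the gyro says the robot rotated, and back out the wheelbase. A sketch of the arithmetic (names are mine, for illustration):

```java
// Sketch of the empirical effective-wheelbase calibration described above:
// from an in-place turn, wheelTravel = (wheelbase / 2) * theta, so
// wheelbase = 2 * wheelTravel / theta. Names are illustrative.
public class WheelbaseCalibration {
    // wheelTravel: average magnitude of left/right wheel travel (inches)
    // measuredDegrees: rotation reported by the gyro over the same motion
    public static double effectiveWheelbase(double wheelTravel, double measuredDegrees) {
        double radians = Math.toRadians(measuredDegrees);
        return 2.0 * wheelTravel / radians;
    }
}
```

The measured value typically comes out larger than the physical wheelbase because of wheel scrub during the turn, which is exactly why it has to be measured rather than taken off the CAD.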

The only “trial and error” needed was in accounting for inaccuracies in field dimensions, which a good pre-match alignment system successfully solved.