Some ways to measure distance during autonomous

First off, I’m completely new to the forums, never posted here before, so tell me if I’m violating any rules or such. =)

My team’s finally getting to programming autonomous mode for the competition, and the first question that came up was naturally: “how do we measure distance?” After brainstorming for a while we settled on the two easiest (or seemingly easiest) ways of figuring out the distance our robot has traveled:

  1. Timer - we can determine the time it takes for our bot to accelerate to a certain speed (and likewise for deceleration) and plug the rest into the distance = speed × time formula. This method is perhaps the easiest, but it seems very unreliable, mostly because the motors never spin up the same way, and that inconsistency could put the bot at least a foot off target. (See the sketch after this list.)

  2. A distance measurer of some sort. By this I mean a device that literally rolls on the ground counting rotations, which we can then multiply by the wheel’s circumference to get the distance. My only real concern with this approach is how to read the output - a problem that can be overcome.
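To make this concrete, here’s roughly what I mean in C. All the numbers are placeholders we’d have to measure on our own bot (wheel size, encoder resolution, cruise speed, spin-up time):

```c
#include <stdio.h>

/* Placeholder numbers -- measure these on your own robot. */
#define WHEEL_CIRCUMFERENCE_IN 18.85  /* 6" wheel: pi * 6 */
#define TICKS_PER_REV          128.0  /* counts per wheel revolution */
#define CRUISE_SPEED_IN_PER_S  60.0   /* measured top speed */
#define ACCEL_TIME_S           0.5    /* measured time to reach cruise speed */

/* Method 2: rolling counter -- rotations times circumference. */
double ticks_to_inches(long ticks)
{
    return ((double)ticks / TICKS_PER_REV) * WHEEL_CIRCUMFERENCE_IN;
}

/* Method 1: time-based -- how long to drive for a given distance,
   roughly assuming the bot averages half its cruise speed while
   accelerating. */
double drive_time_for_inches(double inches)
{
    double accel_dist = 0.5 * CRUISE_SPEED_IN_PER_S * ACCEL_TIME_S;

    if (inches <= accel_dist)
        return ACCEL_TIME_S;  /* too short to model well */
    return ACCEL_TIME_S + (inches - accel_dist) / CRUISE_SPEED_IN_PER_S;
}

int main(void)
{
    printf("1024 ticks = %.1f in\n", ticks_to_inches(1024));
    printf("120 in takes about %.2f s\n", drive_time_for_inches(120.0));
    return 0;
}
```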

I was wondering if any of you could pinpoint any flaws in my logic here or recommend a better way of measuring the distance the bot’s traveled.

Also, feel free to discuss any means of measuring distance in this thread. :wink:

Eldar (Sum of all Forces - ΣF - Team #1888)

You have it figured out pretty well. Your basic options are open-loop (apply voltage X for Y seconds and hope that it works) or closed-loop (use a feedback device to monitor your progress towards a goal and adjust output based on that progress).
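Here’s a minimal sketch of the closed-loop idea: compare where you are to where you want to be, and scale your output by the error (proportional control). get_encoder_ticks() and set_drive_pwm() are hypothetical stand-ins for whatever your controller actually provides:

```c
#define KP             0.5  /* gain: PWM counts per tick of error (tune it) */
#define TICK_TOLERANCE 10   /* "close enough" band around the goal */

long get_encoder_ticks(void);  /* assumed: cumulative tick count */
void set_drive_pwm(int pwm);   /* assumed: -127..127 drive output */

/* Proportional drive-to-position, meant to be called once per control
   loop. Returns 1 when the goal is reached. */
int drive_to_ticks(long target)
{
    long error = target - get_encoder_ticks();
    int output;

    if (error < TICK_TOLERANCE && error > -TICK_TOLERANCE) {
        set_drive_pwm(0);  /* close enough: stop */
        return 1;
    }

    output = (int)(KP * error);
    if (output > 127)  output = 127;   /* clamp to valid PWM range */
    if (output < -127) output = -127;
    set_drive_pwm(output);
    return 0;
}
```

A nice side effect of scaling by error is that the robot slows down as it approaches the goal instead of slamming past it.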

While time-based dead reckoning is by far the easiest path, you are right that factors such as battery voltage, traction differences, and unplanned robot interaction will muck it up.

Using a shaft encoder or gear-tooth sensor (provided in the kit) can give you a better idea of the distance traveled, but even these cannot help much in the case of wheel slip (especially when turning). You can add a gyro to help measure angular position, but things quickly get more complicated. Plus, even the best dead-reckoning feedback algorithm is going to have holes - you can’t account for every possibility on the field.
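As an example of the encoder-plus-gyro combination: let the encoders measure how far you’ve gone while the gyro steers out heading drift. A rough sketch, with hypothetical stand-ins for the sensor and drive calls (signs depend on how your gyro is mounted):

```c
#define KP_HEADING 2  /* PWM counts per degree of heading error (tune it) */

int  get_gyro_angle_deg(void);  /* assumed: heading in degrees, 0 = straight */
long get_avg_ticks(void);       /* assumed: average of the two drive encoders */
void set_left_pwm(int pwm);     /* assumed: -127..127 */
void set_right_pwm(int pwm);

/* Drive straight for target_ticks; call once per control loop.
   Returns 1 when the distance has been covered. */
int drive_straight(long target_ticks, int base_pwm)
{
    int correction;

    if (get_avg_ticks() >= target_ticks) {
        set_left_pwm(0);
        set_right_pwm(0);
        return 1;
    }

    /* If the bot drifts left (positive angle here), speed up the left
       side and slow the right to pull it back on heading. */
    correction = KP_HEADING * get_gyro_angle_deg();
    set_left_pwm(base_pwm + correction);
    set_right_pwm(base_pwm - correction);
    return 0;
}
```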

Using an adaptive approach, with the camera monitoring an external known position (i.e., the target), is arguably the most precise method.
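The core of it looks something like this: turn until the target’s pixel position sits at the center of the image. get_target_x() is just a hypothetical stand-in for however your camera code reports the tracked blob, and the image width is assumed:

```c
#define IMAGE_CENTER_X 80  /* assumed: half of a 160-pixel-wide image */
#define KP_CAMERA      1   /* PWM counts per pixel of error (tune it) */
#define DEADBAND       3   /* pixels of "good enough" */

int  get_target_x(void);     /* assumed: target centroid, 0..159 */
void set_turn_pwm(int pwm);  /* assumed: positive = turn right */

/* Call once per control loop; returns 1 once the target is centered. */
int aim_at_target(void)
{
    int error = get_target_x() - IMAGE_CENTER_X;

    if (error > -DEADBAND && error < DEADBAND) {
        set_turn_pwm(0);
        return 1;  /* locked on */
    }
    set_turn_pwm(KP_CAMERA * error);
    return 0;
}
```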

Still, your choice all depends on what you want to do in autonomous - if you’re going to shoot for the top with the camera, then you might not need any sensors besides the camera: just drive for a second or two, then let the camera lock on and aim. If you’re going for a side goal, you can pretty much do the same: drive straight, hugging the wall (you can have your robot turn slightly into it to stay against it), then after a few seconds, jettison your balls.
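Either plan falls naturally into a little state machine, run once per control loop. Everything below is a hypothetical stand-in (the timer, the drive helpers, the aim_at_target() from the camera sketch above):

```c
enum auto_state { DRIVE, AIM, SHOOT, DONE };

static enum auto_state state = DRIVE;

long timer_ms(void);       /* assumed: ms since autonomous started */
void drive_forward(void);  /* assumed actuator helpers */
void stop_drive(void);
void fire_balls(void);
int  aim_at_target(void);  /* from the camera sketch above */

void autonomous_step(void)
{
    switch (state) {
    case DRIVE:  /* drive blind for the first two seconds */
        drive_forward();
        if (timer_ms() > 2000) {
            stop_drive();
            state = AIM;
        }
        break;
    case AIM:    /* let the camera center the target */
        if (aim_at_target())
            state = SHOOT;
        break;
    case SHOOT:
        fire_balls();
        state = DONE;
        break;
    case DONE:   /* sit still until autonomous ends */
        break;
    }
}
```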

The more complicated and precise the task, the more a feedback mechanism (encoders, gyros, cameras) will help.

Going by time alone is known as “dead reckoning” and is the choice of last resort. Many uncertainties will keep you from being very precise or repeatable.

Using a “follower wheel” is a good idea. As long as you can keep it in contact with the ground and avoid problems with sideways travel, it’ll tell you exactly how far you’ve gone without regard for slipping drive wheels.

The other two typical options are encoders or gear-tooth sensors in the drivetrain, and optical sensing of reflective stripes on the wheel hubs. Both are relatively easy and effective. Kevin Watson makes code available for using encoders; just drop it in and it works.
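If you want to understand what that kind of code is doing before you drop it in, the core idea is just interrupt-driven counting. This is a sketch of the concept, not Kevin’s actual implementation - encoder_channel_b() stands in for the digital input read:

```c
static volatile long encoder_ticks = 0;

int encoder_channel_b(void);  /* assumed: reads the quadrature B channel */

/* Wired up to fire on each rising edge of encoder channel A; the
   state of channel B at that moment tells you the direction. */
void encoder_a_interrupt(void)
{
    if (encoder_channel_b())
        encoder_ticks++;  /* rolling forward */
    else
        encoder_ticks--;  /* rolling backward */
}

/* Read from the main loop. On an 8-bit controller you'd briefly
   disable interrupts while copying the multi-byte count. */
long get_encoder_ticks(void)
{
    return encoder_ticks;
}
```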