How fast should drivetrain odometry be updated?

Here’s a question: How many decimal places of pi do I need to calculate the size of the observable universe to the accuracy of an atom?
1000? Sounds reasonable.
It’s not even 50.
It’s around 40.
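(Here is a rough back-of-envelope check of that claim. The figures for the universe and the atom are my own assumptions, roughly 8.8 × 10^26 m across and 10^-10 m respectively, not numbers from the original claim.)

```python
# Rough sanity check of the "around 40 digits" claim; the sizes used are assumptions.
import math

diameter = 8.8e26   # meters, rough diameter of the observable universe
atom = 1e-10        # meters, rough size of a hydrogen atom

# An error in the n-th decimal place of pi shifts a computed circumference by
# roughly diameter / 10**n, so we need diameter / 10**n to be smaller than an atom.
digits = math.ceil(math.log10(diameter / atom))
print(digits)       # about 37 decimal places -- "around 40"
```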

Now, given that 50 Hz is such a rough number, I suspect it was just a guess that happened to work out when the programmer ran the code.

If you really want to calculate the margin of error, there are multiple factors to be considered.

  • How far does the robot move during the competition?
  • What kinds of turns does the robot make?

For example, you can get away with a very low rate for a straight path, but once you start making turns (driving around in squares, say), you get very inaccurate results.
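To make that concrete, here is a minimal dead-reckoning sketch of my own (not code from any particular library): the robot drives at a constant speed, either straight or turning at a constant rate, and the odometry integrates its pose at a given update rate. The speeds, path, and rates are arbitrary assumptions, and I use a constant turn instead of a literal square so the true answer has a clean closed form.

```python
# Minimal dead-reckoning sketch: the velocity readings are perfect, so all of
# the drift shown here comes from the update rate alone.
import math

def dead_reckon(duration, dt, v, omega):
    """Integrate pose every dt seconds for a robot moving at linear speed v
    (m/s) and angular speed omega (rad/s). Returns the final (x, y)."""
    x = y = theta = 0.0
    for _ in range(round(duration / dt)):
        x += v * dt * math.cos(theta)   # heading is treated as constant over the slice
        y += v * dt * math.sin(theta)
        theta += omega * dt
    return x, y

def true_pose(duration, v, omega):
    """Exact final position for constant v and omega (a straight line or an arc)."""
    if omega == 0:
        return v * duration, 0.0
    r = v / omega
    return r * math.sin(omega * duration), r * (1 - math.cos(omega * duration))

duration, v = 4.0, 1.0
for omega in (0.0, math.pi / 2):                 # straight line vs. constant turning
    tx, ty = true_pose(duration, v, omega)
    for hz in (5, 50, 500):
        x, y = dead_reckon(duration, 1.0 / hz, v, omega)
        err_mm = 1000 * math.hypot(x - tx, y - ty)
        print(f"omega={omega:.2f} rad/s at {hz:3d} Hz -> {err_mm:8.2f} mm of drift")
```

On the straight path the drift is essentially zero at every rate, while on the turning path it shrinks as the update rate goes up, which is exactly the difference those two questions above are getting at.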

Every measurement device has a margin of error: how far its reading can deviate from the real value. Each time you take a sample, that error propagates throughout the entire “slice” of time it represents, because you generalize that single reading across the whole slice.

It is like trying to find the area under a curve: as you take smaller slices, their errors become smaller and more localized.
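Here is that analogy in miniature, with a velocity function I picked purely for illustration: a left Riemann sum of v(t) = 2t over 4 seconds, whose true area is 16.

```python
# Left Riemann sums of v(t) = 2t over [0, 4]; the exact area (distance) is 16.
def left_riemann(f, a, b, n):
    dt = (b - a) / n
    return sum(f(a + i * dt) * dt for i in range(n))

for n in (4, 40, 400):
    approx = left_riemann(lambda t: 2 * t, 0.0, 4.0, n)
    print(f"{n:4d} slices -> area {approx:7.3f}, error {16 - approx:.3f}")
```

With 4 slices the error is 4, with 40 slices it is 0.4, and with 400 slices it is 0.04: each error is confined to an ever smaller slice.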

So each time a snapshot is taken, the margin of error propagates through that slice of time.

So if you have an error of +4 mm and your slice of time is 4 seconds, that error is propagated, and effectively amplified, across those 4 seconds.

So we can measure positional error over time in meter-seconds. This quantity is called absement in physics: the longer something stays away from the origin, and the farther away it is, the higher the absement.

A small error can be generalized over a longer amount of time with little consequence. A larger error cannot be afforded much time, or it will really skew the measurements.

As the slice of time decreases, the amount of error in general decreases.

So how do I implement this concept?

Let us assume that every value within the margin of error provided by the manufacturer is equally likely to occur. So a margin of error of ±4 can produce +3, -3, or +4, each with equal likelihood.

So we can take the average deviation.

We come across a problem: if we take the average of -4 and +4, we get zero. So let the positive value be the upper bound of our possible error and the negative value be the lower bound. Since they both have the same absolute value, they yield the same magnitude (a magnitude of 4).

So let the margin of error for an odometer be ±10 meters. (That is really bad; it could probably only be used for ad-hoc mountain measurements.)

Since the ±10 meter error is assumed to be evenly distributed, the average deviation at any given moment will be 5 meters in either direction.
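A quick numerical check of that claim, assuming the error really is uniformly distributed over ±10 meters:

```python
# Signed errors average out to roughly 0, but their magnitudes average out to roughly 5 m.
import random

errors = [random.uniform(-10, 10) for _ in range(1_000_000)]
print(sum(errors) / len(errors))                  # close to 0: the signs cancel
print(sum(abs(e) for e in errors) / len(errors))  # close to 5: the average deviation
```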

If we calculate the upper bound, we can calculate the lower bound.

Let the slice of time be 1 second for simplicity.

What if the slice of time were 2 seconds?
Measurement of error (example)

We can convert.
5 meters * 1 second = 5 meter-seconds: for each second, the 5-meter average error contributes 5 meter-seconds.
5 meters * 2 seconds = 10 meter-seconds.
That means if the slice of time were two seconds, the error would be twice as much.
What if the error were only 1 meter?
1 meter * 1 second = 1 meter-second.
What if the time slice were 0.01 seconds?
1 meter * 0.01 seconds = 0.01 meter-seconds.

So the maximum error would be 1 meter-second, and with the 0.01-second slice it would be only 0.01 meter-seconds.
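Put as a tiny helper (the function name is mine, purely for illustration), the bookkeeping above is nothing more than multiplying the error by how long its slice lives:

```python
def absement_error(error_m, slice_s):
    """Error magnitude weighted by how long the slice it lives in lasts."""
    return error_m * slice_s   # meter-seconds

print(absement_error(5, 1))      # 5 m*s
print(absement_error(5, 2))      # 10 m*s
print(absement_error(1, 1))      # 1 m*s
print(absement_error(1, 0.01))   # 0.01 m*s
```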
THEREFORE, BY DECREASING EITHER THE ERROR OR THE AMOUNT OF TIME THE ERROR LIVES, THE GENERAL ERROR CAN BE REDUCED

What is a meter second?

In physics, absement is a measure of how far and for how long something is displaced. A displacement of 10 meters held for 1 second yields an absement of 10 meter-seconds. A displacement of 5 meters held for 2 seconds also yields an absement of 10 meter-seconds. It combines how far and how long something has been away from a given point.

In this case, absement combines the error and the time the error has existed in the calculation: a larger slice of time allows an error to affect a larger portion of the calculation, and a larger error has a larger effect on the calculation than a smaller one.

General Formula

The cumulative error over a period of time is the average error per slice, multiplied by the time each slice lives, summed over every slice in that period.
It is IMPORTANT to note that the standard deviation from slice to slice decreases as the slices become smaller.
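Here is a sketch of how I read that formula (this is my completion of the idea, not something spelled out above): the average error per slice, times the slice length, summed across all slices.

```python
def cumulative_error(avg_error_per_slice_m, slice_s, total_s):
    """Cumulative meter-second error over total_s seconds split into slices of slice_s."""
    n_slices = round(total_s / slice_s)
    return avg_error_per_slice_m * slice_s * n_slices

# Same 60-second run: the benefit of smaller slices only appears because the
# per-slice error shrinks along with the slice, as argued above.
print(cumulative_error(5.0, 1.0, 60.0))    # ~300 m*s at 1 Hz
print(cumulative_error(0.1, 0.02, 60.0))   # ~6 m*s at 50 Hz (assumed smaller per-slice error)
```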


I’m going to take a break, but the takeaway is below.
Disclaimer: I studied AP Statistics and did some reading on margin of error a few months ago, but most of this is me trying to derive my own method of calculating it within the last 45 minutes. This, to my limited knowledge, has not been derived by anyone else, i.e. I have not looked up this method. Check the math, although from what I know, it does check out.

Too long didn’t read

Smaller slices will give better results, but eventually the accuracy of the device itself becomes the brick wall, which can only be overcome by a better device.

ToDo:
  • Explain slices per second and how the average error per slice can be multiplied by time to give the error over any period of time.
  • Converting meter-seconds to meters: how do you convert from meter-seconds to a tangible error such as meters?
