How do I understand standard deviation in the poseestimator class?

Hello! This past month I became very interested in the PoseEstimator classes (specifically DifferentialDrivePoseEstimator) for finding an accurate field position using both vision and the gyro. However, I don’t know how to find a fitting value for each of the standard deviation parameters in the class, and I also don’t really understand the Matrix class’s use as a parameter in the class’s constructor. It would be really helpful if someone could help me out with this!


Did you read this?

Do you know what “standard deviation” means, as a starting point?

I did, but I guess I’m still having trouble understanding how a standard deviation works. All I really know is that it’s the square root of the variance, and the variance has something to do with averages. Are there any useful resources that I could use to understand standard deviation better?

I just know some very scattered information about it, but I guess I don’t really understand it. I think maybe once I understand the concept better maybe this will be a bit easier?

The variance is a measure of the “spread” of a distribution. The standard deviation is also a measure of that spread; the two are different ways of looking at the same thing.

A distribution with a wider spread (e.g. a measurement with more noise) will have a bigger variance and standard deviation.
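To make that concrete, here is a small self-contained sketch (the sensor readings are made-up numbers) showing that a noisier measurement has a larger variance, and that the standard deviation is just the square root of the variance:

```java
import java.util.Arrays;

public class SpreadDemo {
    // Population variance: the average squared distance from the mean.
    static double variance(double[] samples) {
        double mean = Arrays.stream(samples).average().orElse(0.0);
        return Arrays.stream(samples)
                .map(s -> (s - mean) * (s - mean))
                .average()
                .orElse(0.0);
    }

    public static void main(String[] args) {
        // Two hypothetical gyros reading the same fixed 90-degree heading:
        double[] quiet = {90.1, 89.9, 90.0, 90.2, 89.8};
        double[] noisy = {92.0, 87.5, 91.0, 88.0, 91.5};
        // The noisier sensor has the larger variance; the standard
        // deviation is the square root of the variance.
        System.out.println(Math.sqrt(variance(quiet))); // small spread
        System.out.println(Math.sqrt(variance(noisy))); // larger spread
    }
}
```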


That makes sense, thank you! But I’m still not really sure how to find standard deviations for my state, local, and global measurements. I guess I don’t really understand what noise would look like for vision, the gyro, and the encoders. How would I know what the “noise” is, and how could I use that to find the standard deviation?

The WPILib system identification toolsuite can give you some rough estimates (you might need some care in interpreting them if the test scenario does not match the loading scenario in practice).

Learning and understanding the system equations that SysId is based on is very important - without a working understanding of that, none of this will be easy to use.

Ok, I guess I’ll do some research on SysID. Thank you for your help!


Basically, the standard deviations you supply for a vision update or for the odometry are an indicator of how much you trust the accuracy of that calculated position from either the odometry or the measurement system.

Let’s say you drive the robot from a datum point of x=0, y=0, theta=0 to a position of x=3.0 m, y=3.0 m, theta=90 degrees a hundred times. Every time it does this trip it will end up in a slightly different position. As you are doing this drive, the odometry class is continually calculating your position (x, y, theta) as it drives along. Because of a bunch of factors like backlash, sampling delay/errors, carpet scrub, etc., it will never be in exactly the same position. If at the end of each of these drives you accurately measured the x distance, y distance, and angle of the robot relative to your known starting location, you would observe a spread of errors in each one of these dimensions. If you plotted these errors you would notice that they form a Gaussian (bell curve). There is a formula for calculating the standard deviation value from a set of samples. Google is your friend.
In layman’s terms, 68% of the samples will fall within one standard deviation of the average value, 95% will fall within 2 standard deviations, and 99.7% will fall within 3 standard deviations. So if the SDs are small this curve is very narrow and pointy; if the SDs are large this curve is wider and flatter.
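That “formula for calculating the standard deviation from a set of samples” can be sketched directly (the error values below are hypothetical measurements from repeated drives to the same pose):

```java
import java.util.Arrays;

public class StdDevDemo {
    // Sample standard deviation (Bessel-corrected, dividing by n - 1):
    // the square root of the average squared deviation from the mean.
    static double stdDev(double[] samples) {
        double mean = Arrays.stream(samples).average().orElse(0.0);
        double sumSq = 0.0;
        for (double s : samples) {
            double d = s - mean;
            sumSq += d * d;
        }
        return Math.sqrt(sumSq / (samples.length - 1));
    }

    public static void main(String[] args) {
        // Hypothetical x-direction errors (metres) measured at the end of
        // eight repeated drives to the same target pose.
        double[] xErrors = {0.02, -0.05, 0.01, 0.04, -0.03, 0.00, 0.06, -0.02};
        // This is the kind of number you would supply as the x standard
        // deviation for that measurement source.
        System.out.println(stdDev(xErrors));
    }
}
```

You would repeat the same calculation for the y errors and the theta errors to get all three dimensions.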
You have to supply the vision update with a set of SD values for each dimension. This is like a measure of how well you trust the vision measurement. You could also do a similar test with the vision system, i.e. move the robot to a known position and let your vision system calculate its position. Do this 100 times and again 68% of the measured errors will fall within ±1 SD, 95% within ±2 SD, etc. The reason all this stuff is here is that it is used by the pose estimator to help localise the robot and reduce its uncertainty.
As an example, if the robot had been driving for ages and hadn’t seen a landmark for a long time, the position uncertainty from odometry alone would be very large. It could be metres out of position. So the bell curve of position has flattened out heaps and the position reported by odometry could be very wrong. If your robot sees a landmark that it recognises, and it has the ability to calculate an estimate of its position from this landmark, your code can supply a vision update to the pose estimator. The SD values that you supply with the update tell the pose estimator how much to trust this measurement. If the SD values are very low it will likely update the robot position to very nearly match exactly where the vision system thinks the robot is. Every time one of these updates is supplied to the pose estimator, the position and uncertainty will continue to approach what the vision system says.

You might ask why we do all this stuff: why don’t we just trust our sensors and reset our position every time we identify a target? There are a few reasons. If your sensors aren’t that accurate (say, using an ultrasonic to measure a distance off a wall to calculate your position), you would not want to 100% trust that measurement, because it could re-localise your robot to an incorrect position. Likewise, if another robot was parked between you and the wall, the distance reported by the sensor could be metres out, and hence your calculated position would be metres out; this is an example of a measurement that you would supply with very large SD values. This scenario can also happen when your vision system falsely identifies a landmark and supplies a position update that is completely wrong. The SD values that you supply with each update determine how much the pose estimator will adjust its position each time it gets an update. You have to supply an SD for each dimension.
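To see numerically how the SD values control that trust, here is a minimal one-dimensional sketch of the Gaussian fusion idea the estimator is built on. This is not WPILib’s actual implementation (which is a full Kalman-style filter over x, y, and theta), just the core weighting rule in one dimension with made-up numbers:

```java
public class FusionDemo {
    // One-dimensional Gaussian fusion: combine a prior estimate with a
    // measurement, each characterised by its own standard deviation.
    // Each value is weighted by the OTHER value's variance, so the
    // smaller a source's SD, the more it is trusted.
    static double fuse(double prior, double priorSd, double meas, double measSd) {
        double pv = priorSd * priorSd; // prior variance
        double mv = measSd * measSd;   // measurement variance
        return (mv * prior + pv * meas) / (pv + mv);
    }

    public static void main(String[] args) {
        // Odometry thinks x = 2.0 m but hasn't seen a landmark for ages
        // (SD = 1.0 m). Vision says x = 3.0 m with a tight SD of 0.1 m:
        // the fused estimate jumps to almost exactly the vision value.
        System.out.println(fuse(2.0, 1.0, 3.0, 0.1));
        // The same vision reading supplied with a huge SD of 5.0 m barely
        // moves the estimate off the odometry value.
        System.out.println(fuse(2.0, 1.0, 3.0, 5.0));
    }
}
```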
Let’s say you have a camera facing upwards that uses a line detection algorithm from OpenCV to pick up the roof trusses. From this you would be able to fairly accurately supply a theta measurement, but your chances of knowing where you are in the x and y directions are pretty low from just looking at the roof trusses. In this update you would supply a very low SD for theta and a very large SD for x and y.
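In WPILib code, those per-dimension standard deviations are what the Matrix<N3, N1> constructor parameters hold: a 3x1 column vector of [x, y, theta] SDs, usually built with VecBuilder. Here is a sketch against the recent WPILib Java API; all the numeric values are placeholders to tune, and the variable names (gyro, leftEncoder, trussPoseEstimate, etc.) are hypothetical:

```java
// State (odometry) SDs say how much to trust the encoders + gyro;
// vision SDs say how much to trust vision measurements by default.
DifferentialDrivePoseEstimator estimator = new DifferentialDrivePoseEstimator(
    kinematics,
    gyro.getRotation2d(),
    leftEncoder.getDistance(),
    rightEncoder.getDistance(),
    startingPose,
    VecBuilder.fill(0.05, 0.05, 0.01),  // state SDs: x (m), y (m), theta (rad)
    VecBuilder.fill(0.5, 0.5, 0.3));    // default vision SDs: x, y, theta

// The roof-truss camera update: trust theta a lot, x and y almost not at all.
estimator.addVisionMeasurement(
    trussPoseEstimate,
    captureTimestampSeconds,
    VecBuilder.fill(5.0, 5.0, 0.02));   // huge x/y SDs, tiny theta SD
```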
By using this probabilistic approach you increase the robustness of your localisation system. A good book on this is Probabilistic Robotics. Pretty heavy reading but well worth it if you are interested in this field. This book explains many different localisation algorithms and also deals with measurement uncertainty that doesn’t fit into a bell curve.
Word of advice: if you are going to heavily utilise this odometry and pose estimation stuff in your robot code, I’d have an option/command/button on your robot to force a reset of the odometry to a known position. I saw a top team on the weekend that somehow managed to get itself 90 degrees out relative to the field, and their brilliant turret shooter that had auto-targeted the goal continuously all competition all of a sudden would not lock onto the target, and they could not shoot all match.
Hope this helps.


So in order to arrive at good standard deviations for the inputs to the PoseEstimators, you need to do multiple sample runs and determine the error of each pose input (vision, wheel encoders, …). Is that correct?

I am not sure how SysID factors into that.

SysId reports residuals from the model fit.

I am feeling a bit slow today, I guess. If the standard deviation we are calculating is how different the reported positions are from reality, how do the residuals from SysId factor into that, since SysId doesn’t know anything about actual position, just odometry?


It doesn’t; but encoder measurements are still quite accurate and can give you insight into the amount of system noise. You can calculate a standard deviation for the encoder measurements themselves from the encoder resolution (assuming no slip).
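For example, a measurement that is rounded to the nearest encoder tick has a quantization error that is roughly uniform over one tick width, and a uniform distribution of width w has a standard deviation of w/sqrt(12). A sketch with hypothetical drivetrain numbers:

```java
public class EncoderNoise {
    // Standard deviation of the quantization (rounding) error for a
    // measurement reported in steps of "resolution": the error is roughly
    // uniform over one step, and a uniform distribution of width w has
    // standard deviation w / sqrt(12).
    static double quantizationStdDev(double resolution) {
        return resolution / Math.sqrt(12.0);
    }

    public static void main(String[] args) {
        // Hypothetical drivetrain: 2048 ticks/rev at the motor, a 10.71:1
        // gearbox, and 0.1524 m (6 in) diameter wheels.
        double metresPerTick = (Math.PI * 0.1524) / (2048 * 10.71);
        // Distance SD from quantization alone: tiny (micrometres), which is
        // why encoders are treated as a trustworthy baseline (absent slip).
        System.out.println(quantizationStdDev(metresPerTick));
    }
}
```

In practice wheel slip and carpet scrub dominate this quantization floor, which is why the drive-and-measure tests above are still worth doing.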


Another less mathematically rigorous but faster method is to just pick some ballpark numbers and tune based on how the filter performs.

