I have a few estimated robot poses from the cameras, labeled p1, p2, p3…, and I want to derive the standard deviation of the vision measurement instead of using a constant.
I can use the standard statistical formulas to get the standard deviation for translation (x and y, in meters). But for rotation, there is a wrapping issue. For example, if the measured rotations are {-179deg, 179deg, 179deg}, the standard deviation computed naively is very high, even though the measurements are actually very close to each other. So, how can I calculate the standard deviation for rotation values?
In general, I would reject rotation values from cameras entirely and trust only the rotation from the gyro. If possible, just use the XY pose from vision and ignore its rotation reading. This is because the onboard gyro of the robot is likely to be far more accurate and less noisy than what you're getting from the cameras, especially when there's only one tag in view.
If you want to measure sdev anyway, then when calculating the average, if a sample is more than 180deg away from the previous sample, add or subtract 360deg so it lands within 180deg of the previous sample, and use that adjusted value for both the average and the sdev. So your array of -179, 179, 179 effectively becomes 181, 179, 179, which can then be used to calculate sdev.
Previous sample just means that as you iterate through the array of samples, you save the one you just looked at. I don't know of an existing implementation offhand; it's just normal angle-unwrapping code, like you would use in a swerve module rotation optimizer.
Some pseudocode:
data[3] = (-179, 179, 179);
sum = 0;
lastsample = data[0];
for (i = 0; i < 3; i++) {
    if (data[i] - lastsample > 180) data[i] -= 360;
    if (data[i] - lastsample < -180) data[i] += 360;
    lastsample = data[i];
    sum += data[i];
}
average = sum / 3;
// Note: the average may land outside [-180, 180); wrap it back if needed.
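The pseudocode above can be made concrete. Here's a sketch in Python (the function names are my own, not from any library) that unwraps each sample relative to the previous one, then computes both the mean and the population standard deviation, wrapping the mean back into the usual range:

```python
import math

def unwrap_degrees(samples):
    """Shift each sample by a multiple of 360 deg so it lies within
    180 deg of the previous (already unwrapped) sample."""
    unwrapped = [samples[0]]
    for s in samples[1:]:
        prev = unwrapped[-1]
        while s - prev > 180:
            s -= 360
        while s - prev < -180:
            s += 360
        unwrapped.append(s)
    return unwrapped

def mean_and_stddev_degrees(samples):
    """Mean and population standard deviation of angle samples in degrees.
    The mean is wrapped back into [-180, 180)."""
    u = unwrap_degrees(samples)
    mean = sum(u) / len(u)
    var = sum((x - mean) ** 2 for x in u) / len(u)
    wrapped_mean = (mean + 180) % 360 - 180
    return wrapped_mean, math.sqrt(var)
```

For the example above, `mean_and_stddev_degrees([-179, 179, 179])` gives a mean of about 179.67deg and a standard deviation of about 0.94deg, which matches the intuition that the three samples are tightly clustered.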
Gyro will be more precise (higher repeatability of measurement) over a short time frame, but is subject to long-term drift which will degrade its accuracy (long-term is relative here, but it doesn’t mean as long as hours).
Fusing that high-precision, low-accuracy measurement with vision (lower repeatability over the short term, but high long-term accuracy) will give you a higher-accuracy measurement than the gyro alone.
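As a rough sketch of what the simplest form of that fusion looks like, here is a complementary filter in Python (this is a toy illustration, not the Kalman-based approach WPILib's pose estimators use; all names are hypothetical):

```python
def wrap_degrees(angle):
    """Wrap an angle into [-180, 180)."""
    return (angle + 180) % 360 - 180

def fuse_heading(heading, gyro_delta, vision_heading, alpha=0.02):
    """One complementary-filter step: integrate the gyro's change in
    angle, then nudge the result a small fraction (alpha) of the way
    toward the vision heading, taking the shortest path around the
    circle so the wrap at +/-180 deg is handled correctly."""
    heading = wrap_degrees(heading + gyro_delta)
    error = wrap_degrees(vision_heading - heading)
    return wrap_degrees(heading + alpha * error)
```

The small `alpha` is what encodes "trust the gyro short-term, trust vision long-term": each step follows the gyro almost exactly, but over many steps the heading converges to the vision average, cancelling drift.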
If you don’t adjust the rotation with vision, you probably want to offer the driver or operator a way to reset the gyro (or adjust it) to maintain the correct notion of “this angle is directly away from the alliance wall.” (But, I think you want to adjust the rotation with vision.)
I agree; when our robot gets hit too hard, the Pigeon 2 drifts slightly. It's good practice to use vision to correct the gyro, although it's best to keep the standard deviation in the filter high enough to avoid jitter.
The answer above introducing the concept of circular mean is helpful, but the OP was asking about standard deviation. A good reference for code that implements circular standard deviation is the scipy stats function circstd. Search for “def circstd” in the _morestats.py source code from scipy to see the Python implementation. Java conversion should be straightforward.
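The idea behind circular standard deviation is compact: represent each angle as a unit vector, average the vectors, and derive the spread from the length R of that average (R is 1.0 when all samples agree and approaches 0 as they spread out). Here's a self-contained Python version of that formula, sqrt(-2 ln R), written from the definition rather than copied from scipy:

```python
import math

def circ_stddev_degrees(samples):
    """Circular standard deviation of angle samples given in degrees,
    using the mean resultant length R of the samples' unit vectors.
    Assumes the samples are not uniformly spread (R > 0)."""
    n = len(samples)
    c = sum(math.cos(math.radians(a)) for a in samples) / n
    s = sum(math.sin(math.radians(a)) for a in samples) / n
    r = math.hypot(c, s)  # mean resultant length, in (0, 1]
    return math.degrees(math.sqrt(-2.0 * math.log(r)))
```

For tightly clustered samples like {-179, 179, 179} this gives about 0.94deg, essentially identical to the unwrap-then-stddev approach described earlier; the two only diverge when the samples spread widely around the circle.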
The problem with using vision to update your odometry angle is that it's both inaccurate (due to field and robot tolerances) and very noisy. There are some teams who can use it well, but I think the vast majority of teams will give up performance by using vision for their yaw.
If you’re using an older gyro like a NavX or Pigeon 1, then this may not hold true, especially if you’re undergoing a lot of collisions. Forgot to include that in my last post.