Calculating the standard deviation of rotations

I have a few estimated robot poses from the cameras, labeled p1, p2, p3…, and I want to derive the standard deviation of the vision measurement instead of using a constant.

I can use the standard formula to get the standard deviation for translation (x and y, in meters). But for rotation there is a wrapping issue. For example, if the measured rotations are {-179 deg, 179 deg, 179 deg}, the standard deviation computed naively is huge (the naive mean is about 60 deg, which is nowhere near any of the samples), even though the measurements are very close to each other. So how can I calculate the standard deviation for rotation values?


In general, I would reject rotation values from cameras entirely and trust only the rotation from the gyro. If possible, just use the XY pose from the camera and ignore its rotation reading. This is because the robot's onboard gyro is likely to be far more accurate and less noisy than what you're getting from the cameras, especially when there's only one tag in view.
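
If you're using a WPILib pose estimator, one way to do that (a sketch, not the only way; the 0.5 m and 1e6 numbers are placeholders you'd tune) is to pass the vision pose in with a huge standard deviation on the theta component, so the filter keeps the XY and effectively discards the heading:

import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;

// Hypothetical helper: fuse a camera pose but effectively ignore its heading
// by giving the theta component an enormous standard deviation.
void addVisionXYOnly(SwerveDrivePoseEstimator estimator, Pose2d visionPose, double timestampSec) {
  estimator.addVisionMeasurement(
      visionPose,
      timestampSec,
      VecBuilder.fill(0.5, 0.5, 1e6)); // x, y std devs in meters; theta std dev made huge
}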

If you want to measure sdev anyway, then when calculating the average, if a sample is more than 180 deg away from the previous sample, add or subtract 360 deg to bring it within 180 deg of the previous sample before using it in the average and the sdev. So your array of -179, 179, 179 would become an array of 181, 179, 179, which can then be used to calculate the sdev as usual.


Thanks for the opinion!

I think this probably works, but I’m not sure what you mean by “previous sample”. Is there an existing implementation somewhere?

Previous sample just means that as you iterate through the array of samples, you save the one you just looked at. I don't know of an existing implementation offhand; it's just the normal unwrapping code you'd use in a swerve module rotation optimizer.

Some pseudocode:

double[] data = {-179, 179, 179};
double sum = 0;
double lastSample = data[0];

for (int i = 0; i < data.length; i++) {
  // Unwrap: shift each sample to within 180 deg of the previous one
  if (data[i] - lastSample > 180) data[i] -= 360;
  if (data[i] - lastSample < -180) data[i] += 360;

  lastSample = data[i];
  sum += data[i];
}
double average = sum / data.length;

Kind of like that?


To answer this question directly (no matter the application), the concept to look up is the circular mean from directional statistics: treat each angle as a unit vector on the circle, average the vectors, and take the angle of the resultant. This also generalizes to the n-dimensional case.
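
A minimal sketch of the circular mean in Java (a plain helper; the name is mine):

// Circular mean of angles (in radians): average the unit vectors, take atan2.
static double circularMean(double[] anglesRad) {
  double sumSin = 0.0, sumCos = 0.0;
  for (double a : anglesRad) {
    sumSin += Math.sin(a);
    sumCos += Math.cos(a);
  }
  return Math.atan2(sumSin, sumCos);
}

For {-179 deg, 179 deg, 179 deg} (converted to radians) this returns about 179.67 deg, instead of the meaningless ~60 deg from a naive average.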


Gyro will be more precise (higher repeatability of measurement) over a short time frame, but is subject to long-term drift which will degrade its accuracy (long-term is relative here, but it doesn’t mean as long as hours).

Fusing that high-precision, lower-accuracy measurement with vision (lower repeatability in the short term, but high long-term accuracy) will give you a more accurate estimate than the gyro alone.

If you don’t adjust the rotation with vision, you probably want to offer the driver or operator a way to reset the gyro (or adjust it) to maintain the correct notion of “this angle is directly away from the alliance wall.” (But, I think you want to adjust the rotation with vision.)
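
In WPILib terms, that fusion is what the pose estimator classes do: update with odometry and gyro every loop, and mix in vision poses as they arrive. A sketch (the camera wrapper and its methods are assumptions, not a real API):

// Inside a drivetrain subsystem's periodic() method (sketch):
@Override
public void periodic() {
  // High-rate update: precise short-term gyro + wheel odometry.
  estimator.update(gyro.getRotation2d(), getModulePositions());

  // Occasional correction: noisy short-term but drift-free vision.
  var estimate = camera.getLatestEstimate(); // hypothetical camera wrapper
  if (estimate != null) {
    estimator.addVisionMeasurement(estimate.pose(), estimate.timestampSeconds());
  }
}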


I agree; when our robot gets hit too hard, the Pigeon 2 drifts slightly. It's good practice to use vision to correct the gyro, although it's best to keep the filter's vision standard deviations high enough to avoid jitter.

The answer above introducing the concept of the circular mean is helpful, but the OP was asking about standard deviation. A good reference implementation of circular standard deviation is scipy's circstd function: search for "def circstd" in scipy's _morestats.py source to see the Python implementation. A Java conversion is straightforward.
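
For reference, a direct Java port of the core of circstd (assuming the angles are already in radians over the full circle; the name is mine):

// Circular standard deviation (radians), following scipy.stats.circstd:
// R is the mean resultant length of the unit vectors; circstd = sqrt(-2 ln R).
static double circularStd(double[] anglesRad) {
  double sumSin = 0.0, sumCos = 0.0;
  for (double a : anglesRad) {
    sumSin += Math.sin(a);
    sumCos += Math.cos(a);
  }
  double r = Math.hypot(sumSin / anglesRad.length, sumCos / anglesRad.length);
  return Math.sqrt(-2.0 * Math.log(r));
}

For the {-179 deg, 179 deg, 179 deg} example this comes out to roughly 0.94 deg, which matches the intuition that the samples are tightly clustered.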


At least for the Boron, drift is on the order of a degree per match (or less).

https://docs.reduxrobotics.com/canandgyro/performance

The Pigeon 2 should perform well enough too.

The problem with using vision to update your odometry angle is that it's both inaccurate (due to field and robot tolerances) and very noisy. There are some teams that can use it well, but I think the vast majority of teams will give up performance by using vision for their yaw.

If you’re using an older gyro like a NavX or Pigeon 1, then this may not hold true, especially if you’re undergoing a lot of collisions. Forgot to include that in my last post.

Another problem with using vision to update the gyro is that the more stable, faster algorithms (like MegaTag2) use the gyro as an input to the pose estimation.

edit:
One way to handle this is to have a known position (like the amp): when you are lined up and squared on it, set your pose with low standard deviations.
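
A sketch of that idea with a WPILib pose estimator (the pose value and std devs are placeholders):

import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.wpilibj.Timer;

// When the driver confirms the robot is squared up on a known field location,
// feed that pose in as a near-certain measurement.
Pose2d kKnownPose = new Pose2d(); // placeholder: your surveyed lineup pose
estimator.addVisionMeasurement(
    kKnownPose,
    Timer.getFPGATimestamp(),
    VecBuilder.fill(0.01, 0.01, 0.01)); // very low std devs = high trust

Alternatively, the estimator's resetPosition method does a hard reset to the known pose rather than blending it in.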


MegaTag2, ironically, has (undocumented?) stability problems due to the latency between the camera and the gyro. If you're in motion, stick to MegaTag1.
