Weird discrepancy between SwerveDrivePoseEstimator and DifferentialDrivePoseEstimator

Looking at the docs, I can see that the default vision measurement STD devs for the DifferentialDrivePoseEstimator seem to be 0.1, 0.1, and 0.1, whereas the default vision measurement STD devs for SwerveDrivePoseEstimator seem to be 0.9, 0.9, and 0.9. Can anybody explain this discrepancy?

The drivetrain STD devs of both seem to be the same.
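For context on what those numbers actually do, here is a minimal 1-D sketch (not WPILib code; the gain formula is the standard two-Gaussian fusion weight, which WPILib's estimators apply per-axis in a similar way) showing how the odometry and vision std devs set how strongly a vision measurement pulls the estimate:

```java
// Minimal 1-D illustration of how a pose estimator weights a vision
// measurement against the odometry estimate. The blend factor is the
// Kalman gain for fusing two independent Gaussian estimates:
//   K = sigma_odom^2 / (sigma_odom^2 + sigma_vision^2)
public class VisionWeightDemo {
    static double visionGain(double odomStdDev, double visionStdDev) {
        double odomVar = odomStdDev * odomStdDev;
        double visionVar = visionStdDev * visionStdDev;
        return odomVar / (odomVar + visionVar);
    }

    public static void main(String[] args) {
        // Differential defaults (0.1 odom, 0.1 vision): vision trusted heavily.
        double diffGain = visionGain(0.1, 0.1);   // 0.5
        // Swerve defaults (0.1 odom, 0.9 vision): vision corrections are gentle.
        double swerveGain = visionGain(0.1, 0.9); // ~0.012
        System.out.printf("diff=%.3f swerve=%.3f%n", diffGain, swerveGain);
    }
}
```

So with the swerve defaults each vision update nudges the pose only about 1% of the way toward the camera's estimate, while the differential defaults split the difference.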

In the old documentation, 0.9 was for vision based on systems like reflective tape, which were extremely inaccurate; 0.1 is for AprilTags, but when handling them you should dynamically change the uncertainty as you get closer to the tag.
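One way to do that dynamic adjustment is to inflate the vision std dev with distance to the tag before passing it to the estimator. This is a hypothetical sketch, not WPILib code: the quadratic-growth form, the 0.1 m baseline, and the coefficient are all assumptions you would tune on your own robot.

```java
// Hypothetical helper for scaling vision std devs with distance to an
// AprilTag. BASE_STD_DEV and DISTANCE_COEFF are made-up tuning values,
// not WPILib defaults.
public class TagStdDevScaler {
    static final double BASE_STD_DEV = 0.1;   // trust at point-blank range (m)
    static final double DISTANCE_COEFF = 0.2; // growth per m^2 (tune this)

    // Pixel noise and pose ambiguity grow roughly with the square of
    // distance, so inflate the std dev accordingly.
    static double stdDevForDistance(double distanceMeters) {
        return BASE_STD_DEV + DISTANCE_COEFF * distanceMeters * distanceMeters;
    }

    public static void main(String[] args) {
        for (double d = 1.0; d <= 4.0; d += 1.0) {
            System.out.printf("%.0f m -> %.2f m std dev%n", d, stdDevForDistance(d));
        }
    }
}
```

You would feed the result into the per-measurement std devs when adding the vision measurement, so far-away tags barely move the estimate while close ones correct it aggressively.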

They worked well in the physics sim.

The docs are actually up to date, and the chosen values were intentional.


Is there a real-life explanation to this however?

Like why would vision measurement on a swerve drivetrain work less well than vision measurement on a differential drivetrain?


If those are the std devs for the encoders rather than the vision, then that was my bad. It's about accuracy: differential drives are far less prone to slipping than swerve (using omnis or mecanums changes that), so for swerve you want a higher uncertainty. But you should play around with the numbers and adjust to your system's needs to find the best performance.

No, it’s the standard deviations of the odometry pose and the vision pose. We picked values that seemed to make sense for odometry pose, then tuned vision pose to have a certain decay rate. Follow Pose Estimators — FIRST Robotics Competition documentation for more accurate per-robot tuning.
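The "decay rate" mentioned above can be pictured with a toy simulation (illustrative only, not the WPILib implementation): with a fixed per-update gain K, the error between the estimate and a stationary vision pose shrinks geometrically, err_n = err_0 * (1 - K)^n, so the chosen std devs effectively set how many updates it takes for vision to win.

```java
// Toy model of vision-correction decay: each update pulls the estimate
// toward the vision pose by a fixed fraction (the gain), so the residual
// error decays geometrically. Gains here are illustrative.
public class DecayRateDemo {
    static double errorAfterUpdates(double initialError, double gain, int updates) {
        double err = initialError;
        for (int i = 0; i < updates; i++) {
            err *= (1.0 - gain); // each update closes a fixed fraction of the gap
        }
        return err;
    }

    public static void main(String[] args) {
        // Small gain (high vision std dev) -> slow decay over 50 updates.
        System.out.println(errorAfterUpdates(1.0, 0.012, 50)); // ~0.55 m remains
        // Large gain (low vision std dev) -> error vanishes almost immediately.
        System.out.println(errorAfterUpdates(1.0, 0.5, 50));
    }
}
```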

I thought the statement was that the defaults varied between swerve and differential drive. I'm well aware that the DifferentialDrivePoseEstimator uses a 3x1 of 0.9 for vision and 0.1 for encoder/gyro measurements, as one is highly accurate frame to frame and the other is less accurate but never drifts. I'm aware of how it works; I wrote an HDrivePoseEstimator. Sorry if this was just my confusion over the question.

Not in particular - they just happen to be the values chosen for the unit tests used to validate pose estimation. The more correct adjustment would have been to the odometry standard deviations in this case.

Regardless, the result is the same. In the future it would probably be more effective for us to change the odometry std devs per-drivetrain rather than the vision std devs.
