To know the position of our robot on the field, we recently used a Limelight on a swerve robot and add vision measurements periodically by checking whether an AprilTag exists and how big the tag is. When driving the robot, the robot shown on Field2d is jittery while it sees a tag, and sometimes, when it is not looking at a tag, it randomly teleports some distance away from its previous location (which I'm guessing is because the Limelight detects random objects as tags). Any specific pointers on how to fix this? Additionally, after zeroing our gyro, the movements on Field2d are inverted, and we are confused about how to fix this issue while the swerve is in field-relative drive.
re: teleportation on false positives, one of our students added a check that compares the previous pose with the new update and discards the update if it's "too far" from the old one. the threshold was a multiple of the robot's maximum possible velocity times the elapsed time, so you also have to keep track of the update timestamps.
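roughly like this (a minimal sketch; the max-speed constant and the stored pose/timestamp names are just placeholders for whatever your code already has):

import edu.wpi.first.math.geometry.Pose2d;

// Reject a vision pose if it implies the robot moved farther than it physically
// could have since the last accepted update (likely a false tag detection).
public class VisionJumpFilter {
  private static final double kMaxSpeedMetersPerSecond = 4.5; // your drivetrain's max speed
  private Pose2d lastAcceptedPose = new Pose2d();
  private double lastAcceptedTimestamp = 0.0;

  public boolean accept(Pose2d visionPose, double timestampSeconds) {
    double dt = Math.max(timestampSeconds - lastAcceptedTimestamp, 0.02);
    double jump = visionPose.getTranslation().getDistance(lastAcceptedPose.getTranslation());
    if (jump > 1.5 * kMaxSpeedMetersPerSecond * dt) {
      return false; // farther than the robot could have driven in dt; discard
    }
    lastAcceptedPose = visionPose;
    lastAcceptedTimestamp = timestampSeconds;
    return true;
  }
}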
Could you post what your code currently looks like? It seems like you need to tune your Kalman filter standard deviation constants and discard vision measurements that are a certain distance from your encoder odometry pose.
Thank you. Our code is pretty much the basic code for adding vision measurements: we just check the size of the tag and then add the update, using the given code to account for the Limelight's capture and pipeline latency. We use the default standard deviations on the pose estimator, as we are not sure how to tune them. Any advice on how to do that?
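For reference, each loop we do roughly this (a sketch; the area threshold and the standard "limelight" NetworkTables entries shown are what we check):

import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.Timer;

// Called periodically: add a vision measurement only when a tag is seen and large enough.
NetworkTable limelight = NetworkTableInstance.getDefault().getTable("limelight");
boolean hasTarget = limelight.getEntry("tv").getDouble(0.0) == 1.0;
double targetArea = limelight.getEntry("ta").getDouble(0.0);

if (hasTarget && targetArea > 0.1) { // area threshold is illustrative
  double[] botpose = limelight.getEntry("botpose_wpiblue").getDoubleArray(new double[6]);
  Pose2d visionPose =
      new Pose2d(new Translation2d(botpose[0], botpose[1]), Rotation2d.fromDegrees(botpose[5]));
  // Compensate for capture ("cl") and pipeline ("tl") latency, reported in milliseconds.
  double latencySeconds =
      (limelight.getEntry("tl").getDouble(0.0) + limelight.getEntry("cl").getDouble(0.0)) / 1000.0;
  poseEstimator.addVisionMeasurement(visionPose, Timer.getFPGATimestamp() - latencySeconds);
}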
Of course. We have a Vision class and a separate PoseEstimator class. The Vision class reads bot poses from the Limelight and only updates the botPose when it is within the field. Our PoseEstimator class then reads these values from Vision and compares them with the odometry from the drivetrain class (such as a Swerve.java file) to decide whether to add the estimate to the SwerveDrivePoseEstimator.
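The "within the field" check is just a bounds test on the Limelight bot pose, roughly like this (the field dimensions below are placeholders; use the values for the current game's field):

import edu.wpi.first.math.geometry.Pose2d;

// Rough sketch of the bounds check done before a bot pose is published.
public class FieldBounds {
  private static final double kFieldLengthMeters = 16.54; // placeholder
  private static final double kFieldWidthMeters = 8.02;   // placeholder

  public static boolean isOnField(Pose2d botPose) {
    return botPose.getX() >= 0.0
        && botPose.getX() <= kFieldLengthMeters
        && botPose.getY() >= 0.0
        && botPose.getY() <= kFieldWidthMeters;
  }
}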
Speaking of the SwerveDrivePoseEstimator, if you check out our PoseEstimator.java file, we create it in our constructor (around line 45) like so:
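In rough form (the kinematics, gyro, and module-position accessors here are stand-ins for whatever your drivetrain subsystem exposes):

import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;

// Inside the PoseEstimator constructor: x/y standard deviations are in meters,
// theta in radians; the large vision theta value effectively ignores the camera heading.
poseEstimator =
    new SwerveDrivePoseEstimator(
        swerve.getKinematics(),
        swerve.getGyroYaw(),
        swerve.getModulePositions(),
        new Pose2d(),
        VecBuilder.fill(
            PoseConfig.kPositionStdDevX,
            PoseConfig.kPositionStdDevY,
            PoseConfig.kPositionStdDevTheta),
        VecBuilder.fill(
            PoseConfig.kVisionStdDevX,
            PoseConfig.kVisionStdDevY,
            PoseConfig.kVisionStdDevTheta));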
We store the standard deviation variables in a constants file, like this:
public static class PoseConfig {
  // Increase these numbers to trust your model's state estimates less.
  public static final double kPositionStdDevX = 0.1;
  public static final double kPositionStdDevY = 0.1;
  public static final double kPositionStdDevTheta = 10;

  // Increase these numbers to trust global measurements from vision less.
  public static final double kVisionStdDevX = 5;
  public static final double kVisionStdDevY = 5;
  public static final double kVisionStdDevTheta = 500;
}
We used the values from Team Spectrum 3847’s X-Ray robot from last year and they have been working well for us so far.
Essentially, the lower the vision standard deviations are, the more the PoseEstimator will trust vision measurements, and the lower the position (state) standard deviations are, the more it will trust your wheel odometry.
Additionally, if you want to use the SwerveDrivePoseEstimator, you will need to add a command somewhere that resets your drivetrain odometry to the poseEstimator location (we still need to implement this). You also need to keep track of the timestamps of when the Limelight captured each pose. Feel free to use our code, and let me know if you have any other questions.
Thank you very much, I'll try them out today. Also, could you provide pointers on how to fix the movements on Field2d, which are inverted when the swerve is in field-relative drive? We are using MK4i modules.
right, I get what these numbers mean, I think. what I'm wondering is how it can possibly be listening to the camera at all if the camera stddev is 50x the wheel odometry stddev.
oh also, we found the camera theta to be less accurate than the gyro, so we do the pose calculations using the gyro theta. I can’t tell from this snippet if you’re doing that; with stddev of 500 it seems like you don’t use the camera theta.
Yeah, we essentially only want to trust the gyro because even if we encounter defense and our swerve encoder odometry becomes inaccurate, the gyro will be correct.
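One way to make that explicit (rather than relying only on a huge theta standard deviation) is to overwrite the vision heading with the gyro heading before adding the measurement; a sketch, where swerve.getGyroYaw() stands in for your gyro accessor:

import edu.wpi.first.math.geometry.Pose2d;

// Keep the camera's x/y translation but fuse the gyro's heading instead of the camera's.
Pose2d visionPoseWithGyroHeading =
    new Pose2d(visionPose.getTranslation(), swerve.getGyroYaw());
poseEstimator.addVisionMeasurement(visionPoseWithGyroHeading, timestampSeconds);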
What do you mean by this? SwerveDrivePoseEstimator is intended to be a drop-in replacement for SwerveDriveOdometry: it fuses standard odometry measurements with updates from vision. You shouldn't need to be resetting odometry or the pose estimator if you have global pose estimates from a vision coprocessor.
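The typical flow is just to call update() every loop with the gyro angle and module positions, and addVisionMeasurement() whenever a valid vision pose comes in; a sketch, where the swerve and vision accessors are illustrative:

// Periodic fusion loop; swerve.getGyroYaw(), swerve.getModulePositions(), and the
// vision accessors are hypothetical names for whatever your subsystems expose.
@Override
public void periodic() {
  // Feed wheel odometry every loop; no manual resets needed.
  poseEstimator.update(swerve.getGyroYaw(), swerve.getModulePositions());

  // Fold in vision whenever a valid, sanity-checked measurement is available.
  if (vision.hasValidPose()) {
    poseEstimator.addVisionMeasurement(vision.getBotPose(), vision.getTimestampSeconds());
  }
}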
For something like path following: to make sure tools like PathPlanner use the pose from the SwerveDrivePoseEstimator, not the drivetrain's SwerveDriveOdometry.
To your point though, we should instead refactor our Swerve class so that our pose supplier is the SwerveDrivePoseEstimator… thank you for fact-checking me.
If you make the vision standard deviations too low, then you risk a lot of jitter and pose jumps in your estimate. You can tune them to your setup; we ended up making them 1.5 by the end of the season.