Removing Vision Pose Noise From WPILib Swerve Pose Estimator

Hi Everyone,

We are using PhotonVision's vision pose estimator to return a Pose2d, and then adding that pose to our SwervePoseEstimator to compute our current robot pose on the field.

This mostly works pretty well when stationary.

Unfortunately I've noticed it can get pretty noisy as you drive. For example, if we are using code to target a pose and drive to it, I've seen our robot pose behave kind of erratically.

I guess we could change it so it isn't a constant feedback loop (where every update loop we take our current robot pose and then calculate how to drive to reach the target pose), and instead take a snapshot of the pose once and start the command to get there.

I'm just curious if anyone has had similar struggles with this and how they dealt with it. Some of this might be noisy data coming back from PhotonVision as well, unfortunately (although it seems pretty stable... I suspect once the robot starts moving it becomes challenging).


What standard deviations are you using for the vision measurements? How do they compare to the noise when moving? How do they compare to the state standard deviations, and the noise on those measurements?
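
For reference on where those numbers plug in: the WPILib SwerveDrivePoseEstimator takes both sets of standard deviations in its constructor (x and y in meters, theta in radians; larger values mean "trust this source less"). A minimal sketch, where kinematics, gyro, and getModulePositions() are placeholders for your own drive code and the numbers are purely illustrative:

import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;

// State std devs describe how much the filter trusts the wheel odometry,
// vision std devs how much it trusts the camera pose. Illustrative values only.
SwerveDrivePoseEstimator poseEstimator = new SwerveDrivePoseEstimator(
        kinematics,                      // your SwerveDriveKinematics
        gyro.getRotation2d(),            // current gyro heading
        getModulePositions(),            // current SwerveModulePosition[]
        new Pose2d(),                    // starting pose
        VecBuilder.fill(0.05, 0.05, Math.toRadians(1)),  // state (odometry) std devs
        VecBuilder.fill(0.5, 0.5, Math.toRadians(30)));  // vision std devs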

Hey, have you made any progress on this? I'm also running into this issue. Moving back six inches in real space while it's integrating vision data places the estimated robot pose hundreds or thousands of meters away (in both the x and y directions) from where it should be.

Both the vision data and the swerve odometry data work great and reliably on their own, but merging them together introduces problems whose origin I can't pin down.

Setting up the standard deviations to favor the vision data makes the pose more reliable when looking at the AprilTags, but the moment the camera can no longer register an AprilTag, the pose will jump to a wacky number. When driving while the bot is turned away from the AprilTag, it will again increment according to real life, where moving the bot 1 meter changes the bot's pose by a meter.

It's a super weird error and I'm still trying to figure out its origin. Please let me know anything you've tried since this was originally posted.

What are the wacky numbers? Plots, graphs, and real numbers would help. I suspect you could be adding the pose (0, 0, 0) into the pose estimator. Graph everything.

Here’s our vision code that seems to work pretty reliably:

package frc.robot.subsystems;

import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;
import edu.wpi.first.math.util.Units;
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.DriverStation;
import edu.wpi.first.wpilibj.Timer;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;
import edu.wpi.first.wpilibj2.command.SubsystemBase;
import frc.robot.subsystems.HelperSubsystems.LimelightHelpers;

public class VisionSubsystem extends SubsystemBase {

    NetworkTable LimelightTable;
    String limelightName;
    double tv;
    double ta;
    double tl;

    LimelightHelpers.LimelightResults results;

    public VisionSubsystem(String limelightName) {
        this.limelightName = limelightName;

        LimelightTable = NetworkTableInstance.getDefault().getTable(limelightName);



        LimelightOn(false);
    }

    public boolean LimelightOn(boolean on) {
        if (on) {
            LimelightTable.getEntry("ledMode").setNumber(3);
            return true;
        } else {
            LimelightTable.getEntry("ledMode").setNumber(1);
            return false;
        }
    }

    public void updateValues() {

        //TODO: Maybe switch these to use the limelight helper sub
        tv = LimelightTable.getEntry("tv").getDouble(0);
        ta = LimelightTable.getEntry("ta").getDouble(0);
        tl = LimelightTable.getEntry("tl").getDouble(40);

        results = LimelightHelpers.getLatestResults(limelightName);

    }

    public boolean isGoodTarget() {
        // "Good" if the target area is large enough, or if we see multiple
        // fiducials and the area is still reasonable
        return ta >= 0.5
                || (results.targetingResults.targets_Fiducials.length > 1 && ta > 0.4);
    }

    public boolean hasTargets() {
        return tv == 1;
    }

    public Pose2d getVisionEstimatedPose() {

        // botpose arrays are [x, y, z, roll, pitch, yaw]; default to a
        // six-element array so the indexing below never goes out of bounds
        double[] bot_pose;

        if (DriverStation.getAlliance() == DriverStation.Alliance.Blue) {
            bot_pose = LimelightTable
                    .getEntry("botpose_wpiblue")
                    .getDoubleArray(new double[6]);
        } else {
            bot_pose = LimelightTable
                    .getEntry("botpose_wpired")
                    .getDoubleArray(new double[6]);
        }

        double bot_x = bot_pose[0];
        double bot_y = bot_pose[1];
        double rotation_z = (bot_pose[5] + 360) % 360; // yaw normalized to [0, 360)


        return new Pose2d(
                new Translation2d(bot_x, bot_y),
                Rotation2d.fromDegrees(rotation_z));
    }

    // Despite the name, this returns the estimated capture timestamp of the frame
    // (current FPGA time minus pipeline latency), which is the timestamp that
    // addVisionMeasurement() expects. Note that "tl" is pipeline latency only.
    public double getLatency() {
        return Timer.getFPGATimestamp() - Units.millisecondsToSeconds(tl);

        // maybe need camera_latency?
        // TODO: TEST
        // return results.targetingResults.latency_capture;

        // TODO: TEST this breaks it for some reason
        // return results.targetingResults.latency_pipeline;
    }

    @Override
    public void periodic() {

        
        updateValues();

        SmartDashboard.putBoolean("Vision Identified Tag", hasTargets());

        // if the limelight has a target
        if (hasTargets() && isGoodTarget()) {
            // grab data off network tables and clean it up a bit

            Pose2d currentPosition = DriveSubsystem.poseEstimator.getEstimatedPosition();
            Pose2d visionPose = getVisionEstimatedPose();

            // if the data is good, use it to update the pose estimator:
            // reject an all-zero pose, and reject vision poses more than a
            // meter from our current estimate
            if (visionPose.getX() != 0 && visionPose.getY() != 0
                    && Math.abs(currentPosition.getX() - visionPose.getX()) < 1
                    && Math.abs(currentPosition.getY() - visionPose.getY()) < 1) {

                DriveSubsystem.poseEstimator.addVisionMeasurement(visionPose, getLatency());
            }
        }
    }
}

We make sure that we do NOT update the poseEstimator if we do not have a target, and we throw out tags that we deem "bad" based on the Limelight's "ta" value and the number of fiducials we see. We also choose not to trust the vision measurement if it's more than a meter off from where our drive thinks we are. We might end up increasing this threshold in case the drive gets too far off.


I’ve noticed when driving across the full field our pose estimator has some settling time.

The default stdDeviations set the vision measurement uncertainty pretty high.

I haven't done a ton of testing, but my guess is that for close-range vision measurements I'd make them about as trusted as the odometry.

We ended up trusting our vision translation values more than our odometry. Since we throw out all the 'bad' data, our vision measurements should be fairly accurate and should correct for any wheel slip, etc. We do trust our odometry rotation a lot more than our vision, just because we know our Pigeon is accurate.
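
Concretely, one way to express "trust vision translation but keep rotation from the gyro" with the WPILib API is to give the vision measurement small x/y standard deviations and a huge theta standard deviation, so the filter effectively ignores the vision heading. A rough sketch, assuming a poseEstimator field like the one above; the numbers are illustrative only:

import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.util.Units;

// Tight x/y std devs (meters) so vision corrects translation and wheel slip,
// very loose theta std dev (radians) so heading keeps coming from the gyro.
poseEstimator.setVisionMeasurementStdDevs(
        VecBuilder.fill(0.1, 0.1, Units.degreesToRadians(500)));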


lol, "wacky numbers" to me were just pose estimates that were incredibly far off

Units are in meters.

This is with standard deviations heavily favoring the AprilTag camera. The moments where it jumps past the bounds are when it was turning away from the AprilTag. When it recognized the tag again, it jumped back down to an accurate pose estimate.

I only accept vision measurements when the ambiguity is less than 0.01, but the issue is still happening.

When using the preset standard deviations, the position would jump around by really large distances until the robot sat stationary.
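
For reference, an ambiguity gate with the PhotonVision API looks roughly like this; the camera name and the 0.01 cutoff are placeholders, not anything specific to the setups above:

import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;
import org.photonvision.targeting.PhotonTrackedTarget;

PhotonCamera camera = new PhotonCamera("frontCam"); // hypothetical camera name

PhotonPipelineResult result = camera.getLatestResult();
if (result.hasTargets()) {
    PhotonTrackedTarget best = result.getBestTarget();
    // getPoseAmbiguity() returns -1 if ambiguity wasn't computed; only trust
    // single-tag solutions when the ambiguity is very low, since ambiguous
    // solutions are a classic source of pose flips while the robot moves.
    if (best.getPoseAmbiguity() >= 0 && best.getPoseAmbiguity() < 0.01) {
        // hand the corresponding field pose to the pose estimator here
    }
}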


Yeah I feel that.

We use PhotonVision and I feel like I see little to no bad data at close range from it.

I wish their poseEstimator class had a range-limit function or something, though. Maybe I'll subclass it.


What were your standard deviation values when that graph was made? Were they still the defaults?


Pretty easy to do manually without subclassing: if the vision pose isn't within a certain range of your current pose, just don't call addVisionMeasurement.

That's what we found to work best: we get our current pose, and if our vision measurement is off by more than a meter, we throw out the measurement. This fixes issues like momentary glitches and discards tags that we didn't see well and got a bad reading from.


We added a check for the disabled state so that vision measurements read at the beginning are accepted, since they will likely be more than 1 meter away from (0, 0).

if (RobotState.isDisabled()
        || robotPose.getTranslation().getDistance(
               poseTracker.getCurrentPose().getTranslation()) < 0.5) {
    poseTracker.updateVisionMeasurements(robotPose, imageCaptureTime);
}

Are you not worried about getting the latency compensated pose? I guess if your tolerance is big and your latency is low it won’t matter?

Yeah, true. The only thing is you're pulling it out of the PhotonVision pose estimator, so it's almost chicken-or-egg.

Take your current pose, compare it to the new estimated pose, and determine if the distance makes sense. I guess you could blindly trust your initial pose to localize ('geo') you on the field, and after that require new measurements to be within a certain distance.

IMO I'd rather restrict what distance of targets get let into the PhotonVision pose estimation ecosystem, but I guess that's just a similar way to do it. We have back- and front-facing cameras, and when we first put the robot on the field we were getting a lot of noise from the back camera picking up targets from the other side of the field. A sketch of that kind of target-distance filter is below.
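
A rough sketch of that distance gate with the PhotonVision API, also using the result's own timestamp so the measurement stays latency-compensated; the method name, distance cutoff, poseEstimator, and getVisionPose() helper are placeholders for whatever your code actually has:

import edu.wpi.first.math.geometry.Pose2d;
import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;
import org.photonvision.targeting.PhotonTrackedTarget;

void addFilteredVisionMeasurement(PhotonCamera camera) {
    final double maxTagDistanceMeters = 4.0; // placeholder cutoff, tune per camera

    PhotonPipelineResult result = camera.getLatestResult();
    if (!result.hasTargets()) {
        return;
    }

    // Distance from the camera to the best tag; far-away tags give noisy
    // solutions, so skip them instead of letting them into the estimator.
    PhotonTrackedTarget best = result.getBestTarget();
    double distance = best.getBestCameraToTarget().getTranslation().getNorm();
    if (distance > maxTagDistanceMeters) {
        return;
    }

    Pose2d visionPose = getVisionPose(result); // however you compute the field pose
    // Use the result's timestamp so the measurement is latency-compensated.
    poseEstimator.addVisionMeasurement(visionPose, result.getTimestampSeconds());
}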

This is a good solution, but we set our swerve pose estimator to a "relatively close" position at the start of auto, so this isn't a huge issue for us, since our tags are always within a meter. We set the position based on which auto path is selected in our auton path chooser on SmartDashboard, so our drive position is already pretty close to where vision says it is.
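
For anyone wanting to do the same thing, seeding the estimator at the start of auto is just a resetPosition call with the selected path's known starting pose. A minimal sketch; autoChooser, gyro, getModulePositions(), and getStartingPose() are placeholders for your own code:

import edu.wpi.first.math.geometry.Pose2d;

// Seed the estimator with the starting pose of the selected auto so early
// vision measurements are already within the one-meter gate.
Pose2d startingPose = autoChooser.getSelected().getStartingPose(); // hypothetical helper
poseEstimator.resetPosition(
        gyro.getRotation2d(),      // current gyro heading
        getModulePositions(),      // current SwerveModulePosition[]
        startingPose);             // known field pose for the selected auto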

Yeah, adding the distance filter mainly cleaned up issues we were having with confirming AprilTag positioning during testing. Once you supply a realistic assumed position, the distance filter seems to help with jitter.
