Pose estimation and field-oriented drive

With the introduction of AprilTags, we’re planning to use the SwerveDrivePoseEstimator, and I’m wondering about using it to deal with gyro drift in field-oriented swerve drive. I’m not sure whether I’m best off trusting the gyro for the duration of a match, or whether the angle from the pose estimator will prove more accurate.

The ChassisSpeeds.fromFieldRelativeSpeeds method takes the gyro angle, so field-oriented drive will drift if the gyro drifts. Would it make sense to reset the gyro angle from the pose estimator’s angle, or to use the estimated pose angle when calculating chassis speeds?

2 Likes

Use the estimated pose angle for calculating chassis speeds. With pose estimators/odometry, you don’t reset the gyro at all; you just reset the pose angle and let the change in the gyro apply to the new pose.
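
Concretely, that looks something like this. A minimal sketch in Java, assuming WPILib’s SwerveDrivePoseEstimator; m_poseEstimator, m_kinematics, m_gyro, and getModulePositions() are placeholders for your own drivetrain code, and the exact resetPosition() signature depends on your WPILib version:

```java
// Drive field-relative off the fused estimate instead of the raw gyro.
Rotation2d heading = m_poseEstimator.getEstimatedPosition().getRotation();

ChassisSpeeds speeds =
    ChassisSpeeds.fromFieldRelativeSpeeds(xSpeed, ySpeed, rot, heading);
SwerveModuleState[] states = m_kinematics.toSwerveModuleStates(speeds);

// To "zero" the heading, reset the estimator's pose rather than the gyro;
// the gyro keeps supplying the change in heading on top of the new pose.
m_poseEstimator.resetPosition(
    m_gyro.getRotation2d(), getModulePositions(), newPose);
```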

5 Likes

I agree that makes sense. However, I noticed the WPILib example does not use the estimated pose angle for chassis speeds; it uses the gyro. Is that an oversight, or is there reasoning behind it?

I’m not aware of any reason to do that. Seems like an oversight.

I’m not sure how long the pose estimators have existed; when we used this, we hooked up the gyro directly, as the example illustrates. Maybe the example is a little simplistic.

The reason you would use the gyro’s angle instead of the estimated angle from the pose estimator is most likely that the estimated angle can be highly variable, which adds more error to the pose correction when you incorporate vision measurements. For most cases in FRC, relying on your gyro as the only source of truth for your heading should be sufficient, as long as the gyro is of decent quality. I think anything around the level of a navX or better should be good enough, and the CTRE Pigeon 2 is definitely good enough based on my experience last year, though I don’t have any solid numbers to back this up.

7 Likes

Is this a limitation of retroreflective tape in particular? AprilTags have a fairly noiseless rotation output.

In my experience so far, they are pretty noisy beyond a couple of meters. But I’m not proposing using the vision angle directly; I’m considering using the angle from the SwerveDrivePoseEstimator, which fuses the gyro and vision angles.

Drift is one concern, but the other is the initial pose. We haven’t done a holonomic drive before so I might be overestimating how big of a deal this is. Depending on the game, the robot may be placed on the field without a reference to make sure it’s at the expected initial pose. For example, in 2022 if we were running a 2-ball auto around the hub, the robot might not be aligned with a wall or tape at a known angle. If it’s off by a few degrees it might be challenging to drive across the field at top speed.

This is where I thought it might make sense to seed the gyro angle with the estimated pose when the robot first becomes enabled (assuming an AprilTag will be visible), and then rely on the gyro for the rest of the match.
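
Something like this hypothetical sketch is what I have in mind; hasTargets() and getVisionPose() stand in for whatever the vision pipeline exposes, and again the resetPosition() signature depends on the WPILib version:

```java
@Override
public void autonomousInit() {
  // Seed the estimator once at enable if a tag is visible, then trust the
  // gyro to track the change in heading for the rest of the match.
  if (m_vision.hasTargets()) {
    m_poseEstimator.resetPosition(
        m_gyro.getRotation2d(), getModulePositions(), m_vision.getVisionPose());
  }
}
```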

2 Likes

It’s an oversight.

A new gyro measurement is being fed to the Kalman filter every loop, so the heading estimate shouldn’t be highly variable. If it is, your vision measurement standard deviation is too low.
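
The standard deviations can be supplied per measurement; a sketch assuming WPILib’s addVisionMeasurement overload, with illustrative rather than tuned numbers:

```java
// Larger theta standard deviation = trust the vision heading less.
m_poseEstimator.addVisionMeasurement(
    visionPose,                                // Pose2d from the AprilTag pipeline
    Timer.getFPGATimestamp() - latencySeconds, // when the frame was captured
    VecBuilder.fill(0.5, 0.5, Units.degreesToRadians(10.0)));
```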

Ideally, the pose estimator’s Kalman filter would include a heading bias state to account for gyro drift. The gyro measurement model would assume the heading is ground truth plus a bias and zero-mean noise, and the vision measurement model would assume the heading is ground truth plus zero-mean noise. That means the vision measurement would allow the filter to estimate the bias and subtract it from future gyro measurements.
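
Writing the idea out as measurement models, with $\theta$ the true heading, $b$ the bias state, and the $v$ terms zero-mean noise:

```latex
z_{\text{gyro}} = \theta + b + v_{\text{gyro}}, \qquad
z_{\text{vision}} = \theta + v_{\text{vision}}
```

The difference of the two measurements is $b$ plus noise, which is what makes the bias observable once vision is available.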

8 Likes

My understanding is that retroreflective tape is susceptible to inaccurate rotation results as a byproduct of running solvePnP on the corners of a detected contour. The accuracy of solvePnP on retroreflective tape is highly dependent on how accurately you can detect the entire contour (which is sensitive to lighting changes and your distance to the target), as well as on the resolution of your camera.

AprilTags are less susceptible to this problem since the detector assumes a square marker when determining the tag pose. I haven’t looked closely at the rotation results from pose estimation with AprilTags, but I assume you would run into similar issues if your camera resolution is low enough or the detected target is far enough away.

One of the common solutions for vision pose correction used this year (similar to what my team did) did not use solvePnP. Instead, it used the angle to the top of the retroreflective ring on the upper hub to calculate the horizontal distance to it (see Estimating Distance or Getting in Range of the Target), then added the radius of the upper hub to get the distance to the center of the field.

You can then break that distance into X and Y translation components using the total rotation of your drivetrain heading, your turret (if you had one), and the horizontal angle of the upper hub in the camera view, and shift that translation by the known fixed position of the hub/field center to get your estimated robot position. From there, you can rely on your gyro’s heading to feed the pose estimator and you should get a decent result.
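
In code, the geometry looks roughly like this; the constants (CAMERA_HEIGHT_METERS, HUB_CENTER, etc.) and the input angles are hypothetical stand-ins for your own camera mounting and vision pipeline values:

```java
// Horizontal distance to the tape from the camera's pitch geometry.
double distToTape =
    (TARGET_HEIGHT_METERS - CAMERA_HEIGHT_METERS)
        / Math.tan(CAMERA_PITCH_RADS + targetPitchRads);
double distToHubCenter = distToTape + HUB_RADIUS_METERS;

// Field-relative direction from robot to hub: drivetrain heading plus
// turret angle plus the target's horizontal offset in the camera frame.
Rotation2d fieldAngleToHub =
    gyroHeading.plus(turretAngle).plus(new Rotation2d(targetYawRads));

// Walk back from the known hub center to get the robot's translation.
Translation2d robotPosition =
    HUB_CENTER.minus(new Translation2d(distToHubCenter, fieldAngleToHub));
```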

Note that this solution worked this year because we had a centralized, unique target. It becomes less viable in games with multiple similar targets because you would need a way to distinguish them. For example, it could also work in 2020/2021 if you could distinguish between the red and blue goals, but it would be impractical in 2019 with all the goals/loading stations being essentially identical. With AprilTags, each tag on the field should be unique, so you could probably use the same technique for pose estimation to avoid some computational overhead.

2 Likes

We took a very similar approach this year, but it’s not really relevant to this topic since it depends on the reading from the gyro and does not help counteract drift or incorrect initial robot placement.

It was a good enough strategy for us in 2022. We had a tank drive, so no field-oriented drive (the topic of this thread), and after autonomous the only reason we wanted to estimate the robot’s pose was to know which direction to turn when the driver pressed the “fire” button, even if the reflective tape was not in the camera’s FOV. Since we didn’t actually care where we were on the field, it didn’t matter if the gyro drifted. We reset our pose whenever we saw the reflective tape, so from the robot’s perspective it was the field that was drifting, not the gyro.

1 Like

Hey, sorry to jump in randomly like this, but have there been any changes to the pose estimators since the 2022 competition season? We tried them out for a bit but switched away when we noticed they crashed a few times and killed our robot tracker. We tried debugging for a while but couldn’t figure out what caused the issues we were seeing.

If it would help, I could share some of the debugging we did when looking into the issue ourselves.

1 Like

Afaik, the crashing has been solved.
Also of note is this new PR, which adds position-delta swerve odometry like the differential drive version.
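
With that change, updates take measured module positions (drive distance plus azimuth) instead of velocities, roughly like the sketch below; the module accessors and field names are placeholders:

```java
m_poseEstimator.update(
    m_gyro.getRotation2d(),
    new SwerveModulePosition[] {
      m_frontLeft.getPosition(), m_frontRight.getPosition(),
      m_backLeft.getPosition(), m_backRight.getPosition()
    });
```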

2 Likes

Oh that’s sick! Thanks for letting me know!

2 Likes

Without this, I don’t think the current (2022) pose estimator helps much with drift or incorrect robot placement unless a vision measurement is nearly always available.

I tried a test tonight with 5 degrees for the state and local measurement standard deviations and 1 degree for vision, and I set an incorrect initial pose rotation for testing. When an AprilTag is visible, the pose estimator quickly corrects the heading. However, when no AprilTag is visible, the estimated pose quickly drifts back toward the incorrect initial pose, reflecting the gyro reading.
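
For reference, that configuration looks roughly like this with the 2022-season constructor; the x/y standard deviations here are placeholders I haven’t tuned:

```java
var estimator = new SwerveDrivePoseEstimator(
    m_gyro.getRotation2d(), initialPose, m_kinematics,
    VecBuilder.fill(0.1, 0.1, Units.degreesToRadians(5.0)),  // state (x, y, theta)
    VecBuilder.fill(Units.degreesToRadians(5.0)),            // local (gyro)
    VecBuilder.fill(0.1, 0.1, Units.degreesToRadians(1.0))); // vision (x, y, theta)
```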

1 Like

Technically, I think you really ought to be feeding the angular velocity into the Kalman filter, because that’s what the gyro is actually measuring. Although I don’t know if any of the gyros common in FRC give you access to the raw unintegrated reading.

EDIT: Well, that’s how FOG/ring laser gyros work, at least. It’s been a long time since I’ve worked with MEMS gyros professionally; they might measure angular acceleration and double-integrate?

I’m pretty sure they all do

The reason we can get reasonable data from MEMS gyros but not MEMS accelerometers is that the gyros only need a single integration.

1 Like

> does not help counteract drift

Just an FYI for anyone who might not know: to counter gyro drift, some teams mounted two of the same gyro on top of each other, one of them upside down, and took the average of the two readings. From what I’ve heard, this counters drift really well.
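
In case the sign handling isn’t obvious: flipping one gyro negates its yaw reading, so you subtract rather than add before halving (a sketch with placeholder names):

```java
// The flipped gyro reads roughly -heading + bias, so subtracting and halving
// recovers the heading while bias common to both sensors cancels.
double fusedHeadingDegrees =
    (m_gyroUpright.getAngle() - m_gyroInverted.getAngle()) / 2.0;
```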

Depending on how much you trust your angle off the AprilTags, you could reset the gyro to the new angle after an observation. The best solution would be a change to the pose estimator to use the gyro like wheel odometry, applying it as state deltas instead of an absolute value (if it doesn’t already work this way).

1 Like

I think most MEMS structures actually directly measure angular velocity, not acceleration, so there should be only a single integration (usually RIO-side) to get from rate to heading. I concur with others that at least some gyros expose the rate, which is likely the better thing to include in a Kalman filter.

1 Like