How to compensate for pose latency?

Hi, we are using a machine learning model combined with a pinhole camera model to estimate note positions on the field. This works perfectly well for finding an absolute, field-relative note position while the robot is not moving. Due to the latency of the machine learning pipeline on our LL, the estimate is significantly less accurate while moving, because we are adding a stale robot-to-note translation to the current robot pose.

To fix this we have tried maintaining a CircularBuffer of the last few poses so we can look back and see where we were in the past. The code we use for this looks roughly like this:

import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.util.CircularBuffer;
import edu.wpi.first.wpilibj.Timer;

// Newest sample lives at index 0 (addFirst), so iterating forward walks
// backwards in time to the first sample taken at or before the requested timestamp.
private CircularBuffer<Pose2d> previousSwervePoses = new CircularBuffer<Pose2d>(50);
private CircularBuffer<Double> previousSwervePosesTimestamps = new CircularBuffer<Double>(50);

public Pose2d getPoseAtTimestamp(double timestamp) {
    for (int i = 0; i < previousSwervePosesTimestamps.size(); i++) {
        if (previousSwervePosesTimestamps.get(i) <= timestamp) {
            return previousSwervePoses.get(i);
        }
    }
    // Requested time is older than our whole history; fall back to the oldest sample
    return previousSwervePoses.getLast();
}

@Override
public void periodic() {
    previousSwervePoses.addFirst(currentState.Pose);
    previousSwervePosesTimestamps.addFirst(Timer.getFPGATimestamp());
}

We can then apply the old robot-to-note transform to the pose from that same timestamp.
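For illustration, a sketch of that last step, where robotToNote is a hypothetical Transform2d produced by the pinhole model and latencySeconds is the measured pipeline latency (both names are assumptions, not from our actual code):

// Look up where the robot was when the frame was actually captured
double captureTimestamp = Timer.getFPGATimestamp() - latencySeconds;
Pose2d poseAtCapture = getPoseAtTimestamp(captureTimestamp);

// Apply the stale robot-to-note transform to the equally stale pose,
// giving a field-relative note position that stays valid while moving
Translation2d noteOnField = poseAtCapture.transformBy(robotToNote).getTranslation();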

I also understand that updating your pose estimator with vision can cause jumps. So it may make sense to keep a separate, odometry-only pose estimator (regular odometry updates, no vision measurements) for this history of previous poses, and then compute the transform the robot pose undergoes over the duration of the latency.
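In code, that idea might look roughly like the sketch below. odometryOnlyPoseNow, getOdometryOnlyPoseAtTimestamp, and fusedPoseNow are hypothetical names; the point is that the delta comes from the vision-free history, so vision jumps cannot leak into it:

// Movement since the frame was captured, measured in the odometry-only frame,
// so vision corrections cannot inject a jump into the delta
Transform2d movementSinceCapture =
    odometryOnlyPoseNow.minus(getOdometryOnlyPoseAtTimestamp(captureTimestamp));

// Undo that movement on the vision-fused pose to recover the fused pose at capture time
Pose2d fusedPoseAtCapture = fusedPoseNow.plus(movementSinceCapture.inverse());

// Then apply the stale robot-to-note transform as before
Translation2d noteOnField = fusedPoseAtCapture.transformBy(robotToNote).getTranslation();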

I’m mostly asking for a sanity check on:
A: Does this make sense?
B: Is there a simpler way to do this?

Thanks!

That depends on the assumptions that you are willing to make, the accuracy required, and the CPU time you need to save or have to burn.

If the latency is fairly constant and the poses are added to the buffer with consistent timing, then you could just access the (approximately correct) pose directly: if the pipeline runs about 5 iterations behind, just get(4) or get(5).
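With your newest-first buffer, that is a one-liner (the index of 5 is just the assumed latency-to-loop ratio):

// ~5 loops of latency (~100 ms at a 20 ms loop period):
// index 0 is the current sample, so index 5 is roughly the capture-time pose
Pose2d approximatePoseAtCapture = previousSwervePoses.get(5);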

At the other end of the spectrum from this simple scheme, you could interpolate between the two buffered poses that bracket the requested timestamp.
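WPILib actually ships a helper that does the buffering and interpolation for you, which may be the "simpler way" you asked about. A minimal sketch, assuming a recent WPILib version where Pose2d implements Interpolatable:

import java.util.Optional;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.interpolation.TimeInterpolatableBuffer;
import edu.wpi.first.wpilibj.Timer;

// Keeps ~1.5 s of history and interpolates between samples on lookup
private final TimeInterpolatableBuffer<Pose2d> poseHistory =
    TimeInterpolatableBuffer.createBuffer(1.5);

@Override
public void periodic() {
    poseHistory.addSample(Timer.getFPGATimestamp(), currentState.Pose);
}

// Returns an empty Optional if no samples have been added yet
Optional<Pose2d> poseAtCapture = poseHistory.getSample(captureTimestamp);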

Ideally these get you close to the truth. Or maybe you are being vexed by algorithms that aren’t living up to their hype (some filtering required by the user!).

The jumps I’m talking about are specifically cases where, say, you haven’t seen an AprilTag for a while, and your odometry has drifted a large amount from the truth, say 1 meter.

Then the following things happen in two robot periodic loops:

Loop 1:
The robot records its own position as (0,0). (The robot is actually at (1,0).)
The LL sees a note one meter ahead of it. (The note is at (2,0).) However, due to the pipeline delay, the detection is not sent to the robot this loop.

Loop 2:
The robot sees an AprilTag and updates its position to (1,0).
The robot finally receives the note detection from last iteration, with a reported latency of 1 loop. It checks back 1 loop, sees that its position was (0,0), and concludes that since the note is a meter ahead, it must be at (1,0).

This is incorrect because the note is actually at (2,0). In other words, a jump in odometry after not seeing an AprilTag for a long time produces an inaccurate note position relative to the robot.

A much simpler approach would be to rewind the current pose by the chassis velocity multiplied by the latency.

import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Twist2d;
import edu.wpi.first.math.kinematics.ChassisSpeeds;

// Assume there is a 50 ms latency
final double cameraAverageLatencySeconds = 50.0 / 1000.0;
// Obtain robot speeds
ChassisSpeeds measuredSpeedsFieldRelative = ...;
// Trace back to where the robot was 50 ms ago.
// Use Twist2d because velocity is continuous.
Pose2d robotPoseAtLastCameraFrame = currentState.Pose.exp(new Twist2d(
     -measuredSpeedsFieldRelative.vxMetersPerSecond * cameraAverageLatencySeconds,
     -measuredSpeedsFieldRelative.vyMetersPerSecond * cameraAverageLatencySeconds,
     -measuredSpeedsFieldRelative.omegaRadiansPerSecond * cameraAverageLatencySeconds
));

This shouldn’t be too inaccurate; it’s safe to assume there’s no significant change in robot velocity over ~50 ms.


Thank you for pointing this out! I will need to experiment with that. After checking some of our logs, I think that the latency is not large enough to actually justify recording a history of our poses, at least not for the level of precision our pinhole model has.

What is the purpose of Twist2d here? Are you assuming that the robot-relative chassis speeds remain the same over the past 50 ms? Correct me if I’m wrong, but wouldn’t it make more sense to use a Transform2d for something like driver control, where the field-oriented speeds are more likely to remain the same (especially if the robot is rotating)?


Hmm, I honestly don’t know. Twist2d is more appropriate if the robot-centric velocity remains the same, but yes, you’re right: Transform2d is more suitable if the field-centric velocity remains the same.
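For reference, a sketch of the constant-field-velocity variant, reusing the names from the earlier snippet. It shifts the translation directly in the field frame (via Translation2d/Rotation2d, which is what the Transform2d idea amounts to here) instead of going through exp:

import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;

// Displacement over the latency window, assuming constant field-relative velocity
Translation2d fieldDelta = new Translation2d(
    measuredSpeedsFieldRelative.vxMetersPerSecond * cameraAverageLatencySeconds,
    measuredSpeedsFieldRelative.vyMetersPerSecond * cameraAverageLatencySeconds);
Rotation2d headingDelta = Rotation2d.fromRadians(
    measuredSpeedsFieldRelative.omegaRadiansPerSecond * cameraAverageLatencySeconds);

// Rewind: subtract that displacement from the current pose in the field frame
Pose2d robotPoseAtLastCameraFrame = new Pose2d(
    currentState.Pose.getTranslation().minus(fieldDelta),
    currentState.Pose.getRotation().minus(headingDelta));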

Though I guess it’s not a huge difference :laughing:

Yeah, I believe that as the rotation over the window gets closer and closer to 0 (which happens as the time step approaches 0), the differences between Twist2d and Transform2d disappear.
