Timestamp parameter when adding limelight vision to odometry

Hello! This is the first time our team is using odometry, and I’m wondering if I’ve set the timestamp parameter correctly with the Limelight while using YAGSL. Here is a minimal reproduction:

    yagsl.addVisionMeasurement(
      LimelightHelpers.getBotPose2d("limelight"),
      Timer.getFPGATimestamp() - LimelightHelpers.getLatency_Capture("limelight") - LimelightHelpers.getLatency_Pipeline("limelight")
    );

I just want to make sure I’m understanding the timestamp parameter correctly. Is getting the current timestamp and then subtracting both latencies the right approach?

For reference here is the yagsl addVisionMeasurement implementation:

  public void addVisionMeasurement(Pose2d robotPose, double timestamp) {
    odometryLock.lock();
    swerveDrivePoseEstimator.addVisionMeasurement(robotPose, timestamp);
    odometryLock.unlock();
  }

Thank you in advance!!! :smile_cat:

You’re close, but I believe you’re missing a couple of things. First, Timer.getFPGATimestamp() returns the timestamp in seconds, while the Limelight latencies are in milliseconds. So you could do this:

    yagsl.addVisionMeasurement(
      LimelightHelpers.getBotPose2d("limelight"),
      Timer.getFPGATimestamp() - (LimelightHelpers.getLatency_Capture("limelight") + LimelightHelpers.getLatency_Pipeline("limelight")) / 1000.0
    );

Additionally, you’re missing the latency it takes to parse the JSON. This is actually a significant amount of time, so I would include it. For some reason, though, LimelightHelpers doesn’t expose a method for the JSON parsing latency, so you have to go through the full results object it builds from NetworkTables. This is what I did for my team’s robot:

  /**
   * Returns the total latency, in seconds, of the limelight being used for
   * pose estimation: the sum of the capture latency, pipeline latency, and
   * JSON parsing latency.
   */
  public double getLatencySeconds() {
    return (currentlyUsedLimelightResults.targetingResults.latency_capture
        + currentlyUsedLimelightResults.targetingResults.latency_pipeline
        + currentlyUsedLimelightResults.targetingResults.latency_jsonParse) / 1000.0;
  }

We have currentlyUsedLimelightResults being updated in our vision subsystem’s periodic() (you could do this in your drive subsystem’s periodic() too) with

    currentlyUsedLimelightResults = LimelightHelpers.getLatestResults(VisionConstants.FRONT_LIMELIGHT_NAME);
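
To tie it together, here’s a rough sketch of how those pieces could sit in a vision subsystem. This isn’t our exact code: SwerveSubsystem and the swerve field are placeholders for whatever wraps YAGSL’s addVisionMeasurement on your robot, and the getTV check is just one simple way to avoid feeding the estimator an empty pose when no tag is visible.

    import edu.wpi.first.wpilibj.Timer;
    import edu.wpi.first.wpilibj2.command.SubsystemBase;

    // Assumes LimelightHelpers.java and VisionConstants are in the same package,
    // and that SwerveSubsystem exposes YAGSL's addVisionMeasurement(Pose2d, double).
    public class VisionSubsystem extends SubsystemBase {
      private final SwerveSubsystem swerve;

      // Cached so getLatencySeconds() and the pose come from the same frame.
      private LimelightHelpers.LimelightResults currentlyUsedLimelightResults;

      public VisionSubsystem(SwerveSubsystem swerve) {
        this.swerve = swerve;
      }

      @Override
      public void periodic() {
        currentlyUsedLimelightResults =
            LimelightHelpers.getLatestResults(VisionConstants.FRONT_LIMELIGHT_NAME);

        // Only feed the estimator when the limelight actually sees a target.
        if (LimelightHelpers.getTV(VisionConstants.FRONT_LIMELIGHT_NAME)) {
          // Timestamp = "now" minus the total limelight latency, all in seconds.
          swerve.addVisionMeasurement(
              LimelightHelpers.getBotPose2d(VisionConstants.FRONT_LIMELIGHT_NAME),
              Timer.getFPGATimestamp() - getLatencySeconds());
        }
      }

      /** Capture + pipeline + JSON parse latency, converted from ms to seconds. */
      private double getLatencySeconds() {
        return (currentlyUsedLimelightResults.targetingResults.latency_capture
            + currentlyUsedLimelightResults.targetingResults.latency_pipeline
            + currentlyUsedLimelightResults.targetingResults.latency_jsonParse) / 1000.0;
      }
    }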

Thank you! Especially the abstraction of updating currentlyUsedLimelightResults in the subsystem’s periodic(), that’s a nice way of doing it and thinking about it. :smile:


Also, for anybody else looking at this, I found a tutorial:

