Limelight Odometry question

Hi, we’ve recently replaced our AprilTag tracking camera that used PhotonVision with a Limelight 3. We’ve managed to add the vision measurements to the pose estimator with no problems, but when the camera stops detecting tags, instead of using the last known position and the encoder readings, the estimator returns the robot to the odometry origin (bottom-left corner), and it can’t get out of it even with the motor encoder readings.

Pose estimator behaviour

Am I missing something?

Odometry updating method:

public void odometryWvision() {
    // Grab the latest targeting results from the Limelight (default name "limelight")
    LimelightHelpers.Results results =
        LimelightHelpers.getLatestResults("limelight").targetingResults;

    if (results.valid) {
      // Convert the blue-alliance-origin botpose array into a Pose2d
      Pose2d camPose = LimelightHelpers.toPose2D(results.botpose_wpiblue);
      m_poseEstimator.addVisionMeasurement(camPose,
          Timer.getFPGATimestamp()
              - (results.latency_capture + results.latency_pipeline) / 1000.0);
      m_field.getObject("Cam est Pose").setPose(camPose);
    } else {
      m_field.getObject("Cam est Pose").setPose(m_poseEstimator.getEstimatedPosition());
    }
}

I used the PhotonVision AprilTag pose estimation example and just replaced the Photon methods with Limelight’s.

I would check the bot poses you are adding. We filter out “zero bot poses”.


We get the TV value from the limelight and only add vision measurements if that limelight is currently seeing a target (TV = 1)

I did see this on another thread: Introducing Limelight Lib - #59 by jdaming

Basically, the limelight helper function was returning result.valid as true even when no target was in view. I’d recommend just using the network tables API they provide.
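To illustrate the kind of validity check being suggested, here is a rough sketch. The helper name and the idea of passing `tv` and the botpose array in as plain values are my own; on the robot you would read them from the Limelight’s NetworkTables entries (`tv` and `botpose_wpiblue`), e.g. via `NetworkTableInstance.getDefault().getTable("limelight")`.

```java
// Hypothetical validity check: only trust a Limelight pose when tv == 1
// AND the botpose array is not all zeros (the "no target" sentinel).
// On the robot, tv and botpose would come from the "tv" and
// "botpose_wpiblue" NetworkTables entries.
final class LimelightValidity {
    private LimelightValidity() {}

    static boolean isResultUsable(double tv, double[] botpose) {
        if (tv != 1.0) {
            return false; // Limelight reports no valid target
        }
        if (botpose == null || botpose.length < 6) {
            return false; // malformed or missing pose array
        }
        // Reject the all-zero pose published when nothing is actually seen.
        for (double v : botpose) {
            if (v != 0.0) {
                return true;
            }
        }
        return false;
    }
}
```

Gating on both `tv` and a non-zero pose covers the case described above, where `result.valid` stays true even with no target in view.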

Here are some resources:
My team’s robot code for the limelight using the network tables API: 2023-ChargedUp-177-Bot-2/ at nechamps-improvements · BobcatRobotics/2023-ChargedUp-177-Bot-2 · GitHub
Our Pose estimator subsystem: 2023-ChargedUp-177-Bot-2/ at nechamps-improvements · BobcatRobotics/2023-ChargedUp-177-Bot-2 · GitHub
The limelight network tables API from their documentation: Complete NetworkTables API — Limelight 1.0 documentation

Best of luck!


Our code (linked above) also has an alternative to “tv”: checking whether the bot pose returns multiple 0’s.


Yeah, I totally agree with @Hollisms and the other commenters. When no tag is in view, Limelight/PhotonVision returns 0,0,0 for the bot’s pose instead of a null value, so you have to check manually. One additional note: also check that the tag you are measuring is less than some distance away from you, to reject noisy data. We personally reject all tags we see that are more than 1.5 meters away from our robot.
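A distance gate like that is simple to sketch with plain geometry. This is a hypothetical helper of my own (the names are made up; the 1.5 m cutoff is taken from the description above), using raw x/y field coordinates rather than WPILib’s Translation2d:

```java
// Hypothetical distance gate: ignore vision measurements whose estimated
// tag distance exceeds a threshold (1.5 m here, per the post above).
final class VisionDistanceFilter {
    static final double MAX_TAG_DISTANCE_METERS = 1.5;

    // Euclidean distance between the robot and the tag, both in field coordinates.
    static double distance(double robotX, double robotY, double tagX, double tagY) {
        return Math.hypot(tagX - robotX, tagY - robotY);
    }

    static boolean shouldAcceptTag(double robotX, double robotY, double tagX, double tagY) {
        return distance(robotX, robotY, tagX, tagY) <= MAX_TAG_DISTANCE_METERS;
    }
}
```

The exact threshold is tunable; farther tags subtend fewer pixels, so their pose solutions get noisier.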


I’ll third the ideas here.

Our process is

  1. Check if a valid tag is detected.
  2. Add the tag if any of these conditions are met:
    2a. one tag with an area greater than ta = 0.8 (field-measure this number; we used the “ta” value when in front of the charging station in the grid) and a pose within 1 meter of the currently estimated pose
    2b. two or more valid tags
  3. Adjust the standard deviations of the vision measurement based on the above conditions:
    3a. we trust two or more tags the most, with (xy, degrees) standard deviations around (0.6, 5); then one large tag at about (0.9, 12); and last, a smaller single tag at (2.0, 40)
  4. add vision estimated pose to pose estimator with latency

PoseLatency is an object with a Pose2d and latency. It wraps the data from limelight so we can return both together.

public void updatePoseEstimatorWithVisionBotPose() {
    PoseLatency visionBotPose = m_visionSystem.getPoseLatency();
    // invalid LL data
    if (visionBotPose.pose2d.getX() == 0.0) {
      return;
    }

    // distance from current pose to vision estimated pose
    double poseDifference = m_poseEstimator.getEstimatedPosition().getTranslation()
        .getDistance(visionBotPose.pose2d.getTranslation());

    if (m_visionSystem.areAnyTargetsValid()) {
      double xyStds;
      double degStds;
      // multiple targets detected
      if (m_visionSystem.getNumberOfTargetsVisible() >= 2) {
        xyStds = 0.5;
        degStds = 6;
      }
      // 1 target with large area and close to estimated pose
      else if (m_visionSystem.getBestTargetArea() > 0.8 && poseDifference < 0.5) {
        xyStds = 1.0;
        degStds = 12;
      }
      // 1 target farther away and estimated pose is close
      else if (m_visionSystem.getBestTargetArea() > 0.1 && poseDifference < 0.3) {
        xyStds = 2.0;
        degStds = 30;
      }
      // conditions don't match to add a vision measurement
      else {
        return;
      }

      m_poseEstimator.setVisionMeasurementStdDevs(
          VecBuilder.fill(xyStds, xyStds, Units.degreesToRadians(degStds)));
      m_poseEstimator.addVisionMeasurement(visionBotPose.pose2d,
          Timer.getFPGATimestamp() - visionBotPose.latencySeconds);
    }
}

Using the TV value instead of results.valid fixed my problem, but there are still other cool suggestions here that I could implement.

Thanks everyone!


Is there a link to this code? The methods on the visionSystem class could be useful.

It’s just a wrapper that contains the Limelight and uses LimelightHelpers.
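For what it’s worth, a wrapper along those lines might look something like this. Everything here is a hypothetical sketch, not the poster’s actual class: the suppliers are injected so the logic can be exercised off-robot, and on the real robot they could be backed by LimelightHelpers (the `tv` and `botpose_wpiblue` values).

```java
import java.util.function.Supplier;

// Hypothetical sketch of a thin vision wrapper (names and structure are
// my own). The suppliers are injected for testability; on the robot they
// could be backed by the Limelight's "tv" and "botpose_wpiblue" entries.
final class VisionSystemSketch {
    private final Supplier<double[]> botposeSupplier; // field-space pose array
    private final Supplier<Double> tvSupplier;        // 1.0 when a target is visible

    VisionSystemSketch(Supplier<double[]> botposeSupplier, Supplier<Double> tvSupplier) {
        this.botposeSupplier = botposeSupplier;
        this.tvSupplier = tvSupplier;
    }

    boolean areAnyTargetsValid() {
        return tvSupplier.get() == 1.0;
    }

    // Returns the raw botpose array, or null when there is no usable target.
    double[] getBotPoseOrNull() {
        if (!areAnyTargetsValid()) {
            return null;
        }
        double[] botpose = botposeSupplier.get();
        return (botpose != null && botpose.length >= 6) ? botpose : null;
    }
}
```

Returning null (or an Optional) for the no-target case is what lets the pose-estimator code above bail out cleanly instead of ingesting a zeroed pose.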