Hi, we’ve recently replaced our AprilTag tracking camera running PhotonVision with a Limelight 3. We’ve managed to add the vision measurements to the pose estimator with no problems, but when the camera stops detecting tags, instead of continuing from the last known position using the encoder readings, the estimator snaps the robot back to the odometry origin (bottom-left corner) and it can’t get out of it, even with valid motor encoder readings.
Am I missing something?
Odometry updating method:
public void odometryWvision() {
  m_poseEstimator.update(Rotation2d.fromDegrees(getAngle()),
      encoderCountsToMeters(leftEncoder.getPosition()),
      encoderCountsToMeters(rightEncoder.getPosition()));

  LimelightHelpers.Results results =
      LimelightHelpers.getLatestResults(VisionConstants.tagLimelightName).targetingResults;

  if (results.valid) {
    Pose2d camPose = LimelightHelpers.toPose2D(results.botpose_wpiblue);
    m_poseEstimator.addVisionMeasurement(camPose, Timer.getFPGATimestamp());
    m_field.getObject("Cam est Pose").setPose(camPose);
  } else {
    m_field.getObject("Cam est Pose").setPose(m_poseEstimator.getEstimatedPosition());
  }

  m_field.setRobotPose(m_poseEstimator.getEstimatedPosition());
}
Basically, the LimelightHelpers function was returning results.valid as true even when no target was in view. I’d recommend just using the NetworkTables API they provide.
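For reference, here’s a minimal sketch of that check. It assumes the stock Limelight NetworkTables layout, where the `tv` entry is 1 when at least one valid target is in view and 0 otherwise (the table name `limelight` is the default and may differ on your robot):

```java
class LimelightValidity {
    // On the robot, tv would be read from NetworkTables, e.g.:
    //   double tv = NetworkTableInstance.getDefault()
    //       .getTable("limelight").getEntry("tv").getDouble(0.0);
    // "limelight" is the default table name; yours may be different.

    /** Returns true only when the Limelight reports at least one valid target. */
    public static boolean hasValidTarget(double tv) {
        return tv >= 1.0;
    }
}
```

Gating addVisionMeasurement on this check (rather than on results.valid) is what keeps the bogus origin pose out of the estimator.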
Yeah, I totally agree with @Hollisms and the other commenters. When no tag is in view, the Limelight/PhotonVision returns (0, 0, 0) for the bot’s pose instead of a null value, so you have to check manually. One additional note: also check that the tag you are measuring is less than some distance away from you, to reject noisy data. We personally reject all tags we see that are more than 1.5 meters away from our robot.
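That distance cut-off is just a pure geometric check. A sketch, using the 1.5 m threshold from above and hypothetical names (on the robot, the two positions would come from the tag’s field pose and your current pose estimate):

```java
class TagDistanceFilter {
    /** Max tag-to-robot distance (meters) we still trust; 1.5 m as described above. */
    static final double MAX_TAG_DISTANCE_METERS = 1.5;

    /** Returns true if the tag at (tagX, tagY) is close enough to the robot at (robotX, robotY). */
    public static boolean acceptTag(double robotX, double robotY, double tagX, double tagY) {
        return Math.hypot(tagX - robotX, tagY - robotY) <= MAX_TAG_DISTANCE_METERS;
    }
}
```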
2. Continue to add the tag if any of the following conditions are met:
  2a. 1 tag with an area greater than ta = 0.8 (field-measure this number; we used the “ta” reading when in front of the charging station in the grid) and a pose within 1 meter of the currently estimated pose
  2b. 2 or more valid tags
3. Adjust the standard deviations of the vision measurement based on the above conditions:
  3a. We trust 2 or more tags the most, with (xy, degrees) standard deviations around (0.6, 5), then 1 large tag at about (0.9, 12), and last a smaller single tag at (2.0, 40)
4. Add the vision-estimated pose to the pose estimator with latency compensation
PoseLatency is an object holding a Pose2d and a latency value; it wraps the data from the Limelight so we can return both together.
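In case it helps, here’s a minimal stand-in for that wrapper. Our real class holds a WPILib Pose2d; it’s flattened to plain fields here so the sketch is self-contained:

```java
class PoseLatency {
    // In the real class these three fields are a single Pose2d.
    public final double x;
    public final double y;
    public final double headingRadians;
    // Total capture + pipeline latency reported by the Limelight, in seconds.
    public final double latencySeconds;

    public PoseLatency(double x, double y, double headingRadians, double latencySeconds) {
        this.x = x;
        this.y = y;
        this.headingRadians = headingRadians;
        this.latencySeconds = latencySeconds;
    }
}
```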
public void updatePoseEstimatorWithVisionBotPose() {
  PoseLatency visionBotPose = m_visionSystem.getPoseLatency();

  // invalid LL data
  if (visionBotPose.pose2d.getX() == 0.0) {
    return;
  }

  // distance from current pose to vision estimated pose
  double poseDifference = m_poseEstimator.getEstimatedPosition().getTranslation()
      .getDistance(visionBotPose.pose2d.getTranslation());

  if (m_visionSystem.areAnyTargetsValid()) {
    double xyStds;
    double degStds;
    // multiple targets detected
    if (m_visionSystem.getNumberOfTargetsVisible() >= 2) {
      xyStds = 0.5;
      degStds = 6;
    }
    // 1 target with large area and close to estimated pose
    else if (m_visionSystem.getBestTargetArea() > 0.8 && poseDifference < 0.5) {
      xyStds = 1.0;
      degStds = 12;
    }
    // 1 target farther away and estimated pose is close
    else if (m_visionSystem.getBestTargetArea() > 0.1 && poseDifference < 0.3) {
      xyStds = 2.0;
      degStds = 30;
    }
    // conditions don't match to add a vision measurement
    else {
      return;
    }

    m_poseEstimator.setVisionMeasurementStdDevs(
        VecBuilder.fill(xyStds, xyStds, Units.degreesToRadians(degStds)));
    m_poseEstimator.addVisionMeasurement(visionBotPose.pose2d,
        Timer.getFPGATimestamp() - visionBotPose.latencySeconds);
  }
}