Multi-Camera Setup and PhotonVision's Pose Estimator - Seeking Advice

Hello,

I’m reaching out to ask for some advice and guidance on a topic that our team is currently exploring: using multiple cameras for our robot. We’re interested in setting up a system that can detect AprilTags from multiple camera angles, with the goal of resolving single-tag pose ambiguity.

Has anyone in the community successfully implemented a multi-camera setup? If so, how did you go about it? What does it look like? What challenges did you encounter, and how did you overcome them? We’d appreciate any insights or tips you could provide on this front.

Additionally, we’re considering using PhotonVision’s PhotonPoseEstimator in conjunction with our multi-camera setup. Has anyone had success with this particular feature while using multiple cameras?

Thank you in advance for your assistance and shared expertise. We’re excited to learn from your experiences and improve our robot’s capabilities.

Best regards,
Isaac
4509 - Mechanical Bulls

Here’s the gist of it.

You need to define some state standard deviation vectors:

/**
 * Standard deviations of model states. Increase these numbers to trust your
 * model's state estimates less. This matrix is in the form [x, y, theta]ᵀ,
 * with units in meters and radians.
 */
private static final Vector<N3> STATE_STDS = VecBuilder.fill(0.05, 0.05, Units.degreesToRadians(5));

/**
 * Standard deviations of the vision measurements. Increase these numbers to
 * trust global measurements from vision less. This matrix is in the form
 * [x, y, theta]ᵀ, with units in meters and radians.
 */
private static final Vector<N3> VISION_STDS = VecBuilder.fill(0.1, 0.1, Units.degreesToRadians(10));

You need to define constants for the transforms to the cameras:

private static final Transform3d LEFT_CAMERA_TO_CENTER = new Transform3d(
        new Translation3d(Units.inchesToMeters(5.800), Units.inchesToMeters(-8.517), Units.inchesToMeters(43.3)),
        new Rotation3d(Units.degreesToRadians(0), Units.degreesToRadians(0), Units.degreesToRadians(35.895 / 2)));

private static final Transform3d RIGHT_CAMERA_TO_CENTER = new Transform3d(
        new Translation3d(Units.inchesToMeters(5.760), Units.inchesToMeters(-12.707), Units.inchesToMeters(43.3)),
        new Rotation3d(Units.degreesToRadians(0), Units.degreesToRadians(0), Units.degreesToRadians(-35.895 / 2)));
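
One thing to double-check: the PhotonPoseEstimator constructor expects a robot-to-camera transform. If you measured in the opposite direction (camera to robot center, as the constant names above suggest), Transform3d.inverse() will flip it. A one-line sketch, assuming the constants above were measured camera-to-robot:

// Hypothetical: invert a camera -> robot-center measurement to get the
// robot -> camera transform that PhotonPoseEstimator expects.
private static final Transform3d ROBOT_TO_LEFT_CAMERA = LEFT_CAMERA_TO_CENTER.inverse();
private static final Transform3d ROBOT_TO_RIGHT_CAMERA = RIGHT_CAMERA_TO_CENTER.inverse();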

You need to construct some objects.

private final SwerveDrivePoseEstimator poseEstimator;

private final PhotonCamera camRight;
private final PhotonCamera camLeft;

public final PhotonPoseEstimator photonPoseEstimatorRight;
public final PhotonPoseEstimator photonPoseEstimatorLeft;

public Subsystem() {
  camRight = new PhotonCamera("camRight"); // hypothetical names; must match the PhotonVision UI
  camLeft = new PhotonCamera("camLeft");

  poseEstimator = new SwerveDrivePoseEstimator(/*kinematics object*/, /*ccw gyro rotation*/, /*module positions array*/, new Pose2d(), STATE_STDS, VISION_STDS);

  final AprilTagFieldLayout layout = AprilTagFields.k2023ChargedUp.loadAprilTagLayoutField(); // see docs for how to do this better and set origin for red alliance

  photonPoseEstimatorRight = new PhotonPoseEstimator(layout, PoseStrategy.MULTI_TAG_PNP, camRight, RIGHT_CAMERA_TO_CENTER);
  photonPoseEstimatorLeft = new PhotonPoseEstimator(layout, PoseStrategy.MULTI_TAG_PNP, camLeft, LEFT_CAMERA_TO_CENTER);

  photonPoseEstimatorRight.setMultiTagFallbackStrategy(PoseStrategy.LOWEST_AMBIGUITY);
  photonPoseEstimatorLeft.setMultiTagFallbackStrategy(PoseStrategy.LOWEST_AMBIGUITY);
}
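
On the "set origin for red alliance" comment above: the 2023 AprilTagFieldLayout has an origin setter for exactly this. A minimal sketch using the 2023-era DriverStation API (note the alliance may not be known yet at construction time, so you may want to do this later):

// Re-orient the tag layout when on the red alliance so vision poses come out
// in the right coordinate frame (2023-era APIs).
if (DriverStation.getAlliance() == DriverStation.Alliance.Red) {
  layout.setOrigin(AprilTagFieldLayout.OriginPosition.kRedAllianceWallRightSide);
} else {
  layout.setOrigin(AprilTagFieldLayout.OriginPosition.kBlueAllianceWallRightSide);
}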

In your update loop:

public void update() {
  final Optional<EstimatedRobotPose> optionalEstimatedPoseRight = photonPoseEstimatorRight.update();
  if (optionalEstimatedPoseRight.isPresent()) {
    final EstimatedRobotPose estimatedPose = optionalEstimatedPoseRight.get();
    poseEstimator.updateVisionMeasurement(estimatedPose.estimatedPose.toPose2d(), estimatedPose.timestampSeconds);
  }

  final Optional<EstimatedRobotPose> optionalEstimatedPoseLeft = photonPoseEstimatorLeft.update();
  if (optionalEstimatedPoseLeft.isPresent()) {
    final EstimatedRobotPose estimatedPose = optionalEstimatedPoseLeft.get();
    poseEstimator.updateVisionMeasurement(estimatedPose.estimatedPose.toPose2d(), estimatedPose.timestampSeconds);
  }

  poseEstimator.update(/*ccw gyro rotation*/, /*module positions array*/);
}

Why would you not just add both vision estimates to one pose estimator instead of having two separate ones?


I agree; that is how we have implemented it. Shoutout to 7028 and 5026 for the great code bases.


I have one SwervePoseEstimator and two PhotonPoseEstimators.

This also has two PhotonPoseEstimators; they are in the PhotonRunnable class.

We do a similar encapsulation here: TorqueLib/TorqueVision.java at master · TexasTorque/TorqueLib · GitHub.
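
For anyone wanting the shape of that encapsulation, here is a minimal sketch (class and constructor names are hypothetical, not any team’s actual code): each instance owns one camera and its PhotonPoseEstimator, and the drivetrain polls each instance for an estimate.

// Hypothetical per-camera wrapper: one PhotonCamera plus one PhotonPoseEstimator.
public class AprilTagCamera {
  private final PhotonCamera camera;
  private final PhotonPoseEstimator estimator;

  public AprilTagCamera(String name, Transform3d robotToCamera, AprilTagFieldLayout layout) {
    camera = new PhotonCamera(name); // name must match the PhotonVision UI
    estimator = new PhotonPoseEstimator(layout, PoseStrategy.MULTI_TAG_PNP, camera, robotToCamera);
    estimator.setMultiTagFallbackStrategy(PoseStrategy.LOWEST_AMBIGUITY);
  }

  /** Returns a fresh vision pose estimate, if the camera produced one. */
  public Optional<EstimatedRobotPose> getEstimate() {
    return estimator.update();
  }
}

The drive subsystem then just loops over its AprilTagCamera instances and feeds every present estimate into the single SwerveDrivePoseEstimator.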


I guess the question then is: you incorporate standard deviation vectors for each Photon pose estimate using that camera and the drivetrain values, and then you have to incorporate the two Photon pose estimates and the drivetrain values again into the final swerve pose estimator.

Don't those two or three references to the drive's position have negative side effects?

I guess if the standard deviations are consistent there isn't a problem.

My bad, I should have stated my thoughts more clearly. The issue I had with your code was updateVisionMeasurement. As far as I know, the pose estimator doesn't have updateVisionMeasurement; it has addVisionMeasurement. I took updateVisionMeasurement as you adding and setting, i.e. addVisionMeasurement and update. The other reason I posted my code is that it uses dynamic standard deviations, which should help filter out inaccurate poses.
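
For reference, addVisionMeasurement has an overload that takes per-measurement standard deviations, which is what makes the dynamic approach possible. A sketch (the std-dev values here are placeholders, not tuned numbers):

// Feed one vision estimate into the single SwerveDrivePoseEstimator, with
// per-measurement standard deviations overriding the constructor defaults.
poseEstimator.addVisionMeasurement(
    estimatedPose.estimatedPose.toPose2d(),
    estimatedPose.timestampSeconds,
    VecBuilder.fill(0.1, 0.1, Units.degreesToRadians(10))); // placeholder trust values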


Good catch, updateVisionMeasurement is a typo.

The other reason I posted my code is that it uses dynamic standard deviations, which should help filter out inaccurate poses.

Is that with the confidenceCalculator method? I was thinking of adding something like that to our codebase.


Correct. Credit to 5026 for this.
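
For anyone else following along, here is a rough sketch of the idea (hypothetical, not 5026’s actual code): inflate the vision standard deviations with average tag distance and deflate them with tag count, so distant single-tag estimates are trusted less.

// Hypothetical dynamic std-dev calculator: trust an estimate less the farther
// away its tags are, and more when multiple tags were used.
private Matrix<N3, N1> confidenceCalculator(EstimatedRobotPose estimate) {
  // estimator.update() only returns a value when at least one tag was seen,
  // so targetsUsed is assumed non-empty here.
  double sumDistance = 0;
  for (var target : estimate.targetsUsed) {
    sumDistance += target.getBestCameraToTarget().getTranslation().getNorm();
  }
  final int tagCount = estimate.targetsUsed.size();
  final double avgDistance = sumDistance / tagCount;

  // Grow quadratically with distance, shrink with the number of tags.
  final double factor = 1.0 + (avgDistance * avgDistance) / tagCount;
  return VecBuilder.fill(0.1 * factor, 0.1 * factor, Units.degreesToRadians(10) * factor);
}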

We’re using Limelights, but we just throw everything into a circular buffer, discard outliers, and average the good data. We’re usually within centimeters.
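
A sketch of that filter idea (hypothetical, not our exact code): keep the last N translations in a bounded buffer, discard points far from the raw mean, and average the survivors.

// Hypothetical rolling filter for vision translations: buffer recent samples,
// drop outliers relative to the raw mean, and average what remains.
public class VisionPoseFilter {
  private static final int CAPACITY = 20;
  private static final double OUTLIER_METERS = 0.5;
  private final Deque<Translation2d> buffer = new ArrayDeque<>();

  public void add(Translation2d measurement) {
    if (buffer.size() == CAPACITY) {
      buffer.removeFirst(); // evict the oldest sample
    }
    buffer.addLast(measurement);
  }

  /** Mean of buffered points, excluding ones far from the raw mean. */
  public Translation2d filtered() {
    if (buffer.isEmpty()) {
      return new Translation2d();
    }
    final Translation2d mean = average(buffer);
    final List<Translation2d> good = new ArrayList<>();
    for (Translation2d p : buffer) {
      if (p.getDistance(mean) < OUTLIER_METERS) {
        good.add(p);
      }
    }
    return good.isEmpty() ? mean : average(good);
  }

  private static Translation2d average(Collection<Translation2d> points) {
    double x = 0, y = 0;
    for (Translation2d p : points) {
      x += p.getX();
      y += p.getY();
    }
    return new Translation2d(x / points.size(), y / points.size());
  }
}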