PhotonVision 2D AprilTag pipeline getYaw() returning angle to left of tags rather than center of tags?


We switched to PhotonVision this season and to keep things simple we use the 2D AprilTag pipeline similarly to what we do with the object detection pipeline (i.e. we simply estimate angle and distance to target).

Everything is working as expected with the object detection pipeline, but with the 2D AprilTag pipeline the yaw returned by getYaw() appears to point at the left edge of the tags rather than their center (i.e. when yaw is zero, the crosshair aligns with the left edge of a tag rather than with the small green cross at its center).

Is that expected, and if so, how can we get the angle to the center of the tags?
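For context, the usual 2D angle-and-distance approach works from the target's reported pitch and the camera's mounting geometry (PhotonLib packages this same trig as `PhotonUtils.calculateDistanceToTargetMeters`). A minimal self-contained sketch; all heights and angles below are hypothetical:

```java
// Sketch of the standard 2D distance estimate from a target's pitch angle.
// Mount height/angle and target height are made-up example values.
class DistanceFromPitch {
    /**
     * Distance along the floor to a target, from camera geometry:
     * d = (targetHeight - cameraHeight) / tan(cameraPitch + targetPitch).
     * Angles in radians, heights in meters.
     */
    static double distanceMeters(double cameraHeightM, double targetHeightM,
                                 double cameraPitchRad, double targetPitchRad) {
        return (targetHeightM - cameraHeightM) / Math.tan(cameraPitchRad + targetPitchRad);
    }

    public static void main(String[] args) {
        // Hypothetical setup: camera 0.5 m up, tilted 20 deg upward; tag center at 1.45 m.
        double d = distanceMeters(0.5, 1.45, Math.toRadians(20.0), Math.toRadians(5.0));
        System.out.println("distance ~ " + d + " m"); // roughly 2.04 m for these numbers
    }
}
```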



Any suggestions? Thanks!

We report the tag’s center, which comes from the WPILib AprilTag detector (or so WPILib claims). So I’m not sure how that squares with what you described above.

Thanks. That’s weird. That is not what we observe in reality.

We observed that when the crosshair is aligned with the center of the AprilTag, the reported yaw is about -5 degrees, whereas when the crosshair is slightly to the left of the tag the yaw reads about 0.

This is how it looks when the crosshair is aligned with the left edge: the yaw is about 0 degrees.

Yeah, I agree that zero yaw looks to be on that left edge. Is zero pitch at the top too? Either way, you could dig into the WPILib AprilTag code to confirm that the center it reports is actually the center of the tag.
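One quick sanity check along those lines: WPILib’s `AprilTagDetection` exposes both the reported center (`getCenterX()`/`getCenterY()`) and the four corners (`getCornerX(i)`/`getCornerY(i)`), so you can compare the reported center against the centroid of the corners. The centroid only matches the true (homography-projected) center exactly for a fronto-parallel tag, but a corner returned by mistake would sit tens of pixels away from it. A self-contained sketch of the comparison; all coordinates below are made up:

```java
import java.util.Arrays;

// Sketch: compare a detection's reported "center" against the centroid of
// its four corners. A genuine center lands within a few pixels of the
// centroid; a corner mistaken for the center lands far away.
class CenterCheck {
    static double[] centroid(double[] xs, double[] ys) {
        return new double[] {
            Arrays.stream(xs).average().orElseThrow(),
            Arrays.stream(ys).average().orElseThrow()
        };
    }

    static double dist(double x1, double y1, double x2, double y2) {
        return Math.hypot(x1 - x2, y1 - y2);
    }

    public static void main(String[] args) {
        // Hypothetical detection: four corner pixels plus a reported center.
        double[] xs = {300, 400, 405, 295};
        double[] ys = {200, 205, 305, 300};
        double[] c = centroid(xs, ys); // (350.0, 252.5)
        double reportedX = 349.0, reportedY = 253.0;

        double offCenter = dist(reportedX, reportedY, c[0], c[1]); // ~1 px: plausible center
        double offCorner = dist(xs[0], ys[0], c[0], c[1]);         // ~72 px: clearly a corner
        System.out.printf("center-vs-centroid: %.1f px, corner-vs-centroid: %.1f px%n",
                offCenter, offCorner);
    }
}
```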


Looking at the WPILib code, I end up at this reinterpret_cast:

allwpilib/apriltag/src/main/native/include/frc/apriltag/AprilTagDetection.h at e64c20346dfe3252098f0efe51a93bb766881b82 · wpilibsuite/allwpilib

I am not sure if we are reinterpreting the right data, but I see that @Peter_Johnson has worked on related files, so I’m reaching out to him for his amazing wisdom.

Peter, may I ask for your take on the matter? Thanks!

I did some local testing and was unable to reproduce this with either an uncalibrated or a well-calibrated resolution, for AprilTag and ArUco pipelines with 36h11 and 16h5 tags. So I’m not sure what the difference between your setup and mine could be.

Thank you for taking the time to investigate further. To be sure, are you in “2D AprilTag” mode?

Does calibration matter for 2D?

PS: the little green crosshairs are exactly where they should be, so the centers of the tags are being detected properly somewhere. It seems that as the info makes its way through layers of code it might somehow be misinterpreted/confused with a corner?

Yep, I was using 2D mode. Not that 3D mode should change the pitch/yaw numbers; it only adds extra 3D information.


Another small detail that might matter. We are not using getBestTarget() but rather we are iterating through getTargets() to find a high value target.
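For reference, the iteration described above might look like the sketch below. The `Target` record is a minimal stand-in for PhotonLib’s `PhotonTrackedTarget` (whose real getters include `getFiducialId()`, `getArea()` and `getYaw()`), and the selection criterion here (prefer certain tag IDs, fall back to largest area) is purely hypothetical:

```java
import java.util.List;
import java.util.Optional;
import java.util.Set;

// Sketch of iterating over all detected targets instead of taking
// getBestTarget(). Target is a stand-in for PhotonTrackedTarget; the
// "high value" criterion is made up for illustration.
class TargetPicker {
    record Target(int fiducialId, double area, double yaw) {}

    static Optional<Target> pickHighValue(List<Target> targets, Set<Integer> preferredIds) {
        // First pass: any target whose fiducial ID is in the preferred set.
        Optional<Target> preferred = targets.stream()
                .filter(t -> preferredIds.contains(t.fiducialId()))
                .findFirst();
        if (preferred.isPresent()) {
            return preferred;
        }
        // Fallback: the largest target by reported area.
        return targets.stream().max((a, b) -> Double.compare(a.area(), b.area()));
    }

    public static void main(String[] args) {
        List<Target> seen = List.of(
                new Target(3, 0.8, -4.2),
                new Target(7, 0.3, 10.5),
                new Target(5, 1.1, 2.0));
        Target chosen = pickHighValue(seen, Set.of(7)).orElseThrow();
        System.out.println("chose tag " + chosen.fiducialId() + " at yaw " + chosen.yaw());
    }
}
```

Whatever the criterion, each element of `getTargets()` should carry the same per-target yaw/pitch that `getBestTarget()` would report for that target, so in principle the two paths should agree.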

Can you try the code snippet at FRC2495-2024/src/main/java/frc/robot/sensors/ at bcb2d8c884d39d7196811b800efc4a627b26fce6 · FRC2495/FRC2495-2024 ?

Thanks again!

PS: I don’t recall seeing pitch and yaw in the PhotonVision built-in UI when operating in 3D mode. Are those values visible there?

Hello @thatmattguy

Any chance you could try the code snippet above against PhotonVision in AprilTag 2D mode with an Arducam? (We use an OV2311.)


I don’t own an Arducam (and as a broke college student, probably can’t easily get one either :/)

Sorry to hear that. Things will get better once you graduate.

Can you try the code snippet with another camera, just to make sure the behavior is the same between getBestTarget() and getTargets()?

Thanks again.

@bankst Any suggestions? Thanks.

What is your proof that the crosshair drawn in Shuffleboard is at the zero-yaw mark?

None, but isn’t that what we should expect?

The angle to the AprilTag displayed in the bottom-right corner of the screenshot shows what PhotonVision returns.

That’s not what I’m asking.

I assume you assume that zero degrees of yaw corresponds to the middle column of pixels in the image.

Do you know for sure that the middle column of pixels is directly under the crosshair you are using for reference?

It seems like a valid assumption, but you have to remember you have many tools in the loop. Minimize your assumptions.
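That caution can be made concrete with the pinhole camera model: yaw is measured from the calibrated principal point (cx), not necessarily from the literal middle column, so zero yaw lands wherever calibration put cx. Notably, a principal point about 80 px off-center at a focal length around 900 px would produce roughly a 5-degree offset, comparable in magnitude to what was observed (whether that is the actual cause here is speculation). A sketch with made-up intrinsics; the sign convention (left vs. right positive) varies by library:

```java
// Sketch of why "middle column of pixels" and "zero yaw" need not coincide:
// under a pinhole model, yaw is measured from the principal point cx that
// calibration produced, not from the geometric image center. Intrinsics
// below are made-up illustrative values.
class YawFromPixel {
    /** Yaw in degrees of pixel column px, given principal point cx and focal length fx. */
    static double yawDeg(double px, double cx, double fx) {
        return Math.toDegrees(Math.atan2(px - cx, fx));
    }

    public static void main(String[] args) {
        double fx = 900.0;           // hypothetical focal length, in pixels
        double geometricMid = 640.0; // middle column of a 1280-wide image
        double calibratedCx = 560.0; // hypothetical calibrated principal point

        // Zero yaw sits at cx, not at the geometric middle...
        System.out.println(yawDeg(calibratedCx, calibratedCx, fx)); // prints 0.0
        // ...so the middle column reads a nonzero yaw (about 5 degrees here).
        System.out.println(yawDeg(geometricMid, calibratedCx, fx));
    }
}
```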

Fair point.