How to properly perform 3D transformations using the WPILib math libraries?

I’ve been stuck on this problem for a few days and haven’t found any examples of how to do it. Brute-forcing the problem hasn’t worked, so I figured I would ask here, since other teams probably have the same question.

I am trying to determine my camera’s position given the location of an AprilTag, as defined by its pose in the 2023-chargedup.json field layout, and the tag’s translation and rotation as returned by robotpy_apriltag’s AprilTagPoseEstimator. Assuming the translation/rotation from the AprilTag pose estimate is in an East-Down-North (EDN) coordinate system, I need to transform it into the North-West-Up (NWU) coordinate system to match WPILib’s conventions. To do so, I assume I would have to do the following:

from wpimath import geometry
from wpimath.geometry import CoordinateSystem

# Field-layout pose of the tag (from 2023-chargedup.json)
tagT3D = geometry.Translation3d(tagTranslation['x'], tagTranslation['y'], tagTranslation['z'])
tagR3D = geometry.Rotation3d(geometry.Quaternion(tagRotation['W'], tagRotation['X'], tagRotation['Y'], tagRotation['Z']))
tagPose3D = geometry.Pose3d(tagT3D, tagR3D)

# Camera-relative tag transform from robotpy_apriltag's pose estimator (EDN camera frame)
tagToCameraTransform = estimator.estimate(detection)

# Re-express the transform in WPILib's NWU convention
wpiTransform = CoordinateSystem.convert(tagToCameraTransform,
                                        CoordinateSystem.EDN(),
                                        CoordinateSystem.NWU())

# Walk backward from the tag's field pose to get the camera's field pose
cameraPose = tagPose3D.transformBy(wpiTransform.inverse())

However, when trying to sanity-check this, I came across a confusing result:

# Using Tag 1 Position
tagT3D = geometry.Translation3d(15.51, 2.75, 0.46)
tagR3D = geometry.Rotation3d(geometry.Quaternion(0, 0, 0, 1))
tagPose3D = geometry.Pose3d(tagT3D, tagR3D)

# Assume the tag is in front of you with no rotation
tagToCameraTransform = geometry.Transform3d(geometry.Translation3d(-3, -2, -1), geometry.Rotation3d(0, 0, 0))

wpiTransform1 = CoordinateSystem.convert(tagToCameraTransform,
                                         CoordinateSystem.EDN(),
                                         CoordinateSystem.NWU())

cameraPose1 = tagPose3D.transformBy(wpiTransform1.inverse())

wpiTransform2 = CoordinateSystem.convert(tagToCameraTransform,
                                         CoordinateSystem.NED(),
                                         CoordinateSystem.NWU())

cameraPose2 = tagPose3D.transformBy(wpiTransform2.inverse())

# cameraPose1 = Pose3d(Translation3d(x=12.513558, y=0.748026, z=1.462788), Rotation3d(x=3.141593, y=-0.000000, z=3.141593))
# cameraPose2 = Pose3d(Translation3d(x=12.513558, y=0.748026, z=1.462788), Rotation3d(x=0.000000, y=-1.570796, z=0.000000))

Intuitively, I would think the two results should have different Translation3d values: since you are converting from two different coordinate systems, each initial X, Y, Z component of the transform should be added/subtracted to a different X, Y, Z component of the tag’s pose, but that doesn’t seem to be the case. The same thing happens if I define a custom coordinate system, so I feel that I am missing something important.
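For what it’s worth, a pure Translation3d does convert exactly the way that intuition suggests; the surprise only appears once the rotation half of a Transform3d is converted too, because inverse() then rotates that converted rotation back into the translation. A minimal sanity check (assuming RobotPy’s wpimath.geometry bindings; the expected values follow from the axis definitions, EDN being x=east/y=down/z=north and NWU being x=north/y=west/z=up):

from wpimath import geometry
from wpimath.geometry import CoordinateSystem

# A pure translation converts component-by-component, as expected
t = geometry.Translation3d(-3, -2, -1)
print(CoordinateSystem.convert(t, CoordinateSystem.EDN(), CoordinateSystem.NWU()))
# expect roughly Translation3d(x=-1, y=3, z=2)

# A full Transform3d also has its rotation part remapped, and inverse()
# then mixes that rotation back into the translation of the composed pose
tf = geometry.Transform3d(t, geometry.Rotation3d())
print(CoordinateSystem.convert(tf, CoordinateSystem.EDN(), CoordinateSystem.NWU()))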

I also see that PhotonVision is doing the same math here, but I assume it works because everything is in the same coordinate system. I initially tried this without converting between coordinate systems, without success. Implementing the conversion got me closer to what I expected, but the answer seems to fail if I physically move the camera and try to correlate the position on a Field2d object. Negating the Y value of the tagToCameraTransform prior to converting coordinate systems seemed to give me the correct physical displacement when printing out the answer, but it breaks the Field2d visualization as a result (I tried brute-forcing a sign change of the rotations prior to converting as well, without success).

I am currently using RobotPy to do this, but RobotPy should just be wrapping the WPILib math code. I also tested with the WPILib Java classes and got the same result.

We deal with converting from the UMich AprilTag library coordinates into OpenCV coordinates in MathUtils, then OpenCV to WPILib in the same file. Might be worth taking a look? photonvision/MathUtils.java at e58c27caa28ef35891f1c8a3ec267a87622432f1 · PhotonVision/photonvision · GitHub
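For anyone skimming, the OpenCV-to-WPILib leg of that conversion boils down to an axis remap. A minimal hand-rolled sketch (illustrative function name, not PhotonVision’s actual API), which should agree with wpimath’s built-in conversion within its 1e-9 equality tolerance:

from wpimath.geometry import CoordinateSystem, Translation3d

def opencv_to_nwu(x: float, y: float, z: float) -> Translation3d:
    # OpenCV camera frame: +x right, +y down, +z out of the lens (i.e. EDN)
    # WPILib NWU frame: +x forward/north, +y left/west, +z up
    return Translation3d(z, -x, -y)

# Should match the library conversion
assert opencv_to_nwu(1.0, 2.0, 3.0) == CoordinateSystem.convert(
    Translation3d(1.0, 2.0, 3.0), CoordinateSystem.EDN(), CoordinateSystem.NWU()
)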

Thanks @SLAB-Mr.Thomas for reminding me about this post. For reference, here is the answer for anyone else who wants to calculate the robot’s position from an AprilTag.

# Position of the camera relative to the robot's center. For simplicity, assume they coincide.
robotToCamera = geometry.Transform3d(geometry.Translation3d(), geometry.Rotation3d(0, 0, 0))

# Using Tag 1's position
tagT3D = geometry.Translation3d(15.51, 2.75, 0.46)
tagR3D = geometry.Rotation3d(geometry.Quaternion(0, 0, 0, 1))
tagPose3D = geometry.Pose3d(tagT3D, tagR3D)

# Assume the tag is in front of you with no rotation
tagToCameraTransform = geometry.Transform3d(geometry.Translation3d(-3, -2, -1), geometry.Rotation3d(0, 0, 0))

# Rotate the camera-frame translation into the tag's frame, then remap EDN -> NWU
wpiTranslation = CoordinateSystem.convert(tagToCameraTransform.translation().rotateBy(tagToCameraTransform.inverse().rotation()),
                                          CoordinateSystem.EDN(),
                                          CoordinateSystem.NWU())

# Offset the tag's field pose by that translation, then by the camera's mounting offset
robotPose = tagPose3D.transformBy(geometry.Transform3d(wpiTranslation, geometry.Rotation3d())).transformBy(robotToCamera)
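If you want to cross-check the result against the Field2d view mentioned earlier, the 3D estimate can be flattened onto the floor plane. A small sketch (assuming a dashboard with a Field2d widget; not part of the snippet above):

from wpilib import Field2d, SmartDashboard

field = Field2d()
SmartDashboard.putData("Field", field)

# Project the 3D pose estimate down to 2D for the field view
field.setRobotPose(robotPose.toPose2d())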

You can find an example of how we used this in our AprilTag code on our GitHub here.

With encouragement from OP and others, including @veg, I finally got a working, complete Java example of finding the robot position, based on the WPILib AprilTag vision examples and other WPILib math classes and methods, running on the roboRIO with a Microsoft LifeCam HD-3000. By working I mean that, for a few test cases, the robot displays correctly in AdvantageScope.

It works for an AprilTag at a field rotation other than 0 or 180 degrees. (I deployed a hacked tag layout file for testing.)

It includes drawing a rectangular solid around the tag display. (I had to use OpenCV’s solvePnP for best results, and I’m not sure why, because it is within 2 to 4 degrees of rotation of the WPILib estimator.)
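For anyone who hasn’t used it, the solvePnP call in question looks roughly like this. A minimal Python sketch, not the poster’s actual (Java) code; tag_size, detection_corners, and the intrinsics fx, fy, cx, cy are assumed inputs:

import cv2
import numpy as np

tag_size = 0.1524  # 6-inch tag edge, in meters (assumed)
half = tag_size / 2

# Tag corners in the tag's own frame (z = 0 plane), in the order required
# by SOLVEPNP_IPPE_SQUARE: top-left, top-right, bottom-right, bottom-left
object_points = np.array(
    [[-half, half, 0], [half, half, 0], [half, -half, 0], [-half, -half, 0]],
    dtype=np.float64,
)
image_points = np.array(detection_corners, dtype=np.float64)  # 4x2 pixel coords, same order
camera_matrix = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float64)

# rvec/tvec give the tag's pose in the camera's (EDN-style) frame
ok, rvec, tvec = cv2.solvePnP(
    object_points, image_points, camera_matrix, None, flags=cv2.SOLVEPNP_IPPE_SQUARE
)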

I had to make two mysterious rotations of the robot in the code that I can’t defend except that they work. Anyone who cares to take a look might share some insight. The extra rotations might be related to how the camera faces the tag; I have seen mention in Internet articles that, in reality, there are negative distances and upside-down images.

One WPILib method is faulty, which cost significant debugging time. I opened an issue in the wpilibsuite GitHub organization, and thanks to @Quake (I think) for a good fix that I use in my code.

My code is a bit messy, is part of a larger testing project, and is currently at:

The file has been updated to correct errors in drawing the 3D orientation box in front of the tag.

Another post extends the WPILib Java example to include the robot pose discussed herein and an orientation box on the face of the tag: WPILib robot pose example.
