Pose estimation without gyro feedback

With the introduction of AprilTags, I thought it would be possible to figure out the robot's location by triangulating its position from the known locations of the tags. However, I can't seem to solve it, since the angle I measure to each tag is relative to the robot's rotation, which I don't know. Is it possible to do pose estimation with two AprilTags without feedback from a gyro? My idea comes from trying to find the intersection between two lines, but I'm lost on how to convert from robot-centric to field-centric without a sensor.


I'm not sure I completely understand what you're asking. Could you maybe draw it?


Yep, it is. You will have to use some form of pose estimation technique. Probably the easiest is to take the camera-to-target 3D transform for each of the 2 tags (most likely provided by your AprilTag library) and use trilateration (see Trilateration - Wikipedia) off of that (think law of cosines). Another strategy, which we are using, is to run the OpenCV solvePnP algorithm on all corners of all visible AprilTags; this gives you both your position and your rotation, though it is a lot more difficult to implement. Feel free to message me with any questions.
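To make the trilateration route concrete, here is a minimal 2D sketch of the circle-intersection math, assuming you have already reduced each camera-to-target transform to a planar distance. The function name and coordinate conventions are illustrative, not from any particular library:

```python
import math

def trilaterate_2d(p1, r1, p2, r2):
    """Intersect two circles centered on known tag positions.

    p1, p2 -- (x, y) field coordinates of the two tags
    r1, r2 -- measured planar distances from the robot to each tag
    Returns both candidate robot positions, or None if the circles
    miss entirely (e.g. because of measurement noise).
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d = math.hypot(dx, dy)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return None  # degenerate geometry or inconsistent distances
    # Distance along p1->p2 to the chord joining the two intersections
    a = (r1 * r1 - r2 * r2 + d * d) / (2 * d)
    # Half-length of that chord
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))
    mx, my = p1[0] + a * dx / d, p1[1] + a * dy / d
    # The two candidates sit symmetrically on either side of p1->p2
    return ((mx + h * dy / d, my - h * dx / d),
            (mx - h * dy / d, my + h * dx / d))
```

Even with perfect distances the two circles intersect at two points, so this alone leaves an ambiguity; in practice one candidate often lands off the field and can be thrown out.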

You only need to see 1 AprilTag to get a complete pose. That is kind of the point. Of course, the error on the computed pose depends on a lot of things, especially how close the tag is to the robot. If you see 2 tags in the same image, you can get a much better solution.

In the simple case, the software finds the 4 corners and the ID of the tag in the image. The ID should tell you the physical location of that tag on the field (there would need to be a provided lookup table with this info). 4 image points with 4 matching physical locations are enough to define the pose. Look up Perspective-n-Point or solvePnP (a routine in OpenCV). Note that if you have 2 tags in an image, solvePnP() can use all of them at once and give a better solution.
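As a rough sketch of what that looks like in Python with OpenCV (the tag size, corner pixels, and calibration numbers below are placeholders to swap for your own):

```python
import cv2
import numpy as np

TAG_SIZE = 0.1524  # tag edge length in meters (example value)
s = TAG_SIZE / 2

# 3D corner positions in the tag's own frame (z = 0 plane), in the
# order SOLVEPNP_IPPE_SQUARE expects: TL, TR, BR, BL.
object_points = np.array([
    [-s,  s, 0.0],
    [ s,  s, 0.0],
    [ s, -s, 0.0],
    [-s, -s, 0.0],
])

# The 4 detected corner pixels from your AprilTag library, matched
# to the order above (example values).
image_points = np.array([
    [310.0, 210.0],
    [370.0, 212.0],
    [368.0, 270.0],
    [308.0, 268.0],
])

# Intrinsics from your camera calibration (example values).
camera_matrix = np.array([[700.0,   0.0, 320.0],
                          [  0.0, 700.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)

# rvec/tvec give the tag's pose in the camera frame; invert that and
# compose it with the tag's known field location (from the lookup
# table) to get the camera's, and hence the robot's, field pose.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_IPPE_SQUARE)
```

With 2 tags, stack all 8 object/image point pairs and use a general-purpose flag (e.g. the default iterative solver), since SOLVEPNP_IPPE_SQUARE only handles a single square.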

The one caveat is that with only 4 points that are in a plane, there are possibly 2 solutions to the pose. Picking which is correct could be done in a few ways.
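If you want to see both candidates explicitly, OpenCV's solvePnPGeneric returns every valid solution along with its reprojection error, which feeds directly into the disambiguation discussed below (reusing the placeholder inputs from the sketch above):

```python
# Returns all geometrically valid poses plus a per-solution
# reprojection error (lower is better).
n, rvecs, tvecs, errors = cv2.solvePnPGeneric(
    object_points, image_points, camera_matrix, dist_coeffs,
    flags=cv2.SOLVEPNP_IPPE_SQUARE)

for rvec, tvec, err in zip(rvecs, tvecs, errors.ravel()):
    print(f"candidate tvec={tvec.ravel()}, reprojection error={err:.3f}")
```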


If anyone is curious, I have written a wicked fast trilateration algorithm that uses all the linear algebra tricks. I'll push it to GitHub and post it here if people would like.


How could you pick which is the correct one? Is it the one that is closest to the current position from odometry?

I think that is a bit of a research question. Depending on the calculation method, you will get some measure of the quality of each solution, so if there were a large difference in quality, that would be a good filter. Otherwise, yes, closest to the current estimated position would be a reasonable choice.
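As a hedged sketch of that heuristic, assuming you have the candidate positions and reprojection errors (e.g. from solvePnPGeneric above) plus a current odometry estimate; the names and the quality threshold here are made up for illustration:

```python
import numpy as np

def pick_pose(candidates, errors, odometry_pos, quality_ratio=2.0):
    """Choose between ambiguous PnP solutions.

    candidates   -- candidate robot field positions, each (x, y)
    errors       -- matching reprojection errors (lower is better)
    odometry_pos -- current (x, y) estimate from wheel odometry
    """
    if len(candidates) == 1:
        return candidates[0]
    order = sorted(range(len(candidates)), key=lambda i: errors[i])
    best, runner_up = order[0], order[1]
    # If one solution is clearly better, trust the measurement alone
    if errors[runner_up] > quality_ratio * errors[best]:
        return candidates[best]
    # Otherwise fall back to whichever is closest to odometry
    return min(candidates,
               key=lambda p: np.hypot(p[0] - odometry_pos[0],
                                      p[1] - odometry_pos[1]))
```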

With the advent of AprilTags, check out this post our team put out a few days ago. As far as we know, this is the only published code that allows for full pose estimation using just the AprilTags. Currently, it can only find the robot's translation, but we hope to add a rotation feature to calibrate your gyro.

It has been tested on our KOP drivetrain (no gyroscope) and is able to localize the robot fairly accurately but could be improved with more filtering.

Something that might be of interest:

Photon is currently working through some classes to provide heuristics to disambiguate observations. The exact “best” answer will depend a lot on how much you trust your wheel odometry, how tags are spread around the field, how your cameras are set up on your robot, etc., but hopefully a couple of the common answers will be supported in libraries.