Hey,
I’m currently working on AprilTag pose estimation with multiple tags in C++, on a Beelink coprocessor with an Arducam.
Last year I achieved pose estimation only on the single best-detected tag, and it didn’t work so well.
How can I do pose estimation based on 2 or more AprilTags to get more accurate results?
I have heard of solvePnP, but I don’t know much about it and I haven’t used it.
Basically, you need to build 2 arrays: one holds the detected corners of the AprilTags (in image/pixel coordinates), the other holds the real-world field coordinates of those same tag corners. The entries must match up one-to-one between the 2 arrays. Then pass them to solvePnP, along with your camera calibration, etc, and it gives you back the results.
Welllll, almost. Translating the returned “rvec” and “tvec” into useful values can be a bit mind-bending.
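A minimal sketch of that array-building step in Python, to make the pairing concrete. Everything here is hypothetical: the tag layout, corner order, detection pixels, and the `tag_corners_field` helper are placeholders, and the real field coordinates would come from your game's field layout. It just shows that each detected pixel corner must line up with the field coordinate of the same physical corner before the combined lists go to `cv2.solvePnP`:

```python
# Sketch: pairing AprilTag corners with field coordinates for solvePnP.
# All tag poses and detections below are made-up placeholders.

TAG_SIZE = 0.1651  # tag edge length in meters (example value)

def tag_corners_field(cx, cy, cz):
    """Field-frame corners of a tag facing the camera, centered at (cx, cy, cz).
    The order here (bl, br, tr, tl) MUST match your detector's corner order."""
    h = TAG_SIZE / 2
    return [
        (cx, cy - h, cz - h),  # bottom-left
        (cx, cy + h, cz - h),  # bottom-right
        (cx, cy + h, cz + h),  # top-right
        (cx, cy - h, cz + h),  # top-left
    ]

# Hypothetical detections: tag id -> 4 pixel corners, same order as above.
detections = {
    1: [(310.2, 260.1), (350.8, 259.7), (351.0, 219.4), (309.9, 220.0)],
    2: [(420.5, 258.3), (460.9, 257.8), (461.2, 217.6), (420.1, 218.2)],
}

# Hypothetical tag centers on the field (meters).
tag_centers = {1: (4.0, 1.0, 1.2), 2: (4.0, 1.8, 1.2)}

object_points, image_points = [], []
for tag_id, corners_px in sorted(detections.items()):
    object_points.extend(tag_corners_field(*tag_centers[tag_id]))
    image_points.extend(corners_px)

# With OpenCV installed you would then call something like
# (camera_matrix and dist_coeffs come from your calibration):
#   ok, rvec, tvec = cv2.solvePnP(
#       np.array(object_points, np.float32),
#       np.array(image_points, np.float32),
#       camera_matrix, dist_coeffs)

print(len(object_points), len(image_points))  # 8 8 -> 4 corners per tag
```

With 2 tags you get 8 correspondences instead of 4, which is what makes the multi-tag solve more stable than the single-best-tag approach.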
I would recommend using PhotonVision with your camera for pose estimation; it served us well last season for our auto-scoring. We would occasionally lose our pipeline, though, so we re-uploaded it before every match just to be sure.
Hey, how can I install PhotonLib on a coprocessor (Windows)? I get errors when I run the PhotonVision jar, and I don’t know how to download only PhotonLib and then include it.
I recommend starting here and reading this page in its entirety: Installation & Setup - PhotonVision Docs. Specifically, read the Windows installation section and follow the steps there for a Windows coprocessor. Please make note of, and be sure to read, the following:
The OP could give this calibrator a try. It appears to me to be a twin of CALIBDB, but better. I converted the CALIBDB author’s GitHub version of pose calib to Java, and it’s destined for PV integration. The standalone program works well for now.
We had 3 cameras looking at different angles, using Python libraries including OpenCV, apriltags, etc. We used multi-camera calibration to get the coordinates working together, i.e. 3D coordinate transformation using 3x3 rotation matrices. We wrote our own custom Python code running on a mini-PC as the coprocessor. If you want to look at our code, here’s a link to our GitHub repo for vision:
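A sketch of what that per-camera transform looks like, with entirely hypothetical numbers: each camera's extrinsic calibration gives a 3x3 rotation matrix R and a translation t that map camera-frame points into a shared robot/field frame via p_out = R·p + t. The mounting angle and offset below are made up for illustration:

```python
import math

def transform(R, t, p):
    """Apply p_out = R @ p + t using plain lists (3x3 R, length-3 t and p)."""
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

# Hypothetical camera yawed 90 degrees about the z (up) axis,
# mounted 0.3 m forward of robot center.
theta = math.pi / 2
R = [
    [math.cos(theta), -math.sin(theta), 0.0],
    [math.sin(theta),  math.cos(theta), 0.0],
    [0.0,              0.0,             1.0],
]
t = [0.3, 0.0, 0.0]

p_cam = [1.0, 0.0, 0.0]            # a point 1 m along the camera's x axis
p_robot = transform(R, t, p_cam)   # same point in the robot frame
print([round(v, 6) for v in p_robot])
```

With one such (R, t) per camera, detections from all 3 cameras land in one common frame and can be fused.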
I managed to download and run it. Do I need to write code for the coprocessor, or are all the values already published to NetworkTables so I can just read them from there in my robot code?