Ideal AprilTag vision system

I am currently developing an AprilTag vision system for pose estimation that I want to release and make open source soon. Before I put the finishing touches on it, I wanted to see what features and capabilities the community wants out of an ideal vision system. If you have any features you want the system to have, please post them here.

Some details on the vision system so far:
It’s written in Rust
It uses OpenCV for AprilTag detection and pose estimation
It is based on 6328’s vision system, Northstar

(Name suggestions also welcome)


As someone who is not a programmer, and assuming that this is Photonvision-based:

  1. Allow easy access to the field tag location json
  2. Make the only required parameters for setup be an IP address for Photonvision and whatever minimum setup code there is
  3. Make it possible to run 1 coprocessor easily, or multiple with some additional setup
  4. Allow rejection or weighting of spurious tags based on ambiguity, ID, and distance
  5. Make it easy to modify the core libraries as WPILib and PhotonVision change.
  6. Simulation integration
  7. Multi tag PnP
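Point 4 above (rejecting or weighting spurious tags) could look something like the sketch below. All struct names, fields, and thresholds here are illustrative assumptions, not anything from the actual project:

```rust
// Hypothetical per-tag filter, assuming the detector exposes an ambiguity
// ratio, the tag ID, and the camera-to-tag distance in meters.

struct TagObservation {
    id: u32,
    ambiguity: f64,  // pose-ambiguity ratio from PnP (lower is better)
    distance_m: f64, // camera-to-tag distance in meters
}

struct FilterConfig {
    max_ambiguity: f64,
    max_distance_m: f64,
    allowed_ids: Vec<u32>,
}

/// Returns a weight in (0.0, 1.0] for accepted tags, or None to reject.
fn weigh_tag(obs: &TagObservation, cfg: &FilterConfig) -> Option<f64> {
    if !cfg.allowed_ids.contains(&obs.id) {
        return None; // unexpected ID: likely a misdetection
    }
    if obs.ambiguity > cfg.max_ambiguity || obs.distance_m > cfg.max_distance_m {
        return None; // too ambiguous or too far to trust
    }
    // Down-weight distant tags: confidence falls off roughly with the
    // square of distance, mirroring how pixel error grows with range.
    Some(1.0 / (1.0 + obs.distance_m * obs.distance_m))
}
```

The weight could then feed directly into a pose estimator's per-measurement standard deviations rather than a hard accept/reject.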
  1. Real-time camera-to-robot position transform back to your system from the roboRIO via NetworkTables.
  2. Pose output also formatted for direct input to AdvantageScope 3D Field.
  3. Make sure we don’t care if it’s written in Rust.
  4. User’s Guide Documentation
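For point 2 (pose output formatted for AdvantageScope's 3D Field), a Pose3d can be published as a flat double array of translation plus quaternion; check the AdvantageScope docs for the exact layout your version expects. The NetworkTables publishing itself is omitted, and this struct is just an assumed shape:

```rust
// Hedged sketch: flatten a 3D pose into the numeric-array layout that
// AdvantageScope's 3D Field can read: [x, y, z, qw, qx, qy, qz].
// Verify the ordering against the AdvantageScope documentation.

struct Pose3d {
    x: f64, y: f64, z: f64,              // translation, meters
    qw: f64, qx: f64, qy: f64, qz: f64,  // rotation as a quaternion
}

fn to_advantagescope_array(p: &Pose3d) -> [f64; 7] {
    [p.x, p.y, p.z, p.qw, p.qx, p.qy, p.qz]
}
```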

Make it compatible with both FTC and FRC. Thus it must be able to publish targeting data via Ethernet to NT, and the same data over I2C, either as a serial byte stream or, better yet, as JSON.
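A JSON payload that works for both transports could be as simple as the sketch below. JSON is hand-rolled here to keep the example dependency-free; a real implementation would likely use serde. The field names are assumptions:

```rust
// Illustrative targeting-data payload that could go out over NT (Ethernet)
// for FRC or over I2C for FTC. Struct and field names are hypothetical.

struct Target {
    id: u32,      // AprilTag ID
    tx_deg: f64,  // horizontal offset to target, degrees
    ty_deg: f64,  // vertical offset to target, degrees
}

fn to_json(t: &Target) -> String {
    // Fixed two-decimal formatting keeps I2C message sizes predictable.
    format!(
        r#"{{"id":{},"tx_deg":{:.2},"ty_deg":{:.2}}}"#,
        t.id, t.tx_deg, t.ty_deg
    )
}
```

The same string could be written to an NT string topic or chunked into I2C frames, so both ecosystems parse one format.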