Limelight 3 AprilTags

Does anyone have a good tutorial on how to incorporate AprilTags into your odometry and Pose2d?
Any response is appreciated, thanks :blush:

My advice, for any technical challenge like this, would be to first describe what you want the robot to do in words. In this case it would look something like this:

  • I would like the robot to automatically align to the AprilTag targets.

From there, I would break this down into the individual steps in the process, because the current description might leave you scratching your head on where to start.

  • First, I want the robot to see the AprilTags and get that data.
  • Then, I want the robot to generate a trajectory to the closest tag in view when I press a button.
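
As a quick taste of what the “closest tag” step can look like, here is a minimal sketch using WPILib’s AprilTagFieldLayout; the NearestTag class name is purely illustrative:

```java
import java.util.Comparator;
import java.util.Optional;

import edu.wpi.first.apriltag.AprilTagFieldLayout;
import edu.wpi.first.math.geometry.Pose2d;

public class NearestTag {
  /** Returns the 2D pose of the tag closest to the robot, if the layout has any tags. */
  public static Optional<Pose2d> nearestTagPose(AprilTagFieldLayout layout, Pose2d robotPose) {
    return layout.getTags().stream()
        .map(tag -> tag.pose.toPose2d())
        .min(Comparator.comparingDouble(
            tagPose -> tagPose.getTranslation().getDistance(robotPose.getTranslation())));
  }
}
```

You could bind that to a button: when pressed, look up the nearest tag pose and hand it to your trajectory generation.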

Above, you can start to imagine what kind of steps you will have to take to get the end result. Repeat this process until you get to steps that you either feel comfortable implementing OR you are able to find the resources you need to learn how to implement them. Example expansion below:

  • Access the AprilTag data from the Limelight on the robot, publish it to SmartDashboard, and use it in code (see the sketch after this list)
  • Generate a Trajectory (How do I do that?!)
  • Follow that trajectory (How do I do THAT!?)
  • Report when done, then do something else, like moving the arm to a scoring position
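
For that first item, here’s a rough sketch of pulling AprilTag data from the Limelight over NetworkTables and publishing it to SmartDashboard. The "tv", "tid", and "botpose" keys are the ones in the Limelight docs; verify them against the docs for your Limelight OS version:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class LimelightReader {
  private final NetworkTable table =
      NetworkTableInstance.getDefault().getTable("limelight");

  /** Call this periodically to pull the latest AprilTag data and publish it. */
  public void updateDashboard() {
    // tv = 1.0 when the Limelight sees a valid target, tid = fiducial ID of the primary tag
    double tv = table.getEntry("tv").getDouble(0.0);
    double tid = table.getEntry("tid").getDouble(-1.0);
    // botpose = [x, y, z, roll, pitch, yaw] field-space robot pose estimate
    double[] botpose = table.getEntry("botpose").getDoubleArray(new double[6]);

    SmartDashboard.putBoolean("Limelight/HasTarget", tv >= 1.0);
    SmartDashboard.putNumber("Limelight/TagID", tid);
    SmartDashboard.putNumberArray("Limelight/Botpose", botpose);
  }
}
```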

… expand to final points …

  • Create an interface that takes in Limelight pose data (Finally, something I am comfortable with!)
  • Install PathPlanner and read the docs on its GitHub to learn how to make paths in code
  • Store AprilTag poses in code
  • Create a method that makes a trajectory from the current robot pose to the end pose
  • Send that trajectory to a path-following command (e.g., PathPlanner’s PPSwerveControllerCommand); both are sketched after this list
  • Report when done
  • Put it all in a sequential command group to move the arm when the path is finished
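
Putting those last few items together, here is a hedged sketch of generating a path on the fly with PathPlannerLib’s 2023 API and sequencing an arm move after it. The `drivetrain` and `arm` subsystems and their methods (getPose, getKinematics, setModuleStates, moveToScoringPosition) are placeholders for whatever your own project actually has:

```java
import com.pathplanner.lib.PathConstraints;
import com.pathplanner.lib.PathPlanner;
import com.pathplanner.lib.PathPlannerTrajectory;
import com.pathplanner.lib.PathPoint;
import com.pathplanner.lib.commands.PPSwerveControllerCommand;

import edu.wpi.first.math.controller.PIDController;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.wpilibj2.command.Command;
import edu.wpi.first.wpilibj2.command.InstantCommand;

// Lives in RobotContainer (or wherever your subsystems are fields).
public Command driveToTag(Pose2d currentPose, Pose2d targetPose) {
  // Direction of travel from start to goal, used as the heading of both path points.
  Rotation2d heading = targetPose.getTranslation()
      .minus(currentPose.getTranslation()).getAngle();

  // On-the-fly path from where the robot is to the pose in front of the tag.
  PathPlannerTrajectory trajectory = PathPlanner.generatePath(
      new PathConstraints(3.0, 2.0), // max velocity (m/s), max acceleration (m/s^2)
      new PathPoint(currentPose.getTranslation(), heading, currentPose.getRotation()),
      new PathPoint(targetPose.getTranslation(), heading, targetPose.getRotation()));

  // Follow the trajectory, then move the arm once the path reports done.
  return new PPSwerveControllerCommand(
          trajectory,
          drivetrain::getPose,           // Pose2d supplier (odometry / pose estimator)
          drivetrain.getKinematics(),    // SwerveDriveKinematics
          new PIDController(1.0, 0, 0),  // X position controller
          new PIDController(1.0, 0, 0),  // Y position controller
          new PIDController(1.0, 0, 0),  // rotation controller
          drivetrain::setModuleStates,   // SwerveModuleState[] consumer
          drivetrain)
      .andThen(new InstantCommand(arm::moveToScoringPosition, arm));
}
```

Generating the trajectory at button-press time, rather than loading a premade path file, is what lets the robot drive to whichever tag happens to be closest.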

That list above will still seem scary to someone new to vision tracking and trajectory generation. However, there will always be things that we programmers don’t know how to do! Google is your best friend. Also, the majority of the items in my “more detailed” list are covered by the setup guides for both PathPlanner and the Limelight.

PhotonVision, a camera wrapper, and poseEstimator.addVisionMeasurement(). It’s a bit more complicated than that, but rely on PhotonVision’s documentation and copy an example of the integration you want, like this one: GitHub - NE-Robotics/Java-Framework: An opensource FRC java framework to make blending simulated & real robot code a breeze. As every programmer knows, you will always get the bot last, always :)
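
Here’s a minimal sketch of that flow, assuming PhotonLib’s 2023 PhotonPoseEstimator API and WPILib’s SwerveDrivePoseEstimator; the camera name and robot-to-camera transform are placeholders you’d replace with your own:

```java
import java.io.IOException;
import java.util.Optional;

import org.photonvision.EstimatedRobotPose;
import org.photonvision.PhotonCamera;
import org.photonvision.PhotonPoseEstimator;
import org.photonvision.PhotonPoseEstimator.PoseStrategy;

import edu.wpi.first.apriltag.AprilTagFieldLayout;
import edu.wpi.first.apriltag.AprilTagFields;
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Transform3d;

public class VisionFusion {
  private final PhotonCamera camera = new PhotonCamera("frontCam"); // placeholder name
  private final PhotonPoseEstimator photonEstimator;
  private final SwerveDrivePoseEstimator poseEstimator; // built elsewhere from your swerve config

  public VisionFusion(SwerveDrivePoseEstimator poseEstimator, Transform3d robotToCamera)
      throws IOException {
    this.poseEstimator = poseEstimator;
    // Load the official 2023 field layout of AprilTag poses.
    AprilTagFieldLayout layout =
        AprilTagFieldLayout.loadFromResource(AprilTagFields.k2023ChargedUp.m_resourceFile);
    photonEstimator =
        new PhotonPoseEstimator(layout, PoseStrategy.LOWEST_AMBIGUITY, camera, robotToCamera);
  }

  /** Call from a periodic method: fuse any new vision estimate into drive odometry. */
  public void update() {
    Optional<EstimatedRobotPose> estimate = photonEstimator.update();
    estimate.ifPresent(est ->
        poseEstimator.addVisionMeasurement(est.estimatedPose.toPose2d(), est.timestampSeconds));
  }
}
```

Call update() every loop, and poseEstimator.getEstimatedPosition() then gives you the fused Pose2d everywhere else in your code.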

From there you have your robot pose. How you handle it from there is up to you, but you can basically treat everything like an auto if you want.