WPILIB AprilTagDetector sample code

I did a quick scan of the released WPILib and found several AprilTag-related classes, but the Javadoc is practically useless: the descriptions basically repeat the method names without explaining what they actually do or what their parameters mean. I looked for sample code on how to use these classes and methods but found none. Can somebody point us to sample code that calls these classes to create an AprilTag pipeline?


You can reference PhotonVision. Here's the AprilTag detector call we actually make; basically, you just give it a grayscale image: photonvision/AprilTagDetectionPipe.java at 86b9d4b037faf9c60bf5a66bf5860bc84168a84f · PhotonVision/photonvision · GitHub


I read about PhotonVision. It seems PhotonLib can run on a PC, the robot, or an off-board coprocessor such as a Raspberry Pi or Limelight. Is that correct? I assume the link you provided is code for running the pipeline on the robot. Is that correct? What does the performance look like? Is m_detector.detect(in.getMat()) a synchronous call? If so, should I run this pipeline on a separate thread?

PhotonVision is designed to run on a coprocessor. Performance on the Rio is definitely going to be a lot worse than performance on a coprocessor.

Yes, the detect() call is synchronous and the pipeline should be run on a separate thread.

We don’t have any examples yet for WPILib; the classes were written only a couple of weeks ago. Essentially, the classes are thin wrappers around the AprilRobotics C implementation.
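Since detect() is synchronous, one common pattern is to run it on a daemon thread and hand the latest result to the main robot loop through an AtomicReference. Below is a minimal, stdlib-only sketch of that handoff; the Detection record and detect() stub are hypothetical stand-ins for the real WPILib AprilTagDetection type and detector call:

```java
import java.util.concurrent.atomic.AtomicReference;

public class DetectorThreadSketch {
    // Stand-in for an AprilTag detection result; the real pipeline would
    // use AprilTagDetection[] returned by detector.detect(grayMat).
    record Detection(int id) {}

    // Stand-in for the synchronous, blocking detect() call.
    static Detection detect() {
        return new Detection(3);
    }

    public static void main(String[] args) throws InterruptedException {
        // The vision thread writes the newest result; the robot loop reads it.
        AtomicReference<Detection> latest = new AtomicReference<>();

        Thread visionThread = new Thread(() -> {
            while (!Thread.interrupted()) {
                latest.set(detect());  // blocking call stays off the main loop
                try {
                    Thread.sleep(5);  // stand-in for camera frame pacing
                } catch (InterruptedException e) {
                    return;  // exit cleanly when the thread is interrupted
                }
            }
        });
        visionThread.setDaemon(true);  // don't keep the JVM alive on exit
        visionThread.start();

        // Simulated robot loop: use whatever the vision thread last produced.
        Thread.sleep(50);
        Detection d = latest.get();
        System.out.println(d == null ? "no detection yet" : "saw tag " + d.id());
    }
}
```

The main loop never blocks on the camera this way; it just reads the most recent detection, which is usually what robot code wants anyway.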

Thanks for the quick response. It sounds like Limelight hasn’t released a downloadable AprilTag pipeline yet, and prepping a Raspberry Pi may take too long. But we can certainly play with running AprilTag detection on the Rio while waiting for Limelight’s release.

I am experimenting with running AprilTagDetector on the Rio, but the code is trying to import PhotonVision. I assume I need to install PhotonVision as a vendor lib. If so, where can I find the JSON for it, or where can I download the offline package?

Limelight released the Limelight 3 yesterday, and older models are expected to support AprilTags too. In the meantime, you can install PhotonVision on a Limelight.

I finally found the PhotonVision JSON file and have installed it as a vendor lib. I checked and it’s now in my vendor lib list, but I still get an unresolved import error.

The code linked above is literally part of the PhotonVision application, and won’t be in a vendor dep. It can serve as a reference for how to use the WPILib classes, but it’s not something you can copy and paste into robot code. Essentially, the way to use the classes in a robot program thread is going to be something like (untested):

// set up USB camera capture
CvSink cvSink = CameraServer.getVideo();

// set up AprilTag detector
AprilTagDetector detector = new AprilTagDetector();
AprilTagDetector.Config config = new AprilTagDetector.Config();
// set config parameters, e.g. config.blah = 5;
detector.setConfig(config);
detector.addFamily("tag16h5");

// set up pose estimator
AprilTagPoseEstimator.Config poseEstConfig = new AprilTagPoseEstimator.Config(...);
AprilTagPoseEstimator estimator = new AprilTagPoseEstimator(poseEstConfig);

Mat mat = new Mat();
Mat grayMat = new Mat();

while (!Thread.interrupted()) {
  // grab image from camera
  long time = cvSink.grabFrame(mat);
  if (time == 0) {
    continue;  // error getting image
  }

  // convert image to grayscale
  Imgproc.cvtColor(mat, grayMat, Imgproc.COLOR_BGR2GRAY);

  // run detection
  for (AprilTagDetection detection : detector.detect(grayMat)) {
    // filter by property

    // run pose estimator
    Transform3d pose = estimator.estimate(detection);
  }
}

You can start from the WPILib intermediate vision example and add the AprilTag bits.


So, should I remove the PhotonVision vendor lib then?

Correct, it’s not needed for what you’re doing.

To clarify on PhotonVision: the main PhotonVision software runs on a coprocessor, like a Raspberry Pi or Limelight. You can flash a Limelight with PhotonVision pretty easily. PhotonLib runs on your robot and grabs the vision data, like the tag position, for you. It’s similar to the NetworkTables interface Limelight has, but it comes with some utilities to make vision processing easier.


I was going to start a separate thread, then saw this one. Piling on to Peter’s post, here is a complete example adapted from the Intermediate Vision example (except that, unlike Peter’s code, it doesn’t do pose estimation). Create a template Java project and paste this over the generated Robot.java:

// Copyright (c) FIRST and other WPILib contributors.
// Open Source Software; you can modify and/or share it under the terms of
// the WPILib BSD license file in the root directory of this project.

package frc.robot;

import java.util.ArrayList;

import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

import edu.wpi.first.apriltag.AprilTagDetection;
import edu.wpi.first.apriltag.AprilTagDetector;
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.cscore.CvSink;
import edu.wpi.first.cscore.CvSource;
import edu.wpi.first.cscore.UsbCamera;
import edu.wpi.first.wpilibj.TimedRobot;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

/**
 * The VM is configured to automatically run this class, and to call the functions corresponding to
 * each mode, as described in the TimedRobot documentation. If you change the name of this class or
 * the package after creating this project, you must also update the build.gradle file in the
 * project.
 */
public class Robot extends TimedRobot {
  /**
   * This function is run when the robot is first started up and should be used for any
   * initialization code.
   */
  @Override
  public void robotInit() {
    Thread visionThread = new Thread(() -> apriltagVisionThreadProc());
    visionThread.setDaemon(true);
    visionThread.start();
  }

  @Override
  public void robotPeriodic() {}

  @Override
  public void autonomousInit() {}

  @Override
  public void autonomousPeriodic() {}

  @Override
  public void teleopInit() {}

  @Override
  public void teleopPeriodic() {}

  @Override
  public void disabledInit() {}

  @Override
  public void disabledPeriodic() {}

  @Override
  public void testInit() {}

  @Override
  public void testPeriodic() {}

  @Override
  public void simulationInit() {}

  @Override
  public void simulationPeriodic() {}

  void apriltagVisionThreadProc() {
    AprilTagDetector detector = new AprilTagDetector();
    detector.addFamily("tag16h5", 0);

    // Get the UsbCamera from CameraServer
    UsbCamera camera = CameraServer.startAutomaticCapture();
    // Set the resolution
    camera.setResolution(640, 480);

    // Get a CvSink. This will capture Mats from the camera
    CvSink cvSink = CameraServer.getVideo();
    // Setup a CvSource. This will send images back to the Dashboard
    CvSource outputStream = CameraServer.putVideo("detect", 640, 480);

    // Mats are very memory expensive. Let's reuse these Mats.
    Mat mat = new Mat();
    Mat grayMat = new Mat();
    ArrayList<Integer> tags = new ArrayList<>();

    Scalar outlineColor = new Scalar(0, 255, 0);
    Scalar xColor = new Scalar(0, 0, 255);

    // This cannot be 'true'. The program will never exit if it is. This
    // lets the robot stop this thread when restarting robot code or
    // deploying.
    while (!Thread.interrupted()) {
      // Tell the CvSink to grab a frame from the camera and put it
      // in the source mat.  If there is an error notify the output.
      if (cvSink.grabFrame(mat) == 0) {
        // Send the output the error.
        outputStream.notifyError(cvSink.getError());
        // skip the rest of the current iteration
        continue;
      }

      Imgproc.cvtColor(mat, grayMat, Imgproc.COLOR_RGB2GRAY);

      AprilTagDetection[] detections = detector.detect(grayMat);
      tags.clear();
      for (AprilTagDetection detection : detections) {
        // remember we saw this tag
        tags.add(detection.getId());

        // draw lines around the tag
        for (var i = 0; i <= 3; i++) {
          var j = (i + 1) % 4;
          var pt1 = new Point(detection.getCornerX(i), detection.getCornerY(i));
          var pt2 = new Point(detection.getCornerX(j), detection.getCornerY(j));
          Imgproc.line(mat, pt1, pt2, outlineColor, 2);
        }

        // mark the center of the tag with an X
        var cx = detection.getCenterX();
        var cy = detection.getCenterY();
        var ll = 10;
        Imgproc.line(mat, new Point(cx - ll, cy), new Point(cx + ll, cy), xColor, 2);
        Imgproc.line(mat, new Point(cx, cy - ll), new Point(cx, cy + ll), xColor, 2);
        // label the tag with its ID
        Imgproc.putText(mat, Integer.toString(detection.getId()), new Point(cx + ll, cy), Imgproc.FONT_HERSHEY_SIMPLEX, 1, xColor, 3);
      }

      SmartDashboard.putString("tag", tags.toString());

      // Give the output stream a new image to display
      outputStream.putFrame(mat);
    }

    detector.close();
  }
}

Nice! I would potentially consider adding hamming and detection margin checks as well; 16h5 has lots of false positive results in my experience.
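For a concrete starting point, a filter along those lines could look like the helper below. AprilTagDetection does expose getHamming() and getDecisionMargin() accessors, but the margin cutoff here (35) is only an illustrative guess you would tune on real footage, and the helper is written over plain values so the cutoff logic is easy to test in isolation:

```java
public class DetectionFilter {
    // Reject detections that needed error-bit correction (hamming > 0)
    // or that have a low decision margin; tag16h5 has few data bits and
    // produces many false positives without filtering.
    static boolean isLikelyRealTag(int hamming, float decisionMargin) {
        return hamming == 0 && decisionMargin > 35.0f;
    }

    public static void main(String[] args) {
        System.out.println(isLikelyRealTag(0, 60.0f));  // clean, confident detection
        System.out.println(isLikelyRealTag(1, 60.0f));  // corrected error bits: reject
        System.out.println(isLikelyRealTag(0, 10.0f));  // weak decision margin: reject
    }
}
```

In the detection loop above you would call this with detection.getHamming() and detection.getDecisionMargin() before drawing or pose-estimating a tag.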

This is great! With Matt’s recommended changes, I think we could include this in the next WPILib release. I know pose estimation needs camera parameters; does anyone have them for a LifeCam?

I do! At 1280, if that’s helpful? I can recalibrate at any resolution you’d like, though.

The above code is 640x480, so let’s go with that. Thanks!

Oh sweet, I already have that! LifeCam 640x480: photonvision/lifecam480p.json at master · PhotonVision/photonvision · GitHub

Deserialized into a JsonMat:

  "resolution": {
    "width": 640.0,
    "height": 480.0
  },
  "cameraIntrinsics": {
    "rows": 3,
    "cols": 3,
    "type": 6,
    "data": [...]
  },
  "cameraExtrinsics": {
    "rows": 1,
    "cols": 5,
    "type": 6,
    "data": [...]
  }

Hmm, what the AprilTag pose estimator wants is the fx, fy, cx, and cy parameters. What method does PV use to derive these from the camera calibration data?
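If those intrinsics are the usual OpenCV 3x3 camera matrix stored row-major, then fx, fy, cx, and cy sit at fixed positions in it: [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]. A small sketch of pulling them out, assuming that layout (the sample numbers below are made up, not a real LifeCam calibration):

```java
public class IntrinsicsSketch {
    // Row-major 3x3 camera matrix: [fx, 0, cx, 0, fy, cy, 0, 0, 1].
    // Returns {fx, fy, cx, cy} in that order.
    static double[] extractFxFyCxCy(double[] cameraMatrix) {
        double fx = cameraMatrix[0];
        double cx = cameraMatrix[2];
        double fy = cameraMatrix[4];
        double cy = cameraMatrix[5];
        return new double[] {fx, fy, cx, cy};
    }

    public static void main(String[] args) {
        // Made-up example values, NOT a real calibration.
        double[] m = {700.0, 0.0, 320.0, 0.0, 700.0, 240.0, 0.0, 0.0, 1.0};
        double[] p = extractFxFyCxCy(m);
        System.out.println("fx=" + p[0] + " fy=" + p[1] + " cx=" + p[2] + " cy=" + p[3]);
    }
}
```

Those four values, plus the physical tag size, are what you would then feed into AprilTagPoseEstimator.Config.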

Hamming check is in there; it’s not taking anything with any error bits.

Has anyone come up with a quick way to determine the camera parameters? The PhotonVision calibrator works, but you need to have PhotonVision, and know where in the intrinsics to pull fx, fy, cx, cy from.

If no one has a simple procedure, I could put one together, probably involving a ruler.
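For what it's worth, the ruler approach falls out of the pinhole model: if an object of known width W at distance D spans P pixels, then the focal length is roughly P * D / W, and cx/cy can be approximated as the image center. A sketch of that arithmetic (a rough approximation that ignores lens distortion; the numbers are illustrative, not a real measurement):

```java
public class RulerCalibrationSketch {
    // Pinhole model: pixelWidth / focalLength = realWidth / distance,
    // so focalLength ≈ pixelWidth * distance / realWidth
    // (realWidth and distance in the same units; result is in pixels).
    static double estimateFocalLength(double pixelWidth, double distance, double realWidth) {
        return pixelWidth * distance / realWidth;
    }

    public static void main(String[] args) {
        // e.g. a 0.5 m wide target at 2.0 m that spans 160 px
        double f = estimateFocalLength(160.0, 2.0, 0.5);
        // Approximate the principal point as the image center (640x480).
        double cx = 640.0 / 2.0;
        double cy = 480.0 / 2.0;
        System.out.println("f=" + f + " cx=" + cx + " cy=" + cy);
    }
}
```

This won't match a proper checkerboard calibration, but it should get fx/fy in the right ballpark for a first pose-estimation test.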

I was going to put in an enhancement issue to add this code to the WPILib samples; if you are already headed that way, I’ll save myself the time.
