Trying to learn how to code AprilTags for FRC

I am new to coding AprilTags and would love all the help I can get. The problem is that I can't figure out how to write the code where the camera detects the AprilTag and then, at a specific distance, turns the robot's shooting system on.


What camera are you using? If you are using Limelight, its API does like 95% of the work.


I will be using the Limelight, but just in case, I would also like to know how to use a USB webcam. I would prioritize the Limelight code, though; I still don't know how to program the Limelight.

As mentioned previously, the easiest way to use AprilTags is with a Limelight. If you use a USB camera, you would need a coprocessor running something like PhotonVision with it, so for your use case a Limelight would most likely be best. For auto-align, the easiest method is to use the Limelight's tx value, which gives the angle to the AprilTag; you can use that to align your drivetrain. For a shooter pivot or similar mechanism, you would get the Limelight's reported distance to the tag and adjust your angle based on that. A minimal sketch of reading those values is below.
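For concreteness, here is a small Java sketch of reading those values over NetworkTables, assuming the default table name "limelight" (the entry names tx, ty, and tv come from the Limelight NetworkTables API):

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class LimelightIO {
    private final NetworkTable table =
        NetworkTableInstance.getDefault().getTable("limelight");

    /** Horizontal offset to the tag in degrees. */
    public double getTx() {
        return table.getEntry("tx").getDouble(0.0);
    }

    /** Vertical offset to the tag in degrees (used for distance estimation). */
    public double getTy() {
        return table.getEntry("ty").getDouble(0.0);
    }

    /** True when the Limelight currently sees a valid target. */
    public boolean hasTarget() {
        return table.getEntry("tv").getDouble(0.0) >= 1.0;
    }
}
```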

Is it possible to see a more complete code example for the Limelight?

The Limelight examples might be helpful to look at. What drivetrain do you use?

If tank, then the Limelight example is actually a pretty good method of doing this: Aiming With Visual Servoing | Limelight Documentation
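The core of that tank-drive example is a proportional turn on tx. A rough sketch along those lines, reusing the LimelightIO reader above (kP and minTurn are made-up starting values you would tune on your robot):

```java
import edu.wpi.first.wpilibj.drive.DifferentialDrive;

public class AimSketch {
    private final DifferentialDrive drive;                    // your existing drivetrain
    private final LimelightIO limelight = new LimelightIO(); // reader sketched above

    public AimSketch(DifferentialDrive drive) {
        this.drive = drive;
    }

    /** Call from teleopPeriodic while an "aim" button is held. */
    public void aim(double forward) {
        final double kP = 0.03;      // made-up gain; tune on your robot
        final double minTurn = 0.05; // enough output to overcome drivetrain friction
        double turn = 0.0;
        if (limelight.hasTarget()) {
            double tx = limelight.getTx();
            // Positive zRotation is counterclockwise in WPILib; flip the
            // sign if your robot turns away from the tag instead of toward it.
            turn = -kP * tx;
            if (Math.abs(turn) < minTurn && Math.abs(tx) > 1.0) {
                turn = Math.copySign(minTurn, turn);
            }
        }
        drive.arcadeDrive(forward, turn);
    }
}
```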

They also have an example for swerve, although it's a bit more complicated: Aiming and Ranging With Swerve | Limelight Documentation
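Since the original question was about turning the shooter on at a specific distance, here is a sketch of the fixed-camera trig ranging method from the Limelight docs. Every mounting constant below is a placeholder; you would measure your own values:

```java
import edu.wpi.first.networktables.NetworkTableInstance;

public class RangeSketch {
    // All hypothetical numbers -- measure/choose these for your own robot.
    private static final double CAMERA_HEIGHT_METERS = 0.50; // lens height off the floor
    private static final double TAG_HEIGHT_METERS = 1.45;    // tag center off the floor
    private static final double CAMERA_PITCH_DEGREES = 25.0; // upward camera tilt
    private static final double SHOOT_DISTANCE_METERS = 3.0; // desired shooting range
    private static final double TOLERANCE_METERS = 0.15;

    /** Distance to the tag via the fixed-angle trig method in the Limelight docs. */
    public static double getDistanceMeters() {
        double ty = NetworkTableInstance.getDefault()
            .getTable("limelight").getEntry("ty").getDouble(0.0);
        double angleRad = Math.toRadians(CAMERA_PITCH_DEGREES + ty);
        return (TAG_HEIGHT_METERS - CAMERA_HEIGHT_METERS) / Math.tan(angleRad);
    }

    /** True when the robot is close enough to spin up the shooter. */
    public static boolean inShootingRange() {
        return Math.abs(getDistanceMeters() - SHOOT_DISTANCE_METERS) < TOLERANCE_METERS;
    }
}
```

You would gate your shooter command on inShootingRange(), plus a tv check so a lost target can't report a stale distance.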

You can look at our code for it as well. It’s posted on our github:

However, we go a bit further and use a pose estimator along with individual tag measurements to get the highest accuracy and best tracking possible. This is a more time-consuming and complicated way of using AprilTags, and I would definitely implement the Limelight tutorials first.
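For reference, the WPILib side of that kind of fusion is addVisionMeasurement on a pose estimator. A very rough sketch, assuming an already-constructed SwerveDrivePoseEstimator and the botpose_wpiblue array layout from the Limelight NetworkTables API:

```java
import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.Timer;

public class VisionFusionSketch {
    /**
     * Fuses the Limelight's field-relative botpose into an existing pose
     * estimator. Call this every loop; drivetrain odometry setup not shown.
     */
    public static void addLimelightMeasurement(SwerveDrivePoseEstimator estimator) {
        double[] botpose = NetworkTableInstance.getDefault()
            .getTable("limelight")
            .getEntry("botpose_wpiblue")
            .getDoubleArray(new double[0]);
        if (botpose.length < 7) {
            return; // no valid pose published this loop
        }
        Pose2d visionPose =
            new Pose2d(botpose[0], botpose[1], Rotation2d.fromDegrees(botpose[5]));
        // botpose[6] is the total pipeline latency in milliseconds.
        double timestamp = Timer.getFPGATimestamp() - botpose[6] / 1000.0;
        // Trust vision less than wheel odometry; these std devs are guesses to tune.
        estimator.addVisionMeasurement(
            visionPose, timestamp, VecBuilder.fill(0.7, 0.7, 9999999.0));
    }
}
```

The standard deviations tell the filter how much to trust vision relative to wheel odometry; the huge heading value effectively ignores the vision heading.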

Here's a "complete" example of using WPILib on a roboRIO for AprilTag detection and pose calculation. It's from last year, so I don't know if some WPILib names are still the same or need a little adjustment. It runs on a roboRIO with very little time left for other processing; it might be of some use on a roboRIO v2. It's slow, which is why everyone uses a coprocessor of some sort. I doubt this would be used "just in case", but it does give some hint of what goes on inside LL and PV. If you don't use a Microsoft LifeCam, then you have to get the camera calibration from somewhere. I think most teams use LL or PV, for good reason.
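For anyone curious, the on-RIO pipeline is roughly: grab frames from a USB camera, run WPILib's AprilTagDetector, and pass detections to an AprilTagPoseEstimator. A condensed sketch of that flow; the calibration numbers (fx, fy, cx, cy) and tag size are placeholders you must replace with your own camera calibration and the current season's tag spec:

```java
import edu.wpi.first.apriltag.AprilTagDetection;
import edu.wpi.first.apriltag.AprilTagDetector;
import edu.wpi.first.apriltag.AprilTagPoseEstimator;
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.cscore.CvSink;
import edu.wpi.first.cscore.UsbCamera;
import edu.wpi.first.math.geometry.Transform3d;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

/** Condensed on-RIO detection loop; run this in its own thread. */
public class RioAprilTagSketch implements Runnable {
    @Override
    public void run() {
        UsbCamera camera = CameraServer.startAutomaticCapture();
        camera.setResolution(320, 240);
        CvSink sink = CameraServer.getVideo();

        AprilTagDetector detector = new AprilTagDetector();
        detector.addFamily("tag36h11"); // tag family depends on the season

        // Config is (tag size in meters, fx, fy, cx, cy) -- placeholders here;
        // fx/fy/cx/cy must come from calibrating your actual camera.
        AprilTagPoseEstimator poseEstimator = new AprilTagPoseEstimator(
            new AprilTagPoseEstimator.Config(0.1651, 320.0, 320.0, 160.0, 120.0));

        Mat frame = new Mat();
        Mat gray = new Mat();
        while (!Thread.interrupted()) {
            if (sink.grabFrame(frame) == 0) {
                continue; // frame grab timed out; try again
            }
            Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);
            for (AprilTagDetection detection : detector.detect(gray)) {
                Transform3d camToTag = poseEstimator.estimate(detection);
                System.out.printf("tag %d at %.2f m%n",
                    detection.getId(), camToTag.getTranslation().getNorm());
            }
        }
        detector.close();
    }
}
```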

Okay, thank you.
