How to detect AprilTags in Java (using a Logitech webcam)? Sample code please

I just need to detect AprilTags in code, using an HD webcam. I’m not worried about field positions of tags or autonomous yet; I just need a small guide/tutorial for AprilTags in Java.

I’m aware that WPILib (Java) has a class for AprilTags, but there is not much documentation that I could find about it.

If there is another thread that has solved this, please let me know.
Any help is appreciated :>

1 Like

See WPILIB AprilTagDetector sample code - #9 by Peter_Johnson for example code

2 Likes

It looks like PhotonVision uses a coprocessor. Is there a library/API/guide for vision processing on the RIO itself?

I’m aware that doing so on the RIO is slow, but I am fine with that.

There’s a library for the RIO; see the AprilTagDetector and related classes. See the link in the post above yours for my quick attempt at sample code. There is no broader guide at present.
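Here’s a minimal sketch of the core loop, condensed from the linked example (run it in its own thread rather than the main robot loop; the tag family and resolution here are assumptions, not requirements):

```java
import edu.wpi.first.apriltag.AprilTagDetection;
import edu.wpi.first.apriltag.AprilTagDetector;
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.cscore.CvSink;
import edu.wpi.first.cscore.UsbCamera;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

UsbCamera camera = CameraServer.startAutomaticCapture();
camera.setResolution(640, 480);
CvSink sink = CameraServer.getVideo();

AprilTagDetector detector = new AprilTagDetector();
detector.addFamily("tag16h5"); // the 2023 FRC family; pick yours

Mat frame = new Mat();
Mat gray = new Mat();
while (!Thread.interrupted()) {
  if (sink.grabFrame(frame) == 0) {
    continue; // grab timed out or errored; try again
  }
  // The detector wants an 8-bit grayscale image.
  Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);
  for (AprilTagDetection det : detector.detect(gray)) {
    System.out.println("Tag " + det.getId()
        + " at (" + det.getCenterX() + ", " + det.getCenterY() + ")");
  }
}
```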

1 Like

The AprilTagDetector works very well for locating tags in the image based on that example code!
I’m fumbling with the AprilTagPoseEstimator.Config to get decent values for the position, and one quirk in there is that the estimated poses use a coordinate system where “z” is the distance from the camera to the tag, “x” is to the right when looking the way the camera does, and “y” is down. The usual robot coordinates are “x” going forward, “y” to the left, and “z” up. I can construct a Translation3d robot = new Translation3d(tag.getZ(), -tag.getX(), -tag.getY()), but I wanted a Rotation3d as well, and my brain hurts trying to come up with the appropriate yaw/roll/pitch…
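It turns out (if I’m reading the WPILib geometry classes right, so treat this as a sketch) that CoordinateSystem.convert can do the whole remapping, rotation included: the camera convention described above is EDN (x east/right, y down, z north/forward) and the robot convention is NWU. Assuming estimator and detection from the detection code:

```java
import edu.wpi.first.math.geometry.CoordinateSystem;
import edu.wpi.first.math.geometry.Transform3d;

// Camera-frame pose from the estimator: x right, y down, z out of the lens (EDN).
Transform3d camToTag = estimator.estimate(detection);

// Remap into robot conventions (x forward, y left, z up = NWU). This converts
// the rotation part too, so no hand-derived yaw/roll/pitch is needed.
Transform3d tagInRobotFrame = CoordinateSystem.convert(
    camToTag, CoordinateSystem.EDN(), CoordinateSystem.NWU());
```

The translation part of the result matches the manual Translation3d above: (z, -x, -y).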

There’s a Ri3D Redux video coming that shows off doing AprilTag detection on the RIO.

We have example code at Robot2023-Simple/Robot.java at f90a583bcfaa8e3c579e253f3bc003cb2dfba665 · Ri3DRedux/Robot2023-Simple · GitHub

We did have to change the default settings, as they let in way too much noise and incorrect readings.
minClusterPixels was increased from 5 to 250; 5 pixels is far too few for a real AprilTag.
criticalAngle was increased from 10 degrees to 50 degrees to make sure we only detect things that are mostly square.
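Roughly what those two changes look like in code (a sketch; note that, as far as I can tell, criticalAngle is specified in radians in the Java API, hence the Math.toRadians):

```java
// detector is an existing edu.wpi.first.apriltag.AprilTagDetector
var params = detector.getQuadThresholdParameters();
params.minClusterPixels = 250;             // default 5 lets far too much noise through
params.criticalAngle = Math.toRadians(50); // default 10 deg; reject badly skewed quads
detector.setQuadThresholdParameters(params);
```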

We were able to get 7-11 fps at 480p, which works well for a stop, detect, and align approach.

Keep in mind our config values were picked very quickly and spending a little more time can give you better results.

4 Likes

Sorry for the late response. I have looked through the vision processing guide in the WPILib docs, and it seems to use the TimedRobot code template.

Our team is using the command-based template, and we were wondering whether making the vision processing/camera code a subsystem and calling its functions in robotInit() would work, with a structure like this:

Robot
  RobotContainer
    Camera/vision processing subsystem

Our team isn’t able to get our robot up and running yet, but I just wanted to know if this structure would work. Theoretically it should, but the docs didn’t say anything about putting the code in a subsystem class.
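Roughly what we have in mind (just a sketch; the class and method names are placeholders):

```java
import edu.wpi.first.wpilibj2.command.SubsystemBase;

public class VisionSubsystem extends SubsystemBase {
  public VisionSubsystem() {
    // Spawn the detection loop in a daemon thread so it runs alongside
    // the main robot loop and exits with the robot program.
    var visionThread = new Thread(this::visionLoop, "AprilTagVision");
    visionThread.setDaemon(true);
    visionThread.start();
  }

  private void visionLoop() {
    // Camera setup + AprilTagDetector loop would go here.
  }
}
```

RobotContainer would then just construct this subsystem once.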

Ty for your help :>

You can structure the code however you want. Because the vision processing happens in a separate thread, it runs on its own and isn’t affected by your robot code by default.

You will need a way to get data from the newly created vision thread to your robot code. The code I linked does not contain that. As a simple solution, you can put whatever you want onto NetworkTables from the vision thread.
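For example, here’s a sketch using the NT4 publisher API (the topic name is arbitrary, and detections stands in for the array returned by the detector):

```java
import edu.wpi.first.networktables.DoubleArrayPublisher;
import edu.wpi.first.networktables.NetworkTableInstance;

// Created once when the vision thread starts.
DoubleArrayPublisher idPub = NetworkTableInstance.getDefault()
    .getDoubleArrayTopic("/vision/tagIds").publish();

// Inside the detection loop, after running the detector:
double[] ids = new double[detections.length];
for (int i = 0; i < detections.length; i++) {
  ids[i] = detections[i].getId();
}
idPub.set(ids);
```

Your robot code can then subscribe to the same topic and react in its periodic methods.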

2 Likes

Thanks for the swift reply!
And yes, we do plan to use NetworkTables. I’m still reading the docs and absorbing as much info as I can.
:>
