I just need to detect AprilTags through code, using an HD webcam. I’m not worried about field positions of tags or autonomous yet, just need a small guide/tutorial to AprilTags in Java.
I’m aware that WPILib (Java) has a class for AprilTags, but there isn’t much documentation that I could find about it.
If there is another thread that has solved this, please let me know.
Any help is appreciated :>
There’s a library for the Rio; see the AprilTagDetector and related classes. See the link in the post above yours for my quick attempt at sample code. There is no broader guide at present.
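In case the link dies, the general shape is something like this (a sketch, not the linked code verbatim; it assumes a USB webcam through CameraServer and the tag16h5 family):

```java
import edu.wpi.first.apriltag.AprilTagDetection;
import edu.wpi.first.apriltag.AprilTagDetector;
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.cscore.CvSink;
import edu.wpi.first.cscore.UsbCamera;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

// Somewhere in robot startup, e.g. robotInit():
Thread visionThread = new Thread(() -> {
  UsbCamera camera = CameraServer.startAutomaticCapture();
  camera.setResolution(640, 480);

  CvSink cvSink = CameraServer.getVideo();
  Mat frame = new Mat();
  Mat gray = new Mat();

  AprilTagDetector detector = new AprilTagDetector();
  detector.addFamily("tag16h5"); // the 2023 FRC family; match your tags

  while (!Thread.interrupted()) {
    if (cvSink.grabFrame(frame) == 0) {
      continue; // frame grab timed out; try again
    }
    // The detector expects a single-channel grayscale image.
    Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);

    for (AprilTagDetection detection : detector.detect(gray)) {
      System.out.println("Saw tag " + detection.getId());
    }
  }
  detector.close();
});
visionThread.setDaemon(true);
visionThread.start();
```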
The AprilTagDetector works very well for locating tags in the image based on that example code!
I’m fumbling with the AprilTagPoseEstimator.Config to get decent values for the position, and one quirk in there is that the estimated poses use a coordinate system where “z” is the distance from the camera to the tag, “x” is to the right when looking the way the camera does, and “y” is down. The usual robot coordinates are “x” going forward, “y” to the left, and “z” up. I can construct a Translation3d robot = new Translation3d(tag.getZ(), -tag.getX(), -tag.getY()), but I wanted to use a Rotation3d instead, and my brain hurts trying to come up with the appropriate yaw/roll/pitch…
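After some head-scratching, here is a sketch that seems to be equivalent to the hand-written swap. It assumes the estimator output is the Transform3d called tag above; the second option uses WPILib’s CoordinateSystem helper with its built-in EDN (camera) and NWU (robot) conventions:

```java
import edu.wpi.first.math.geometry.CoordinateSystem;
import edu.wpi.first.math.geometry.Rotation3d;
import edu.wpi.first.math.geometry.Translation3d;

// Camera frame: x right, y down, z out of the lens (EDN).
// Robot frame:  x forward, y left, z up (NWU).
Translation3d camFrame = new Translation3d(tag.getX(), tag.getY(), tag.getZ());

// Option 1: the axis swap is a fixed rotation of roll = -90°, yaw = -90°.
// Rotation3d(roll, pitch, yaw) applies them extrinsically about X, Y, Z.
Rotation3d camToRobot = new Rotation3d(-Math.PI / 2.0, 0.0, -Math.PI / 2.0);
Translation3d robotFrame = camFrame.rotateBy(camToRobot);
// robotFrame == (tag.getZ(), -tag.getX(), -tag.getY())

// Option 2: WPILib has a helper for exactly this conversion.
Translation3d robotFrame2 =
    CoordinateSystem.convert(camFrame, CoordinateSystem.EDN(), CoordinateSystem.NWU());
```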
We did have to change the default settings, as they let in way too much noise / incorrect readings. minClusterPixels was increased from 5 to 250; 5 pixels is far too few to detect an AprilTag. criticalAngle was increased from 10 degrees to 50 degrees to make sure we only detect things that are mostly square.
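For reference, here’s roughly how we set those (a sketch from memory; note that criticalAngle is specified in radians in the Java API, so the degrees need converting):

```java
import edu.wpi.first.apriltag.AprilTagDetector;
import edu.wpi.first.math.util.Units;

AprilTagDetector detector = new AprilTagDetector();
detector.addFamily("tag16h5");

// Start from the current parameters so the other defaults are kept.
var quadParams = detector.getQuadThresholdParameters();
quadParams.minClusterPixels = 250; // default 5 lets tiny noise blobs through
quadParams.criticalAngle = Units.degreesToRadians(50.0); // default is 10 degrees
detector.setQuadThresholdParameters(quadParams);
```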
We were able to get 7-11 fps at 480p, which is plenty to stop, detect, and align.
Keep in mind that our config values were picked very quickly; spending a little more time tuning can give you better results.
Sorry for the late response. I have looked through the vision processing guide in the WPILib docs, and the docs seem to be using the TimedRobot code template.
Our team is using the command-based template, and we were wondering if making the vision processing/camera a subsystem and calling its functions in robotInit() would work.
Our team isn’t able to get our robot up and running yet, but I just wanted to know whether this code would work. Theoretically it should, but the docs didn’t say anything about putting the code in a Subsystem class.
You can structure the code however you want. Because the vision processing happens in a separate thread, it runs on its own and isn’t affected by your robot code by default.
You will need a way to get data from the newly created vision thread to your robot code; the code I linked does not contain that. As a simple solution, you can put whatever you want onto NetworkTables from the vision thread.
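As a sketch of that (the topic name /vision/tagIds and the subsystem shape are just placeholders, not anything prescribed by WPILib):

```java
import edu.wpi.first.networktables.DoubleArrayPublisher;
import edu.wpi.first.networktables.DoubleArraySubscriber;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj2.command.SubsystemBase;

public class VisionSubsystem extends SubsystemBase {
  // The vision thread writes here...
  private final DoubleArrayPublisher m_tagPub =
      NetworkTableInstance.getDefault().getDoubleArrayTopic("/vision/tagIds").publish();
  // ...and the rest of the robot code reads from here.
  private final DoubleArraySubscriber m_tagSub =
      NetworkTableInstance.getDefault()
          .getDoubleArrayTopic("/vision/tagIds")
          .subscribe(new double[0]);

  public VisionSubsystem() {
    // Start the detection loop once; it runs independently of the
    // command scheduler after this.
    Thread visionThread = new Thread(this::visionLoop);
    visionThread.setDaemon(true);
    visionThread.start();
  }

  private void visionLoop() {
    // Camera + AprilTagDetector setup as in the earlier sketch, then after
    // each detect() call publish whatever you need, e.g. the tag IDs:
    // m_tagPub.set(Arrays.stream(detections)
    //     .mapToDouble(AprilTagDetection::getId).toArray());
  }

  /** Safe to call from commands on the main robot thread. */
  public double[] getVisibleTagIds() {
    return m_tagSub.get();
  }
}
```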