Why did FRC select 16h5 as the Apriltag family of choice?

Does anyone know why 16h5 was selected as the apriltag family this year? I’ve heard 36h11 is more robust to false positives and it seems to be the “default” in every apriltag example I see.

It’d be great if FRC could switch to 36h11 next year, since there is a CUDA-based AprilTag library (see isaac_ros_apriltag: GitHub - NVIDIA-ISAAC-ROS/isaac_ros_apriltag: Hardware-accelerated AprilTag detection and pose estimation, and team 88’s fork: GitHub - frc-88/cuda_apriltag_ros).

Unfortunately, this library only supports 36h11. I’ve searched around and it seems low on NVidia’s priority list to add other tag families.


Paging @marshall

Don’t look at me. I’ve been told I’m wrong and need to shut up.


It was originally 36h11, but FIRST changed it at the last minute.


Out of curiosity… can you share the source? Were you involved in the beta?

edit: source: 2022 Control System Reporting, 2023 Updates, and Beta Testing | FIRST

Totally forgot about this


AFAIK it was so the RIO would be able to run a detection pipeline for the tags, and 36h11 was too slow on it. (Not that 16h5 is much better.)


Both announcements are on the FIRST blog


Did anyone run detection on the RIO? I only saw it ran on co-processors.

Think of the poor companies that poured all that time and effort into supporting the wrong library while the open-source equivalents were following the standard as it was published!


From what I have heard, this is not true. (Or at least was not the main driving factor).

Some teams did, yes.

FIRST is currently collecting feedback from various stakeholders (Limelight developers, PhotonVision developers, students and mentors on teams) regarding AprilTags this season. I’m not sure if the feedback link is supposed to be public or not, but feel free to send me any feedback (I’ve read most of the threads and know the general consensus already, but maybe there are unique scenarios or considerations that have been overlooked) and I can send it in with my submission.


Then what was it?


There was a followup blog post in November with some reasoning: 2023 Approved Devices, Rules Preview, and Vision Target Update | FIRST


As per a comment on the blog post and the post itself, FIRST’s official reasoning is that they believed having fewer tags that were easier to detect would be better:

As mentioned in the blog, there is an increase in maximum detection distance achieved by switching to the lower resolution tag family. This should allow teams to either detect the tags from further away, or potentially bump down in resolution (decreasing CPU and/or increasing processed framerate). The cost is the increase in false positives from the substantially reduced complexity of the tags. We recommend experimenting with setting a reasonable minimum tag size based on the size at the farthest distance you can accurately detect with your camera (i.e. the default settings of the AprilTag library will happily return false positives much smaller than it can reliably detect the actual tag). We also recommend reducing the hamming correction to 1 or 0; while 2 is a good default value for larger tags, with the lower bit count of the 16h5 tag a lower value is appropriate. You can also experiment with filtering detections based on the returned “decision margin” or adjusting other parameters of the quad detection. Example code utilizing some or all of these techniques is expected to be released at Kickoff.

-Kevin O’Connor
FIRST Robotics Competition Sr. Robotics Engineer
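Kevin’s suggestions (minimum tag size, hamming correction of 0 or 1, decision-margin filtering) amount to a post-detection filter. Here’s a minimal stdlib-only sketch of that idea; the field names mirror what AprilTag detectors typically return, but the thresholds and the `Detection` type are illustrative assumptions, not any specific library’s API:

```python
# Hypothetical post-detection filter along the lines of the quoted blog
# comment: reject detections that are too small or too ambiguous, which is
# where a low-bit-count family like 16h5 produces most false positives.
from dataclasses import dataclass

@dataclass
class Detection:
    tag_id: int
    decision_margin: float  # higher = more confident bit match
    area_px: float          # pixel area of the detected quad

def filter_detections(detections, min_area_px=400.0, min_decision_margin=35.0):
    """Drop likely false positives; thresholds should be tuned per camera."""
    return [
        d for d in detections
        if d.area_px >= min_area_px and d.decision_margin >= min_decision_margin
    ]

# Example: one plausible real detection and two likely false positives.
dets = [
    Detection(3, decision_margin=60.0, area_px=1500.0),  # kept
    Detection(7, decision_margin=12.0, area_px=2000.0),  # low margin: dropped
    Detection(5, decision_margin=80.0, area_px=100.0),   # too small: dropped
]
kept = filter_detections(dets)
```

The right thresholds depend on your camera and resolution, which is why the blog suggests measuring the size of the tag at the farthest distance you can reliably detect it and using that as the floor.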


Yeah, that’s a pretty nice team you got over there, pity if someone registered a rookie team.

Maybe you stop talking about the families and nobody needs to register that rookie.


The best part about this is that someone is going to think it’s serious and I’m going to get an email about it. Even better than losing at euchre.


I’d also like to register a complaint that said rookie team burned my retinas out with their color scheme and absurd theming, and I loved every minute of it.

Re: OP, other than what’s in the blog posts, that’s the most info I think anyone in public has. It’s clear that 16h5 works: teams used it to actually score points this year. It’s also clear HQ did some research and experiments that led them to that conclusion.

From what I see it’s a balancing act. I don’t think anyone would argue against more tags being better, as long as they remain unique (which might drive requirements on family) and they get placed in the right pose3d (within tolerance) on every field.

That second part is critical though - it’s either an engineering challenge to poka-yoke the field design, or a training challenge to make sure the tags get mounted properly by volunteers.

Optimizing tag size, count, and family is just picking an appropriate tradeoff point between the competing requirements.

Also (to slightly rehash an older post), I strongly believe a requirement HQ was working with was that the tags needed to be a mostly drop-in replacement for retroreflective tape in simple “turn till the target is in the center of the image” algorithms. Many teams proved out that if you do this with a fast enough video stream, it gets you from zero to vision processing very quickly, and limelight made a commercialization strategy about it. Saying that teams had to go to a full field-relative pose estimation solution just to use apriltags probably wasn’t acceptable to HQ. This requirement would drive a lot of the detection distance and placement requirements.


You folks are a riot!! Thanks for the insights. Those reasons make sense. Sounds like I’m constrained to the CPU-based AprilTag library.


Or rewriting/training your own.


Train an object detector to find AprilTags, pass those regions into an AprilTag processor, and thereby leverage the NPU and lighten the work on the CPU. Is that actually feasible?
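For what it’s worth, the crop step of that pipeline is straightforward: expand each detector bounding box by a margin (so the quad-detection stage still sees the tag’s white border) and clamp to the image before handing it to the CPU detector. A stdlib-only sketch of just that step; the `(x, y, w, h)` box format and margin fraction are assumptions:

```python
# Expand an object-detector bounding box by a margin and clamp it to the
# image, yielding the crop that would be passed to a CPU AprilTag detector.
# Purely illustrative; whether the end-to-end pipeline beats full-frame CPU
# detection depends on the NPU, resolution, and detector accuracy.
def crop_region(box, img_w, img_h, margin_frac=0.25):
    x, y, w, h = box
    mx, my = w * margin_frac, h * margin_frac
    x0 = max(0, int(x - mx))
    y0 = max(0, int(y - my))
    x1 = min(img_w, int(x + w + mx))
    y1 = min(img_h, int(y + h + my))
    return x0, y0, x1, y1
```

The margin matters because AprilTag detection needs the quiet zone around the tag; a box that hugs the tag edges too tightly can make an otherwise-visible tag undetectable in the crop.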