AprilTag detection with Limelight

Hello, I have recently started messing around with AprilTags using the newly released Limelight software. I have been able to get it to track the tags, but I have noticed that the second anything small gets in the way of the tag it will no longer pick it up, even if it is just the tip of a wire within its black frame. Would anyone happen to know if there is a way to make it ignore minor differences, or some tuning that can help? I can see this being a problem if something were to get in the way of the Limelight.

If you are running it through an OpenCV pipeline, there are ways of blurring images or reducing the resolution of the camera so that fewer small objects interfere. AprilTags thankfully are low resolution, so you don't need anywhere near 480p to process one.
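
For what it's worth, the stock AprilTag detector exposes both of those knobs directly. A minimal sketch, assuming the pupil-apriltags Python bindings and an OpenCV capture (the family and parameter values here are illustrative, not tuned):

```python
# Minimal sketch: downscaling and blurring before detection, using the
# pupil-apriltags bindings (pip install pupil-apriltags) and OpenCV.
# quad_decimate downsamples the image and quad_sigma applies Gaussian blur,
# which is the detector's built-in version of the advice above.
import cv2
from pupil_apriltags import Detector

detector = Detector(
    families="tag16h5",   # FRC 2023 used the 16h5 family; swap in yours
    quad_decimate=2.0,    # detect at half resolution: faster, less fine clutter
    quad_sigma=0.8,       # light Gaussian blur to suppress small noise
)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # detector wants grayscale
    for det in detector.detect(gray):
        print(det.tag_id, det.center)
cap.release()
```

On the Limelight itself you don't write this code path, but if its AprilTag pipeline exposes a decimation or blur setting, that setting plays the same role.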

Try lowering the Limelight resolution, but also look at how the AprilTags are identified. (I use them at work but don't know the inner workings of the library in detail.) My best guess is that each square cell represents a bit of data. If a bit is scrambled by an obscuring wire or something, you have junk and not a match.

Think of an AprilTag like a QR code for a website address. If it's supposed to be for google.com and your wire scrambles the image so that it reads goggle.com, it's not a match for your ID; it's something else and probably not a valid chunk of data. You could dig into how the library works and see if there's a way to get partial results and try to unscramble it. The best method, though, is to always give the camera a clean view of the tag. Every cell of the tag matters!
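
To make the analogy concrete, here's a toy sketch of that matching step (not the real library code; the codebook and bit width below are made up): read one bit per cell, XOR against each known code, and accept only if the number of differing bits is within a small margin.

```python
# Toy illustration of codebook matching with a Hamming-distance margin.
# Real tag families work on the same principle but with longer codes.
CODEBOOK = {0b101101001: 7, 0b010011110: 12}   # payload word -> tag ID (made up)
MAX_BIT_ERRORS = 1                             # real families tolerate only a few

def decode(word):
    """Return the tag ID if `word` is within MAX_BIT_ERRORS of a known code."""
    for code, tag_id in CODEBOOK.items():
        if bin(word ^ code).count("1") <= MAX_BIT_ERRORS:
            return tag_id
    return None  # too many scrambled bits: "goggle.com", not a match

print(decode(0b101101001))  # clean read            -> 7
print(decode(0b101101000))  # one cell obscured     -> still 7
print(decode(0b101100110))  # wire across 4 cells   -> None
```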

Using AI like YOLO, you could train it to find the AprilTags by their unique pattern, but you aren't actually reading them as numbers or getting the skew, scale, and position data the official libraries provide. With AI training you label thousands of images with bounding boxes around your objects, purposely shifting angles and scale, obscuring parts of the tag, and so on. This is not really the way to use AprilTags, though.
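
For illustration only, here is what that approach looks like with the ultralytics package; apriltag_yolo.pt is a hypothetical weights file you would have to train yourself on those thousands of labeled images:

```python
# Hedged sketch of the YOLO approach described above. "apriltag_yolo.pt" and
# the frame filename are hypothetical. Note the output is only bounding boxes
# and confidences: no tag ID, skew, or pose like the official libraries give.
from ultralytics import YOLO

model = YOLO("apriltag_yolo.pt")           # hypothetical custom-trained weights
results = model("robot_camera_frame.jpg")  # run inference on one frame
for box in results[0].boxes:
    print(box.xyxy, box.conf)              # box corners and confidence, nothing more
```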

Worth noting - the behavior you mention is expected. From the v2 AprilTag paper:

[image: excerpt from the v2 AprilTag paper]

What Are AprilTags? — FIRST Robotics Competition documentation has links to the four relevant papers at the bottom.

This is simply inherent to AprilTags. There are other tag libraries that can handle tag occlusion, but short of risky changes in settings, this is just how it is.

Funnily enough, something like 30% of a QR code is error correction, so that you don't get "goggle.com" by accident - something AprilTags mostly lack.

Thanks everyone for the replies, I really appreciate it.

Might be overkill for a problem that can be solved by other means, but could machine learning be used to train on "noisy" or partially occluded AprilTag images?

If your goal is “best performing robot in 2023” - my short answer is “probably not”.

My reasoning is that I know exactly what an apriltag looks like. Occlusions can vary, but everything in the world is either “apriltag with 0 or more occlusions” or “not an apriltag”.

A good training algorithm wouldn’t just have many samples of occluded apriltags, but rather would include the algorithmic description of what valid patterns of dark/light are that the ML algorithm needs to be looking for.

At that point, there’s so much specific info injected into the training that I question how much the resulting process is actually “machine learning”, versus is actually just “machine being told what to do”.

Since there are many proven ways to process fully-visible apriltags, and many ways to put cameras on the robot such that many apriltags are usually visible, and many good ways to substitute wheel odometry when apriltags aren’t visible, I’d say your time is better spent elsewhere than re-inventing the apriltag wheel with machine learning. At least for the 2023 season.
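
As a rough sketch of that fallback idea (the helper functions here are hypothetical stand-ins; real FRC code would more likely use WPILib's pose estimator classes, which fuse vision and odometry continuously):

```python
# Conceptual sketch: trust the tag when one is visible, otherwise dead-reckon
# from the last known pose using wheel odometry. detect_apriltag_pose() and
# read_wheel_odometry_delta() are hypothetical helpers, not a real API.
def update_pose(last_pose, frame):
    tag_pose = detect_apriltag_pose(frame)      # None if no tag is visible
    if tag_pose is not None:
        return tag_pose                         # absolute fix from the tag
    dx, dy, dtheta = read_wheel_odometry_delta()
    return (last_pose[0] + dx,                  # dead-reckon until the next fix
            last_pose[1] + dy,                  # (field-frame rotation ignored
            last_pose[2] + dtheta)              #  for brevity)
```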

However

If your goal is "education and discovery", my short answer is "give it a shot, we've got nothing to lose at this point".

I'd be happy to be proven wrong on my "an optimal method already exists" point. An AI algorithm which detects AprilTags under occlusion would be very nifty to see, and would be highly educational for any student who undertakes it and succeeds even marginally.

Anyone attempting such an endeavor should definitely research the v1 AprilTag paper, which talks explicitly about how they did occlusion management without AI.

[image: excerpt from the v1 AprilTag paper on occlusion handling]

Furthermore, the v2 AprilTag paper talks about why they abandoned that approach.

Look at the reasoning, compare/contrast it to FRC use cases, and determine whether some particular ML solution is a good fit.

A critical point: the v2 paper talks about "users". FRC robots, in many ways, are not typical "users". From that perspective, I have hope that someone may find a novel, optimal processing algorithm that isn't in any of the papers yet.
