Hello!
My team has gotten a Limelight 3 and is trying to use it to get the robot's position on the field using AprilTags. We want to be able to consistently get the robot's position from behind the charging station. I've tried tuning it to fix this problem, but haven't been able to get it to work very well. Here is the problem I'm experiencing:

How do you suggest I tune it to maximize accuracy and fps at this distance? Also, what fps do you think we should be aiming for in general?
If this is a live recording, make sure your AprilTags are all positioned in the right order with the correct dimensions.
It is just looking at AprilTag 1, and the dimensions are 152.4 mm.
Can you share a snapshot of your video output?
I would start by increasing the capture resolution on your LL to 960p.
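To see why resolution matters so much at long range, it helps to estimate how many pixels wide a 152.4 mm tag appears at a given distance. The sketch below is back-of-the-envelope geometry, not a Limelight API: the function name is made up, and the ~63° horizontal FOV is an assumed approximate value for the Limelight 3, so substitute your camera's real FOV.

```python
import math

TAG_WIDTH_M = 0.1524  # 152.4 mm AprilTag (per the post above)

def tag_pixel_width(distance_m, horiz_res_px, hfov_deg=63.0):
    """Approximate on-screen width (pixels) of a tag viewed head-on.

    Assumes a simple pinhole model: the tag's angular width divided by
    the horizontal field of view gives its fraction of the image width.
    """
    angular_width = 2 * math.atan(TAG_WIDTH_M / (2 * distance_m))
    return angular_width / math.radians(hfov_deg) * horiz_res_px

# At ~5 m (roughly "behind the charging station" range), a 640-wide
# capture leaves the tag only a couple dozen pixels wide; stepping up
# to 960p (1280 wide) doubles that, which is why it helps range.
print(tag_pixel_width(5.0, 640))
print(tag_pixel_width(5.0, 1280))
```

The tradeoff, as noted later in this thread, is that the higher resolution costs pipeline framerate.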
Your Limelight should be mounted above or below tag height and angled up/down. Your target should look as trapezoidal as possible from your camera's perspective. You don't want your camera to be perfectly "head-on" with a tag if you want to avoid tag flipping at long range.
Have you seen the tips listed here?
https://docs.limelightvision.io/en/latest/apriltags_in_2d.html#tips
For ideal tracking, consider the following:
- Your tags should be as flat as possible.
- Your Limelight should be mounted above or below tag height and angled up/down. Your target should look as trapezoidal as possible from your camera’s perspective. You don’t want your camera ever to be completely “head-on” with a tag if you want to avoid tag flipping.
There is an interplay between the following variables for AprilTag tracking:
- Increasing capture resolution will always increase 3D accuracy and 3D stability. This will also reduce the rate of ambiguity flipping from most perspectives. It will usually increase range. This will reduce pipeline framerate.
- Increasing detector downscale will always increase pipeline framerate. It will decrease effective range, but in some cases this may be negligible. It will not affect 3D accuracy, 3D stability, or decoding accuracy.
- Reducing exposure will always improve motion-blur resilience. This is actually really easy to observe. This may reduce range.
- Reducing the brightness and contrast of the image will generally improve pipeline framerate and reduce range.
- Increasing sensor gain allows you to increase brightness without increasing exposure. It may reduce 3D stability, and it may reduce tracking stability.
Also, you’ll find that stability dramatically improves with more than one target in view at long range.
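One way to act on that observation on the robot side is to gate incoming vision poses by how many tags contributed to the solution. This is only a sketch of that idea: the function name and thresholds are invented, and it assumes you can read a tag count and a single-tag ambiguity score from your vision library, which varies by firmware/vendor.

```python
def should_accept_pose(tag_count: int, ambiguity: float,
                       max_single_tag_ambiguity: float = 0.2) -> bool:
    """Decide whether a vision pose estimate is trustworthy enough to fuse.

    Multi-tag solutions are far more stable at long range, so they are
    always accepted here. A single-tag solution is accepted only when its
    ambiguity score is low (i.e. the flipped solution is clearly worse).
    """
    if tag_count >= 2:
        return True
    return tag_count == 1 and ambiguity < max_single_tag_ambiguity
```

You would call this before feeding the pose into your pose estimator, dropping frames that fail the check rather than fusing a flipped long-range solve.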