Limelight field pose estimation discrepancy

We’re working on pose estimation from AprilTags using a Limelight (LL2+).

So far we’ve found that the botpose estimate provided by the Limelight software differs from direct measurement. The difference is roughly (and non-linearly) proportional to the distance from the AprilTag, and at the “edge” of consistent detection (about 3.5 m) the discrepancy can be as high as 0.3 m. Since the discrepancy appears to grow with distance, it may be possible to account for it; I’m just wondering why that isn’t done in software.
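Since the error grows with distance, one option is to characterize it empirically and subtract it. As a rough sketch (the sample distance/error pairs below are hypothetical, not our actual measurements), a simple least-squares fit of error vs. distance could look like:

```python
# Sketch: fit an empirical distance-dependent correction for botpose error.
# The (distance_m, error_m) samples below are hypothetical placeholders.

def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical calibration data: error grows with distance,
# reaching ~0.3 m at the ~3.5 m edge of reliable detection.
samples = [(1.0, 0.05), (2.0, 0.12), (3.0, 0.22), (3.5, 0.30)]
a, b = fit_linear([d for d, _ in samples], [e for _, e in samples])

def corrected_distance(reported_m):
    """Subtract the predicted systematic error from the reported distance."""
    return reported_m - (a * reported_m + b)
```

A linear fit is just the simplest starting point; since the error looks non-linear, a quadratic term (or a lookup table interpolated over measured points) may model it better.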

So, the questions I have:

  1. Has anyone else tried to actually confirm the botpose measurements (actual vs. Limelight-calculated)?
  2. Which LL settings give you the most reliable detection at greater distances? In our case, we found that the Blue/Red balance, for instance, makes a HUGE difference in detection reliability.
  3. Do you think PhotonVision loaded onto the LL would do a better job? That is, do you feel PhotonVision simply does better detection/botpose calculations?

Thanks.

What resolution are you running at? Higher resolutions will typically give better accuracy at a distance. Also, have you run calibration on your camera? I don’t think PhotonVision would make any noticeable difference, as you’d still be using the same camera hardware and AprilTag detection. Having more than one tag in view should also help stabilize your botpose: Robot Localization with MegaTag | Limelight Documentation.
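MegaTag fuses multiple tags internally, but as a rough illustration of why multiple tags in view stabilize the estimate: if per-tag error grows with distance, weighting closer tags more heavily reduces the fused error. A minimal sketch (hypothetical helper, not the Limelight API):

```python
# Sketch: distance-weighted fusion of per-tag pose estimates.
# Each estimate is (x_m, y_m, distance_to_tag_m); closer tags get
# more weight because their pose error is smaller.

def fuse_poses(estimates):
    """Return a distance-weighted average (x, y) of per-tag estimates."""
    weights = [1.0 / d for _, _, d in estimates]
    total = sum(weights)
    x = sum(w * e[0] for w, e in zip(weights, estimates)) / total
    y = sum(w * e[1] for w, e in zip(weights, estimates)) / total
    return x, y
```

The 1/distance weighting here is an arbitrary illustrative choice; a proper fusion (as in MegaTag or a WPILib pose estimator) weights by measurement covariance instead.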


Thank you for the suggestion! We hadn’t thought about calibration; we’ll definitely try it.

Re: same camera: I would run PhotonVision on a Pi with a different camera.

The problem with higher resolution is the added lag, but I’ll try it.