Limelight 2023.2 - Easier 3D, Neural Net Upload

Making 3D easier than ever.

WPILib-compatible Botposes

Botpose is now even easier to use out-of-the-box.

  • New NetworkTables Key “botpose_wpired” - botpose, but with the origin at the right-hand side of the driver station on the red side of the field.
  • New NetworkTables Key “botpose_wpiblue” - botpose, but with the origin at the right-hand side of the driver station on the blue side of the field.
  • New JSON arrays - botpose_wpired and botpose_wpiblue

All of the above botposes are compatible with WPILib. They are also listed directly in the field-space visualizer.
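For reference, here's a minimal sketch of unpacking one of these arrays. The NetworkTables fetch is shown only as a comment; the unpacking assumes the documented array layout [x, y, z, roll, pitch, yaw] in meters and degrees:

```java
public class BotposeReader {
    // On the robot you would fetch the array with something like:
    //   double[] pose = NetworkTableInstance.getDefault()
    //       .getTable("limelight").getEntry("botpose_wpiblue")
    //       .getDoubleArray(new double[6]);
    // Here we just unpack a sample array.

    /** X position on the field, in meters. */
    public static double xMeters(double[] botpose) {
        return botpose[0];
    }

    /** Y position on the field, in meters. */
    public static double yMeters(double[] botpose) {
        return botpose[1];
    }

    /** Robot heading (yaw), in degrees. */
    public static double yawDegrees(double[] botpose) {
        return botpose[5];
    }

    public static void main(String[] args) {
        double[] sample = {1.8, 5.0, 0.0, 0.0, 0.0, 90.0}; // example values
        System.out.printf("x=%.2fm y=%.2fm yaw=%.1fdeg%n",
                xMeters(sample), yMeters(sample), yawDegrees(sample));
    }
}
```

From here the values map directly onto a WPILib Pose2d if you want to feed a pose estimator.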


Easier access to 3D Data (Breaking Changes)

RobotPose in TargetSpace is arguably the most useful data coming out of Limelight OS with respect to AprilTags. Using this alone, you can perfectly align a drivetrain with an AprilTag on the field. Until now, this data has been buried in the JSON dump. In 2023.2, all 3D data for the primary in-view AprilTag is accessible over NT.

  • NetworkTables Key “campose” is now “camerapose_targetspace”
  • NetworkTables Key “targetpose” is now “targetpose_cameraspace”
  • New NetworkTables Key - “targetpose_robotspace”
  • New NetworkTables Key - “botpose_targetspace”

The documentation has been updated to reflect these changes.
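As a sketch of how the new keys might be used for alignment: given a targetpose_robotspace array, you can derive a planar distance and bearing to the tag. The axis convention here (X+ forward, Y+ right) is an assumption - check the coordinate-system docs for the exact frame definitions:

```java
public class TagAlignment {
    // On the robot, read the "targetpose_robotspace" double[6] from the
    // "limelight" NetworkTable; a hypothetical sample array is used below.

    /** Planar distance from robot center to the tag in meters, ignoring height. */
    public static double distanceMeters(double[] targetPose) {
        return Math.hypot(targetPose[0], targetPose[1]);
    }

    /** Bearing to the tag in degrees, assuming X+ forward and Y+ right. */
    public static double bearingDegrees(double[] targetPose) {
        return Math.toDegrees(Math.atan2(targetPose[1], targetPose[0]));
    }

    public static void main(String[] args) {
        double[] sample = {2.0, 0.0, 0.5, 0.0, 0.0, 0.0}; // tag 2 m straight ahead
        System.out.printf("distance=%.2fm bearing=%.1fdeg%n",
                distanceMeters(sample), bearingDegrees(sample));
    }
}
```

Feeding the bearing into a simple turn controller is one way to get the drivetrain squared up with the tag.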

Neural Net Upload

Upload Teachable Machine models to the Limelight Classifier Pipeline. Make sure they are TensorFlow Lite EdgeTPU-compatible models. Upload the .tflite model and .txt label files separately.
Intro to Teachable Machine

We do have an object detection model coming, but you are now free to train and upload your own detection models as well.

For advanced users:
For now, any custom models should have a 300x300 input dimension and should be based on SSD MobileNet V2 or SSD MobileNet V1. The default model that ships with LL is the EdgeTPU version of SSD MobileNet V2 (tf 1.0). Advanced users can check out the Google Colab Projects listed here

Pinging some of the users who requested these features: @mjansen4857 @mray190 @Bmongar @UnofficialForth @randomstring


So I could be wrong, but I was under the impression that, from a WPILib standpoint, the origin is always at the right-hand corner of the field closest to you.

This happens to work from the blue alliance side of the field. But if the red alliance side is as you say it is (with the possibility of having -y values where the human player station is), it doesn’t align with the WPILib or PathPlanner origins. Is there any way we can have these align, or am I incorrect in my characterization of PathPlanner and WPILib?


I didn’t explain that clearly enough, but basically you will never have negative x or y values.

botpose_wpiblue and botpose_wpired match this image from the WPILib docs exactly.
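Since the two frames are related by a 180° rotation about the field center, converting between them is a one-liner. A sketch, assuming the 2023 field dimensions of 16.54 m × 8.02 m (an assumption - check the field drawings); Limelight publishes both keys, so normally you'd just read the one you want:

```java
public class OriginConvert {
    // Assumed 2023 field dimensions in meters.
    static final double FIELD_LENGTH = 16.54;
    static final double FIELD_WIDTH  = 8.02;

    /** Converts [x, y, yawDeg] from the wpired frame to the wpiblue frame:
     *  a 180-degree rotation about the field center. */
    public static double[] redToBlue(double x, double y, double yawDeg) {
        return new double[]{FIELD_LENGTH - x, FIELD_WIDTH - y, yawDeg + 180.0};
    }

    public static void main(String[] args) {
        double[] blue = redToBlue(1.0, 1.0, 0.0);
        System.out.printf("blue frame: x=%.2f y=%.2f yaw=%.1f%n",
                blue[0], blue[1], blue[2]);
    }
}
```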


We’re experimenting with botpose, and we find that when the distance increases to about half a field, the pose estimates “dance” quite a bit - meaning the X, Y, and Z values rapidly change by more than, say, 0.1 m, and the bot appears in different parts of the field.
Any suggestions on making the pose estimates more stable? We use a Limelight 2+.
I can share our current best settings if that would be helpful.


I think this post might have some of the information you are looking for. I hope it helps!


Yes, we did all that already 🙂 and we still lose the pose.

I can record a video of what it looks like when that happens, if that would help.

So, any suggestions will be much appreciated.

What resolution, downscaling, and capture settings are you using? The LL2+ has a fairly low maximum usable range at settings below 960x480 (or whatever the resolution setting around there is) and a downscale of 2. At lower resolutions, the tag starts getting lost around 15 ft or less.
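Beyond camera settings, some teams also gate the estimates in software. A minimal sketch (the 0.5 m threshold is made up for illustration): reject any new botpose that jumps implausibly far from the last accepted one.

```java
public class PoseGate {
    private double lastX = Double.NaN, lastY = Double.NaN;
    private final double maxJumpMeters;

    public PoseGate(double maxJumpMeters) {
        this.maxJumpMeters = maxJumpMeters;
    }

    /** Returns true and records (x, y) if the new estimate is accepted. */
    public boolean accept(double x, double y) {
        if (Double.isNaN(lastX)
                || Math.hypot(x - lastX, y - lastY) <= maxJumpMeters) {
            lastX = x;
            lastY = y;
            return true;
        }
        return false; // implausible jump; keep the previous pose
    }

    public static void main(String[] args) {
        PoseGate gate = new PoseGate(0.5);
        System.out.println(gate.accept(2.0, 3.0)); // first sample: accepted
        System.out.println(gate.accept(2.1, 3.1)); // small move: accepted
        System.out.println(gate.accept(6.0, 1.0)); // half-field jump: rejected
    }
}
```

In practice you'd also want a timeout so the gate re-seeds itself if the robot legitimately moves far while vision is lost; WPILib's pose estimator with measurement standard deviations is the more principled alternative.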


I set the resolution to the highest one there (I think 1024x768 or greater).

I’m seeing a fair amount of 60Hz flicker on both the LL2+ and LL3 if the exposure is turned below ~800. Is there a way this can be mitigated through the use of anti-flicker settings in the camera?