New NetworkTables Key “botpose_wpired” - botpose, but with the origin at the right-hand side of the driverstation on the red side of the field.
New NetworkTables Key “botpose_wpiblue” - botpose, but with the origin at the right-hand side of the driverstation on the blue side of the field.
New JSON arrays: botpose_wpired and botpose_wpiblue
All of the above botposes are compatible with WPILib. They are also listed directly in the field-space visualizer.
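The WPILib-compatible botpose arrays contain six doubles: translation (X, Y, Z, in meters) followed by rotation (roll, pitch, yaw, in degrees). A minimal sketch of unpacking such an array, assuming it has already been fetched from NetworkTables (the retrieval shown in the comment is the standard WPILib pattern; the class and field names here are illustrative, not part of the Limelight API):

```java
// Sketch: unpack a Limelight botpose array (x, y, z in meters; roll, pitch, yaw in degrees).
// On a robot, the array would typically come from NetworkTables, e.g.:
//   double[] raw = NetworkTableInstance.getDefault()
//       .getTable("limelight").getEntry("botpose_wpiblue").getDoubleArray(new double[6]);
public class BotPose {
    public final double x, y, z;          // translation, meters
    public final double roll, pitch, yaw; // rotation, degrees

    public BotPose(double[] a) {
        if (a.length < 6) throw new IllegalArgumentException("expected at least 6 values");
        x = a[0]; y = a[1]; z = a[2];
        roll = a[3]; pitch = a[4]; yaw = a[5];
    }
}
```

From here the translation can be fed into a WPILib Pose2d or a pose estimator as usual.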
Easier access to 3D Data (Breaking Changes)
RobotPose in TargetSpace is arguably the most useful data coming out of Limelight OS with respect to AprilTags. Using this alone, you can perfectly align a drivetrain with an AprilTag on the field. Until now, this data has been buried in the JSON dump. In 2023.2, all 3D data for the primary in-view AprilTag is accessible over NT.
NetworkTables Key “campose” is now “camerapose_targetspace”
NetworkTables Key “targetpose” is now “targetpose_cameraspace”
New NetworkTables Key - “targetpose_robotspace”
New NetworkTables Key - “botpose_targetspace”
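Since "botpose_targetspace" gives the robot's pose in the primary tag's coordinate frame, alignment code can consume it directly instead of digging through the JSON dump. A hedged sketch of deriving simple alignment errors from such an array (the helper class is illustrative; it assumes the usual target-space convention where X is lateral, Y is vertical, and Z points out of the tag):

```java
// Sketch: derive simple alignment errors from a "botpose_targetspace" array
// (robot pose in the target's frame; x, y, z in meters, roll, pitch, yaw in degrees).
public class TagAlignment {
    /** Planar distance from robot to tag on the X/Z ground plane, meters. */
    public static double groundDistance(double[] botposeTargetspace) {
        double x = botposeTargetspace[0];
        double z = botposeTargetspace[2];
        return Math.hypot(x, z);
    }

    /** Yaw component, degrees - a candidate error input for a turn controller. */
    public static double yawError(double[] botposeTargetspace) {
        return botposeTargetspace[5];
    }
}
```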
The documentation has been updated to reflect these changes.
Neural Net Upload
Upload Teachable Machine models to the Limelight Classifier Pipeline. Make sure they are TensorFlow Lite EdgeTPU-compatible models. Upload the .tflite model and .txt label files separately. Intro to Teachable Machine
We do have an object model coming, but you are now free to train and upload your own detection models as well.
For advanced users:
For now, any custom models should have a 300x300 input dimension and should be based on SSD MobileNet V2 or SSD MobileNet V1. The default model that ships with LL is the EdgeTPU version of SSD MobileNet V2 (tf 1.0). Advanced users can check out the Google Colab Projects listed here
So I could be wrong, but I was under the impression that, from a WPILib standpoint, the origin is always at the right-hand, closest corner of the field.
This happens to work from the blue alliance side of the field. But if the red alliance side is as you say it is (with the possibility of -y values where the human player station is), it doesn’t align with the WPILib or PathPlanner origins. Is there any way we can have this align, or am I incorrect in my characterization of PathPlanner and WPILib?
We’re experimenting with the botpose, and we find that when the distance increases to about half-field, the pose estimates “dance” quite a bit - meaning the X, Y, Z values rapidly change by more than, say, 0.1 m, and the bot appears in different parts of the field.
Any suggestions on making the pose estimates more stable? We use a Limelight 2+.
I can share our current best settings if that would be helpful.
What resolution, downscaling, and capture settings are you using? The LL2+ has a fairly low maximum usable range at settings below 960x480 (or whatever the resolution setting around there is) with a downscaling of 2. At lower resolutions, the tag starts getting lost at around 15 ft or less.