The latest version of LimelightLib includes a new PoseEstimate.timestampSeconds field, and the latest Limelight documentation passes that into WPILib's pose estimator.
The CTRE swerve documentation says you must use a timestamp whose epoch is FPGA startup.
And the current example just sends in Timer.getFPGATimestamp().
Based on what I can see of how LimelightLib's timestampSeconds works, it uses NetworkTableEntry.getLastChange() and adjusts for latency. I think this is also measured against the FPGA epoch and therefore should be safe to pass to the CTRE vision logic.
I think the code you got from the @Brandon_Hjelstrom Limelight release needs to be viewed with MASSIVE disclaimers. In general, many of the "simple" things the Limelight produces just work, and they work consistently. That leads many people to believe, on reputation alone, that code like this will also "just work" when the results really aren't there.
Blindly trusting any pose that sees 2 or more tags has proven (as I expected) very inconsistent, often passing along poses that are off by a meter or more.
I encourage people who are trying this to plot the measurement poses it is passing along on your Field2d; you will see how rough they can be at times.
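For anyone who wants to try that, here is a minimal robot-code sketch (class and method names are mine, not from the thread) of publishing each vision pose to a Field2d object so you can eyeball the noise on Glass or Shuffleboard:

```java
// Sketch only: assumes standard WPILib Field2d/SmartDashboard APIs.
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.wpilibj.smartdashboard.Field2d;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class VisionPlotter {
    private final Field2d field = new Field2d();

    public VisionPlotter() {
        // Publishes the field widget once; poses update in place afterward.
        SmartDashboard.putData("Field", field);
    }

    /** Call with each vision pose before (or instead of) feeding it to
     *  addVisionMeasurement, so you can see how scattered the estimates are. */
    public void plotVisionPose(Pose2d visionPose) {
        field.getObject("visionPose").setPose(visionPose);
    }
}
```

If the plotted poses jump around by a meter while the robot sits still, that is exactly the inconsistency being described above.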
If you are sitting still in front of the speaker, sure, this is fine, but for anything more complicated I would look into doing significantly more tuning.
You say "CTRE vision logic", and someone from CTRE can correct me if I am wrong, but I don't believe CTRE applies any logic of its own to the vision measurements.
I am aware of the challenges with updating odometry based on vision. I was just interested in the seemingly easy way of including latency in the calculation.
Yes, prior to the new LimelightLib changes, teams were supposed to use (from the LL site) Timer.getFPGATimestamp() - (tl/1000.0) - (cl/1000.0); now it is done for you in the helper.
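That old formula is just arithmetic on the FPGA clock: take the current FPGA time and back out the pipeline latency (tl) and capture latency (cl), both reported in milliseconds. A hedged sketch (class and method names are mine) of what the helper computes:

```java
public class VisionTimestamp {
    /**
     * Estimated capture time on the FPGA clock, suitable for
     * addVisionMeasurement.
     *
     * @param fpgaNowSeconds current FPGA time in seconds (Timer.getFPGATimestamp())
     * @param tlMs           pipeline latency in milliseconds
     * @param clMs           capture latency in milliseconds
     */
    public static double capture(double fpgaNowSeconds, double tlMs, double clMs) {
        return fpgaNowSeconds - (tlMs / 1000.0) - (clMs / 1000.0);
    }

    public static void main(String[] args) {
        // e.g. FPGA time 10.0 s with 20 ms pipeline + 30 ms capture latency
        System.out.println(capture(10.0, 20.0, 30.0)); // roughly 9.95
    }
}
```

The point of the subtraction is that the pose estimator needs the time the frame was *captured*, not the time the result arrived over NetworkTables.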
To answer my own question, SwerveDrivetrain from CTRE includes constructors that set the odometry and vision standard deviations. The default constructor sets them to the following, respectively:
VecBuilder.fill(0.1, 0.1, 0.1),
VecBuilder.fill(0.9, 0.9, 0.9),
I think this means it will still incorporate the vision data's estimate of the robot's rotation. If you want to use the values from the Limelight documentation instead, you need to change the constructor of your drivetrain class to set these how you want.
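As a sketch of what that might look like (this assumes the 2024 Phoenix 6 SwerveDrivetrain overload that accepts the two standard-deviation matrices, and the Limelight docs' suggestion of a huge theta std dev to effectively ignore the vision heading; check the signature against your installed version):

```java
// Sketch only: constructor signature and std-dev values are assumptions,
// verify against your Phoenix 6 version and the Limelight docs.
import com.ctre.phoenix6.mechanisms.swerve.SwerveDrivetrain;
import com.ctre.phoenix6.mechanisms.swerve.SwerveDrivetrainConstants;
import com.ctre.phoenix6.mechanisms.swerve.SwerveModuleConstants;
import edu.wpi.first.math.VecBuilder;

public class CommandSwerveDrivetrain extends SwerveDrivetrain {
    public CommandSwerveDrivetrain(SwerveDrivetrainConstants driveTrainConstants,
                                   SwerveModuleConstants... modules) {
        super(driveTrainConstants,
              250, // odometry update frequency (Hz)
              VecBuilder.fill(0.1, 0.1, 0.1),    // odometry std devs (x, y, theta)
              VecBuilder.fill(0.7, 0.7, 9999999), // vision std devs: effectively ignore vision heading
              modules);
    }
}
```

With the theta std dev that large, the estimator keeps using the gyro for heading and only lets vision correct x and y.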
Now I need to read more about what these actually do.