Did your team need to do anything special or any tricks to get PathPlanner to work? How much time did you spend tuning your module gains, your drivebase gains, and your PathPlanner gains?
Not my robot, but something I am seeing more and more of is pose lerping.
The constant coefficient seems to make a world of difference. We even had an awesome contributor add one to YAGSL this week and it worked significantly better. I am pretty amazed by it, but it seems like a layer of tuning most teams never get to.
The one thing that made the biggest difference for us was switching from custom swerve logic to a swerve library (YAGSL). I believe its odometry measurements were a lot better than our own logic, especially over longer drive distances (like center field). It also tracked our position a lot better through a full match.
One thing that is very important and is missing from the above posts is data.
How can you be sure your autonomous path following is better without recording actual vs. measured end poses, graphs of wheel velocity and acceleration, etc., and changing only a single variable between runs?
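As one example, here is a minimal sketch of logging the estimated pose and wheel velocities to the on-robot data log so runs can be compared afterwards; the entry names and the way the pose and wheel speeds are fetched are assumptions, not any specific team's setup:

```java
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.util.datalog.DataLog;
import edu.wpi.first.util.datalog.DoubleArrayLogEntry;
import edu.wpi.first.wpilibj.DataLogManager;

public class AutoTuningLogger {
  private final DoubleArrayLogEntry poseEntry;
  private final DoubleArrayLogEntry wheelVelocityEntry;

  public AutoTuningLogger() {
    DataLogManager.start(); // writes a .wpilog file you can open later in a log viewer
    DataLog log = DataLogManager.getLog();
    poseEntry = new DoubleArrayLogEntry(log, "/tuning/estimatedPose");
    wheelVelocityEntry = new DoubleArrayLogEntry(log, "/tuning/wheelVelocitiesMps");
  }

  // Call this every loop during the auto run you are evaluating.
  public void log(Pose2d estimatedPose, double[] wheelVelocitiesMetersPerSecond) {
    poseEntry.append(new double[] {
        estimatedPose.getX(), estimatedPose.getY(), estimatedPose.getRotation().getRadians()});
    wheelVelocityEntry.append(wheelVelocitiesMetersPerSecond);
  }
}
```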
I'd be surprised if just switching to YAGSL or another pre-built library improved your odometry without changing other constants. There is likely some other tuning handle being abstracted away when you use a library like that.
Can you explain a bit (lot) more?
When trying to calculate your current pose, the pose estimator does an excellent job up to a point (again, you can tune this, but most teams don't). Teams should tune it if they're using vision to help odometry. Tuning vision is mostly guess-and-check on the standard deviations, with a scale factor depending on distance. Some teams get better stability by filtering out all estimated poses more than X distance away, and so on.
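Roughly, that guess-and-check can look like the sketch below, assuming a WPILib SwerveDrivePoseEstimator and a vision result that reports its capture timestamp and the distance to the nearest tag. The base standard deviations, the quadratic distance scaling, and the rejection cutoff are placeholder numbers you'd tune on your own robot.

```java
import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;

public class VisionFusion {
  private static final double MAX_TAG_DISTANCE_METERS = 4.0; // reject estimates taken farther than this
  private static final double BASE_XY_STD_DEV = 0.5;         // meters of trust at ~1 m from a tag
  private static final double BASE_THETA_STD_DEV = 0.9;      // radians; heading is usually trusted far less than the gyro

  private final SwerveDrivePoseEstimator poseEstimator;

  public VisionFusion(SwerveDrivePoseEstimator poseEstimator) {
    this.poseEstimator = poseEstimator;
  }

  public void addVisionMeasurement(Pose2d visionPose, double timestampSeconds, double distanceToTagMeters) {
    // Filter: throw away estimates taken too far from the tag, where noise dominates.
    if (distanceToTagMeters > MAX_TAG_DISTANCE_METERS) {
      return;
    }
    // Scale the standard deviations with distance (quadratically here) so
    // far-away estimates pull the pose less than close-up ones.
    double scale = distanceToTagMeters * distanceToTagMeters;
    poseEstimator.addVisionMeasurement(
        visionPose,
        timestampSeconds,
        VecBuilder.fill(BASE_XY_STD_DEV * scale, BASE_XY_STD_DEV * scale, BASE_THETA_STD_DEV * scale));
  }
}
```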
After that, a faster odometry period can improve your data even further, but this is mostly limited by the CAN bus rate and packet delivery speeds, so it is massively better with CTRE Swerve.
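A rough sketch of what a faster odometry loop can look like with a plain WPILib Notifier is below; the gyro and module-position suppliers are placeholders for however your drivetrain exposes them, and how much this helps depends on how quickly fresh data actually arrives over your CAN bus (CTRE's swerve API does this internally at a much higher rate with time-synced signals).

```java
import java.util.function.Supplier;

import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.kinematics.SwerveModulePosition;
import edu.wpi.first.wpilibj.Notifier;

public class FastOdometry {
  private final Notifier notifier;

  public FastOdometry(
      SwerveDrivePoseEstimator poseEstimator,
      Supplier<Rotation2d> gyroRotation,
      Supplier<SwerveModulePosition[]> modulePositions) {
    notifier = new Notifier(() -> {
      // Update odometry with the freshest gyro and module readings available.
      synchronized (poseEstimator) {
        poseEstimator.update(gyroRotation.get(), modulePositions.get());
      }
    });
  }

  public void start() {
    // 250 Hz instead of the default 50 Hz robot loop; only useful if the sensor
    // data actually refreshes that fast on your CAN bus.
    notifier.startPeriodic(1.0 / 250.0);
  }
}
```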
After that you can tune discretization to compensate for the system delay.
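The discretization step itself is a one-liner in WPILib; a minimal sketch, assuming the default 20 ms loop period as the time constant and treating it as its own tuning handle:

```java
import edu.wpi.first.math.kinematics.ChassisSpeeds;
import edu.wpi.first.math.kinematics.SwerveDriveKinematics;
import edu.wpi.first.math.kinematics.SwerveModuleState;

public class DiscretizedDrive {
  private static final double LOOP_PERIOD_SECONDS = 0.02; // default 20 ms loop; treat as a tuning handle

  public static SwerveModuleState[] toModuleStates(SwerveDriveKinematics kinematics, ChassisSpeeds target) {
    // discretize() corrects for the commanded speeds being held constant over a
    // whole loop period instead of applied continuously, which otherwise causes
    // translational drift while rotating and translating at the same time.
    ChassisSpeeds discretized = ChassisSpeeds.discretize(target, LOOP_PERIOD_SECONDS);
    return kinematics.toSwerveModuleStates(discretized);
  }
}
```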
After that you can tune your angular velocity coefficient to compensate for gyro system delay (not as necessary when you have a Pigeon on a CANivore).
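Here is a minimal sketch of one common way to apply such a coefficient, nudging the heading used in the field-relative conversion by the measured yaw rate times a tuned constant; the coefficient value and the yaw-rate argument are placeholders, not any particular library's API:

```java
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.kinematics.ChassisSpeeds;

public class AngularVelocityCompensation {
  // Tuned empirically; sign and magnitude depend on your gyro and loop timing.
  private static final double ANGULAR_VELOCITY_COEFFICIENT = 0.1;

  public static ChassisSpeeds fieldRelative(
      double vxMetersPerSecond,
      double vyMetersPerSecond,
      double omegaRadiansPerSecond,
      Rotation2d heading,
      double measuredYawRateRadPerSec) {
    // Nudge the heading used for the field-to-robot conversion by the measured
    // yaw rate times the coefficient, to account for the gyro reading lagging
    // slightly behind where the robot actually points.
    Rotation2d compensatedHeading =
        heading.plus(Rotation2d.fromRadians(measuredYawRateRadPerSec * ANGULAR_VELOCITY_COEFFICIENT));
    return ChassisSpeeds.fromFieldRelativeSpeeds(
        vxMetersPerSecond, vyMetersPerSecond, omegaRadiansPerSecond, compensatedHeading);
  }
}
```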
Then finally you can add ANOTHER coefficient like 4481 does to compensate for your code delay.
All of these are individual tuning steps which build on top of each other. If you mess with a lower step, all the higher steps will be out of sync, and if you need to replace a part like your gyro, you may need to redo all of it anyway.
This is my interpretation of it, anyway. I am sure there's a math-heavy explanation in here.
Is it like a "derivative" term of the drive PID that uses chassis speeds measured from the encoders?
I will bookmark it and see how to put it in our code.
Is this compensation factor different from just multiplying the velocity by delta time? Because from what I'm hearing, it sounds like the factor is compensating for the finite speed of the PoseEstimator by factoring in the time between frames, like movement lag compensation in games. I don't know how accurate WPILib's methods are, but could you use Timer.getFPGATimestamp(), updated every periodic() run, to get the approximate code period and use that as your compensation factor?
It is not different from that at all; in fact, it becomes even more explicit about that in the 2025 update.
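Here is a minimal sketch of that idea, measuring the loop period with Timer.getFPGATimestamp() and feeding it into the discretization step instead of a hard-coded 0.02 s; the class and method names are just for illustration:

```java
import edu.wpi.first.math.kinematics.ChassisSpeeds;
import edu.wpi.first.wpilibj.Timer;

public class MeasuredLoopPeriod {
  private double lastTimestampSeconds = Timer.getFPGATimestamp();

  // Call once per periodic() with the target speeds for that loop.
  public ChassisSpeeds discretizeWithMeasuredPeriod(ChassisSpeeds target) {
    double now = Timer.getFPGATimestamp();
    double measuredPeriod = now - lastTimestampSeconds; // roughly 0.02 s, but measured rather than assumed
    lastTimestampSeconds = now;
    return ChassisSpeeds.discretize(target, measuredPeriod);
  }
}
```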