Understanding Kalman Filter behavior

Hi all,

I’m remotely helping out a team and trying to use the WPILib KalmanFilter/LinearSystemLoop classes to control a flywheel. I’m having trouble understanding some of the behavior I’m seeing. In short, it seems I have to artificially lower the measurement-error standard deviation far below the sensor’s actual noise to elicit any kind of feedback response. This is my first time working with a Kalman filter, so I’m probably missing something obvious. Please be patient with me :slight_smile:

Here’s the code. Some relevant files are:

We started by characterizing the flywheel with sysid. Here is the recorded data (renamed to .txt because CD does not allow uploading .json):
sysid_data20220119-171302.txt (2.7 MB)

We used LinearSystemId::IdentifyVelocitySystem with the reported Kv/Ka values to obtain a LinearSystem and used the Sim RMSE reported by sysid as our model standard deviation for our KalmanFilter. We also manually add sgn(input) * Ks to the input voltage generated by the LinearSystemLoop. Our system state is in rad/s. The physical mechanism is a Falcon 500 with no gearing and all velocity filtering disabled.
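For reference, the setup looks roughly like this (a condensed sketch, not our literal code: the gains and stddevs are placeholder values, and the exact vector/unit spellings vary slightly between WPILib versions):

```cpp
#include <Eigen/Core>
#include <frc/controller/LinearQuadraticRegulator.h>
#include <frc/estimator/KalmanFilter.h>
#include <frc/motorcontrol/MotorController.h>
#include <frc/system/LinearSystemLoop.h>
#include <frc/system/plant/LinearSystemId.h>
#include <units/angular_acceleration.h>
#include <units/angular_velocity.h>
#include <units/time.h>
#include <units/voltage.h>

// Placeholder constants; ours come from the sysid run.
static constexpr auto kV = 0.023_V / 1_rad_per_s;
static constexpr auto kA = 0.001_V / 1_rad_per_s_sq;
static constexpr double kS = 0.3;  // volts, added manually below

// Plant identified from Kv/Ka; the single state and measurement are both rad/s.
frc::LinearSystem<1, 1, 1> plant =
    frc::LinearSystemId::IdentifyVelocitySystem<units::radian>(kV, kA);

// Observer: the first stddev is the model/process term (we used sysid's Sim RMSE),
// the second is the measurement term (the value this whole post is about).
frc::KalmanFilter<1, 1, 1> observer{plant, {1.0}, {5.0}, 20_ms};

// LQR plus the combined loop, clamped to battery voltage.
frc::LinearQuadraticRegulator<1, 1> controller{plant, {8.0}, {12.0}, 20_ms};
frc::LinearSystemLoop<1, 1, 1> loop{plant, controller, observer, 12_V, 20_ms};

// Called every 20 ms with the setpoint and the Falcon's velocity, both in rad/s.
void RunLoop(double referenceRadPerSec, double measuredRadPerSec,
             frc::MotorController& motor) {
  loop.SetNextR(Eigen::Vector<double, 1>{referenceRadPerSec});
  loop.Correct(Eigen::Vector<double, 1>{measuredRadPerSec});
  loop.Predict(20_ms);

  // Static friction compensation: add sgn(u) * kS on top of the loop's output.
  double u = loop.U(0);
  u += (u > 0.0 ? kS : (u < 0.0 ? -kS : 0.0));
  motor.SetVoltage(units::volt_t{u});
}
```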

We initially made a rough guess of 5 rad/s for the measurement stddev, figuring that if we guessed too high we would just feed forward a constant voltage, reach a steady-state velocity, observe the (roughly Gaussian) measurement error distribution there, and make a better guess from that.

Here is what we saw:

Estimated state is flat, so we are only feeding forward, as expected. We eyeballed that all of the error fell within ±6 rad/s of the mean, treated that as roughly ±3σ, and chose a new measurement stddev of 2 rad/s.

Ignore that the estimated state stays flat after the measured state falls; this is just because we only update the SmartDashboard variable while the robot is enabled.
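Rather than eyeballing the spread, a quick helper like this (hypothetical, operating on velocity samples logged while holding a constant feedforward voltage) would give the sample standard deviation directly:

```cpp
#include <cmath>
#include <cstddef>
#include <numeric>
#include <vector>

// Hypothetical helper: sample standard deviation of velocity measurements
// (rad/s) logged at a steady-state setpoint.
double SampleStdDev(const std::vector<double>& samples) {
  const double mean =
      std::accumulate(samples.begin(), samples.end(), 0.0) / samples.size();
  double sumSq = 0.0;
  for (double v : samples) {
    sumSq += (v - mean) * (v - mean);
  }
  return std::sqrt(sumSq / (samples.size() - 1));
}
```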

Confusion started when we tried to fire a ball. Here is what we saw:

Estimated state stays flat despite a measurement error of ~200 rad/s. We began arbitrarily lowering our measurement error stddev until we finally started seeing a pronounced response at 0.1 rad/s.

So we could continue down this path and get acceptable results, but a big draw of using these classes was that we wouldn’t need to do this sort of manual tuning. The measurement stddev we have to set to get a good response seems totally divorced from the sensor’s actual performance. CDF[Gaussian[0, 0.1], -4] is so small it can’t even be represented in a double, yet we regularly see measurements ±4 rad/s from the mean at a steady-state velocity and the estimate doesn’t budge.
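As a standalone sanity check of that claim (just plain math, not robot code):

```cpp
#include <cmath>
#include <cstdio>

int main() {
  // P(X <= -4) for X ~ N(0, sigma = 0.1): the z-score is -40 and
  // Phi(z) = 0.5 * erfc(-z / sqrt(2)). erfc(~28.3) underflows a double,
  // so the computed probability is exactly 0 -- i.e. a 0.1 rad/s measurement
  // stddev claims a 4 rad/s error is essentially impossible.
  double sigma = 0.1;
  double z = -4.0 / sigma;  // -40
  double p = 0.5 * std::erfc(-z / std::sqrt(2.0));
  std::printf("P(X <= -4 | sigma = 0.1) = %g\n", p);  // prints 0
}
```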

I’ve read through rlabbe’s Kalman and Bayesian Filters in Python, but it hasn’t fully clicked yet. One guess is that I’m dealing with a “smug” filter (one that trusts its model so much it effectively ignores measurements), but I’m not sure how to deal with that in the context of the WPILib classes. Or maybe using the Sim RMSE as the model stddev was a mistake, but I’m not sure what the alternative would be.

Any help or suggestions would be tremendously appreciated. Because I’m helping this team out remotely it’s a bit hard to run new tests on the physical mechanism, but I can try to have them do it if needed.


The process noise is also determined by how large a disturbance you expect to see, so the sim RMSE alone won’t be enough; you need to account for the speed drops from balls going through as well. Choosing a good value for process noise is more art than science. Here’s what 3512 ended up with last year.
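In code terms, that means making the first (state/process) stddev argument to the KalmanFilter constructor much larger than the sim RMSE alone would suggest. A sketch with placeholder numbers (not 3512’s actual constants):

```cpp
// Sketch only: `plant` is the identified flywheel LinearSystem from earlier.
// The state (process) stddev is sized for the disturbances we expect -- a ball
// being fired causes a large, sudden speed drop -- while the measurement
// stddev reflects how noisy the sensor actually is.
frc::KalmanFilter<1, 1, 1> observer{
    plant,
    {80.0},  // state/process stddev, rad/s (placeholder; covers shot disturbances)
    {2.0},   // measurement stddev, rad/s (placeholder; from observed sensor noise)
    20_ms};
```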

The measurement noise was so low that it was mainly driven by the quadrature encoder’s resolution limits. It’s fairly common in FRC for the measurements to be so accurate relative to the model that they end up being the primary driver of the state estimate.
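For a sense of scale, here is a rough quantization calculation (the encoder CPR and velocity window below are assumptions for illustration, not a specific team’s configuration):

```cpp
#include <cstdio>

int main() {
  constexpr double kPi = 3.14159265358979323846;
  // Assumed numbers: a 2048 CPR quadrature encoder decoded 4x, with velocity
  // computed over a 100 ms window. The smallest velocity change the sensor
  // can report is then one count per window.
  constexpr double kCountsPerRev = 2048.0 * 4.0;
  constexpr double kWindowSeconds = 0.1;
  constexpr double kRadPerCount = 2.0 * kPi / kCountsPerRev;
  constexpr double kVelocityStep = kRadPerCount / kWindowSeconds;  // rad/s

  std::printf("velocity quantization step ~ %.4f rad/s\n", kVelocityStep);
}
```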


Pro tip: you’ll get much better graphs if you flush NetworkTables every periodic loop (add NetworkTableInstance::GetDefault().Flush(); at the end of RobotPeriodic in Robot.cpp).
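In context (assuming the standard TimedRobot skeleton), that looks like:

```cpp
// Robot.cpp (sketch): flushing every loop gives one plotted point per 20 ms
// iteration instead of NetworkTables' default batching (roughly every 100 ms).
#include <networktables/NetworkTableInstance.h>

void Robot::RobotPeriodic() {
  // ... existing telemetry / SmartDashboard updates ...
  nt::NetworkTableInstance::GetDefault().Flush();
}
```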

