Personal Library Development (Improving Upon WPILib)

I wrote this small library for my students because there are many classes and methods in WPILib that I think could be written to have a higher degree of generality. My library can replace those parts of WPILib. Additionally, my odometry and pose estimation algorithms operate in 3D space so that they will function correctly when driving over the charging station. I also employ the algorithm from my paper in this library.

I’m not saying that WPILib is bad; I just think there are certain parts of it that I can do better. In fact, I use WPILib classes in my code. That’s why the development project is an empty WPILib project with the library directory at java\frc\robot.

At the time of posting this, all of my code is completely untested as I do not have immediate access to a robot. There may be errors and I would appreciate feedback.


The Timer class seems a bit bizarre. What does this offer that the WPILib Timer does not?

Notable features of the library include:

  • 3D robot positioning
  • Integral approximation via either a Riemann sum or linear interpolation
  • A Timer class which can measure the time between calls or the time since an event (basically the WPILib Timer but with a “getDT()” method)
  • Weighted sensor fusion without a state-space model of the system
  • Flexibility around the measurements used in pose/rotation measurements
  • Where WPILib has a set of odometry classes for different kinds of drivetrains, my library has one odometry class which works for every drivetrain.
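For readers unfamiliar with the distinction in the integration bullet: a left Riemann sum holds each sample constant over its timestep, while linearly interpolating between consecutive samples is the trapezoidal rule. A minimal sketch of the arithmetic (hypothetical method names, not the library's actual API):

```java
public class IntegralDemo {
    // Left Riemann sum: assumes the signal is constant over each interval.
    static double riemann(double[] samples, double dt) {
        double sum = 0.0;
        for (int i = 0; i < samples.length - 1; i++) {
            sum += samples[i] * dt;
        }
        return sum;
    }

    // Trapezoidal rule: linearly interpolates between consecutive samples.
    static double trapezoid(double[] samples, double dt) {
        double sum = 0.0;
        for (int i = 0; i < samples.length - 1; i++) {
            sum += 0.5 * (samples[i] + samples[i + 1]) * dt;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Integrate v(t) = t over [0, 1] s with 0.1 s samples; exact answer is 0.5.
        double[] v = new double[11];
        for (int i = 0; i <= 10; i++) v[i] = 0.1 * i;
        System.out.println(riemann(v, 0.1));   // underestimates a ramp
        System.out.println(trapezoid(v, 0.1)); // exact for a linear signal
    }
}
```

For robot velocity signals that change roughly linearly between loop iterations, the trapezoidal form halves the truncation error at no real extra cost.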

I just posted a comment that explains the Timer class; I was a little too late.

I’m still confused; you can just reset the WPILib timer and achieve the same functionality.

I am a little confused, too. What would be the point of using something like the KalmanFilter present in your library over WPILib’s KalmanFilter?

I’ve changed the name from KalmanFilter to WeightedFusion to be more accurate. WPILib’s KalmanFilter relies on a state-space model of the drivetrain whereas this can fuse sensors with or without a system model. I wrote a whitepaper about this method of sensor fusion here. I honestly don’t know why I called the class KalmanFilter, I was just trying to replace the functionality of the KalmanFilter class and I guess I just kept the name.


The points that can be extrapolated from my paper are that the advantages of my method include:

  • Generality of application (e.g. my algorithm would work for fusing integrated accelerometer data with drivetrain data to estimate velocity)
  • Additional system parameters
  • Flexibility with measurement standard deviations (e.g., the standard deviation of a vision measurement increases as the robot gets farther from the vision target, so the standard deviation can be specified per measurement)

Yeah you could do that, I just thought it would be nicer to have a single call to do it instead. It’s really not a big deal; just my preference.

WPILib does everything I do, my code just does it better in my opinion. The Timer is an example where the advantage is close to negligible.


I know people can have different API design tastes, but I wanted to point out some parts of WPILib you may have overlooked that should address your perceived lack of features.

There’s a PR open to generalize the WPILib odometry and pose estimator classes to 3D, though some of the tests are failing still. The approach you took uses first-order Euler integration, which isn’t as accurate as using differential geometry with position deltas. WPILib’s 2D pose estimator does this already, to be clear; the PR just generalizes further.

NumericalIntegration (WPILib API 2023.2.1) has some higher order integration schemes if you need them. Admittedly, the API is cleaner in C++ because Java can’t express linear algebra well. We mainly use it internally for subsystem physics simulation classes and integrating dynamics in the nonlinear Kalman filter classes.

We’re looking at adding more for different use cases: Better numerical integration algorithms · Issue #4373 · wpilibsuite/allwpilib · GitHub

Actually, WPILib’s Kalman filter classes don’t have this restriction. WPILib’s KalmanFilter class can take any model of the form dx/dt = Ax + Bu. If you plug in a model with no dynamics (A = 0, B = I or 0, C = I, D = 0), you get a weighted average where the weights are defined by the standard deviations you plug in.
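To make that concrete, here is a scalar sketch of a single Kalman correction step with no dynamics (this is just the arithmetic, not WPILib's actual API): with nothing changing the state between measurements, the gain reduces exactly to inverse-variance weighting.

```java
public class WeightedAverageDemo {
    // One scalar Kalman correction step with no dynamics (the predict step
    // leaves the state unchanged; the measurement observes the state directly).
    // estVar is the current estimate variance, measVar the measurement variance.
    static double[] correct(double est, double estVar, double meas, double measVar) {
        double k = estVar / (estVar + measVar);  // Kalman gain
        double newEst = est + k * (meas - est);  // inverse-variance weighted average
        double newVar = (1.0 - k) * estVar;      // fused uncertainty shrinks
        return new double[] { newEst, newVar };
    }

    public static void main(String[] args) {
        // Fuse two measurements of the same quantity: 10.0 (variance 1) and 12.0 (variance 4).
        double[] fused = correct(10.0, 1.0, 12.0, 4.0);
        // Matches the closed-form weighted average (10/1 + 12/4) / (1/1 + 1/4) = 10.4.
        System.out.println(fused[0]);
    }
}
```

The same collapse happens in the matrix case with A = 0, C = I: the filter is then nothing more than a weighted average whose weights come from the covariances you supply.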

The KalmanFilter class assumes a constant Q and R, so the error covariance reaches a steady-state and you get a constant K. If you want to vary R or use a nonlinear model, use the ExtendedKalmanFilter or UnscentedKalmanFilter class instead.

The EKF and UKF classes take any dynamical model of the form dx/dt = f(x, u) and a measurement model of the form y = h(x, u). The process noise covariance matrix Q is constant but the measurement model and measurement noise covariance matrix R can vary with every call to correct() if you want.

Also, the pose estimator classes support variable measurement covariances in addVisionMeasurement() for precisely the use case of noisier pose estimates from farther away.

I noticed you have a Derivative class. FYI, WPILib added a more general version (LinearFilter.backwardFiniteDifference()) in January 2022. It can compute numerical derivatives of arbitrary orders with an arbitrary number of points (it’s a special case of a more general formula for finite differences that we also implement).
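As a sketch of what such a finite-difference stencil computes (coefficients hard-coded here rather than generated, so this is not the WPILib API itself): the three-point backward stencil for a first derivative is second-order accurate, and is exact for polynomials up to degree two.

```java
public class BackwardDifferenceDemo {
    // Three-point backward finite difference for the first derivative,
    // second-order accurate: f'(x) ≈ (3 f(x) − 4 f(x−h) + f(x−2h)) / (2h).
    static double derivative(double fx, double fxm1, double fxm2, double h) {
        return (3.0 * fx - 4.0 * fxm1 + fxm2) / (2.0 * h);
    }

    public static void main(String[] args) {
        double h = 0.02; // a typical 20 ms loop period
        double x = 1.0;
        // d/dx of x^2 at x = 1 is exactly 2; the stencil reproduces it.
        double d = derivative(x * x, (x - h) * (x - h), (x - 2 * h) * (x - 2 * h), h);
        System.out.println(d);
    }
}
```

A utility like `LinearFilter.backwardFiniteDifference()` generates these coefficients for any derivative order and sample count instead of hard-coding one stencil.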


As I said in a previous reply, I’m aware that WPILib does everything my code does, but in order to get the same functionality you have to take a lot of extra steps, which you have explained very well. If someone were to write the code based on the documentation alone, they wouldn’t get that functionality. My library isolates and generalizes that important functionality so the extra steps aren’t necessary.

Basically the goal of my mini library is to make higher functionality more accessible to the average FRC student.

As for the integration, the increase in accuracy is less than the typical standard error in drivetrain measurements, but using Euler integration means the odometry class can be generalized to all drive bases. I just prefer that generality.


Just a note: frc-docs is not intended to be an exhaustive reference of all of WPILib’s functionality. We have Java and C++ API documentation available for that purpose.

Ease of API discoverability is definitely something that we need to work on, though.


Differential geometry generalizes to all the drivebases too. That’s what the DifferentialDriveOdometry, SwerveDriveOdometry, and MecanumDriveOdometry classes use. You use forward kinematics to transform your wheel encoder measurements into chassis movement, then apply the pose exponential (Pose3d.exp()) to integrate it.
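A minimal 2D sketch of the difference, with the pose exponential written out from scratch for constant chassis speeds (names here are my own; WPILib's actual classes wrap this in `Twist2d`/`Pose2d.exp()`): Euler integration chains straight-line segments, while the pose exponential traces the circular arc exactly.

```java
public class TwistVsEulerDemo {
    // Integrate a constant chassis velocity (vx forward, omega yaw rate)
    // for time t with forward Euler. State: {x, y, heading}.
    static double[] euler(double vx, double omega, double dt, int steps) {
        double x = 0, y = 0, theta = 0;
        for (int i = 0; i < steps; i++) {
            x += vx * Math.cos(theta) * dt; // straight-line segment per step
            y += vx * Math.sin(theta) * dt;
            theta += omega * dt;
        }
        return new double[] { x, y, theta };
    }

    // 2D pose exponential for the same constant inputs (omega != 0):
    // the chassis traces a circular arc, so this is exact.
    static double[] exp(double vx, double omega, double t) {
        double dtheta = omega * t;
        double s = vx * t; // arc length
        double x = s * Math.sin(dtheta) / dtheta;       // chord geometry
        double y = s * (1 - Math.cos(dtheta)) / dtheta;
        return new double[] { x, y, dtheta };
    }

    public static void main(String[] args) {
        // 2 m/s forward while yawing 90 degrees over one second, 20 ms timesteps.
        double[] e = euler(2.0, Math.PI / 2, 0.02, 50);
        double[] g = exp(2.0, Math.PI / 2, 1.0);
        System.out.printf("euler: (%.4f, %.4f)%n", e[0], e[1]);
        System.out.printf("exact: (%.4f, %.4f)%n", g[0], g[1]);
    }
}
```

In real odometry the twist is computed per loop iteration from wheel deltas via forward kinematics, but the per-step principle is the same: each small motion is integrated as an arc rather than a line segment.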

We’ve wanted to refactor the pose exponential part to reduce code duplication, originally proposed by this issue a while ago: General Pose Estimator · Issue #3703 · wpilibsuite/allwpilib · GitHub


I think I didn’t explain my meaning correctly: the fact that there are different classes for the different drivetrains is what I mean by the lack of generality. My odometry code is more general than the WPILib counterpart in the sense that a single class handles it all.

Also, it’s just another preference thing, but I don’t really like that the WPILib odometry classes apply the kinematics. I’d prefer that to be done in user code so that the chassis velocity data can be fused with accelerometer data. That way, the accelerometer data only has to be integrated once to velocity rather than twice to position if a team wants to fuse it. My goal is for the system parameters to be in the same terms for each measurement, and accelerometer data isn’t in the form of a constant-curvature expression.

I don’t expect WPILib to change just because of my own personal nit-picks and preferences, that would be ridiculous on my part. That’s why I made my own code.

I think you missed my point. The pose exponential part of the odometry classes specifically is just as general as what you wrote because it’s working with the chassis movement instead of the wheel movement and it’s more accurate (it approximates with circular arcs instead of line segments).

For a curved trajectory, you’re looking at an integration error of 2 inches over a 6-foot run, which isn’t negligible. Here’s an old script I wrote demonstrating it and the figure it generates (to be fair, I labeled it poorly; it shows the relative error of Euler integration compared to twists).

Using position measurement deltas instead of velocity measurements helps cut down on the noise floor a lot, which is why we transitioned swerve odometry to that for 2023 (differential drive odometry was already using it).

Integrating acceleration from anything but really expensive IMUs tends to give really poor results. You don’t even need to single-integrate acceleration if you use a Kalman filter that includes drivetrain dynamics. 3512 tried this in 2020 to work around bad encoder odometry (poor wheel config on our part, and CV wasn’t working after months of trying):

UKF declaration: Robot-2020/Drivetrain.hpp at main · frc3512/Robot-2020 · GitHub
UKF predict() and correct() calls: Robot-2020/Drivetrain.cpp at main · frc3512/Robot-2020 · GitHub
Dynamics and measurement models: Robot-2020/DrivetrainController.cpp at main · frc3512/Robot-2020 · GitHub

We got the idea from 971’s 2019 drivetrain Kalman filter. The longitudinal acceleration measurement model is just the differential drive velocity dynamics Ax + Bu written into y = Cx + Du, and the lateral acceleration measurement is this.

We mounted the IMU to the bellypan, which ended up magnifying acceleration noise (bad). The measurement covariance for this was so high that the optimal contribution ended up being effectively zero. Proper shock mounting would probably have avoided this. Remember to mount the IMU at the center of the chassis to avoid a lateral acceleration bias you’d need an angular acceleration measurement to compensate for (fun fact: the ADIS16448 and ADIS16470 do this in their IMU firmware to shift the center of percussion to the corner of the sensor package).


Interesting. The next logical step would be to engineer the shock mounting to isolate frequencies that are too fast. What constitutes “too fast”? Is it a Nyquist thing? Do you want to block vibrations faster than sampling rate/2?


2 inches over 6 feet is negligible if you’re also using vision measurements, though it is admittedly more than I expected. I just think that between noise in the vision data and the occasional loss of traction of the drivetrain, the extra 2 inches of precision will be lost regardless. The next paragraph explains why my method can potentially be more accurate in accounting for the loss of traction with the accelerometer.

It gives poor results as long as your wheels are touching the ground, but with the sharp changes in elevation presented by the cable cover and charging station, the wheels may leave the ground, and then the accelerometer data from a sensor such as a Pigeon becomes very valuable by comparison. It doesn’t replace odometry, but in those special cases it can help if it’s included in the sensor fusion. This is a theory that I have tested using a Pigeon mounted to a piece of foam on the belly pan.

The trick is to not really integrate acceleration but to use the same math except integrate upon the previous velocity estimate and then use the result as an input to the weighted fusion along with the chassis velocities from encoders. This way the drivetrain data functions as an absolute measurement and the accelerometer data functions as a relative measurement. The result is that the estimated velocity will be accurate when there’s a loss of traction without drifting due to accelerometer error.
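A sketch of that idea as I read it (hypothetical names and a fixed blend weight standing in for the library's actual weighted fusion): each cycle, the accelerometer propagates the previous fused estimate forward, and the encoder velocity anchors it, so accelerometer bias never accumulates the way a pure double integration would.

```java
public class VelocityFusionDemo {
    // Propagate the PREVIOUS velocity estimate with the accelerometer
    // (a relative measurement), then blend with the encoder-derived chassis
    // velocity (an absolute measurement). encoderWeight in [0, 1] is the
    // trust placed in the encoders.
    static double fuse(double prevEstimate, double accel, double dt,
                       double encoderVel, double encoderWeight) {
        double accelPrediction = prevEstimate + accel * dt;
        return encoderWeight * encoderVel + (1.0 - encoderWeight) * accelPrediction;
    }

    public static void main(String[] args) {
        double est = 0.0, dt = 0.02;
        // Robot accelerates at 1 m/s^2; the wheels slip, so encoders read low.
        for (int i = 0; i < 50; i++) {
            double trueVel = (i + 1) * dt;     // 1 m/s after one second
            double encoderVel = 0.5 * trueVel; // 50% wheel slip
            est = fuse(est, 1.0, dt, encoderVel, 0.2);
        }
        System.out.println(est); // lands between the slipping encoders and the true 1 m/s
    }
}
```

In practice the blend weight would come from the measurement standard deviations rather than a constant, but the structure is the point: the accelerometer term corrects during traction loss while the encoder term pins down long-term drift.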

[edit] I added an example of this to the library under the examples directory.

How could your library be used to fuse together chassis speeds from encoders, vision data, and accelerometer measurements?

First I’d fuse chassis speeds and accelerometer data to get an estimate of velocity and then use that velocity in place of the raw drivetrain data to estimate position with vision.

Why re-implement the entire class to add a single function that just sugars two existing function calls?

Your implementation has other problems, but I want to focus on the higher-level mistake you’ve made. Re-engineering existing functionality like this is something you want to do as little as possible; it’s a waste of time and effort and can do active harm by violating the DRY principle at an ecosystem level.

Consider formatting these changes as modifications to the existing classes (either through subclassing or aliasing in the local namespace) rather than whole-cloth re-implementations. This way, you can upstream your improvements back to WPILib and share them through the existing ecosystem.