Robot Localization

Hey CD,

In preparation for the upcoming season, our team has been focusing on areas for improvement identified from last year. This year they’ve done a great job, and we’re at the end of our list: robot localization.

For context, last year our robot utilized PhotonVision for AprilTag detection, YAGSL for our swerve drive, and the built-in SwerveDrivePoseEstimator. That was a success, and we wouldn’t feel too bad if we needed to head down that path again. But with ~2 months left until kickoff, I thought it would be a good idea to teach the kids what’s going on behind the scenes and look for improvements.

The biggest improvement I’d like to see the kids make is to pull more sensor data into their localization method. To my knowledge, there isn’t a good way to plug data from sensors like an IMU into the SwerveDrivePoseEstimator (but please correct me if this is wrong). There were times last year when we took a good hit and our estimated pose would either take a while to recover or never recover at all.

In doing some research, I feel there are a few paths we could dig into. Here’s what I’ve come up with as options, with pros/cons and a rough code sketch after each, but I’d love any input other teams have. I’m not an expert in this area, so corrections are always welcome.

Continue to use the built-in SwerveDrivePoseEstimator.

Pros:

  • A straightforward way to get fairly accurate results.
  • Can use the built-in framework without much added work.
  • Runs well directly on the Rio.
  • API plays well with the rest of the robot framework (PhotonLib for example).

Cons:

  • Hard to extend to incorporate other sensor data.
  • A bit of a “black box”: it’s hard to troubleshoot if you haven’t covered the underlying details with the kids.
  • Potentially better performance to be had through other approaches.
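
To ground the comparison, here’s roughly the usage pattern we had last year (a sketch from memory, not our actual code; the module geometry and std dev numbers are placeholders):

    import edu.wpi.first.math.VecBuilder;
    import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
    import edu.wpi.first.math.geometry.Pose2d;
    import edu.wpi.first.math.geometry.Rotation2d;
    import edu.wpi.first.math.geometry.Translation2d;
    import edu.wpi.first.math.kinematics.SwerveDriveKinematics;
    import edu.wpi.first.math.kinematics.SwerveModulePosition;

    public class Localization {
      // Module locations relative to robot center; placeholder geometry.
      private final SwerveDriveKinematics kinematics = new SwerveDriveKinematics(
          new Translation2d(0.3, 0.3), new Translation2d(0.3, -0.3),
          new Translation2d(-0.3, 0.3), new Translation2d(-0.3, -0.3));

      private final SwerveDrivePoseEstimator estimator = new SwerveDrivePoseEstimator(
          kinematics,
          new Rotation2d(),                  // initial gyro angle
          getModulePositions(),              // initial wheel positions
          new Pose2d(),                      // initial pose
          VecBuilder.fill(0.05, 0.05, 0.01), // odometry std devs: x (m), y (m), theta (rad)
          VecBuilder.fill(0.5, 0.5, 0.5));   // vision std devs: x (m), y (m), theta (rad)

      // Every loop: dead-reckon from the gyro heading and wheel positions.
      public void periodic(Rotation2d gyroAngle) {
        estimator.update(gyroAngle, getModulePositions());
      }

      // Whenever a tag-based pose arrives (e.g., from PhotonVision),
      // timestamped to when the frame was captured.
      public void onVision(Pose2d visionPose, double timestampSeconds) {
        estimator.addVisionMeasurement(visionPose, timestampSeconds);
      }

      private SwerveModulePosition[] getModulePositions() {
        // Placeholder: read real distances/angles from the modules here.
        return new SwerveModulePosition[] {
            new SwerveModulePosition(), new SwerveModulePosition(),
            new SwerveModulePosition(), new SwerveModulePosition()};
      }
    }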

Utilize a Kalman Filter (or some flavor of it).

Pros:

  • Efficient and could run well on the Rio.
  • Theoretically the “best” you can do (if implemented correctly).
  • Common in the industry so there’s good resources out there.
  • Can provide very solid results if designed and tuned correctly.

Cons:

  • A fairly complicated topic to teach to kids who haven’t been exposed to linear algebra and probability, especially as you get into the more advanced (Extended) filters.
  • Has some theoretical assumptions (Gaussian noise, a unimodal belief state, linear models) that may require you to jump to those advanced versions.
  • Can be difficult to design (mapping multiple sensor inputs to the matrices that represent your model).
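
For anyone curious what the math boils down to, here’s a teaching-sized sketch: a 1D (scalar) Kalman filter that estimates position from integrated velocity plus a noisy position measurement. All the noise constants are made up:

    // Minimal 1D Kalman filter sketch: predict from a velocity input,
    // correct with a noisy position measurement. Constants are illustrative.
    public class ScalarKalmanFilter {
      private double x = 0.0;  // state estimate (position, meters)
      private double p = 1.0;  // estimate variance
      private final double q;  // process noise variance per step
      private final double r;  // measurement noise variance

      public ScalarKalmanFilter(double processNoise, double measurementNoise) {
        this.q = processNoise;
        this.r = measurementNoise;
      }

      // Predict: integrate odometry velocity; uncertainty grows.
      public void predict(double velocity, double dtSeconds) {
        x += velocity * dtSeconds;
        p += q;
      }

      // Correct: blend in a position measurement; uncertainty shrinks.
      public void correct(double measuredPosition) {
        double k = p / (p + r);          // Kalman gain: 0 = ignore, 1 = trust fully
        x += k * (measuredPosition - x); // move toward the measurement
        p *= (1.0 - k);                  // reduce variance
      }

      public double getEstimate() {
        return x;
      }
    }

The state/vision std dev vectors the built-in estimator takes play the same roles as q and r here, just per-axis.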

Utilize a Particle Filter.

Pros:

  • Much simpler to teach and for the kids to understand.
  • Easier to pull in sensor data: each sensor can “vote” on the aspects of each particle it can observe.
  • Gets around some of the constraints Kalman filters exhibit (non-linear behavior, Gaussian noise, etc.).

Cons:

  • Far less efficient than other methods, and would probably require running on a coprocessor.
  • Potential latency because of incorporating the coprocessor.
  • Can converge on an incorrect solution and have a difficult time recovering from it.
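
And to show the “voting” idea, a toy 1D particle filter sketch (particle count and noise values are arbitrary, and there’s no protection against weight underflow):

    import java.util.Random;

    // Toy 1D particle filter: particles carry a position hypothesis, each
    // measurement "votes" by reweighting, then we resample.
    public class ToyParticleFilter {
      private static final int N = 500;
      private final double[] particles = new double[N];
      private final double[] weights = new double[N];
      private final Random rng = new Random();

      public ToyParticleFilter(double initialPosition) {
        for (int i = 0; i < N; i++) {
          particles[i] = initialPosition + rng.nextGaussian(); // spread initial belief
          weights[i] = 1.0 / N;
        }
      }

      // Motion update: move every particle by the odometry delta plus noise.
      public void predict(double deltaPosition) {
        for (int i = 0; i < N; i++) {
          particles[i] += deltaPosition + 0.05 * rng.nextGaussian();
        }
      }

      // Measurement update: weight each particle by how well it explains the
      // measurement (Gaussian likelihood), normalize, then resample.
      public void correct(double measuredPosition, double measurementStdDev) {
        double sum = 0.0;
        for (int i = 0; i < N; i++) {
          double err = particles[i] - measuredPosition;
          weights[i] *= Math.exp(-0.5 * err * err / (measurementStdDev * measurementStdDev));
          sum += weights[i];
        }
        for (int i = 0; i < N; i++) {
          weights[i] /= sum;
        }
        resample();
      }

      // Low-variance resampling: clone likely particles, drop unlikely ones.
      private void resample() {
        double[] newParticles = new double[N];
        double step = 1.0 / N;
        double offset = rng.nextDouble() * step;
        double cumulative = weights[0];
        int i = 0;
        for (int m = 0; m < N; m++) {
          double target = offset + m * step;
          while (target > cumulative && i < N - 1) {
            cumulative += weights[++i];
          }
          newParticles[m] = particles[i];
        }
        System.arraycopy(newParticles, 0, particles, 0, N);
        java.util.Arrays.fill(weights, 1.0 / N);
      }

      // Estimate: weighted mean of the particles.
      public double getEstimate() {
        double mean = 0.0;
        for (int i = 0; i < N; i++) {
          mean += particles[i] * weights[i];
        }
        return mean;
      }
    }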

Based on this I think we’ll do some experimentation over the next few weeks to see what works best. But before diving into that I’d love to hear what experiences other people have had in this area. Again, I’m not an expert here so any input would be appreciated!


From what I understand, the existing pose estimators use a Kalman filter.


I think you’re correct. Maybe a better way to state the difference between the two options is that the built-in pose estimator has a nice framework around it that makes working with the underlying Kalman filter easy, compared to rolling your own, where you’re free to implement your model however you want but then bear the burden of ensuring its correctness.

The SwerveDrivePoseEstimator isn’t a black box, as all the source is included and you can step in and look at anything you want.

Also, as for being hard to extend for other sensors: just use addVisionMeasurement if you have another way to produce a pose, and use whatever standard deviations you want with it. It doesn’t have to be vision.
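
For example (the wrapper and std dev values below are placeholders), anything that produces a full field pose can be fused this way:

    import edu.wpi.first.math.VecBuilder;
    import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
    import edu.wpi.first.math.geometry.Pose2d;

    public class OtherSensorFusion {
      // Fuse a full-field pose from any source (not just vision) into the
      // estimator. The std devs (x m, y m, theta rad) are placeholders;
      // bigger numbers mean the estimator trusts the measurement less.
      public static void fuse(SwerveDrivePoseEstimator estimator,
                              Pose2d measuredPose,
                              double captureTimestampSeconds) {
        estimator.addVisionMeasurement(
            measuredPose,
            captureTimestampSeconds,
            VecBuilder.fill(0.9, 0.9, 0.9));
      }
    }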


Let me rephrase that. It’s friendly enough to use that it’s tempting to just plug in the data and go on your way without understanding what’s actually going on under the hood. This can cause issues while troubleshooting as you’re trying to get up to speed on what’s happening while also fixing the issue. Admittedly this is a universal problem with any “COTS” library and could be fixed by digging in and teaching the basics prior to the season.

That method takes in a Pose2d, which is a position (Translation2d) and heading (Rotation2d). My thought was to incorporate things like velocity and acceleration into the filter, which I’m not sure how you could do with the current framework. Right now it seems you’d have to keep track of this data elsewhere and translate it into a Pose2d, which seems to give up some of the power of using a filter.

Please correct me if I’m wrong. If I’m overlooking something simple that would definitely make life easier.


The main thing I want from the existing pose estimators is being able to dynamically change the std dev of wheel odometry measurements, like I can for vision measurements, so that I can trust the wheel odometry less the longer it’s been since a vision pose update.


Ideally, wouldn’t this type of behavior be inherent to Bayesian filters? The longer you go without a position update, the larger your position variance gets. Maybe this is an artifact of using wheel odometry to provide an actual position when really it should just be providing heading/velocity information.

For anyone visiting this in the future, I want to suggest checking out this book/repository as a fantastic resource to get up to speed on this topic.

I’ve been walking through the chapters in preparation to get our kids up to speed, and honestly I think this will become a part of our “must read” for kids working on programming.


This is correct. The odometry measurement noise is constant (which is why we’ve rejected issues asking to make it configurable), but the error covariance increases over time, such that CV pose measurements have a larger effect on the state estimate.

WPILib used to use an unscented Kalman filter with a good process model, bad measurement model (wasn’t a camera model), and latency compensation that replayed measurements and recomputed the past error covariances. The roboRIO wasn’t fast enough to run it. Instead, WPILib now uses a very poor linear process model and steady-state Kalman gains: allwpilib/wpimath/algorithms.md at main · wpilibsuite/allwpilib · GitHub
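
Concretely, the per-axis vision gain works out to roughly the following (transcribed from memory of the source, so double-check algorithms.md for the exact form and derivation):

    public class VisionGainSketch {
      // Sketch of the steady-state per-axis vision gain in the pose
      // estimator (closed-form Kalman gain for A = 0, C = I). q is the
      // odometry variance and r is the vision variance for one axis
      // (x, y, or theta).
      public static double visionGain(double odomStdDev, double visionStdDev) {
        double q = odomStdDev * odomStdDev;
        double r = visionStdDev * visionStdDev;
        if (q == 0.0) {
          return 0.0; // perfect odometry on this axis: ignore vision entirely
        }
        return q / (q + Math.sqrt(q * r));
      }
    }

Each vision update then moves the odometry pose recorded at the measurement’s timestamp toward the vision pose by that fraction.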

We’ll have to wait for 2027 to get an accurate solution teams can run in real-time.


Is this something one could run on a coprocessor right now, or is the latency required for odometry lower than the latency achieved using a coprocessor and NetworkTables?

I thought SwerveDrivePoseEstimator must take a gyro (IMU) measurement for rotation when you update it. (Maybe some teams without an IMU add up the Twist2d from the odometry to estimate rotation instead, but that isn’t the recommended approach.)

There’s R&D to do more complex stuff than that on a coprocessor: GitHub - mcm001/gtsam-playground


You’re probably correct. My point there was to incorporate the acceleration data into the filter, not just the heading from the gyro.

3512 did that in their 2020 robot code. Here’s the measurement model. The acceleration was so noisy that the optimal filter contribution was essentially nothing.

The more complex stuff also happens to magically deal with inserting new measurements somewhere in the past through the magic of iSAM, which is pretty fantastic. Definitely need more R&D on this project though - still looking for someone to own these gtsam projects. :)

Good to know!

I know there’s risk in including higher and higher orders in your model, but considering you can directly measure acceleration, I’m surprised it performed poorly.

Sounds like something we’ll get to test as we explore this topic.

We are still planning on starting to experiment with it, just not sure how far we’ll get.


For those that use a custom Kalman filter, how have you found the performance? We’re curious if we can rely on running this on the Rio.

Are you able to update each periodic tick, or do you have to limit it to something like every 10th tick?

For Java, I have no clue. However, in 2020, 3512 ran a 7-state, 2-input, 5-output unscented Kalman filter and an LTV differential drive controller on a roboRIO v1 at 200 Hz (real-time C++). Combined, they took 1.32 ms of the allocated 1.5 ms timeslice, with most of it being the estimator.

Latency compensation was very expensive for old computer vision measurements. We probably should have used a history buffer smaller than 1.5 s.

3512 switched to the WPILib odometry class in 2022 because the pose estimate quality was similar to what we were using before (encoders and IMUs have very low noise, so those measurements were almost entirely driving the Kalman filter model anyway).


Something @pietroglyph has mentioned, but I’ve never actually had time to play with, is robust noise models for unmodeled disturbances.