Teams that calculated absolute position in the past (using encoders, gyros and vision)


For the 2020 season I want to try to calculate the absolute position of our robot using encoders, gyros, and possibly vision. In a simulation I've written, I have been able to calculate the position using simulated encoders and gyros. However, even in simulation, the estimate becomes less accurate over a long period of time. My guess is the real thing will be even less accurate.

One thing I would also like to do is make the calculation more robust by using vision data to reset an accumulated position once it starts to become inaccurate. I think I have a basic idea of how to do this: use your vision data and current position estimate to guess which vision target you are looking at, and reset your position based on that target's known location.
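To make that idea concrete, here is a rough sketch (all names and conventions are mine, not from any team's code): treat a pose as an {x, y, heading} array in field coordinates, project the camera's robot-relative measurement of the target into the field frame, pick the nearest known target, and then back out the robot position from that target's known location.

```java
// Hypothetical sketch of "guess the target, then reset" relocalization.
// Poses are {x, y, headingRadians}; camera offsets are robot-relative
// {forward, left} vectors. All names here are illustrative.
public class VisionRelocalize {

    /** Rotate a robot-relative offset into the field frame. */
    static double[] toFieldFrame(double[] pose, double[] offsetRobot) {
        double cos = Math.cos(pose[2]), sin = Math.sin(pose[2]);
        return new double[] {
            cos * offsetRobot[0] - sin * offsetRobot[1],
            sin * offsetRobot[0] + cos * offsetRobot[1]
        };
    }

    /** Where we think the observed target sits on the field. */
    public static double[] estimateTargetFieldPos(double[] pose, double[] offsetRobot) {
        double[] d = toFieldFrame(pose, offsetRobot);
        return new double[] { pose[0] + d[0], pose[1] + d[1] };
    }

    /** Guess which known field target we are looking at: the closest one. */
    public static int nearestTargetIndex(double[] seenPos, double[][] targets) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < targets.length; i++) {
            double dist = Math.hypot(targets[i][0] - seenPos[0],
                                     targets[i][1] - seenPos[1]);
            if (dist < bestDist) { bestDist = dist; best = i; }
        }
        return best;
    }

    /** Reset the pose: known target position minus the measured offset. */
    public static double[] resetPose(double[] pose, double[] offsetRobot, double[] target) {
        double[] d = toFieldFrame(pose, offsetRobot);
        return new double[] { target[0] - d[0], target[1] - d[1], pose[2] };
    }
}
```

Note this keeps the gyro heading and only resets x/y; if your camera solve gives you a full target-relative pose, you could fold heading into the reset as well.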

I’ve started to dive into this and I thought it would be helpful to see other teams’ code if they have done something similar. We used swerve during the 2019 season and will probably use it again in 2020, but even if you’ve done something similar with another type of drivetrain, I’d like to see it.

As a bonus, what have you guys used your absolute position for? Just the autonomous period, or automated maneuvers during the rest of the match?


We didn’t run any autonomous code in 2019, but using 2019 as an example, it would be very easy to “zero” your current position after either grabbing or placing a game piece using vision alignment. This removes any error accumulated from the previous movement. Of course, if you fail to align properly using vision, the rest of the auton may fail spectacularly. To guard against that, you could also implement a maximum rate of change, so that you don’t zero your position if the encoder-based odometry position differs too far from the vision-based position. We are working on implementing something like this for 2020.
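That "don't zero if it disagrees too much" check can be as simple as a distance threshold between the two position estimates. A minimal sketch (the class name and threshold are mine, purely illustrative):

```java
// Minimal sanity gate: only accept a vision-based position reset if it
// roughly agrees with the encoder-based odometry position. The threshold
// value is arbitrary and would need tuning on a real robot.
public class VisionGate {
    public static boolean accept(double odoX, double odoY,
                                 double visX, double visY,
                                 double maxJumpMeters) {
        // Reject the reset if the two estimates are too far apart.
        return Math.hypot(visX - odoX, visY - odoY) <= maxJumpMeters;
    }
}
```

A fancier version could scale the threshold with time since the last reset, since dead-reckoning error grows the longer you go without a correction.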

Edit: We also ran a robot position tracker based on encoder feedback in 2019. We never used it in a match, but it was pretty accurate, and it probably could have served as a primary driver feedback source along with a camera.

We used odometry (finding our absolute position on the field) throughout the autonomous sandstorm period in 2019 and used a combination of encoders, gyro, and our cameras in order to do this.

For most of the period, we used basic encoder-and-gyro odometry for path following. When we got close to our targets, we switched to a Vision controller in order to accurately place the hatch panel. At the end of the placement, our code assumed that the alignment completed successfully and relocalized (reset the position on the field) based on the absolute position of the vision target, the robot angle, and the distance between the intake and the center of the robot. This worked relatively well and helped eliminate some of the dead-reckoning odometry error from wheel slip / driving down the HAB, etc.

During the season, we did try to actively relocalize off of nearby vision targets while traversing a trajectory; however, this was too unreliable and we figured that we didn’t need it at all.

Let me know if you have any questions!

I guess this could be pretty unreliable while the robot is moving. I was thinking of resetting the position if the robot is close enough and is at a low enough speed. I’ll have to see how accurate this is on an actual robot.

Thanks for sharing your GitHub repo.


So, I’m not a programmer, but we gave this a try this year. It was largely using the gyro and accelerometer to approximate a position, and then used vision targets to re-localize when they come into view. If I remember correctly, the things that were killing us were gyro drift and the time lag of the vision data relative to the gyro data. I can bug the programmers for more detail if anyone is interested.

A Kalman filter is a mathematically rigorous way to accomplish this functionality.

DISCLAIMER: I have never successfully implemented one of these myself. What follows is a vague digest of a Wikipedia page, uninformed by first-hand knowledge.

Think of it as a probability question.

Rather than relying on a set of sensors to say exactly where you are at all times, split the problem up. Maintain a separate “estimate” of your current position. Use sensor readings and robot events to periodically update that estimate with observations provided by the sensors. For example:

At the start of the match, you know with near absolute certainty where you are. The autonomousInit() callback can reset your position estimate to that known location.

After that, every loop, you can use your gyro and encoder measurements to update that estimate of where you are. However, as you identified, these sensors have noise. So a reading of “1 ft forward” probably means you’re about a foot forward, but maybe slightly more or less, or slightly off to the side. This means you still have an estimate of where you are, but with less certainty.

However, when you pick up a gamepiece or identify a vision target, the location information imparted by that event again gives you that high certainty. The event should have such high certainty that it is trusted more than your history of sensor readings.

The Kalman filter mashes these ideas of “observations” and “trustworthiness” into standard probability theory notions and lays out the math to manipulate it. I see the challenge to using it as two parts:

  1. Understanding the math of the filter well enough to use it properly
  2. Converting disparate measurements (i.e., encoder ticks, gyro rotation units, vision target pixel x/y/size) into a single reference frame (i.e., drivetrain x/y/heading)
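To make the estimate-plus-certainty idea above concrete, here is a minimal one-dimensional Kalman filter sketch (my own illustration, not production code; a real robot pose filter tracks x, y, and heading together, and the noise values in the test below are made up):

```java
// Minimal 1-D Kalman filter: track a position estimate and a variance
// (how uncertain that estimate is). Encoder deltas move the estimate and
// grow the uncertainty; measurements (e.g. vision) shrink it.
public class Kalman1d {
    private double xEst;  // position estimate
    private double pVar;  // variance of the estimate

    public Kalman1d(double initial, double initialVar) {
        xEst = initial;
        pVar = initialVar;
    }

    /** Predict: move by an encoder-measured delta, which adds uncertainty. */
    public void predict(double delta, double processVar) {
        xEst += delta;
        pVar += processVar;
    }

    /** Update: fuse a measurement, weighted by how trustworthy it is. */
    public void update(double measurement, double measVar) {
        double k = pVar / (pVar + measVar);  // Kalman gain: trust ratio
        xEst += k * (measurement - xEst);
        pVar *= (1.0 - k);
    }

    public double estimate() { return xEst; }
    public double variance() { return pVar; }
}
```

The gain k is exactly the “trustworthiness” trade-off described above: a low-variance vision measurement produces a gain near 1 and mostly overwrites the estimate, while a noisy one barely moves it.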

In general, accurate absolute position is super useful. It allows the robot to get itself to an arbitrary point on the field despite disturbances (e.g., robots running into you), and to know when it gets there. This could allow a robot to automatically go to certain parts of the field, and automatically perform actions when there.

To paint a particular blue dot:

I imagine a driveteam with only a large tablet to control the robot. When they want to pick up a gamepiece from a location and deliver it, they tap the loading station, and then the placement point, and just watch the robot do it perfectly, as fast as mechanically possible. The driveteam’s job is not to drive the robot, but rather command it based on the ongoing match dynamics. Think playing real-time chess, but every move causes robots to go flying across the field.

If you ask 900, they’ll say they want a fully autonomous robot. I imagine with this mindset they may even achieve a layer on top of my blue dot, where the robot itself reads the field state and decides which objectives lead to the highest score possible. Totally putting words in their mouth though; I have no idea if this is in their 5, 10, or 15 year plan.

All of this is dependent upon first knowing with high accuracy where you are on the field. Once you know that, a lot of possibilities start to open up.


We’ll let you know when we get there but I will say that 5 years used to sound like a long time to me.


Correct phrasing. I love it :). Looking forward to it!


Sort of. A standard Kalman filter is only strictly correct for a linear system with Gaussian noise. In many systems that are neither strictly linear nor Gaussian, it will still work well enough that you’ll have trouble doing better. For particularly non-linear systems (such as estimating the absolute position of a robot on the field), an Extended or Unscented Kalman filter will generally work pretty well, but even those break down at certain points. For instance, if you have substantially multi-modal probability distributions (e.g., you see a target and it could readily be either of two targets), then Kalman filters can really start to break down, and you need to either set things up to avoid those situations or start exploring things like particle filters.


Understood, much thanks! You have helped me begin to bridge problem #1 in my own head.

Along with the advice above, consider why you are getting a build up of these errors. Is it due to wheel slip, backlash, drifting, or low encoder resolution? Any, all, or none of these could be plaguing you, and you’d notice a position difference. Sometimes the best fix to software is to understand your hardware.

As for absolute position, I’ve seen teams use it both for driving and elevator motions.

1094 the Channel Cats did this in 2016 using LabVIEW.

We never measured how accurate it was. I can’t say it was accurate enough most of the time, but it worked a few times, and for another game it probably would be fine.
It averages our tank drive encoders, combines their value with the navX yaw to get an XY position, drives from XY point to XY point along a course (implemented with the slope-intercept form), uses vision to correct (I don’t know if that actually helped), and uses navX pitch to sense when the 2016 Stronghold defenses are crossed.
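That encoder-averaging scheme translated into Java looks roughly like this (my own sketch, not 1094's LabVIEW): average the left/right encoder deltas for distance traveled, use the gyro yaw for heading, and integrate into a field XY position.

```java
// Sketch of tank-drive odometry: average the two side encoder deltas to
// get distance traveled each loop, then project along the gyro heading.
// Names and units (meters, radians) are my own choices for illustration.
public class TankOdometry {
    private double x, y;
    private double lastLeft, lastRight;

    public void update(double leftMeters, double rightMeters, double yawRad) {
        // Distance this loop = mean of the two side encoder deltas.
        double delta = ((leftMeters - lastLeft) + (rightMeters - lastRight)) / 2.0;
        lastLeft = leftMeters;
        lastRight = rightMeters;
        // Integrate along the current gyro heading.
        x += delta * Math.cos(yawRad);
        y += delta * Math.sin(yawRad);
    }

    public double getX() { return x; }
    public double getY() { return y; }
}
```

This assumes the heading is roughly constant within each loop iteration, which is why it degrades with long loop times or fast turns.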

Team 1678 used simple linear odometry this year for field-relative positioning (“absolute” x, y, theta on the field). Working just off of encoders and a gyro held up fine for the 15-second autonomous period. Due to the “slamming” nature of our aggressive vision alignment, we reset x, y odometry (but preserved theta, because the gyro is independent of wheel slip) at each vision score for redundancy.

In past seasons, up until 2019, we used a Kalman filter that combined encoder + gyro data with a state-space model of the drivetrain (largely borrowed from 971’s 2017 drive code).

Using a Kalman filter became less viable because of the realtime requirement after we switched to CAN this year (CAN frames aren’t timestamped, and their latency can be more variable than our 5 ms dt).

Quite frankly, odometry that assumes constant curvature over a timestep is perfectly viable for the 15-second autonomous period. Interpolating/extrapolating using some sort of timestamped tree/list/array can also add a degree of accuracy, depending on your precision requirements (less necessary this year due to vision targets). I believe 254’s RobotState has this functionality. Also, most FRC teams use Java at this point, so if you are a Java team, it can be quite a pain just to get the fast linear algebra that any state-space model / Kalman filter needs.
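The timestamped-tree idea can be sketched with a TreeMap (my own simplified take on the concept, not 254's actual RobotState code): buffer (time, x, y) samples each loop, then interpolate the pose at the timestamp a delayed vision frame was actually captured.

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of a timestamped pose buffer for vision latency compensation:
// store (time -> {x, y}) samples, then linearly interpolate the pose at
// an earlier timestamp when a delayed vision measurement arrives.
public class PoseBuffer {
    private final TreeMap<Double, double[]> samples = new TreeMap<>();

    public void add(double timeSec, double x, double y) {
        samples.put(timeSec, new double[] { x, y });
    }

    /** Linearly interpolate the pose at a (possibly past) timestamp. */
    public double[] sample(double timeSec) {
        Map.Entry<Double, double[]> lo = samples.floorEntry(timeSec);
        Map.Entry<Double, double[]> hi = samples.ceilingEntry(timeSec);
        if (lo == null) return hi.getValue();          // before first sample
        if (hi == null || hi.getKey().equals(lo.getKey())) {
            return lo.getValue();                      // after last, or exact hit
        }
        double t = (timeSec - lo.getKey()) / (hi.getKey() - lo.getKey());
        return new double[] {
            lo.getValue()[0] + t * (hi.getValue()[0] - lo.getValue()[0]),
            lo.getValue()[1] + t * (hi.getValue()[1] - lo.getValue()[1])
        };
    }
}
```

Interpolating heading the same way needs angle-wrap care (e.g. going from 359° to 1°), which is why this sketch sticks to x/y.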

This year, Team 910 used vision targets (I believe at game-piece acquisition and scoring times) to maintain localization throughout the match. Add in their motion planning, and they arguably had an autonomous robot. Not sure if they’ve posted their technical binder, but they’d be a good team to model off of.