Gyro+Encoder Pose Estimation

Yesterday, team 449 competed at IROC (thanks to 1885 for hosting, 1086 for picking us, and 620 and 1915 for being awesome alliance partners), and I got a ton of awesome match data, which you can download here. Using the match data for our second quarterfinal (it’s the log dated to the epoch), I combined gyro and encoder readings to get robot position over time.

Here’s a video made from the data.

It drifts quite a bit over time because we don’t have vision to get an absolute position, but you can see the maneuvering from the shape of the curve, and it’s fairly accurate. The auto looks close to exact, although I can’t tell for sure because the field drawn onto that video is from the field CAD, and so is a bit off from the actual IROC field. If anyone got a video of this match, I’d love to see it so I can do a side-by-side comparison.

We calculate the pose based on two assumptions: that the NavX is perfect (it’s close, but will drift a few degrees over the course of a match), and that the wheel scrub is equal on both sides. We do so by getting a movement vector angle, which is the average of the current and previous gyro angle readings, and a magnitude, which is 2*(average of left and right wheel movement)/(change in angle)*sin(change in angle/2). As the change in angle approaches 0, that expression approaches the average wheel movement, so we handle the delta angle = 0 case by just using the average wheel movement. You can see an explanation of this math on our wiki, and there's a sketch of the update below. An interesting result of this method is that it gives us an effective wheelbase for the robot (given by (deltaLeft-deltaRight)/deltaAngle), which, IIRC, several other teams, including 254, measure for use in motion profiling.
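Here’s a minimal sketch of that update in Java (the class and method names are just for illustration, not our actual robot code; it assumes the gyro angle is in radians and encoder distances are in inches):

```java
/** Illustrative sketch of the gyro+encoder pose update described above. */
public class PoseEstimator {
    private double x, y;                 // field-relative position, inches
    private double lastAngle;            // previous gyro heading, radians
    private double lastLeft, lastRight;  // previous encoder readings, inches

    /** Update the pose from new gyro and encoder readings. */
    public void update(double angle, double left, double right) {
        double deltaAngle = angle - lastAngle;
        double avgDelta = ((left - lastLeft) + (right - lastRight)) / 2.0;

        // Chord length of the arc traveled. As deltaAngle -> 0 this approaches
        // avgDelta, so handle the straight-line case explicitly to avoid a 0/0.
        double magnitude;
        if (Math.abs(deltaAngle) < 1e-9) {
            magnitude = avgDelta;
        } else {
            magnitude = 2.0 * (avgDelta / deltaAngle) * Math.sin(deltaAngle / 2.0);
        }

        // Direction of the chord: average of the previous and current headings.
        double vectorAngle = (angle + lastAngle) / 2.0;

        x += magnitude * Math.cos(vectorAngle);
        y += magnitude * Math.sin(vectorAngle);

        lastAngle = angle;
        lastLeft = left;
        lastRight = right;
    }

    /** Effective wheelbase from one sample, per the formula above. */
    public static double effectiveWheelbase(double deltaLeft, double deltaRight,
                                            double deltaAngle) {
        return (deltaLeft - deltaRight) / deltaAngle;
    }
}
```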

We found that our effective wheelbase was 26.6536 inches, versus the actual, measured wheelbase of 27.138 inches. Error in wheelbase diameter is roughly proportional to 1/sqrt(angular velocity), as shown here:

Wheelbase diameter seems to have no relationship to linear velocity or to the radius of the circle the robot is turning around, as shown by these two charts:

If you want to run these analyses on your own robot, the R scripts I used are on our github, in the R scripts directory. As of right now the scripts are in the noah_pose_estimation branch, but soon that branch will be merged into master and deleted.

It’s worth noting that, while the “average effective wheelbase” for this robot is quite close to the geometrical wheelbase, this is not always the case; this robot is quite short, and thus there is little wheel scrub.

Our test robot, on the other hand, is longer and has an “effective wheelbase” that is quite a bit wider than the geometrical wheelbase.

That’s a really neat use of big data! I may have to borrow that idea and try it on one of our bots. Thanks for sharing!

This is really neat! Good to see R getting some more use.

I taught myself R a couple years back during an internship, and found it so useful that I have made it something of a mission to ensure that as many people learn it as possible (which, of course, includes our programming team). I’ve yet to find another data-handling language that is as easy to write and that has a comparable quantity of high-quality user-written packages.