Team 4256 2018 Code Release

Team 4256 is proud to release our 2018 robot code:

Features:

  • some code comments
  • path definition using cubic splines
  • path definition using single-argument lambda expressions to characterize any mathematical function (can be piecewise; see the sketch after this list)
  • discretized path following with targets updated based on current location, not time
  • location based triggering of any lambda function
  • simplified classes for autonomous strategy definition
  • drivetrain, subsystem, and odometer interfaces to make autonomous compatible with any robot
  • odometer implementation for a single swerve encoder
  • odometer implementation for visual odometry from a ZED+TX2 running Stereolabs’ Python wrapper and a custom HTTP server
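
For a rough idea of how the lambda paths, discretized following, location-based triggers, and odometer interface fit together, here's a minimal Java sketch. It is simplified and hypothetical rather than our actual classes: the Odometer interface mirrors the one in the feature list, but LambdaPath, LocationTrigger, PathSketch, and the numbers in the demo are made up for illustration.

```java
import java.util.function.DoubleUnaryOperator;

// Minimal odometer interface, as in the feature list above.
interface Odometer {
    double getX();  // field-relative position, feet
    double getY();
}

// A path defined by a single-argument lambda y = f(x). Because the lambda
// can branch internally, piecewise functions are covered too.
class LambdaPath {
    private final DoubleUnaryOperator f;
    private final double step;  // lookahead along x, feet

    LambdaPath(DoubleUnaryOperator f, double step) {
        this.f = f;
        this.step = step;
    }

    // Discretized following: the next target is computed from the robot's
    // CURRENT x rather than from elapsed time, so a slow or blocked robot
    // never chases a target that has run ahead of it.
    double[] nextTarget(Odometer odo) {
        double targetX = odo.getX() + step;
        return new double[] { targetX, f.applyAsDouble(targetX) };
    }
}

// Location-based triggering: runs an arbitrary action once the robot
// passes a given x coordinate.
class LocationTrigger {
    private final double triggerX;
    private final Runnable action;
    private boolean fired = false;

    LocationTrigger(double triggerX, Runnable action) {
        this.triggerX = triggerX;
        this.action = action;
    }

    void update(Odometer odo) {
        if (!fired && odo.getX() >= triggerX) {
            action.run();
            fired = true;
        }
    }
}

public class PathSketch {
    public static void main(String[] args) {
        final double[] pos = {0.0, 0.0};  // fake robot position for the demo
        Odometer odo = new Odometer() {
            public double getX() { return pos[0]; }
            public double getY() { return pos[1]; }
        };

        // Piecewise path: straight for 5 ft, then a parabolic arc.
        LambdaPath path = new LambdaPath(
                x -> x < 5.0 ? 0.0 : 0.1 * (x - 5.0) * (x - 5.0), 0.25);
        LocationTrigger trigger = new LocationTrigger(8.0,
                () -> System.out.println("trigger fired at x = " + pos[0]));

        for (int i = 0; i < 40; i++) {
            double[] t = path.nextTarget(odo);
            pos[0] = t[0];  // pretend the drivetrain reached the target
            pos[1] = t[1];
            trigger.update(odo);
        }
    }
}
```

Keying the target to the robot's measured position instead of elapsed time is what makes the follower tolerant of a drivetrain that falls behind schedule.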

And no, we don’t have a Kalman filter :slight_smile: I’m our only autonomous + vision programmer and this was our first year using any of this stuff, so I didn’t have time. Advice on how to implement one is welcome, as are questions!

Awesome stuff, this is super clean and well laid-out!

Any details on the ZED visual odometry? A few teams around the area were interested in trying it out.

Thanks! The ZED code can be found here. It's not as organized as the robot code, but we spent a lot of time on it. As much as I would like to say that paid off, our autonomous looked like a mess as soon as other moving robots entered the picture (it was fine on our practice field). This is why we switched to an encoder for the Missouri state competition, at which point everything started working beautifully.
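
For anyone wiring up something similar, here's the shape of the roboRIO-side client, assuming the TX2 serves the latest ZED pose over HTTP as described above. The address and the plain-text "x,y,heading" response format are assumptions for illustration, not our actual protocol, so treat this as a sketch rather than our implementation.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical client for a pose server like the one described above: the
// TX2 runs the ZED through Stereolabs' Python wrapper and serves the latest
// pose over HTTP. The URL and the comma-separated response are placeholders.
public class PoseClient {
    private final URL url;

    public PoseClient(String address) throws Exception {
        this.url = new URL(address);  // e.g. "http://tegra.local:5800/pose"
    }

    // Returns {x, y, heading}, or null if the request fails or times out.
    public double[] getPose() {
        try {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setConnectTimeout(20);  // ms; keep the control loop snappy
            conn.setReadTimeout(20);
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String[] parts = in.readLine().split(",");
                return new double[] {
                    Double.parseDouble(parts[0]),
                    Double.parseDouble(parts[1]),
                    Double.parseDouble(parts[2])
                };
            }
        } catch (Exception e) {
            return null;  // treat any failure as "no new pose this cycle"
        }
    }
}
```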

Would you say your visual odometry on the practice field was as accurate as encoder/gyro odometry? We're looking at other options for odometry due to rumors of rough terrain, in which case it would be very hard to use encoders and a gyro. Our encoder/gyro odometry right now is pretty accurate (~2") for the first 15 seconds.

In our testing, as long as the ZED didn't rotate, it only drifted by around 1 in every 20 ft. Rotation of the ZED, other moving objects, reflections from Lexan, and a lack of nearby objects all contribute to decreased accuracy. In (relatively) ideal conditions on our practice field, it performed as well as or better than the encoder. One benefit of the ZED is that if it returns to a previous location, it can sort of auto-correct for any accumulated error.

I should also mention latency. I don't have exact numbers, but I do know our robot tended to (slightly) spiral into its target position when using visual odometry. This happened because our PID always had to base motor outputs on old data. This could probably be mitigated with filtering/prediction algorithms, or by using C++ rather than Python for the ZED code.
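
One cheap mitigation along those lines (an idea, not something we ran) would be to extrapolate the stale pose forward by the estimated latency before feeding it to the PID:

```java
// An idea we didn't implement: dead-reckon the stale camera pose forward by
// the estimated latency using the currently commanded velocity, so the PID
// acts on a prediction of where the robot is now rather than where it was.
class LatencyCompensator {
    // pose = {x, y} measured latencySeconds ago; vx, vy = current velocity
    static double[] predict(double[] pose, double vx, double vy,
                            double latencySeconds) {
        return new double[] {
            pose[0] + vx * latencySeconds,
            pose[1] + vy * latencySeconds
        };
    }
}
```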