Behind the Bumpers - FRC 971 Spartan Robotics - Awesome Vision System

Behind the Bumpers with FRC 971 Spartan Robotics at Chezy Champs Infinite Recharge 2021. Check out the vision and other programming that goes into this awesome robot! https://youtu.be/y45P0BkcLlk

6 Likes

Always great to see how 971 is pushing the envelope!

Would love to hear more about the vision implementation. Any chance we will get a white paper/writeup?

Is this the method being used?:

From a quick look around online, it looks like you might be using OpenCV, but maybe not their SIFT implementation:

Would love to hear any lessons learned/takeaways

5 Likes

Sorry, I missed this thread when it got created

Yes.

As far as lessons learned, I can’t speak too directly to the SIFT implementation itself; as the commit message says, Brian got it running faster back during the original 2020 season, and we didn’t have major issues with framerate at that point. If I have my numbers right, we ended up getting 640x480 color images from each camera and could process them at ~10 Hz.
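
For anyone who wants to poke at the general idea, here’s a rough Python sketch of the textbook OpenCV SIFT matching pipeline. To be clear, this is not our robot code (ours is C++ with the sped-up SIFT), and the camera index, resolution settings, and landmark image path are just placeholders:

```python
# Textbook SIFT matching against a pre-captured field landmark image.
# Not 971's actual implementation -- just a sketch of the general approach.
import cv2

# Features for a known landmark; the path is a placeholder.
train_img = cv2.imread("field_landmark.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
train_kp, train_desc = sift.detectAndCompute(train_img, None)

cap = cv2.VideoCapture(0)  # one of the robot cameras
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
matcher = cv2.BFMatcher(cv2.NORM_L2)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp, desc = sift.detectAndCompute(gray, None)
    if desc is None:
        continue
    # Lowe's ratio test to drop ambiguous matches.
    good = []
    for pair in matcher.knnMatch(desc, train_desc, k=2):
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
            good.append(pair[0])
    # With enough good matches you can run solvePnP against the known 3D
    # positions of the matched landmark features to get a camera pose.
    print(f"{len(good)} good matches")
```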

For context on the overall system, the rough background is:

I’d say the main lessons learned are:

  • Compared to 2019, this is much better than what we did then, which was trying to detect the retroreflective tape. That is mostly because the retroreflective tape in 2019 was extremely difficult to disambiguate between the different targets that were near one another, so the error distribution of the updates to the EKF was highly multi-modal, which made it easy for the localization to get lost. The SIFT approach also gives us a lot more flexibility in what we look for on the field, the range of angles we can see it from, etc.
  • The Raspberry Pi hardware hasn’t given us any issues so far.
  • We started to try to feed more of the inertial measurements from the IMU into the filter, but never got around to finishing it (there’s a toy sketch of the idea after this list). This will be very helpful for avoiding getting lost when the drivetrain behaves in hard-to-model ways (e.g., we get shoved sideways by defense, or go over a bump in the field).
  • We currently have too much blur in the images to get good results when moving.
  • Capturing calibration images is labor-intensive and easy to screw up, and it is currently hard to tell that you’ve screwed up until you actually look at the results on the field. We developed some tooling (both for doing the calibration and for debugging it at runtime), but the process is still error-prone. We haven’t decided exactly what to do to address this, but we will probably look at performing some form of more automatic mapping in the future so that the manual steps matter less (a rough example of the kind of sanity check we’d like to automate is sketched after this list).
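
On the IMU point above: the idea (which we never finished) is roughly to drive the filter’s prediction step off the gyro and accelerometer rather than trusting wheel odometry when the wheels might be slipping, since a shove from defense shows up in the accelerometer even though the encoders never see it. A toy version of that prediction step, with a made-up state layout (not our actual filter), would look something like:

```python
import numpy as np

# Toy EKF prediction step driven by IMU measurements.
# State layout and units are illustrative, not 971's actual filter.
def predict(state, accel_body, gyro_z, dt):
    """state = [x, y, theta, vx, vy] in the field frame."""
    x, y, theta, vx, vy = state
    theta_new = theta + gyro_z * dt
    # Rotate the body-frame acceleration into the field frame.
    c, s = np.cos(theta), np.sin(theta)
    ax = c * accel_body[0] - s * accel_body[1]
    ay = s * accel_body[0] + c * accel_body[1]
    # A sideways shove shows up here even if the wheel encoders miss it.
    x_new = x + vx * dt + 0.5 * ax * dt * dt
    y_new = y + vy * dt + 0.5 * ay * dt * dt
    return np.array([x_new, y_new, theta_new, vx + ax * dt, vy + ay * dt])
```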
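
And on the calibration point: the kind of check we’d like to automate is basically “did this capture session produce a sane calibration?” before the camera ever gets near the field. With a plain OpenCV chessboard calibration (our actual targets and tooling are different), that boils down to looking at the reprojection error:

```python
import glob
import cv2
import numpy as np

# Plain OpenCV chessboard calibration plus a reprojection-error sanity check.
# Board dimensions and image paths are placeholders.
ROWS, COLS, SQUARE_M = 6, 9, 0.025
objp = np.zeros((ROWS * COLS, 3), np.float32)
objp[:, :2] = np.mgrid[0:COLS, 0:ROWS].T.reshape(-1, 2) * SQUARE_M

obj_points, img_points, img_size = [], [], None
for path in glob.glob("calib_images/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (COLS, ROWS))
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        img_size = gray.shape[::-1]

assert img_points, "No chessboards detected -- bad capture session."
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img_size, None, None)
# A large RMS reprojection error (say, over a pixel) usually means a bad
# capture session; better to catch that here than on the field.
print(f"RMS reprojection error: {rms:.3f} px")
```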

In terms of overall performance, we don’t have great metrics. Once we’ve converged while sitting still, we are probably accurate on the field to within a few inches (although the way we do the corrections, we actually prioritize correcting the distance/angle to the target, not absolute x/y position, since that’s what matters for hitting the target). While moving, though, we can easily get off by multiple feet, particularly under defense. Yaw error tends to be reasonably low, since we are essentially just integrating the IMU, which won’t drift much over the course of a 2-minute match.
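
To make the distance/angle-vs-x/y point a bit more concrete, the vision correction is effectively driving a range/bearing residual to zero rather than an absolute position error. A simplified version (target position, names, and state layout made up for illustration):

```python
import numpy as np

# Simplified range/bearing measurement model for the vision correction.
# The target position and state layout are illustrative only.
TARGET = np.array([0.0, 2.4])  # field x, y of the target (placeholder)

def expected_measurement(state):
    """Range and bearing to the target predicted from the current state."""
    x, y, theta = state[:3]
    dx, dy = TARGET[0] - x, TARGET[1] - y
    return np.array([np.hypot(dx, dy), np.arctan2(dy, dx) - theta])

def residual(state, measured_range, measured_bearing):
    # This is the error the filter corrects; an absolute x/y error that
    # leaves range and bearing unchanged doesn't hurt our shot.
    err = np.array([measured_range, measured_bearing]) - expected_measurement(state)
    err[1] = (err[1] + np.pi) % (2 * np.pi) - np.pi  # wrap the angle
    return err
```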

…there’s a lot more there, but I’ll leave it at that until I see more questions.

13 Likes