Vision Improvements
Earlier this week, we talked about auto-aiming our Shamper using robot localization. For this, we use a WPILib SwerveDrivePoseEstimator along with corrective measurements from two Limelights. This works great… as long as your camera tracking works as intended.
Unfortunately, we were facing some critical inaccuracies in the vision measurements, which resulted in us spending the weekend debugging them.
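As a refresher, the core of that setup looks roughly like the sketch below: every loop we update the estimator with swerve odometry, and whenever a Limelight reports a field-space botpose we feed it in as a corrective measurement. The table name, standard deviations and helper arguments are illustrative, not our exact code.

```java
import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.kinematics.SwerveModulePosition;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.Timer;

public class PoseEstimation {
    private final SwerveDrivePoseEstimator poseEstimator;

    public PoseEstimation(SwerveDrivePoseEstimator poseEstimator) {
        this.poseEstimator = poseEstimator;
    }

    /** Call every robot loop with the current gyro angle and module positions. */
    public void update(Rotation2d gyroAngle, SwerveModulePosition[] modulePositions) {
        // 1. Dead-reckoning update from the swerve odometry.
        poseEstimator.update(gyroAngle, modulePositions);

        // 2. Corrective measurement from one Limelight.
        //    botpose_wpiblue = [x, y, z, roll, pitch, yaw, latency_ms, ...] in field space.
        double[] botpose = NetworkTableInstance.getDefault()
                .getTable("limelight-left") // illustrative camera name
                .getEntry("botpose_wpiblue")
                .getDoubleArray(new double[0]);

        if (botpose.length >= 7) {
            Pose2d visionPose =
                    new Pose2d(botpose[0], botpose[1], Rotation2d.fromDegrees(botpose[5]));
            double timestamp = Timer.getFPGATimestamp() - botpose[6] / 1000.0;

            // Trust the vision translation somewhat, the vision heading barely at all.
            poseEstimator.addVisionMeasurement(
                    visionPose, timestamp, VecBuilder.fill(0.5, 0.5, 999999));
        }
    }
}
```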
Issues
Upon further inspection, the inaccuracies in our system can be broken down into two separate issues: lens reprojection errors and intense twitching of the multi-tag pose.
Reprojection Errors
Let’s start with the most logical issue. When the robot directly faced an AprilTag, meaning the tag was directly on the crosshair of the LL, distance measurements were accurate to the centimeter. However, as we moved the tag further from the center of the image, the measurement started to deviate from the actual value.
In our eyes, this was a clear case of inaccurate lens calibration causing reprojection errors. Unfortunately, the issue was far easier to diagnose than to solve. We tried recalibrating the camera, hoping that would fix it.
A decently calibrated camera should have a reprojection error below 1. We weren’t able to get anything below 50.
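For context, the reprojection error is (roughly) the RMS pixel distance between where the calibration pattern’s corners were actually detected and where the fitted camera model re-projects them:

$$
e_{\text{reproj}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \lVert \mathbf{p}_i - \hat{\mathbf{p}}_i \rVert^2}
$$

where the $\mathbf{p}_i$ are the $N$ detected corner points (in pixels) and the $\hat{\mathbf{p}}_i$ are the same points re-projected through the estimated intrinsics and distortion model. An error of 50 means the model is off by tens of pixels per corner, which lines up with the off-center tag measurements drifting so badly.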
Theoretically, we could also tune this by hand, but we deemed it too time-consuming. Besides, we had a more pressing issue at hand.
MegaTag Inaccuracies
When looking at only a single tag, the error mentioned above was present, but at least consistent. However, major issues started to arise when multiple tags were in view. As seen in the GIF below, the calculated MegaTag pose became super twitchy, while the individual per-tag measurements (the blue and green cylinders) stayed firmly in place.
Solutions
After the recalibration didn’t work, we decided to tighten the filter on the vision measurements, meaning measurements with a high deviation would get discarded. This is what we used last year, but upon closer inspection, this not only slows down the system; more importantly, it can make the localization worse than not using vision at all.
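To make that concrete, the filter amounts to roughly the sketch below; the cutoff value and method names are illustrative, not last year’s actual code.

```java
import edu.wpi.first.math.geometry.Pose2d;

public class VisionFilter {
    // Illustrative cutoff: how far a vision pose may be from the current estimate.
    private static final double MAX_DEVIATION_METERS = 1.0;

    /** Returns true if the vision pose should be fused into the estimator. */
    public static boolean accept(Pose2d visionPose, Pose2d currentEstimate) {
        double deviation = visionPose.getTranslation()
                .getDistance(currentEstimate.getTranslation());
        return deviation < MAX_DEVIATION_METERS;
    }
}
```

The problem with this approach is that once the estimate drifts, the measurements that would pull it back are exactly the ones that get rejected.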
PhotonVision
In the end, we decided to switch from the default Limelight OS to PhotonVision. We couldn’t find any downsides to PV compared to LL OS, and we could always switch back if needed.
After carefully reading the documentation and doing the initial configuration, we arrived at a reprojection error between 0.3 and 0.5 (comfortably below 1). This gave us the confidence to further tune with PV to see how accurate we would be able to get it.
Getting the Robot Pose
To get the robot pose, we opted to use PhotonLib as an easy NetworkTables wrapper. Photon allows you to choose between different localization strategies. For now, we use the MULTI_TAG_PNP_ON_COPROCESSOR strategy, as advised by the docs.
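A minimal sketch of what this looks like with PhotonLib (2024-era API; the camera name, mounting transform and fallback strategy are placeholders, and the exact constructor differs between PhotonLib versions):

```java
import java.util.Optional;
import org.photonvision.EstimatedRobotPose;
import org.photonvision.PhotonCamera;
import org.photonvision.PhotonPoseEstimator;
import org.photonvision.PhotonPoseEstimator.PoseStrategy;
import edu.wpi.first.apriltag.AprilTagFieldLayout;
import edu.wpi.first.apriltag.AprilTagFields;
import edu.wpi.first.math.geometry.Rotation3d;
import edu.wpi.first.math.geometry.Transform3d;
import edu.wpi.first.math.geometry.Translation3d;

public class PhotonLocalization {
    private final PhotonCamera camera = new PhotonCamera("limelight-left"); // placeholder name
    private final PhotonPoseEstimator photonPoseEstimator;

    public PhotonLocalization() {
        // Placeholder camera mounting position relative to the robot center.
        Transform3d robotToCamera = new Transform3d(
                new Translation3d(0.3, 0.0, 0.25), new Rotation3d(0.0, -0.4, 0.0));

        AprilTagFieldLayout fieldLayout = AprilTagFields.k2024Crescendo.loadAprilTagLayoutField();

        // Multi-tag PnP is solved on the coprocessor itself.
        photonPoseEstimator = new PhotonPoseEstimator(
                fieldLayout, PoseStrategy.MULTI_TAG_PNP_ON_COPROCESSOR, camera, robotToCamera);
        // Fall back to a single-tag strategy when only one tag is visible.
        photonPoseEstimator.setMultiTagFallbackStrategy(PoseStrategy.LOWEST_AMBIGUITY);
    }

    /** Latest estimated field-relative robot pose, if any tags were seen. */
    public Optional<EstimatedRobotPose> getEstimatedPose() {
        return photonPoseEstimator.update();
    }
}
```

The returned EstimatedRobotPose then goes into addVisionMeasurement on the SwerveDrivePoseEstimator, just like the raw Limelight botpose did before.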
Further Tuning
We are currently in the process of further tuning the camera settings to improve tag detection. By getting the exposure as low as possible, we should be able to eliminate motion blur. This would be highly advantageous, as it would mean we wouldn’t have to stop in order to recalibrate our position.
Result
After a long weekend of tuning, debugging, updating and even more tuning, I’m happy to say that we have arrived at a relatively consistent solution, as you can see in the GIF below:
(250% speed as my laptop was frying itself trying to render a 45s GIF)
Future Steps
During testing, we noticed that the Limelights can now comfortably detect tags 6+ meters away. However, one of our Limelights is pointed at such an angle that it will never be able to see tags that far. So we have one more request for the Beta Bot.
Furthermore, we can now start properly creating our auto modes and improving the auto-aiming functionality, so definitely expect some videos of that in the coming week!
Average Estimated Robot Pose This Weekend
Written by:
@Nigelientje - Lead Outreach / Software Mentor
@Bjorn - Lead Data Driven Decision Making
@Casper - Software Mentor