For the last few years, our computer vision code has used solvePnP() from OpenCV to get the “pose” of the robot with respect to the vision targets. The “full” pose calculation gives the distance plus 2 different angles: the angle of the robot w.r.t. the robot-target line, and the angle of the target w.r.t. that line. Depending on the game, that 2nd angle (the target angle) is very useful; in 2019, for instance, it told us how to turn our robot to be perpendicular to the surface where we were picking up/placing hatches.
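For anyone following along, here is roughly what I mean, as a minimal sketch (the target dimensions, corner pixels, and camera intrinsics below are all placeholder values, not our real numbers):

```python
import math

import cv2
import numpy as np

# Target corners in target coordinates (inches, y-up, z=0 plane).
# Placeholder numbers, not exact field dimensions.
object_points = np.array([
    [-19.625,   0.0, 0.0],   # top-left
    [ 19.625,   0.0, 0.0],   # top-right
    [  9.8125, -17.0, 0.0],  # bottom-right
    [ -9.8125, -17.0, 0.0],  # bottom-left
], dtype=np.float64)

# Corner pixels found in the image (made-up values for the sketch).
image_points = np.array([
    [250.0, 120.0],
    [390.0, 118.0],
    [355.0, 190.0],
    [285.0, 192.0],
], dtype=np.float64)

# Camera intrinsics from calibration (also placeholders).
camera_matrix = np.array([[700.0,   0.0, 320.0],
                          [  0.0, 700.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)

# Distance and robot angle come straight from the translation.
x, y, z = tvec.ravel()
distance = math.hypot(x, z)                    # ground-plane range to target
robot_angle = math.degrees(math.atan2(x, z))   # target bearing off camera axis

# Target angle: put the camera into the target's frame and see which way
# the target "faces" relative to the line of sight.
rot, _ = cv2.Rodrigues(rvec)
cx, cy, cz = (-rot.T @ tvec).ravel()           # camera position, target frame
target_angle = math.degrees(math.atan2(cx, cz))

print(f"distance={distance:.1f}  robot_angle={robot_angle:.1f}  "
      f"target_angle={target_angle:.1f}")
```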
However, the pose calculation, or at least solvePnP, can be very sensitive to the locations of the corners fed to the algorithm. From what I have seen, the target angle in particular can fluctuate by 10-20 degrees with what seems like a “random” change of a couple of pixels in the image (the distance seems to be less sensitive, and the robot angle does not seem to care at all). The 2020 high goal target seems especially challenging (IMHO) because the upper outer corners are very sharp and can appear rounded in the image (and even worse if you are going for a long shot).
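You can see the effect for yourself by continuing the placeholder setup above: nudge each corner by a couple of pixels and watch how far the recovered target angle swings.

```python
# Continues the placeholder setup from the sketch above: jitter each corner
# by up to +/-2 px and re-solve, printing the target angle each time.
rng = np.random.default_rng(0)
for _ in range(5):
    jittered = image_points + rng.uniform(-2.0, 2.0, image_points.shape)
    ok, rvec_j, tvec_j = cv2.solvePnP(object_points, jittered,
                                      camera_matrix, dist_coeffs)
    rot_j, _ = cv2.Rodrigues(rvec_j)
    cx_j, cy_j, cz_j = (-rot_j.T @ tvec_j).ravel()
    print(f"target angle: {math.degrees(math.atan2(cx_j, cz_j)):.1f} deg")
```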
What techniques/tricks have people been using to get better corners to feed to solvePnP, or do you have a better method for getting the “full” robot pose? Note that I am interested in getting all 3 pose values, not just distance or robot angle (the target angle is not always needed, but I am partly curious and partly stubborn).
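To make “better corners” concrete, the kind of thing I mean is sub-pixel refinement along these lines (just a sketch on a synthetic frame, not claiming this is anyone's secret sauce):

```python
import cv2
import numpy as np

# Synthetic grayscale frame standing in for a real thresholded image.
gray = np.zeros((480, 640), dtype=np.uint8)
corners_px = np.array([[250, 120], [390, 118], [355, 190], [285, 192]],
                      dtype=np.int32)
cv2.fillPoly(gray, [corners_px], 255)      # fake target blob
gray = cv2.GaussianBlur(gray, (5, 5), 0)   # mimic real-image edge softness

# Snap the rough contour corners to sub-pixel positions before solvePnP.
rough = corners_px.astype(np.float32).reshape(-1, 1, 2)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
refined = cv2.cornerSubPix(gray, rough, (5, 5), (-1, -1), criteria)
# `refined` would replace the raw corners as the image points for solvePnP.
```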
Note that I do understand why the pose is sensitive in the way it is. Also, I do have some tricks, and I will share those, but I want to see what advice others have first.
Thanks, and stay safe.