Teams who used Limelight SolvePnP: How was your experience?

Hi everyone!

Since there doesn't seem to be a recent thread on this topic, I'd like to ask those who have used the Limelight SolvePnP function to share their experiences, including accuracy, the effects of the lowered framerate, and difficulty of setup.

Overall, I’d also like to hear whether you think Limelight SolvePnP is a viable way to integrate vision into autonomous pose estimation.

Currently, my team struggles to score 10 balls reliably in auto when using only encoder- and gyro-based odometry, since the wheels skid at higher accelerations. We're looking into vision-based pose estimation to deal with this issue, and it would be great if Limelight SolvePnP turned out to be sufficient.

For those who don't know, SolvePnP calculates the position and orientation of the vision target relative to the camera from the known 3D locations of the target's corners and their detected 2D pixel coordinates in the image. In 2019, SolvePnP was introduced as (and still is) an experimental Limelight feature.
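
If it helps build intuition, here's a minimal sketch of what a solvePnP call looks like using OpenCV's Java bindings (which WPILib bundles). This isn't what Limelight runs internally; the corner coordinates and camera intrinsics below are all made-up placeholder values for illustration.

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;

public class SolvePnPSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Model points: 3D corner locations of the target in its own
        // coordinate frame. Values here are illustrative only.
        MatOfPoint3f modelPoints = new MatOfPoint3f(
            new Point3(-7.0,  0.0, 0.0),
            new Point3( 7.0,  0.0, 0.0),
            new Point3( 7.0, -5.0, 0.0),
            new Point3(-7.0, -5.0, 0.0));

        // Image points: where those corners were detected in the camera
        // image, in pixels. Also illustrative.
        MatOfPoint2f imagePoints = new MatOfPoint2f(
            new Point(310, 220),
            new Point(410, 225),
            new Point(405, 290),
            new Point(315, 285));

        // Camera intrinsics from calibration (placeholder fx, fy, cx, cy).
        Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
        cameraMatrix.put(0, 0, 700.0);
        cameraMatrix.put(1, 1, 700.0);
        cameraMatrix.put(0, 2, 320.0);
        cameraMatrix.put(1, 2, 240.0);
        MatOfDouble distCoeffs = new MatOfDouble(0, 0, 0, 0, 0);

        // Outputs: rotation and translation of the target in the camera frame.
        Mat rvec = new Mat();
        Mat tvec = new Mat();
        Calib3d.solvePnP(modelPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

        System.out.println("Translation (camera frame): " + tvec.dump());
        System.out.println("Rotation vector: " + rvec.dump());
    }
}
```

The important takeaway is that the quality of the output pose depends directly on the quality of the model points and the pixel-level accuracy of the detected corners, which is why the next points about model points and resolution matter so much.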

The key to accurate solvePnP is accurate model points (the 3D coordinates of the target's corners) and high camera resolution. In 2019, we heard from a Limelight user that the Limelight's model points were off a bit, so we would get roughly a foot or two of error. At the time, users couldn't change the model points, so they were stuck with whatever accuracy it gave. It's been a few seasons since then, so I don't know how good it is now or whether custom model points are supported.

We made our own pipeline on a Pi (GitHub - frc3512/Vision-2019: Vision pipeline for the 2019 FRC robot.), then tried Chameleon Vision with poor results for 2020, then switched to PhotonVision for 2021. PhotonVision lets you supply whatever model points you want. Our accuracy was 6-7 inches at 1920x1080, if I recall correctly. PhotonVision's GPU acceleration let us get 15+ fps. We added latency compensation on the robot code side (see WPILib Pose Estimators — FIRST Robotics Competition documentation).
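
For anyone curious what fusing vision poses into odometry looks like, here's a minimal sketch using WPILib's DifferentialDrivePoseEstimator. This isn't our code, just an illustration of the API as it exists now (it has changed a bit since the 2021 season); the track width and initial values are placeholders.

```java
import edu.wpi.first.math.estimator.DifferentialDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.kinematics.DifferentialDriveKinematics;

public class VisionFusionSketch {
    // Track width is a placeholder; measure your own drivetrain.
    private final DifferentialDriveKinematics kinematics =
        new DifferentialDriveKinematics(0.7);

    private final DifferentialDrivePoseEstimator estimator =
        new DifferentialDrivePoseEstimator(
            kinematics,
            new Rotation2d(),  // initial gyro angle
            0.0, 0.0,          // initial left/right encoder distances (meters)
            new Pose2d());     // initial field-relative pose

    // Call every loop with the latest gyro and encoder readings.
    public void updateOdometry(Rotation2d gyroAngle, double leftMeters, double rightMeters) {
        estimator.update(gyroAngle, leftMeters, rightMeters);
    }

    // Call whenever the vision pipeline produces a pose. Passing the image
    // capture timestamp lets the estimator "rewind" its history, which is
    // how the latency compensation works.
    public void addVisionPose(Pose2d visionPose, double captureTimestampSeconds) {
        estimator.addVisionMeasurement(visionPose, captureTimestampSeconds);
    }

    public Pose2d getPose() {
        return estimator.getEstimatedPosition();
    }
}
```

The estimator keeps trusting the encoders and gyro between camera frames, so a 15 fps vision pipeline can still correct odometry drift without the pose jumping around.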

We ran out of time to get the pose data properly incorporated into our drivetrain and turret observers (there are some tricky coordinate transforms between the drivetrain and turret, and they confused everyone), so we ended up just autoaiming with target yaw. It jittered a lot, which decreased shot accuracy; that's why the latency compensation is important.
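
For comparison, a rough sketch of that kind of yaw-only autoaim (not our exact code) might look like the following, reading tx from the Limelight's default NetworkTables entries and feeding a PID controller. The class name and gains are made up; tune for your own turret.

```java
import edu.wpi.first.math.controller.PIDController;
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class YawAutoAim {
    private final NetworkTable limelight =
        NetworkTableInstance.getDefault().getTable("limelight");

    // Gains are placeholders; tune for your turret.
    private final PIDController turretController = new PIDController(0.03, 0.0, 0.001);

    // Returns a turret motor output that drives the target yaw (tx) to zero.
    public double calculateTurretOutput() {
        // "tv" is 1.0 when the Limelight has a valid target.
        boolean hasTarget = limelight.getEntry("tv").getDouble(0.0) >= 1.0;
        if (!hasTarget) {
            return 0.0;
        }
        double targetYawDegrees = limelight.getEntry("tx").getDouble(0.0);
        // A setpoint of zero centers the target in the crosshair. Raw tx is
        // noisy and lags behind the robot's motion, which is the jitter
        // described above; latency-compensating a full pose estimate avoids
        // aiming at where the target used to be.
        return turretController.calculate(targetYawDegrees, 0.0);
    }
}
```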


Thanks a lot for sharing. The WPILib Pose Estimators sound very interesting, and we'll probably try them in the future.