Using Limelight's SolvePNP to help with lining up on a vision target

My team has been working with a Limelight v2, and SolvePNP support just came out for the Limelight, but we have been struggling to get our robot to line up properly. We take the first and third values of the array we get through NetworkTables and use them to find the angle we need to turn, then use a gyro to turn to that angle. My program is at https://github.com/o-bots7160/Limelight-v1/tree/davidsAttempt and the Limelight and NetworkTables work is in robot.java. Excuse the messy code; it's been a little hectic trying to figure this out. Any help or suggestions for improving the code would be great.
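For reference, a minimal sketch of the kind of read described above, assuming the SolvePNP result is the six-entry camtran array (x/y/z translation, then pitch/yaw/roll) on the limelight NetworkTable. The atan2 step is one illustrative way to get a turn angle from the first and third values, not necessarily the team's exact math:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class LimelightAngle {
    private final NetworkTable table =
            NetworkTableInstance.getDefault().getTable("limelight");

    /** Estimates the angle to turn from the SolvePNP camera transform. */
    public double getAngleToTarget() {
        // camtran layout: [x, y, z, pitch, yaw, roll]
        double[] camtran = table.getEntry("camtran").getDoubleArray(new double[6]);
        double x = camtran[0]; // first value: sideways offset from the target
        double z = camtran[2]; // third value: distance out from the target plane
        // Illustrative angle calculation; Math.abs(z) sidesteps the sign
        // convention of the z axis
        return Math.toDegrees(Math.atan2(x, Math.abs(z)));
    }
}
```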

What is the specific problem you are having? Do you line up with the cargo ship and suddenly find yourself on HAB3? (Just kidding, that might be a nice problem to have.) You'll get more help when you can provide details and ask specific questions.

I am a mentor on the team, and I will try to give more specifics on what they are trying to do.
They are using the Limelight and its built-in SolvePNP to determine the angle the robot is at with respect to the target.

[targeting diagram]

So in this case they want the robot to (a rough sketch of this sequence follows the list):

  1. turn 46° right
  2. drive 2’ to the center line
  3. turn 90° left
  4. drive 4’ to target
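
A rough sketch of that sequence, with hypothetical turnDegrees()/driveFeet() helpers standing in for whatever closed-loop turn and drive routines the robot actually has:

```java
/** Hypothetical drivetrain interface; the real robot code supplies these. */
interface Drivetrain {
    void turnDegrees(double degrees); // positive = clockwise
    void driveFeet(double feet);
}

public class LineUpSequence {
    /** Runs the four-step line-up described above. */
    public void run(Drivetrain drive) {
        drive.turnDegrees(46.0);   // 1. turn 46 degrees right
        drive.driveFeet(2.0);      // 2. drive 2 ft to the center line
        drive.turnDegrees(-90.0);  // 3. turn 90 degrees left
        drive.driveFeet(4.0);      // 4. drive 4 ft to the target
    }
}
```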

The robot always over- or undershoots the 90° angle they are trying to reach in the first step. They are printing to the console: the robot posts its starting angle and the turn it thinks it should make, and the numbers it prints add up to 90°, but the robot is never actually sitting at a 90° angle to the purple line in the drawing. It does get close sometimes; other times it is off by a good 5-10°.
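A classic cause of that kind of over/undershoot is a turn routine that drives at a fixed speed until the gyro crosses the setpoint, so momentum carries the robot past it. For what it's worth, here is a sketch of a proportional turn with a tolerance band and a minimum command, assuming a WPILib-style gyro; the gains are placeholders to tune on the robot:

```java
import edu.wpi.first.wpilibj.interfaces.Gyro;

public class GyroTurn {
    private static final double kP = 0.015;          // proportional gain (tune)
    private static final double kMinCommand = 0.07;  // smallest output that still moves the robot
    private static final double kToleranceDeg = 1.0; // acceptable heading error

    /** Returns a turn command in [-1, 1] for the given target heading; 0 within tolerance. */
    public double calculate(Gyro gyro, double targetDeg) {
        double error = targetDeg - gyro.getAngle();
        if (Math.abs(error) < kToleranceDeg) {
            return 0.0; // close enough: stop instead of oscillating around the setpoint
        }
        double output = kP * error;
        // Keep enough output to overcome drivetrain friction near the setpoint
        if (Math.abs(output) < kMinCommand) {
            output = Math.copySign(kMinCommand, error);
        }
        return Math.max(-1.0, Math.min(1.0, output));
    }
}
```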
I think the biggest problem they are facing is that SolvePNP isn't producing consistent data. The image in the SolvePNP screen on the Limelight dashboard sometimes shows the robot on the wrong side of the target. Sometimes it doesn't show any measurements at all, and just moving the robot the slightest amount will make it start working again. I honestly don't think the issue is in the code; I think the issue is the Limelight not giving the right data.
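If the team keeps going down this road, one mitigation is to reject obviously bad SolvePNP frames before acting on them. A sketch, assuming the camtran array plus the Limelight's tv (valid target) flag; the 24-inch jump threshold is a made-up number:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class SolvePnpFilter {
    private final NetworkTable table =
            NetworkTableInstance.getDefault().getTable("limelight");
    private double[] lastGood = null;

    /** Returns the latest plausible camtran sample, or the last good one if the new frame looks bad. */
    public double[] getFilteredCamtran() {
        // tv == 1 means the Limelight currently sees a valid target
        if (table.getEntry("tv").getDouble(0.0) < 1.0) {
            return lastGood;
        }
        double[] sample = table.getEntry("camtran").getDoubleArray(new double[6]);
        // An all-zero array means SolvePNP failed to produce a solution
        boolean allZero = true;
        for (double v : sample) {
            if (v != 0.0) { allZero = false; break; }
        }
        if (allZero) {
            return lastGood;
        }
        // Reject samples that jump implausibly far sideways since the last good frame
        if (lastGood != null && Math.abs(sample[0] - lastGood[0]) > 24.0) {
            return lastGood;
        }
        lastGood = sample;
        return lastGood;
    }
}
```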
I haven't been very involved in the Limelight work, so maybe we are missing something that would make this work better? Or maybe we shouldn't be trying to use it like this at all?

I'm not speaking from an experienced POV, but it's quite possible that you haven't tuned your HSV values correctly, and that is why it is pulling in inconsistent data, as you say.
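For context, the thresholding stage being described works roughly like this OpenCV sketch (the HSV bounds here are made up). Bounds that are too wide or too narrow make the detected contour flicker from frame to frame, which then feeds unstable corner points into SolvePNP:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class HsvThreshold {
    /** Illustrative HSV threshold, similar in spirit to the Limelight's pipeline stage. */
    public static Mat threshold(Mat bgrFrame) {
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);
        Mat mask = new Mat();
        // Made-up bounds for the green retroreflective glow; pixels inside the
        // bounds become white, everything else black
        Core.inRange(hsv, new Scalar(55, 100, 100), new Scalar(95, 255, 255), mask);
        return mask;
    }
}
```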

There is, as usual, a compromise between performance and consistency with SolvePNP.

During the time Brandon was working on SolvePNP for the Limelight, we were doing the same with the JeVois. We pursued both HSV targeting and ArUco targets.
The bottom line is: if you want better consistency, you need to use a higher resolution. The higher, the better. But, as you might expect, higher resolution means slower performance.
At lower resolutions, even a single pixel wobbling between identified and not identified can radically change your results, which is why higher resolution is better.
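
Short of raising the resolution, one cheap way to damp that single-pixel wobble is to median-filter each pose value over a handful of frames. A self-contained sketch; the window size is a guess, and the filtering does add latency:

```java
import java.util.Arrays;

/** Simple sliding-window median filter for one noisy pose value (e.g. the camtran x entry). */
public class MedianWindow {
    private final double[] window;
    private int count = 0;
    private int next = 0;

    public MedianWindow(int size) {
        window = new double[size];
    }

    /** Adds a sample and returns the median of the samples seen so far (up to the window size). */
    public double calculate(double sample) {
        window[next] = sample;
        next = (next + 1) % window.length;
        count = Math.min(count + 1, window.length);

        double[] sorted = Arrays.copyOf(window, count);
        Arrays.sort(sorted);
        // Even-count windows average the two middle values
        int mid = count / 2;
        return (count % 2 == 1) ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2.0;
    }
}
```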

Faster cameras, transfer protocols, and processors will improve this with time, but right now, for FRC, we are right at the limits of what we can achieve within the budget constraints placed on us. As the tech advances, I am sure we will see reliable, high-performance SolvePNP within just a couple of years.

I know this thread is based on using the Limelight, but could you get better performance using a Jetson?

Did you all continue working on this problem or abandon the SolvePNP approach?

This is possibly due to a limitation within solvePnP itself: depending on how you pick the points in the 2D image, they can "alias" or "mirror" to a correlated but opposite position.
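
If that is what is happening, one heuristic (it does not fix the underlying ambiguity) is to reject solutions that flip sides implausibly fast, since a real robot cannot jump from one side of the target to the other between camera frames. A sketch; kMaxYawJumpDeg is a made-up threshold:

```java
/**
 * Heuristic rejection of SolvePNP poses that appear to have "mirrored" to the
 * opposite side of the target: a large yaw jump in one frame is treated as an
 * aliased solution rather than real robot motion.
 */
public class MirrorRejector {
    private static final double kMaxYawJumpDeg = 20.0; // max believable change per frame (tune)
    private Double lastYaw = null;

    /** Returns the accepted yaw, or null if this sample looks like a mirrored solution. */
    public Double accept(double yawDeg) {
        if (lastYaw != null && Math.abs(yawDeg - lastYaw) > kMaxYawJumpDeg) {
            return null; // probably the mirrored pose; wait for a consistent sample
        }
        lastYaw = yawDeg;
        return yawDeg;
    }
}
```

Note that if the very first sample is itself mirrored, this latches onto the wrong branch, so a real implementation would want some kind of reset path.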