Limelight Real World Camera Positioning

My team and I have been working on producing the coordinates and angle of the camera based on this year’s vision targets. I have spent the last week trying, to no avail, to get the camera position with our Limelight v2. I have spent time looking at papers like that of the LigerBots, found here, and there is good information in that paper. The problem comes with the Limelight itself: it is super easy to use, it has all the adapters for the PDP, and it already has NetworkTables built in. I have tried using the corners from the new 2019.4 update. I need not only to come up with the calculations, but also to figure out which values from the Limelight to use. Or maybe I am overcomplicating this and there is an easier way. We have a differential drivetrain, so no mecanum fixes please.

This will become effortless with LL at some point over the next week.

Could you tell me what you know about the update?

I am not @Brandon_Hjelstrom or Greg, but I am guessing this is related to an ongoing discussion about supporting the solvePNP algorithm to get the camera’s position. In the last update, they added corner detection, which gives users four data points to plug into the algorithm. I know from an email thread that they were thinking about natively running the algorithm on the Limelight, but they had some design decisions to work out and could not guarantee its release this build season. I am hoping this post suggests those decisions have been made and that an update supporting this is coming sometime in the next week :crossed_fingers:

I hope the solvePNP algorithm does become native to LL; I would be very grateful if that could be added. As far as I know, it is completely possible with the camera and processor that make up the Limelight.

My team bought the Limelight in order to do processing like solvePNP. We might not be able to compete at an equal level without it. I believe we can figure something out on our own, but having it built in would improve our level of competition.

Basic turn to target is best, change my mind!

In all seriousness, I would argue that anything beyond a basic turn to target is just a luxury for your driver. I think you could still be extremely competitive without crazy vision localization. I’m curious about your reasoning for why this form of vision might be required.

Basic turn to target does not account for turning perpendicular to the wall of the rocket, cargo ship, or hatch loading station. If you have found a way to use just heading in the same way, please let me know.

By designing a mechanism that is flexible (in placement angle, not material properties), we have chosen to eliminate that problem altogether. If your driver can get within the range of acceptable angles of attack, the camera can just point you at center and the rest is history.

As for simple solutions, you could look into calculating target skew to get back on center. If your robot requires a perfectly square approach, I can understand why you would need something like solvePNP.
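For what it’s worth, here is a minimal sketch of that idea, assuming the Limelight’s standard tx (horizontal offset) and ts (skew) NetworkTables entries. The gains are hypothetical starting points, and the sign/range conditioning of the raw ts value is left out:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class SkewSteering {
    // Hypothetical gains -- tune on your own robot.
    private static final double K_AIM = 0.03;   // turn output per degree of tx
    private static final double K_SKEW = 0.02;  // turn output per degree of skew error

    private final NetworkTable limelight =
            NetworkTableInstance.getDefault().getTable("limelight");

    /**
     * Returns a turn command that centers the target (tx -> 0) while also
     * nudging the approach back toward perpendicular using the reported skew.
     * The raw ts value needs its own sign/range handling, which is omitted here.
     */
    public double turnCommand() {
        double tx = limelight.getEntry("tx").getDouble(0.0); // horizontal offset, degrees
        double ts = limelight.getEntry("ts").getDouble(0.0); // target skew, degrees
        return K_AIM * tx + K_SKEW * ts;
    }
}
```

The result would feed the rotation axis of the drivetrain while the driver handles throttle.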

Right, I agree. The design my team came up with does not shift, so my team needs solvePNP. This late in the season, it is too late to change something that pivotal, as we all know very well. Thank you for your suggestion. People need to see this post so they can see the many different ways there are to solve a problem. I have spent days trying to learn how to do this with the LL, only to find out it is not currently built in. My team wants to learn, and we already spent $400.

Lol. It’s never too late to change anything…

Ok. That is true.

I’m not sure what ‘we’ are supposed to know, but pivots in functionality or strategy are exactly how the top-tier teams stay on top. Besides, we’re only 3.5 weeks into a 16 week season. It seems like there is plenty of time remaining to get basic left/right tracking going, with the driver manually controlling forward/reverse.
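A minimal sketch of that split (driver owns throttle, vision owns steering), assuming WPILib’s DifferentialDrive and the Limelight’s tx entry; the gain is a hypothetical starting point, not a tuned value:

```java
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.drive.DifferentialDrive;

public class TurnToTarget {
    private static final double K_TURN = 0.03; // hypothetical proportional gain, tune per robot

    private final DifferentialDrive drive;

    public TurnToTarget(DifferentialDrive drive) {
        this.drive = drive;
    }

    /** Call every loop: driver supplies forward/reverse, vision supplies left/right. */
    public void drive(double driverThrottle) {
        double tx = NetworkTableInstance.getDefault()
                .getTable("limelight").getEntry("tx").getDouble(0.0);
        drive.arcadeDrive(driverThrottle, K_TURN * tx);
    }
}
```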

+1 * infinity

I guess the left/right tracking you are referring to needs to take skew into account; that is why my team and I are after solvePNP. Another mechanism would make the elevator heavier, and we should use the mechanisms we already have (the drivetrain) to their full advantage. I take your point, though, and I think I might discuss such a design with my team.

We were able to get tracking on the ball working fairly well with an offset LLv1 on our 2018 bot. In our testing, we simply needed to set the crosshair at the correct position when the ball was ‘close’, angle the camera properly, find some consistent lighting at the school, and let the LL track as we drove forward.

The robot wasn’t perfectly centered on the ball when the ball was 10 feet away; but by the time the ball was just a few inches away it was good enough to grab.

Not that tracking the cargo is particularly useful - unfortunately the LL also picks up the orange robot signal lights as targets.

I understand, but it doesn’t matter what angle you approach the ball from; as long as it ends up centered on your robot (or wherever your intake mechanism is), you are fine. However, you must align the robot so that it is flush with the wall of the rocket, cargo ship, or hatch loading station, AND keep it centered. That’s why it is different.

Even without it built in yet, the steps required to get it running on the RoboRIO with the latest corner update are pretty minimal. OpenCV is already installed on the RoboRIO. Here is the documentation for the Java API.

Brandon shared the following values with me via email. They may allow you to skip the calibration process (calibration done in inches):

camera_matrix: [ 2.5751292067328632e+02, 0., 1.5971077914723165e+02, 0.,2.5635071715912881e+02, 1.1971433393615548e+02, 0., 0., 1. ]
distortion_coefficients: [ 2.9684613693070039e-01, -1.4380252254747885e+00, -2.2098421479494509e-03, -3.3894563533907176e-03, 2.5344430354806740e+00 ]

My team was planning to give these a try tomorrow when the schools in MN reopen.
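In case it helps, here is a rough sketch of how those values could be plugged into OpenCV’s Java solvePnP on the RoboRIO. The corner entry names (tcornx/tcorny) are an assumption about the 2019.4 corner update, and the object points are placeholders you would replace with measured target dimensions, so treat this as a starting point rather than a working implementation:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;
import org.opencv.calib3d.Calib3d;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.MatOfPoint3f;
import org.opencv.core.Point;
import org.opencv.core.Point3;

public class CameraPoseEstimator {
    // The calibration values shared above (inches): row-major 3x3 camera matrix
    // and 5-element distortion vector.
    private static final double[] CAMERA_MATRIX = {
        2.5751292067328632e+02, 0., 1.5971077914723165e+02,
        0., 2.5635071715912881e+02, 1.1971433393615548e+02,
        0., 0., 1.
    };
    private static final double[] DISTORTION = {
        2.9684613693070039e-01, -1.4380252254747885e+00,
        -2.2098421479494509e-03, -3.3894563533907176e-03,
        2.5344430354806740e+00
    };

    // PLACEHOLDER object points: replace with your measured target corner
    // locations in inches, in the same order the Limelight reports its corners.
    private static final Point3[] TARGET_CORNERS = {
        new Point3(-5.0,  2.0, 0.0),
        new Point3( 5.0,  2.0, 0.0),
        new Point3( 5.0, -2.0, 0.0),
        new Point3(-5.0, -2.0, 0.0)
    };

    private final NetworkTable limelight =
            NetworkTableInstance.getDefault().getTable("limelight");

    /** Runs solvePnP on one frame of corner data; does nothing if corners are missing. */
    public void update() {
        // Entry names are an assumption -- check the corner-update docs for the exact keys.
        double[] xs = limelight.getEntry("tcornx").getDoubleArray(new double[0]);
        double[] ys = limelight.getEntry("tcorny").getDoubleArray(new double[0]);
        if (xs.length < 4 || ys.length < 4) {
            return; // not enough corners reported this frame
        }

        Point[] imagePoints = new Point[4];
        for (int i = 0; i < 4; i++) {
            imagePoints[i] = new Point(xs[i], ys[i]);
        }

        Mat cameraMatrix = new Mat(3, 3, CvType.CV_64F);
        cameraMatrix.put(0, 0, CAMERA_MATRIX);
        MatOfDouble distCoeffs = new MatOfDouble(DISTORTION);

        Mat rvec = new Mat();
        Mat tvec = new Mat();
        Calib3d.solvePnP(new MatOfPoint3f(TARGET_CORNERS),
                         new MatOfPoint2f(imagePoints),
                         cameraMatrix, distCoeffs, rvec, tvec);

        // tvec holds the target's position in the camera frame (inches, since the
        // object points and calibration are in inches). Invert the transform if you
        // want the camera's position relative to the target instead.
        double x = tvec.get(0, 0)[0];
        double y = tvec.get(1, 0)[0];
        double z = tvec.get(2, 0)[0];
        System.out.printf("target in camera frame: x=%.1f  y=%.1f  z=%.1f%n", x, y, z);
    }
}
```

One practical note: the ordering of the reported image corners has to match the ordering of your object points, which in my experience is where most of the debugging time goes.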

I think the corners are the corners of the blue box drawn around the vision target. The problem is, they aren’t always the actual corners of the reflective tape, just of that outline.

I’ve looked into some OpenCV, but how would I use it with the Limelight?