Determining vision target location

In 2020, what are ways to determine the location of the vision target? Does the Limelight report this automatically? Are there algorithms? My team has been struggling with developing our own algorithm over the past few days. We’ve successfully found a way to determine the four corners of the target “trapezoid”, but after that we get stuck.

If you’re curious about how the Limelight works, I’d recommend looking at their documentation. Without getting too deeply into it, I was able to get relatively decent automatic hatch alignment in maybe 3-4 hours, starting from the example code they have posted on their website and not knowing a ton about computer vision/tracking.

The company that produces the camera also has a pretty thorough walk-through of how to tune the settings of the camera to effectively filter out everything but the light reflected back from the vision target.

By location I am assuming you mean the 3D location of the target relative to the camera. If you are just looking for the angle from the camera to the target, the Limelight reports that through its tx and ty values.
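For reference, reading those values in a robot program only takes a few lines. Here’s a minimal Java sketch using WPILib’s NetworkTables (“limelight”, “tx”, “ty”, and “tv” are the Limelight’s default table and entry names):

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class LimelightReader {
    // On a roboRIO the default NetworkTables instance is already connected,
    // so grabbing the Limelight's table is all that's needed.
    private final NetworkTable table =
        NetworkTableInstance.getDefault().getTable("limelight");

    public void printTargetAngles() {
        double tv = table.getEntry("tv").getDouble(0.0); // 1.0 if a target is visible
        double tx = table.getEntry("tx").getDouble(0.0); // horizontal offset, degrees
        double ty = table.getEntry("ty").getDouble(0.0); // vertical offset, degrees

        if (tv >= 1.0) {
            System.out.printf("Target: %.1f deg horizontal, %.1f deg vertical%n", tx, ty);
        }
    }
}
```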

The Limelight does do its own 3D positioning calculations using OpenCV’s solvePnP method. This basically takes a model of the target corner coordinates in real life (in this case, the trapezoid corners that you were able to detect; it would just be a CSV file describing where each real-life corner is relative to the others) and finds the 3D location of the target by matching those model points to the same corner coordinates detected on screen. This is all done on the Limelight: you just input the model, and it sends you the 3D location of the target (technically it sends you the location of the camera relative to the target, but that serves the same purpose).
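If you ever wanted to replicate that on your own coprocessor, here’s a rough sketch of the idea with OpenCV’s Java bindings. The corner coordinates below are made-up placeholders, not real target dimensions, and you’d need your own calibrated camera matrix and distortion coefficients:

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.MatOfPoint3f;
import org.opencv.core.Point3;

public class TargetPoseEstimator {
    // Placeholder corner model, in meters, relative to the target center.
    // These are NOT the real 2020 target dimensions; measure your own.
    private static final MatOfPoint3f MODEL = new MatOfPoint3f(
        new Point3(-0.50,  0.00, 0.0),  // top-left
        new Point3( 0.50,  0.00, 0.0),  // top-right
        new Point3( 0.25, -0.43, 0.0),  // bottom-right
        new Point3(-0.25, -0.43, 0.0)); // bottom-left

    // detectedCorners must be in the same order as the model points above.
    // cameraMatrix and distCoeffs come from a one-time camera calibration.
    public static Mat estimateTranslation(MatOfPoint2f detectedCorners,
                                          Mat cameraMatrix, MatOfDouble distCoeffs) {
        Mat rvec = new Mat(); // rotation of the target relative to the camera
        Mat tvec = new Mat(); // translation of the target relative to the camera
        Calib3d.solvePnP(MODEL, detectedCorners, cameraMatrix, distCoeffs, rvec, tvec);
        return tvec;
    }
}
```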

The solvePnP calculations require the Limelight’s higher-resolution mode to run, however, which greatly reduces the detection refresh rate and, from what we have found, also increases the latency of the whole process. That is what led our team last year to experiment with our own methods of 3D calculation. They are not as accurate, but they were a lot faster than solvePnP. I cannot speak for the accuracy of our calculations in the 2020 game, especially since you might be aiming at the target from a lot further away than in 2019, but they worked quite well for us in 2019, so here they are in case they help:

Basically, what we did was divide the 3D location into a few steps, and in this case we decided only the top-down 2D location was necessary for our application in the 2019 game. The four values we cared about finding for the location of the target were:

- the angle from the camera to the target
- the angle of the target itself
- the distance to the target
- the lateral distance away from the perpendicular line extending out of our camera

That can be shown in the following diagram:

[diagram: top-down view of the camera and target, labeling the four values above]


The first value, the angle from the camera to the target, is given by the Limelight as tx. When I add that into the diagram, you will notice how it forms a nice right triangle.

Therefore, all we now need to find is the distance (x), and we can solve the triangle to find y as well (y = tan(tx) * distance). Distance can be estimated in multiple ways, two of which are described in the Limelight docs (Case Study: Estimating Distance — Limelight 1.0 documentation). The first one, using the pitch angle, is probably the best solution for 2020, since the vision target is mounted a lot higher than your camera will be. What we did in 2019, since our camera and the vision target were roughly the same height and we could not use the pitch method, was to estimate distance by comparing the real-life height of the vision target tape to its height in pixels. This is described in more detail in this article: Find distance from camera to object using Python and OpenCV. The pixel-height method could instead be useful for getting the distance to the low target near where you are fed balls by the driver station in the 2020 game, but for the high goal target, the pitch method described in the Limelight docs is probably the best.
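Both distance estimates, plus the y = tan(tx) * distance step, boil down to a few lines of math. Here’s a sketch (the pitch-method variable names mirror the Limelight docs; focalPx is a constant you’d calibrate yourself against a known distance):

```java
public class DistanceMath {
    // Pitch-angle method from the Limelight docs:
    // h1 = camera lens height, h2 = target height (meters),
    // a1 = camera mounting pitch, a2 = ty from the Limelight (degrees).
    static double distanceFromPitch(double h1, double h2, double a1, double a2) {
        return (h2 - h1) / Math.tan(Math.toRadians(a1 + a2));
    }

    // Apparent-size method: compare the target's known real height to its
    // height in pixels. focalPx is the focal length in pixels, calibrated
    // once at a known distance: focalPx = pixelHeight * knownDistance / realHeight.
    static double distanceFromPixelHeight(double realHeight, double pixelHeight,
                                          double focalPx) {
        return realHeight * focalPx / pixelHeight;
    }

    // Lateral offset from the perpendicular line out of the camera (the y in
    // the triangle above): y = tan(tx) * distance.
    static double lateralOffset(double txDegrees, double distance) {
        return Math.tan(Math.toRadians(txDegrees)) * distance;
    }
}
```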
At this point we have tx, distance (x), and y, so the last thing we want is the angle of the target. To do this, we simply treated each side of the vision target as a separate 3D location, found the difference in their distances and y values to form a right triangle around the target, and solved for the angle. Alternatively, you could find only the difference in distance and use the known width of the vision target as the hypotenuse of the triangle, then solve for the angle that way.
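As a sketch of that last step, assuming you’ve already computed a (distance, y) pair for the left and right sides of the target (the helper names here are hypothetical):

```java
public class TargetAngleMath {
    // Full right-triangle version: the differences in distance and lateral
    // offset between the two target sides are the triangle's legs.
    // A result of 0 means the target is facing the camera straight on.
    static double angleFromBothSides(double leftDistance, double leftY,
                                     double rightDistance, double rightY) {
        return Math.toDegrees(Math.atan2(rightDistance - leftDistance,
                                         rightY - leftY));
    }

    // Hypotenuse version: only the distance difference is needed, since the
    // known target width acts as the hypotenuse.
    static double angleFromWidth(double leftDistance, double rightDistance,
                                 double targetWidth) {
        return Math.toDegrees(Math.asin((rightDistance - leftDistance) / targetWidth));
    }
}
```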

To achieve this with the Limelight, however, you would probably need to send over the raw corner coordinates and do your own calculation of the tx value for the left and right sides of the target, since the Limelight only sends the tx value to the middle of the target. Information on the calculations needed to find tx for custom points can also be found in the Limelight documentation: Additional Theory — Limelight 1.0 documentation.
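Here’s roughly what that pixel-to-angle math looks like, following the virtual-image-plane approach from that docs page. This sketch assumes a 320-pixel-wide image and the Limelight 1’s 54-degree horizontal FOV; a Limelight 2 has a wider FOV, so substitute its value instead:

```java
public class PixelToAngle {
    // Assumptions: 320x240 image, Limelight 1 horizontal FOV of 54 degrees.
    static final double HORIZONTAL_FOV_DEG = 54.0;
    static final int IMAGE_WIDTH = 320;

    // Returns a tx-like horizontal angle (degrees) to any raw pixel x-coordinate.
    static double txForPixel(double pixelX) {
        // Normalize the pixel coordinate to roughly [-1, 1]
        double nx = (pixelX - (IMAGE_WIDTH / 2.0 - 0.5)) / (IMAGE_WIDTH / 2.0);
        // Width of a virtual image plane placed 1.0 unit in front of the lens
        double vpw = 2.0 * Math.tan(Math.toRadians(HORIZONTAL_FOV_DEG) / 2.0);
        // x-coordinate on that plane, then the angle to it
        double x = (vpw / 2.0) * nx;
        return Math.toDegrees(Math.atan(x));
    }
}
```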

I hope this at least helps give you an idea of where to start, although I’m not sure of the accuracy of this approach with the 2020 vision target. Our team will most likely be trying it out in the next week.
