As the title says, what does the Limelight’s skew value (ts) actually return? My students are working on alignment and distance sensing using the Limelight for the power port. We have a fixed-angle shooter and derived a fantastic equation relating distance to the target and launch speed so we can fire from any distance back, but we don’t yet have a working solution for finding our true distance from any position on the field.
We are currently using the vertical angle (ty) to calculate floor distance to the target, but as has been pointed out here many times, that is no good when the robot is at an angle to the power port itself. We can align ourselves perfectly, but we’re not measuring the true floor distance to the target, since vertical angle is the same whether we’re perfectly perpendicular or off to the side.
We’re looking into various options for calculating true floor distance, and we’re open to using the Limelight’s built-in 3D functionality, but it seems to be difficult to get reliable values from it at greater distances. We’d like to explore other possibilities before changing our strategy, and it’s hard to know what those possibilities are without knowing what all of the values from the Limelight actually return.
Does anyone know, and are there any other thoughts or suggestions? Thanks as always!
As for calculating the distance, it definitely depends on your robot capabilities, but we’ve found that since we only shoot at the target when we’re looking right at it, it doesn’t matter if the equation is inaccurate when we’re at an angle. We use tx to point right at the target, and then use ty to calculate our shot.
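In case it helps, a bare-bones version of the “point right at it” part looks something like the snippet below. The gain, deadband, and drivetrain call are placeholders, not our actual numbers:

```java
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.drive.DifferentialDrive;

public class AimHelper {
    // Illustrative tuning values only - every drivetrain needs its own.
    private static final double kP = 0.03;
    private static final double kMinCommand = 0.05;

    /** Turns toward the target until tx is inside a one-degree deadband. */
    public static void aim(DifferentialDrive drive) {
        double tx = NetworkTableInstance.getDefault()
                .getTable("limelight").getEntry("tx").getDouble(0.0);

        double turn = 0.0;
        if (Math.abs(tx) > 1.0) {
            // Proportional turn plus a small kick to overcome static friction;
            // flip the sign if your robot turns the wrong way.
            turn = kP * tx + Math.copySign(kMinCommand, tx);
        }
        drive.arcadeDrive(0.0, turn);
    }
}
```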
If you need to be accurate at an angle, I think it should be possible to combine ts, tx, and ty to get a good distance estimate, but it’s probably more complicated.
You should only need tx and ty (and a bit of trig) to get floor distance. Skew is simply the angle of the blue bounding box to the yellow box, effectively showing how “crooked” the target is. This might be used to estimate an unsigned normal of the target.
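If it helps, the “bit of trig” for floor distance usually ends up looking something like this when you’re square to the target (the heights and tilt below are example numbers, not anyone’s actual robot):

```java
import edu.wpi.first.networktables.NetworkTableInstance;

public class SimpleRange {
    // Example mounting numbers - measure your own robot and the field element.
    private static final double CAMERA_HEIGHT_M = 0.60;  // lens height off the floor
    private static final double TARGET_HEIGHT_M = 2.49;  // height of the target center
    private static final double CAMERA_PITCH_DEG = 16.0; // camera tilt up from horizontal

    /** Floor distance to the target in meters, assuming the target is centered left/right. */
    public static double floorDistance() {
        double ty = NetworkTableInstance.getDefault()
                .getTable("limelight").getEntry("ty").getDouble(0.0);
        double angle = Math.toRadians(CAMERA_PITCH_DEG + ty);
        return (TARGET_HEIGHT_M - CAMERA_HEIGHT_M) / Math.tan(angle);
    }
}
```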
Awesome, thanks! Yeah, unfortunately we’ve found that ty is only really a good measure of distance when we’re already perpendicular to the target. Just turning until tx=0 hasn’t worked all that well for us. We’ll keep cracking away at it!
Skew is interesting, but it is a difficult metric to use. To understand why, take a laser pointer which has a ‘+’ shape in it and shine it at the wall.
1. Angle the laser pointer up by a set amount of degrees (say 45 or 60 degrees). Notice that the top leg of the ‘+’ grows much more than the bottom leg.
2. Rotate the laser pointer left/right, such that the center of rotation is the same exact point on the wall and the vertical line of the ‘+’ does not grow or shrink. Notice how the horizontal line ‘skews’ so that it is no longer horizontal.
3. Re-level the horizontal line. Keeping the center of the ‘+’ in the same exact spot, move the laser pointer closer and at a higher angle. Notice how the top leg grows even more.
4. At this new distance, rotate the laser pointer the same way you did in step 2. Notice how the horizontal line picks up a steeper slope much more quickly than it did when the laser pointer was at a shallower angle. Also notice that there is a point in this exercise where the skew angle at the close distance matches the skew angle at the far distance.
There is a relationship in this projection geometry that I don’t fully understand. I vaguely remember using spherical coordinates for projections like this in Calc 3 in college, but other than that I haven’t been able to figure out what to search for. Ultimately, though, the localized azimuth from the robot to the target (recovered from skew) is a function of the skew angle, the translation distance in (x, y), and the elevation angle.
Can anyone shed some light on this? Perhaps @Jared_Russell ?
The reason that vertical angle doesn’t work when your robot is at an angle is that your camera image is a rectangle, but the surface of 3D “real world” points at a given constant range is a sphere.
So as your camera rotates along the “equator”, non-equatorial points at constant latitude actually appear to move higher or lower in the image. You can also see this effect by looking at your cell phone camera while you swivel in an office chair: watch which objects you can see at the extreme top and bottom of the image as you turn. Things that appear at the top when they’re in the center (left to right) disappear completely when you turn a bit and they move toward the corners. So there’s a coupling between the x and y coordinates that needs to be taken into account!
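Here’s a quick numeric illustration of that coupling. A point at a true elevation of 20 degrees shows up at a larger and larger vertical angle in the image as it moves off-center horizontally (the angles are arbitrary, just for demonstration):

```java
public class CouplingDemo {
    public static void main(String[] args) {
        double elevation = Math.toRadians(20.0);  // true elevation above horizontal
        for (double azDeg : new double[] {0, 15, 30}) {
            double az = Math.toRadians(azDeg);
            // Direction to the point in camera coordinates (z forward, y up, x right).
            double x = Math.cos(elevation) * Math.sin(az);
            double y = Math.sin(elevation);
            double z = Math.cos(elevation) * Math.cos(az);
            // The vertical angle the camera reports depends only on y and z,
            // so it grows with azimuth even though the true elevation is fixed.
            double apparentTy = Math.toDegrees(Math.atan2(y, z));
            System.out.printf("azimuth %2.0f deg -> apparent vertical angle %.1f deg%n",
                    azDeg, apparentTy);
        }
    }
}
```

At zero azimuth the reported angle is exactly 20 degrees; at 30 degrees of azimuth it comes out closer to 23, which is exactly the error the normalization below corrects for.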
The math to deal with this is straightforward but can be unintuitive. Think of tx and ty as the angles of a vector in 3D polar coordinates. Now figure out the x, y, and z components of that vector. If we say that “z” is the vector “into” the image, then we can assign it a unit length (1.0) for now. “x” would be 1.0 * tan(tx), and “y” would be 1.0 * tan(ty).
Note that this vector is not normalized; length is >1.0 unless tx and ty are both zero. To normalize, divide each component by the length (sqrt(x^2+y^2+z^2)). This normalization accounts for the spherical issue that couples x and y angles.
If your camera is not mounted perpendicular to the ground plane, now is when you’d want to compensate for its pitch (so if it’s pitched up 30*, you want to rotate your 3D vector down 30*).
Finally, your goal is to find the intersection between this 3D vector and the (known, constant) height of the target. The length of the base of this triangle gives you range. As an example, here is our 2019 code for doing this (note that what we call x,y,z may be different than your convention).
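If it’s useful as a standalone reference, here’s a rough sketch of those steps (this is not our linked code - the mounting numbers are placeholders and the axis conventions may not match yours):

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class VectorRange {
    // Placeholder mounting/field constants - substitute your own measurements.
    private static final double CAMERA_PITCH_DEG = 16.0; // camera tilted up from horizontal
    private static final double CAMERA_HEIGHT_M  = 0.60; // lens height above the floor
    private static final double TARGET_HEIGHT_M  = 2.49; // height of the target center

    /** Horizontal floor distance to the target, in meters. */
    public static double floorDistance() {
        NetworkTable table = NetworkTableInstance.getDefault().getTable("limelight");
        double tx = Math.toRadians(table.getEntry("tx").getDouble(0.0));
        double ty = Math.toRadians(table.getEntry("ty").getDouble(0.0));

        // 1. Unit-depth ray in camera coordinates (z into the image, y up, x right).
        double x = Math.tan(tx);
        double y = Math.tan(ty);
        double z = 1.0;

        // 2. Normalize so an off-center target doesn't inflate the y component.
        double len = Math.sqrt(x * x + y * y + z * z);
        x /= len;
        y /= len;
        z /= len;

        // 3. Account for camera pitch so y is measured from horizontal
        //    (with tx = 0 this makes the ray's elevation equal pitch + ty).
        double pitch = Math.toRadians(CAMERA_PITCH_DEG);
        double yField = Math.cos(pitch) * y + Math.sin(pitch) * z;
        double zField = -Math.sin(pitch) * y + Math.cos(pitch) * z;

        // 4. Scale the ray until it reaches the target's height, then take the
        //    length of the horizontal "base" of that triangle as the range.
        double scale = (TARGET_HEIGHT_M - CAMERA_HEIGHT_M) / yField;
        return scale * Math.sqrt(x * x + zField * zField);
    }
}
```

The piece that the simpler ty-only formula skips is step 2; that normalization is what keeps the estimate honest when the target is off to the side.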
This is a fantastic explanation, thanks! I’d been looking into perspective projections but was curious whether the Limelight already provided a quicker pre-made solution.
Since we do our alignment by turning until tx=0, it looks like that would make “x”=zero in your example. Following that through the process you suggest, we’d end up with a “z” of 1.0, an “x” of 0.0, and a “y” of tan(ty), so our normalized coordinates will always be
x = 0.0
y = tan(ty) / sqrt(tan(ty)^2 + 1)
z = 1/sqrt(tan(ty)^2 + 1)
Yes, and you can simplify it further using the trig identity:
sqrt(tan(x)^2 + 1) = 1/cos(x)
We get:
x = 0
y = tan(ty) / sqrt(tan(ty)^2 + 1) = tan(ty) * cos(ty) = sin(ty)
z = cos(ty)
This is already normalized since sqrt(x^2 + y^2 + z^2) = sqrt(sin(ty)^2 + cos(ty)^2) = 1 no matter the value of ty. So when ‘tx’ (and therefore ‘x’) is zero, you can just use the ‘ty’ angle directly without having to do any correction for the value of ‘x’. The reason to use the 3D vector is to get a precise range estimate when the target is off center left/right.
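(As a quick numeric check with a made-up angle: at ty = 20 degrees the vector is (0, sin(20), cos(20)) = (0, 0.342, 0.940), and 0.342^2 + 0.940^2 = 1, as expected.)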
So what you’re ultimately saying is that if we’re off to the side, but turn until tx = 0, we can find our range coordinate (z) by just taking the cosine of ty? And if our camera is tilted 16 degrees up, then we will add 16 to the ty?
Great, thanks! After looking through your code, I see that you’re taking the top corners to build your vertical coordinate in the viewplane, and then using that to scale your output. Is there any reason you chose to go that route over using ty directly? Also, I see a number of references to skew throughout your code, but I can’t find what you’re using it for. Is it just data collection? Thanks again!
ty is measured to the center of the target as it appears to the camera. That may not be the actual center of the vision target due to perspective effects (nearer parts of the target appearing bigger, etc.). Measuring the top corners ourselves lets us determine the actual center of the target.
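If you want to play with that approach, here’s a rough sketch of pulling the corner data and turning the midpoint of the top edge back into angles. The “tcornxy” key, the resolution, and the field-of-view numbers below are assumptions (check the Limelight docs for your firmware, and corner output has to be enabled in the camera’s web interface):

```java
import edu.wpi.first.networktables.NetworkTableInstance;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class TopCorners {
    // Assumed stream geometry for a Limelight 2 - verify against your camera.
    private static final double WIDTH = 320.0, HEIGHT = 240.0;
    private static final double HFOV_DEG = 59.6, VFOV_DEG = 49.7;

    /**
     * Returns {tx, ty} in degrees for the midpoint of the target's two top
     * corners, or null if fewer than two corners are reported.
     */
    public static double[] topEdgeAngles() {
        double[] raw = NetworkTableInstance.getDefault()
                .getTable("limelight").getEntry("tcornxy").getDoubleArray(new double[0]);

        // Repack the flat [x0, y0, x1, y1, ...] array into points and keep the
        // two that sit highest in the image (smaller pixel y is higher up).
        List<double[]> pts = new ArrayList<>();
        for (int i = 0; i + 1 < raw.length; i += 2) {
            pts.add(new double[] {raw[i], raw[i + 1]});
        }
        if (pts.size() < 2) {
            return null;
        }
        pts.sort(Comparator.comparingDouble((double[] p) -> p[1]));
        double topX = (pts.get(0)[0] + pts.get(1)[0]) / 2.0;
        double topY = (pts.get(0)[1] + pts.get(1)[1]) / 2.0;

        // Simple pinhole model: pixel offset from the image center -> angle.
        double fx = (WIDTH / 2.0) / Math.tan(Math.toRadians(HFOV_DEG / 2.0));
        double fy = (HEIGHT / 2.0) / Math.tan(Math.toRadians(VFOV_DEG / 2.0));
        double txDeg = Math.toDegrees(Math.atan((topX - WIDTH / 2.0) / fx));
        double tyDeg = Math.toDegrees(Math.atan((HEIGHT / 2.0 - topY) / fy));
        return new double[] {txDeg, tyDeg};
    }
}
```

Those angles can then feed the same vector math as above, using the known height of the target’s top edge instead of its center.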