Robot positioning with Limelight?


#1

I have been looking at the LigerBots' awesome documentation on vision tracking using a coprocessor (A Step by Step Run-through of FRC Vision Processing).

They are using the data OpenCV provides in solvePnP() and a bunch of calculations to find angle 2 (that's not a great explanation, and I highly suggest anyone wanting to do vision check out their PDF), which would be needed to line up the robot so it's perpendicular to the target, rather than just yawing to face the target.

Angle 2 is defined in the picture, which is taken straight from the PDF.

I have not been able to find an equivalent using the data the Limelight provides.
Is this possible, and if so, what's the best way to go about it?

Thanks!


#2

Unfortunately, the downside to using a device as easy to use as the Limelight is that it is not fully featured. I don't think this is possible with just the Limelight, because you may not be able to get all of the values you'd need to plug into the solvePnP function. Looking at the NetworkTables API for the Limelight shows the values you can work with (a minimal sketch of reading them from robot code follows the lists below).

“Best” Contour information:

tv Whether the limelight has any valid targets (0 or 1)
tx Horizontal Offset From Crosshair To Target (-27 degrees to 27 degrees)
ty Vertical Offset From Crosshair To Target (-20.5 degrees to 20.5 degrees)
ta Target Area (0% of image to 100% of image)
ts Skew or rotation (-90 degrees to 0 degrees)
tl The pipeline’s latency contribution (ms) Add at least 11ms for image capture latency.
tshort Sidelength of shortest side of the fitted bounding box (pixels)
tlong Sidelength of longest side of the fitted bounding box (pixels)
thoriz Horizontal sidelength of the rough bounding box (0 - 320 pixels)
tvert Vertical sidelength of the rough bounding box (0 - 320 pixels)

Advanced Usage with Raw Contours

tx0 Raw Screenspace X
ty0 Raw Screenspace Y
ta0 Area (0% of image to 100% of image)
ts0 Skew or rotation (-90 degrees to 0 degrees)
tx1 Raw Screenspace X
ty1 Raw Screenspace Y
ta1 Area (0% of image to 100% of image)
ts1 Skew or rotation (-90 degrees to 0 degrees)
tx2 Raw Screenspace X
ty2 Raw Screenspace Y
ta2 Area (0% of image to 100% of image)
ts2 Skew or rotation (-90 degrees to 0 degrees)
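
For reference, here is a minimal sketch (WPILib Java, assuming the Limelight's default "limelight" table name; the class and method names are just for illustration) of reading a few of these entries from robot code:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class LimelightReader {
    // The Limelight publishes its values under the "limelight" table by default.
    private final NetworkTable table =
        NetworkTableInstance.getDefault().getTable("limelight");

    /** Returns true if the Limelight currently sees a valid target (tv). */
    public boolean hasTarget() {
        return table.getEntry("tv").getDouble(0.0) >= 1.0;
    }

    /** Horizontal offset from crosshair to target, degrees (-27 to 27). */
    public double getTx() {
        return table.getEntry("tx").getDouble(0.0);
    }

    /** Vertical offset from crosshair to target, degrees (-20.5 to 20.5). */
    public double getTy() {
        return table.getEntry("ty").getDouble(0.0);
    }

    /** Skew/rotation of the fitted bounding box, degrees (-90 to 0). */
    public double getTs() {
        return table.getEntry("ts").getDouble(0.0);
    }
}
```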

An alternative solution is using the Limelight as your camera stream and image source, and then running a solvePnP algorithm on those images on a coprocessor such as a Raspberry Pi.

Another solution that I am not sure about is creating your own custom Limelight GRIP pipeline. Download the Limelight fork of GRIP from here, which will allow you to generate code that runs on the Limelight. It might be possible to detect the corners of the target using GRIP and publish those values to NetworkTables, which is an option available in GRIP. From there, you might be able to plug those into the solvePnP function as your image points.
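
If you did get corner pixel coordinates published to NetworkTables somehow, the solvePnP step itself might look roughly like the sketch below (OpenCV's Java bindings). The target dimensions and corner ordering here are hypothetical placeholders, and the camera matrix and distortion coefficients would have to come from calibrating your own camera; treat this as an illustration rather than a drop-in solution.

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;

public class TargetPose {
    /**
     * Estimates the camera-to-target transform from four corner pixels.
     * cornersPx must be ordered the same way as the model points below.
     */
    public static Mat[] estimatePose(Point[] cornersPx,
                                     Mat cameraMatrix,
                                     MatOfDouble distCoeffs) {
        // Hypothetical target model: a 14.5" x 6" rectangle centered on the
        // target plane (z = 0), in inches. Replace with the real corner
        // geometry of the vision target you are tracking.
        MatOfPoint3f objectPoints = new MatOfPoint3f(
            new Point3(-7.25,  3.0, 0.0),   // top-left
            new Point3( 7.25,  3.0, 0.0),   // top-right
            new Point3( 7.25, -3.0, 0.0),   // bottom-right
            new Point3(-7.25, -3.0, 0.0));  // bottom-left

        MatOfPoint2f imagePoints = new MatOfPoint2f(cornersPx);

        Mat rvec = new Mat();
        Mat tvec = new Mat();
        Calib3d.solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs,
                         rvec, tvec);
        // tvec is the target position in camera coordinates; rvec encodes the
        // rotation, from which angle 2 can be recovered (see the LigerBots PDF).
        return new Mat[] { rvec, tvec };
    }
}
```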


#3

Hi; my first post to CD, hope it helps. We’ve been looking at similar uses for the Limelight, and I think I understand how to solve your issue. Try this:

I assume you have a gyro of some ilk on your robot (without it, accurately turning to a given angle is difficult); and I assume the gyro is calibrated to give you a reasonably accurate value for the robot’s absolute angle relative to the field at any given time. (Gyros will drift, but can also be recalibrated on the fly when you know the robot’s orientation). Assume in your image above that the target is perpendicular to the field, so the “approach line” protruding from the target (along which you’d like to approach it) is at 0 degrees field-relative.

Now, call the robot’s current field-relative angle theta. You can see that the field-relative angle of the line between the robot’s camera and the target is just (theta - angle1). But, a line drawn from the robot’s camera at zero degrees will be parallel to the field and hence parallel to the approach line; so by the parallel postulate, angle2 must also equal (theta - angle1). That is: angle2 is the robot’s current field-relative angle minus the angle between the camera centerline and the target, when the target is perpendicular to the field.

If the target isn’t perpendicular to the field, the same reasoning applies, but everything is rotated by the target angle.

Note that from this information you can also determine the distance the robot needs to drive to reach the approach line. The third angle of the triangle (the angle between the robot's direction and the approach line) is angle3 = 180 deg - angle1 - angle2. By the Law of Sines,
sin(angle3)/Distance = sin(angle2)/DDrive,
where DDrive is the distance you need to drive to reach the approach line, and angle3 is the angle the robot will need to turn to be aligned with the target.
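
In code, that reasoning might look like the sketch below (Java, angles in degrees). Here theta comes from your gyro, angle1 from your camera/Limelight, and distance is whatever camera-to-target range estimate you already have; the target is assumed perpendicular to the field as above, and the names are just for illustration.

```java
public class ApproachMath {
    /** Result of the approach-line calculation described above. */
    public static class Approach {
        public final double angle2;        // angle at the target (theta - angle1)
        public final double angle3;        // turn needed to align with the target
        public final double driveDistance; // distance to reach the approach line

        Approach(double angle2, double angle3, double driveDistance) {
            this.angle2 = angle2;
            this.angle3 = angle3;
            this.driveDistance = driveDistance;
        }
    }

    /**
     * @param theta    robot's field-relative heading from the gyro, degrees
     * @param angle1   angle between camera centerline and target (e.g. Limelight tx), degrees
     * @param distance camera-to-target distance, from your preferred range estimate
     */
    public static Approach compute(double theta, double angle1, double distance) {
        double angle2 = theta - angle1;          // parallel-postulate step above
        double angle3 = 180.0 - angle1 - angle2; // third angle of the triangle
        // Law of Sines: sin(angle3)/distance = sin(angle2)/dDrive
        double dDrive = distance * Math.sin(Math.toRadians(angle2))
                                 / Math.sin(Math.toRadians(angle3));
        return new Approach(angle2, angle3, dDrive);
    }
}
```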

Note: we haven’t actually tried this yet, and someone definitely should check my math! But, I hope it helps!


#4

I like this! It seems a lot simpler than solvePnP. My only fear is how hard the programming is going to be, since every target on the field has a different field angle, unless I'm misunderstanding something: the three sides of the rocket and the three sides of the cargo ship. Possibly the operator could select which target the robot is looking at, or we could do some sort of position mapping. Also, depending on which way the robot is facing (toward or away from the driver), the angles will be different.


#5

I haven’t tried this, but theoretically we could use the height of both the left and right vision tapes to determine which one is closest to the robot: drive the robot as close to perpendicular as possible (while still being able to view the vision tape), drive until both pieces of tape appear the same size, then turn perpendicular to the wall.
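
One caveat if you're doing this with a Limelight: the raw contour entries expose per-contour area (ta0/ta1) rather than height, so a completely untested sketch might compare the two areas as a proxy for apparent size and drive until they roughly match. Figuring out which contour is the left tape versus the right would also take comparing tx0 and tx1.

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class TapeSizeCheck {
    private final NetworkTable table =
        NetworkTableInstance.getDefault().getTable("limelight");

    /**
     * Difference in apparent size between the two raw contours, using area
     * (ta0/ta1) as a stand-in for tape height. Positive means contour 0 looks
     * larger (closer); near zero means the robot is roughly perpendicular.
     */
    public double sizeDifference() {
        double ta0 = table.getEntry("ta0").getDouble(0.0);
        double ta1 = table.getEntry("ta1").getDouble(0.0);
        return ta0 - ta1;
    }

    /** True once both tapes look about the same size, within a tolerance. */
    public boolean looksPerpendicular(double tolerance) {
        return Math.abs(sizeDifference()) < tolerance;
    }
}
```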


#6

Guys, the best image-processing hardware is your eyes.
For the first time we are allowed to use them during the autonomous period, and you are depending on machines to calculate?
Besides, it's much more fun to drive than to watch the robot drive by itself.

Best regards
Shadi


#7

While it is true that we can manually drive in Auto this year, I think that we should all remember this quote from Dean.

“FIRST is more than robots. The robots are a vehicle for students to learn important life skills. Kids often come in not knowing what to expect - of the program nor of themselves. They leave, even after the first season, with a vision, with confidence, and with a sense that they can create their own future.” - Dean Kamen

By programming an autonomous routine, kids are using the robots to learn skills that they can, and most probably will, use throughout their lives, which is the true purpose of FIRST. I say let teams use vision if they want to, as long as they learn something from it.


#8

I disagree. From a driver's point of view there will be a lot of latency from the robot to the FMS to the driver station and back to the robot again. Combined with the limited view from one or two cameras, visibility will be very poor in the first 15 seconds. Looking through the driver station wall in VR also shows that it's extremely hard to see the back side of the rocket. Any way you can have your robot assist your drivers will help you. People aren't perfect, but you can write software to make the drivers look better than they are.


#9

It is not necessarily true that people are the most efficient at placing. For example, regardless of camera latency, a consistent setup with vision processing can prove faster than human control, because every cycle can be controlled with a motion profile or a set of instructions for theoretically optimal movement, from both a chassis and a scoring-mechanism perspective. This would be useful both in the Sandstorm period and in the teleoperated period. Humans can better interpret where we are on the field, but the actual execution of scoring has the potential to be better automated.