Arriving perpendicular using Limelight

So I’m not a programmer, but my students are working with the Limelight on our Deep Space robot and have used it to drive up to the cargo ship. Depending on the starting position, though, the robot does not arrive perpendicular to the target. It seems like there needs to be another control loop to handle that.

I scanned through the Limelight documentation, which they followed, but I didn’t find anything on this topic. Can someone point me (well, my students) in the right direction? :slight_smile:

You can use the 3D pose estimation feature (forgot the exact name) to get the position of your robot relative to the vision target. Then you can project a point out from the target to, say, 5 ft in front of it. If you drive to that point and then align to the target, you’ll (theoretically) be pointing perpendicular. Lots of better ways to do that with motion profiling and the like, but that’s a basic approach.
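A minimal sketch of that idea in Java, assuming the Limelight’s 3D mode is enabled and publishing the “camtran” entry (camera translation and rotation in target space). The 5 ft standoff, the array indices, and the sign conventions are assumptions to check against the docs for your firmware.

```java
import edu.wpi.first.networktables.NetworkTableInstance;

public class StandoffWaypoint {
  private static final double STANDOFF_IN = 60.0; // assumed 5 ft standoff, in inches

  /** Returns the {x, z} vector (in target space) from the camera to the standoff point. */
  public static double[] vectorToStandoff() {
    double[] camtran = NetworkTableInstance.getDefault()
        .getTable("limelight").getEntry("camtran").getDoubleArray(new double[6]);

    double cameraX = camtran[0]; // lateral offset from the target center
    double cameraZ = camtran[2]; // distance out from the target plane

    // The target sits at the origin of target space, so the standoff point is
    // simply (0, STANDOFF_IN). Drive this vector, then turn to face the target
    // and you should be roughly perpendicular.
    return new double[] {0.0 - cameraX, STANDOFF_IN - cameraZ};
  }
}
```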

By not perpendicular do you mean that you are still angled when you finish your auto align?

@RishabRao That is correct: an axis perpendicular to the camera passes through the center point between the two vision targets, but the plane of the hatch panel is not parallel to the plane of the vision target. The gap between the robot and the cargo ship is wedge-shaped.

So it seems like there is another constraint required. I have not investigated the 3D pose estimation feature suggested by @solomondg. I’ll do that.

@solomondg do you know where the ‘pose’ feature is documented? I have to say I looked again and didn’t find much. There is a section in the documentation referring to ‘3D’ but it is threadbare. Still looking for a pointer to more information on this topic.

It should be called compute3D or “Solve 3D”. There’s not a lot of documentation because it’s largely plug and play.

It’s also currently in beta, last time I checked, so it might not be fully functional.

There’s also a little information in the changelog. http://docs.limelightvision.io/en/latest/software_change_log.html#id5

The OpenCV documentation for solvePnP will tell you what’s happening behind the scenes.
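For reference, this is roughly what the underlying call looks like with OpenCV’s Java bindings. The target corner model and camera intrinsics below are placeholders, not real calibration values, so treat it as a sketch of the API rather than working vision code.

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.MatOfPoint3f;
import org.opencv.core.Point;
import org.opencv.core.Point3;

public class SolvePnPExample {
  /**
   * Given the known 3D corner locations of the vision target (placeholder
   * values here, in inches, centered on the target) and the matching pixel
   * coordinates of those corners in the image, recover the camera's rotation
   * and translation relative to the target.
   */
  public static Mat[] estimatePose(Point[] imageCorners) {
    MatOfPoint3f objectPoints = new MatOfPoint3f(
        new Point3(-7.3, 2.9, 0), new Point3(-4.0, 5.3, 0),
        new Point3(4.0, 5.3, 0), new Point3(7.3, 2.9, 0));
    MatOfPoint2f imagePoints = new MatOfPoint2f(imageCorners);

    // Placeholder intrinsics (fx, fy, cx, cy); use a real camera calibration.
    Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
    cameraMatrix.put(0, 0, 320.0);
    cameraMatrix.put(1, 1, 320.0);
    cameraMatrix.put(0, 2, 160.0);
    cameraMatrix.put(1, 2, 120.0);
    MatOfDouble distCoeffs = new MatOfDouble(0, 0, 0, 0, 0);

    Mat rvec = new Mat();
    Mat tvec = new Mat();
    Calib3d.solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

    // tvec is the target's position in camera coordinates; rvec encodes the
    // relative rotation (convert it with Calib3d.Rodrigues if you need a matrix).
    return new Mat[] {rvec, tvec};
  }
}
```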

Just want to warn that Limelight’s solvePnP has two big issues that we ran into, which eventually led us away from using it.
The first is that the image flips quite a bit, meaning that instead of returning that you are two feet to the left of the target, it says you are two feet to the right.
The second is that if the corners of the retro-reflective tape are covered, the solvePnP output changes quite a bit. If I remember correctly, having a hatch slightly covering the tape on the rocket changed the reported perpendicular distance by up to 2 feet.

Here’s a thread with some info on it:
https://www.chiefdelphi.com/t/limelight-2019-5-3d-breaking-changes-and-darker-images

Another way to get 3D data from the Limelight that (in my opinion) is better for this specific use case is to take advantage of the fact that you know how far off the ground your Limelight is mounted, as well as the angle it’s facing. We took raw contours from the Limelight (specifically the two contours, one for each side of the target, closest to the center of the camera, subject to a little bit of filtering).

Since we knew the angle of each target relative to the Limelight from its position in the image, we could take an imaginary line running straight out of the Limelight (i.e. the center of its FoV) and rotate it by those angles. We then found where the resulting lines intersected an imaginary plane parallel to the ground at the exact height of the center of the vision targets, which gave us the X, Y, and Z coordinates of the center of both targets relative to our robot (and, with some pretty simple geometry, the angle of the scoring location as a whole). For all of the geometry calculations we used Apache commons-math to make our lives easier and to handle it all in an object-oriented way.
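Not our actual implementation (we leaned on commons-math for the geometry), but here’s a rough Java sketch of that ray math, treating it as intersecting the camera ray with a horizontal plane at the height of the target centers. The mounting height, camera pitch, and sign conventions are placeholders you’d replace with your own robot’s numbers.

```java
public class TargetRayIntersection {
  static final double CAMERA_HEIGHT = 0.60;               // metres off the floor (placeholder)
  static final double CAMERA_PITCH = Math.toRadians(15);  // upward camera tilt (placeholder)
  static final double TARGET_HEIGHT = 0.75;               // height of the target centers (placeholder)

  /**
   * tx/ty are the horizontal and vertical angles to one contour, in degrees.
   * Returns {forward, sideways, height} of that target center relative to the
   * camera. Call it once per contour; the midpoint of the two results gives
   * the scoring location and the line between them gives its orientation.
   */
  public static double[] intersect(double txDegrees, double tyDegrees) {
    double tx = Math.toRadians(txDegrees);
    double pitch = Math.toRadians(tyDegrees) + CAMERA_PITCH;

    // Unit ray out of the camera, rotated by the image angles.
    double x = Math.cos(pitch) * Math.cos(tx);
    double y = Math.cos(pitch) * Math.sin(tx);
    double z = Math.sin(pitch);

    // Scale the ray until it rises from the camera height to the target height.
    double t = (TARGET_HEIGHT - CAMERA_HEIGHT) / z;
    return new double[] {x * t, y * t, TARGET_HEIGHT};
  }
}
```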

Would it be possible to use a gyro to determine the robot orientation relative to the face of the target, and then plot a path based on the data from both the Limelight and the gyro so that the end of the path is perpendicular to the face of the target?

We tried something similar with solvePnP on our JeVois camera: we generated an on-the-fly trajectory to the target using the target x, y, and rotation, and followed it with a closed-loop controller.

It worked relatively well, but it had some caveats. It only looked at the target once and then generated a trajectory, so if the capture used to generate the trajectory was invalid, it led to some weird trajectories. One way around this is to average the target info over some period before generating a trajectory (a rough sketch of that is below). Another option would be to regenerate the trajectory continuously with new data.
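Not our actual code, but the shape of the averaging approach looks something like this in Java (shown here reading the Limelight’s “tv” and “camtran” NetworkTables entries as a stand-in for whatever camera you use; the window size and array indices are guesses to tune):

```java
import java.util.ArrayDeque;
import java.util.Deque;

import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class TargetSampleAverager {
  private static final int WINDOW = 10; // ~200 ms of samples at 50 Hz (placeholder)

  private final Deque<double[]> samples = new ArrayDeque<>();
  private final NetworkTable limelight =
      NetworkTableInstance.getDefault().getTable("limelight");

  /** Call periodically; returns averaged {x, z, yaw} once the window is full, else null. */
  public double[] update() {
    boolean hasTarget = limelight.getEntry("tv").getDouble(0.0) >= 1.0;
    if (!hasTarget) {
      samples.clear(); // don't let stale captures contaminate the average
      return null;
    }

    double[] camtran = limelight.getEntry("camtran").getDoubleArray(new double[6]);
    samples.addLast(new double[] {camtran[0], camtran[2], camtran[4]});
    if (samples.size() < WINDOW) {
      return null;
    }
    while (samples.size() > WINDOW) {
      samples.removeFirst();
    }

    double[] mean = new double[3];
    for (double[] s : samples) {
      mean[0] += s[0] / WINDOW;
      mean[1] += s[1] / WINDOW;
      mean[2] += s[2] / WINDOW;
    }
    return mean; // hand this to the trajectory generator once, then follow it
  }
}
```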

In the end, we decided it wasn’t necessary to be perpendicular to the target because of the way our intake was designed. At all of our competitions we ended up using a simple P loop on the angle to the target, with the driver controlling the linear velocity (roughly like the sketch below).
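Something along these lines, if anyone wants a starting point; the gain and deadband are made-up numbers to tune, and you may need to flip the sign depending on your drive conventions:

```java
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.wpilibj.drive.DifferentialDrive;

public class AimAssist {
  private static final double KP = 0.03;          // turn output per degree of error (placeholder)
  private static final double DEADBAND_DEG = 0.5; // stop correcting when close enough

  private final DifferentialDrive drive;

  public AimAssist(DifferentialDrive drive) {
    this.drive = drive;
  }

  /** Call from teleopPeriodic while the driver holds the aim button. */
  public void aimAndDrive(double driverForward) {
    double tx = NetworkTableInstance.getDefault()
        .getTable("limelight").getEntry("tx").getDouble(0.0);

    double turn = Math.abs(tx) > DEADBAND_DEG ? KP * tx : 0.0;
    drive.arcadeDrive(driverForward, turn); // flip the sign of KP if it turns away
  }
}
```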

We had the exact same intention during build season in order to get our intake perpendicular. Here’s what we tried and what we learned from it.

  1. The experimental 3D pose estimation from the Limelight is unreliable at distances greater than 2-3 feet or so, mainly because solvePnP fundamentally needs clean, clear-cut corners (something you won’t get, especially when picking up a hatch) and because of the high error rate at those distances.

  2. We tried taking a running average of the target info along with the onboard gyro, then using some trigonometry to calculate the position of the target relative to the robot in order to generate a trajectory, as @Prateek_M and @wgorgen suggested. The problem is that the target’s apparent size and dimensions only change significantly once the robot is already too close to the target (this method also produced significant error due to lighting changes as the robot moved). A rough sketch of that trigonometry is below.
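
Not our actual code, but roughly the kind of trigonometry item 2 refers to, in Java; the camera height, pitch, and the sign conventions on tx and the gyro heading are placeholders to check on your own robot:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class TargetLocator {
  static final double CAMERA_HEIGHT_M = 0.60;                // placeholder mounting height
  static final double TARGET_HEIGHT_M = 0.75;                // placeholder target height
  static final double CAMERA_PITCH_RAD = Math.toRadians(15); // placeholder camera tilt

  /** Returns {x, y} of the target relative to the robot, rotated into the gyro's frame. */
  public static double[] locate(double gyroHeadingDeg) {
    NetworkTable table = NetworkTableInstance.getDefault().getTable("limelight");
    double tx = Math.toRadians(table.getEntry("tx").getDouble(0.0));
    double ty = Math.toRadians(table.getEntry("ty").getDouble(0.0));

    // Floor distance from the standard height/angle relationship.
    double distance = (TARGET_HEIGHT_M - CAMERA_HEIGHT_M) / Math.tan(CAMERA_PITCH_RAD + ty);

    // Field-relative bearing to the target = robot heading plus camera angle
    // (check the sign of tx against your gyro's convention).
    double bearing = Math.toRadians(gyroHeadingDeg) + tx;
    return new double[] {distance * Math.cos(bearing), distance * Math.sin(bearing)};
  }
}
```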

Ultimately we settled for a simple PID loop that is activated along with the spline we run during autonomous and teleop. Activating it a solid 5-7 seconds before contact worked well enough.

In my opinion, while you could find a hacky software solution, a mechanical modification might yield better results for the amount of time you would otherwise have to put in.
