Limelight Real World Camera Positioning

This is where the strategy in our approach could change. The drivers will get better at depth perception with more practice and will find ways to get the robot about 90% of the way flush. After the vision system guides the robot in, the mechanisms themselves can hopefully make up the remaining 10%.

You are absolutely correct. I have a bigger plan for my team than just winning. I want them to learn what technology can do, even with a simple co-processor, and how that scales up to the advanced automation that takes place in cars today. Take manual cars vs. automatics: manuals tend to get better gas mileage and aren’t very hard to drive, but you still need some practice to learn them. I want to be able to put a driver with almost no experience into an off-season event and make him/her look like a pro. So I would rather give them an automatic so they can focus on other things, such as timing and how much is left in the match. I know you’ll understand.

I believe that this year’s “autonomous” will be much different, as we can see all across CD. The biggest change is going to be off the field: because we are talking about it more, students will learn more about automation and when and how to use it. If we fail with solvePNP, then we will learn that we should probably count on drivers.

I just had an idea: I think we should do a follow-up on this thread/topic later in the season. We might find that a mix is best, sometimes letting humans have control and other times going fully autonomous. Something else we should include in this thread is all the LL and vision documentation that isn’t part of the official LL docs. Is anyone willing to gather that? I don’t have any of our own yet, but we will in the next couple of months. Just ideas to help students learn.

I’m not sure how you would process the image for general OpenCV use. However, SolvePNP does not require an image to run, just the pixel coordinates of the points of interest. Limelight provides these points over network tables.

The points are the points of the blue box, are they not? The blue box is not the same as the contour of the reflective tape; it is the rectangle around it. That is why we need solvePNP to be built in, or LL needs to publish the corners of the contour to NetworkTables, not the corners of the blue box. Please let me know if you have found a way to make it work using just the blue box.

I’m not sure; I have never run it in person. I am a remote mentor working with a team in Minnesota that has been without robot access due to weather this week. The release notes mentioned a “corner approximation” slider. I wonder what its behavior is. :thinking:

You are right that we need points corresponding to known world locations (ideally, the corners of the reflective tape). I will let you know what my team discovers today as they get back into the shop.

Thank you. If you could do some testing with it, that would be awesome. I have done some of my own and came out with no clear results, just time spent learning that the team needs the corners of the tape contour, not the corners of the blue box, because like you stated, you have to have real-world coordinates.

Hey guys, the corner points are the actual corners of the contour. We added these so you can use them with solvePNP. More info is coming soon.

Oh, I was unaware. Thank you for letting us know. If you could still include it that would be great.

We were able to get an initial system that worked reasonably well (1 inch, 3-4 degree precision). Can confirm the actual corners were used. Here is a barebones Java implementation:

import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

import org.opencv.calib3d.Calib3d;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.MatOfPoint3f;
import org.opencv.core.Point3;

public class VisionProcessor {

    private MatOfPoint3f mObjectPoints;
    private Mat mCameraMatrix;
    private MatOfDouble mDistortionCoefficients;

    private NetworkTable mLimelightTable;

    public VisionProcessor() {
        // Define bottom right corner of left vision target as origin
        mObjectPoints = new MatOfPoint3f(
                new Point3(0.0, 0.0, 0.0), // bottom right
                new Point3(-1.9363, 0.5008, 0.0), // bottom left
                new Point3(-0.5593, 5.8258, 0.0), // top-left
                new Point3(1.377, 5.325, 0.0) // top-right
        );

        // Camera intrinsics (fx, cx, fy, cy) from calibrating the Limelight camera
        mCameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
        mCameraMatrix.put(0, 0, 2.5751292067328632e+02); // fx
        mCameraMatrix.put(0, 2, 1.5971077914723165e+02); // cx
        mCameraMatrix.put(1, 1, 2.5635071715912881e+02); // fy
        mCameraMatrix.put(1, 2, 1.1971433393615548e+02); // cy

        // Distortion coefficients (k1, k2, p1, p2, k3) from the same calibration
        mDistortionCoefficients = new MatOfDouble(2.9684613693070039e-01, -1.4380252254747885e+00, -2.2098421479494509e-03, -3.3894563533907176e-03, 2.5344430354806740e+00);

        mLimelightTable = NetworkTableInstance.getDefault().getTable("limelight");
        mLimelightTable.getEntry("pipeline").setNumber(0);
        mLimelightTable.getEntry("camMode").setNumber(0);
        mLimelightTable.getEntry("ledMode").setNumber(3);
    }

    public void update() {
        double[] cornX = mLimelightTable.getEntry("tcornx").getDoubleArray(new double[0]);
        double[] cornY = mLimelightTable.getEntry("tcorny").getDoubleArray(new double[0]);

        if (cornX.length != 4 || cornY.length != 4) {
            System.out.println("[ERROR] Could not find 4 points from image");
            return;
        }

        PointFinder pointFinder = new PointFinder(cornX, cornY);

        MatOfPoint2f imagePoints = new MatOfPoint2f(
                pointFinder.getBottomRight(), 
                pointFinder.getBottomLeft(),
                pointFinder.getTopLeft(), 
                pointFinder.getTopRight()
         );

        Mat rotationVector = new Mat();
        Mat translationVector = new Mat();
        Calib3d.solvePnP(mObjectPoints, imagePoints, mCameraMatrix, mDistortionCoefficients, rotationVector, translationVector);

        System.out.println("rotationVector: " + rotationVector.dump());
        System.out.println("translationVector: " + translationVector.dump());
    }
}

The most annoying part was that the order of the points was not consistent, so we had to implement a helper class (PointFinder) to work out which point corresponded to which corner of the tape.

The other thing we struggled with was the coordinate system for the pixels. While we have not confirmed this, we believe the origin is the top-left of the image, with positive x to the right and positive y down.
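
In case it helps anyone, here is an untested sketch of the general idea behind such a helper (not our exact PointFinder), assuming the top-left-origin convention above: split the four corners into top and bottom pairs by pixel y, then order each pair by x.

import java.util.Arrays;

import org.opencv.core.Point;

public class PointFinder {

    private final Point mTopLeft, mTopRight, mBottomLeft, mBottomRight;

    public PointFinder(double[] xs, double[] ys) {
        Point[] points = new Point[4];
        for (int i = 0; i < 4; i++) {
            points[i] = new Point(xs[i], ys[i]);
        }

        // Image origin is the top-left with +y pointing down, so the two
        // points with the smallest y values are the "top" corners.
        Arrays.sort(points, (a, b) -> Double.compare(a.y, b.y));

        mTopLeft = points[0].x < points[1].x ? points[0] : points[1];
        mTopRight = points[0].x < points[1].x ? points[1] : points[0];
        mBottomLeft = points[2].x < points[3].x ? points[2] : points[3];
        mBottomRight = points[2].x < points[3].x ? points[3] : points[2];
    }

    public Point getTopLeft() { return mTopLeft; }
    public Point getTopRight() { return mTopRight; }
    public Point getBottomLeft() { return mBottomLeft; }
    public Point getBottomRight() { return mBottomRight; }
}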

Very nice. Yes, I too noticed that the corner ordering was inconsistent and seemed dependent on pose.

Were the camera matrix and distortion coefficients you got from Brandon for an LL v1 or v2?

v2

Ok, could you share your PointFinder code? I am still in the dark as to what getBottomRight(), etc. return. I have tried to get this working without solvePNP, essentially building my own version of PNP, and it is still not working.

So, I have to post a conclusion my team came to after this Saturday’s testing. By just calibrating the cross-hair (i.e., positioning the robot at the spot where you want it to stop), using dual-target mode to get both sides of the target, and using the top of the blue box around the targets, we got PID aiming and ranging (side-to-side and distance) working reasonably well. We will keep playing with the numbers, that is, kp and kd for both aiming and distance, but you really don’t need solvePNP this year. We did the same thing with the cargo and it worked perfectly. We chose the top of the target, but you could use the bottom depending on the height and mounting angle of your camera. If you have any questions or want the code, please message me.
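
For reference, here is a rough sketch of what that crosshair-based aiming and ranging looks like in code. The NetworkTables keys (tv/tx/ty) are standard Limelight values, but the gains are placeholders, not the numbers we actually use.

import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class AimAndRange {

    private static final double kpAim = 0.03;      // placeholder proportional gain for aiming
    private static final double kpDistance = 0.05; // placeholder proportional gain for ranging

    private final NetworkTable mLimelightTable =
            NetworkTableInstance.getDefault().getTable("limelight");

    /** Returns {left, right} percent outputs for a differential drive. */
    public double[] calculate() {
        double tv = mLimelightTable.getEntry("tv").getDouble(0.0); // 1.0 when a target is visible
        if (tv < 1.0) {
            return new double[] {0.0, 0.0};
        }

        double tx = mLimelightTable.getEntry("tx").getDouble(0.0); // horizontal offset, degrees
        double ty = mLimelightTable.getEntry("ty").getDouble(0.0); // vertical offset, degrees

        // With the cross-hair calibrated at the desired stopping spot, both
        // errors go to zero when the robot is where we want it.
        double steer = kpAim * tx;
        double drive = kpDistance * ty;

        return new double[] {drive + steer, drive - steer};
    }
}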

I hope I don’t sound like a total idiot, but how do you use the rotation and translation vectors to determine the real life position of your robot relative to the target?

mObjectPoints = new MatOfPoint3f(
        new Point3(0.0, 0.0, 0.0), // bottom right
        new Point3(-1.9363, 0.5008, 0.0), // bottom left
        new Point3(-0.5593, 5.8258, 0.0), // top-left
        new Point3(1.377, 5.325, 0.0) // top-right
);

Also what units are you using for these coordinates? Have you scaled them in such a way that the translation vector can be directly interpreted as inches? If so, how did you scale it?

We use the translation vector directly, with Z being the distance away and X being the distance to the side. @Brandon_Hjelstrom mentioned in an email that SolvePNP by default yields a camera-space transform, and provided the following solution (in C++) to convert to object space:

Mat R;
Rodrigues(rotationVector, R);             // rotation vector -> 3x3 rotation matrix
R = R.t();                                // transpose = inverse for a rotation matrix
translationVector = -R*translationVector; // camera-space -> object-space translation

I think this should be the equivalent in Java (side note, I really don’t like the Java APIs):

Mat objectSpaceTranslationVector = new Mat();
// Note the transpose of the rotation matrix, mirroring R = R.t() in the C++ above
Core.gemm(rotationMatrix.t(), translationVector, -1.0, new Mat(), 0.0, objectSpaceTranslationVector);

However, we have found the values in the translationVector from the Java API to be more or less accurate (See disclaimer at end of post).

For the rotation vector, SolvePNP gives you a compact rotation representation (Rodrigues notation). To convert this into Euler angles (ROLL, PITCH, YAW), we do the following:

Mat rotationVector = new Mat();
Mat translationVector = new Mat();
Calib3d.solvePnP(mObjectPoints, imagePoints, mCameraMatrix, mDistortionCoefficients,
                 rotationVector, translationVector);

Mat rotationMatrix = new Mat();
Calib3d.Rodrigues(rotationVector, rotationMatrix);

Mat projectionMatrix = new Mat(3, 4, CvType.CV_64F);
projectionMatrix.put(0, 0,
        rotationMatrix.get(0, 0)[0], rotationMatrix.get(0, 1)[0], rotationMatrix.get(0, 2)[0], translationVector.get(0, 0)[0],
        rotationMatrix.get(1, 0)[0], rotationMatrix.get(1, 1)[0], rotationMatrix.get(1, 2)[0], translationVector.get(1, 0)[0],
        rotationMatrix.get(2, 0)[0], rotationMatrix.get(2, 1)[0], rotationMatrix.get(2, 2)[0], translationVector.get(2, 0)[0]
);

// The full overload is needed to get eulerAngles; the other output Mats aren't used below
Mat cameraMatrix = new Mat();
Mat rotMatrix = new Mat();
Mat transVect = new Mat();
Mat rotMatrixX = new Mat();
Mat rotMatrixY = new Mat();
Mat rotMatrixZ = new Mat(); 
Mat eulerAngles = new Mat();
Calib3d.decomposeProjectionMatrix(projectionMatrix, cameraMatrix, rotMatrix, transVect, rotMatrixX, rotMatrixY, rotMatrixZ, eulerAngles);

double rollInDegrees = eulerAngles.get(2, 0)[0];
double pitchInDegrees = eulerAngles.get(0, 0)[0];
double yawInDegrees = eulerAngles.get(1, 0)[0];

Inches. We define the bottom right corner of the left vision target as the origin; the y axis is positive going up vertically, z is positive facing out of the target, and x is positive to the right.
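
For anyone wondering where the object-point numbers above come from, they appear to match the 2019 target geometry: a 2 in x 5.5 in strip of tape tilted 14.5 degrees. A quick sketch of the math, assuming that is indeed how they were derived:

double tilt = Math.toRadians(14.5);
double width = 2.0, length = 5.5; // tape dimensions in inches

// Bottom right corner of the left target is the origin.
double[] bottomLeft = { -width * Math.cos(tilt),  width * Math.sin(tilt) };  // (-1.936, 0.501)
double[] topRight   = {  length * Math.sin(tilt), length * Math.cos(tilt) }; // ( 1.377, 5.325)
double[] topLeft    = { bottomLeft[0] + topRight[0],
                        bottomLeft[1] + topRight[1] };                       // (-0.559, 5.826)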

DISCLAIMER:
At the time I mentioned we were getting reasonable results, that was with minimal testing all very close to the goal (within 4 feet) and with little yaw. Upon further experimentation, there are still a few kinks we need to work out. Namely, we experience rapid jumps from relatively small changes in image points. For example, when we place the robot 5 feet back and 2 feet to the right with 0 yaw, we read the following image points and compute the following poses:

{48.0, 138.0} 
 {40.0, 136.0} 
 {45.0, 114.0} 
 {54.0, 114.0} 
 X = 2.4235989629072314 
 Y = 1.2370888865388812 
 Z = 4.717115774644273 
 ROLL = -7.555688896466208 
 PITCH = 165.9771402205544 
 YAW = 1.5292313860396367 
 ============================= 
 {48.0, 138.0} 
 {40.0, 136.0} 
 {45.0, 114.0} 
 {53.0, 114.0} 
 X = 2.864381855099463 
 Y = 0.9925235082316144 
 Z = 4.605675917036408 
 ROLL = -7.962130849477691 
 PITCH = 168.14583005865828 
 YAW = 6.697852245666419 
 ============================= 
 {48.0, 137.0} 
 {40.0, 136.0} 
 {46.0, 112.0} 
 {53.0, 114.0} 
 X = -3.3067589122064986 
 Y = -0.2727418953073936 
 Z = 4.393018415532629 
 ROLL = -6.929120013468928 
 PITCH = -168.6014586711855 
 YAW = -59.587627235667476 

So, take what we’re doing with a grain of salt. We had to shift priorities to other tasks last week, but are looking to hop back into this at the end of this week. Also, it sounds like guides directly from Limelight are going to be released. Hopefully, they can provide more insight.

Cheers,
Bart

I also wanted to point out something cool we discovered: you can run solvePNP in JUnit tests. To do this, add the following to the dependencies block in your build.gradle file.

testCompile group: 'org.openpnp', name: 'opencv', version: '3.2.0-0'

Then, in the constructor to your test class, you can load the library using the following:

System.loadLibrary(org.opencv.core.Core.NATIVE_LIBRARY_NAME);

This is really convenient since we can take a bunch of measurements from known locations and modify our algorithm until it outputs values close to them. It also allows us to continue working on this without the camera or field elements.
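
For example, a minimal JUnit 4 test that just proves the native library loads could look something like this (the class and test names are made up); a real test would feed recorded corner points through the solvePnP pipeline and compare the output against tape-measured poses.

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class OpenCvInTestsTest {

    public OpenCvInTestsTest() {
        // Load the native library bundled with the org.openpnp artifact
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    }

    @Test
    public void nativeOpenCvIsAvailable() {
        // Any native call succeeding proves the library loaded correctly
        Mat identity = Mat.eye(3, 3, CvType.CV_64F);
        assertEquals(1.0, identity.get(0, 0)[0], 1e-9);
    }
}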

Do you think that if there were a way to get the corners of both pieces of tape with Limelight’s dual-target mode, SolvePNP would get better results?

mjbergman, dual cross-hair mode is very useful if your Limelight has to be mounted off-center on your robot. The way dual cross-hair mode works is that you calibrate the cross-hair at two different distances. You’d position your robot perfectly centered on the goal up close and calibrate once, then pull your robot out as far away as you’d like to start tracking and calibrate again. An offset camera will have different calibration points for these two positions. As your robot drives closer, the cross-hair will automatically interpolate between these two points. We’ve been experimenting with using an offset camera to guide a robot up to the goal with this method, with good results so far.

Hope this makes sense. We really need to write an article about this and other things for this year’s game, but we’ve been really busy with some code updates so far.

I heard about that. Ours will be mounted in the center this year, but it is something to think about for other years, or for teams that add the camera as an afterthought.

Any updates on when the SolvePNP Limelight update is coming out? Thanks so much for everything you have done so far with the Limelight!