Very nice. Yes, I too noticed that the corner ordering was inconsistent and seemed dependent on pose.
Were the camera matrix and distortion coefficients you got from Brandon for a LL v1 or v2?
v2
Ok. Could you share your point finder code? I am still in the dark as to what getBottomRight(), etc. return. I have tried to get this working without solvePnP, essentially by writing my own version of PnP, and it is still not working.
So, I have to post a conclusion that my team came to after this Saturday’s testing. By just calibrating the cross-hair (that is, positioning the robot at the spot where you want it to stop), using dual-target mode to get both sides of the target, and using the top position of the blue box around the targets, we were able to get PID aiming and ranging (side-to-side and distance) working almost decently. We will keep playing with the numbers, that is, kP and kD for both aiming and distance, but you really don’t need solvePNP this year. We did the same thing with the cargo and it worked perfectly. We chose the top, but you could choose the bottom of the target depending on the height and mounting angle of your camera. If you have any questions or want the code, please message me.
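As a rough illustration only, not this team’s actual code (which they offer to share above): a minimal sketch of crosshair-only PID aiming and ranging for a Java WPILib robot, reading the standard Limelight NetworkTables entries tx and ty. The gains, signs, and class name are placeholders to tune for your own drivetrain.

import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class CrosshairAim {
    // Placeholder gains; tune kP/kD for your own robot.
    private static final double AIM_KP = 0.03, AIM_KD = 0.003;
    private static final double RANGE_KP = 0.05, RANGE_KD = 0.005;

    private final NetworkTable limelight =
            NetworkTableInstance.getDefault().getTable("limelight");
    private double lastTx = 0.0, lastTy = 0.0;

    /** Returns {turn, drive} commands; both go to zero at the calibrated stopping spot. */
    public double[] update() {
        double tx = limelight.getEntry("tx").getDouble(0.0); // horizontal offset from cross-hair, degrees
        double ty = limelight.getEntry("ty").getDouble(0.0); // vertical offset from cross-hair, degrees
        double turn  = AIM_KP * tx + AIM_KD * (tx - lastTx);
        double drive = RANGE_KP * ty + RANGE_KD * (ty - lastTy);
        lastTx = tx;
        lastTy = ty;
        return new double[] { turn, drive };
    }
}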
I hope I don’t sound like a total idiot, but how do you use the rotation and translation vectors to determine the real life position of your robot relative to the target?
mObjectPoints = new MatOfPoint3f(
new Point3(0.0, 0.0, 0.0), // bottom right
new Point3(-1.9363, 0.5008, 0.0), // bottom left
new Point3(-0.5593, 5.8258, 0.0), // top-left
new Point3(1.377, 5.325, 0.0) // top-right
);
Also what units are you using for these coordinates? Have you scaled them in such a way that the translation vector can be directly interpreted as inches? If so, how did you scale it?
We use the translation vector directly, with Z being the distance away and X being the distance to the side. @Brandon_Hjelstrom mentioned in an email that SolvePNP by default yields a camera space transform, and provided the following solution (In C++) to convert to object space:
Mat R;
Rodrigues(rotationVector, R);
R = R.t();
translationVector = -R*translationVector;
I think this should be the equivalent in Java (side note, I really don’t like the Java APIs):
Mat objectSpaceTranslationVector = new Mat();
// t_object = -R^T * t_camera, matching the C++ above (note the transpose).
Core.gemm(rotationMatrix.t(), translationVector, -1.0, new Mat(), 0.0, objectSpaceTranslationVector);
However, we have found the values in the translationVector from the Java API to be more or less accurate (see disclaimer at end of post).
For the rotation vector, SolvePNP gives you a compact axis-angle representation in Rodrigues notation. To convert this into Euler angles (roll, pitch, yaw), we do the following:
Mat rotationVector = new Mat();
Mat translationVector = new Mat();
Calib3d.solvePnP(mObjectPoints, imagePoints, mCameraMatrix, mDistortionCoefficients,
rotationVector, translationVector);
Mat rotationMatrix = new Mat();
Calib3d.Rodrigues(rotationVector, rotationMatrix);
Mat projectionMatrix = new Mat(3, 4, CvType.CV_64F);
projectionMatrix.put(0, 0,
rotationMatrix.get(0, 0)[0], rotationMatrix.get(0, 1)[0], rotationMatrix.get(0, 2)[0], translationVector.get(0, 0)[0],
rotationMatrix.get(1, 0)[0], rotationMatrix.get(1, 1)[0], rotationMatrix.get(1, 2)[0], translationVector.get(1, 0)[0],
rotationMatrix.get(2, 0)[0], rotationMatrix.get(2, 1)[0], rotationMatrix.get(2, 2)[0], translationVector.get(2, 0)[0]
);
Mat cameraMatrix = new Mat();
Mat rotMatrix = new Mat();
Mat transVect = new Mat();
Mat rotMatrixX = new Mat();
Mat rotMatrixY = new Mat();
Mat rotMatrixZ = new Mat();
Mat eulerAngles = new Mat();
// decomposeProjectionMatrix reports Euler angles in degrees about the x, y, and z axes.
Calib3d.decomposeProjectionMatrix(projectionMatrix, cameraMatrix, rotMatrix, transVect, rotMatrixX, rotMatrixY, rotMatrixZ, eulerAngles);
double rollInDegrees = eulerAngles.get(2, 0)[0];
double pitchInDegrees = eulerAngles.get(0, 0)[0];
double yawInDegrees = eulerAngles.get(1, 0)[0];
Inches. We define the bottom-right corner of the left vision target as the origin; the y axis is positive going up vertically, z is positive facing out, and x is positive to the right.
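Given that frame, here is a small hypothetical read-out of the camera-space translation, not from the original post; it assumes the translationVector computed above and object points specified in inches.

// X is the side-to-side offset and Z the distance out from the camera, in inches.
double xInches = translationVector.get(0, 0)[0];
double zInches = translationVector.get(2, 0)[0];
double distanceInches = Math.hypot(xInches, zInches);
double bearingDegrees = Math.toDegrees(Math.atan2(xInches, zInches));
System.out.printf("distance %.1f in, bearing %.1f deg%n", distanceInches, bearingDegrees);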
DISCLAIMER:
When I said we were getting reasonable results, that was with minimal testing, all very close to the goal (within 4 feet) and with little yaw. Upon further experimentation, there are still a few kinks we need to work out. Namely, we see large jumps in the computed pose from relatively small changes in image points. For example, when we place the robot 5 feet back and 2 feet to the right with 0 yaw, we read the following image points and compute the following poses:
{48.0, 138.0}
{40.0, 136.0}
{45.0, 114.0}
{54.0, 114.0}
X = 2.4235989629072314
Y = 1.2370888865388812
Z = 4.717115774644273
ROLL = -7.555688896466208
PITCH = 165.9771402205544
YAW = 1.5292313860396367
=============================
{48.0, 138.0}
{40.0, 136.0}
{45.0, 114.0}
{53.0, 114.0}
X = 2.864381855099463
Y = 0.9925235082316144
Z = 4.605675917036408
ROLL = -7.962130849477691
PITCH = 168.14583005865828
YAW = 6.697852245666419
=============================
{48.0, 137.0}
{40.0, 136.0}
{46.0, 112.0}
{53.0, 114.0}
X = -3.3067589122064986
Y = -0.2727418953073936
Z = 4.393018415532629
ROLL = -6.929120013468928
PITCH = -168.6014586711855
YAW = -59.587627235667476
So, take what we’re doing with a grain of salt. We had to shift priorities to other tasks last week, but are looking to hop back into this at the end of this week. Also, it sounds like guides directly from Limelight are going to be released. Hopefully, they can provide more insight.
Cheers,
Bart
I also wanted to point out something cool we discovered. You can run solvePNP in JUnit tests. If interested, to do this, add the following to the dependencies in your build.gradle file:
testCompile group: 'org.openpnp', name: 'opencv', version: '3.2.0-0'
Then, in the constructor to your test class, you can load the library using the following:
System.loadLibrary(org.opencv.core.Core.NATIVE_LIBRARY_NAME);
This is really convenient since we can take a bunch of measurements from known locations and modify our algorithm until it outputs values close to them. It also allows us to continue working on this without the camera or field elements.
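A minimal sketch of what such a test might look like, assuming the org.openpnp OpenCV artifact above and a JUnit 4 dependency; the image points are just sample pixels, and the camera matrix, distortion coefficients, and assertion are placeholders to replace with your calibrated values and measured positions.

import org.junit.Test;
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;

import static org.junit.Assert.assertEquals;

public class SolvePnpTest {
    public SolvePnpTest() {
        // Load the native OpenCV library so solvePnP can run on the desktop.
        System.loadLibrary(org.opencv.core.Core.NATIVE_LIBRARY_NAME);
    }

    @Test
    public void computesPoseFromKnownImagePoints() {
        // Target corner model from earlier in the thread (inches).
        MatOfPoint3f objectPoints = new MatOfPoint3f(
                new Point3(0.0, 0.0, 0.0),        // bottom right
                new Point3(-1.9363, 0.5008, 0.0), // bottom left
                new Point3(-0.5593, 5.8258, 0.0), // top left
                new Point3(1.377, 5.325, 0.0));   // top right
        // Sample pixel coordinates; substitute corners captured from a known robot position.
        MatOfPoint2f imagePoints = new MatOfPoint2f(
                new Point(48, 138), new Point(40, 136), new Point(45, 114), new Point(54, 114));
        // Placeholder intrinsics; substitute your calibrated camera matrix and distortion coefficients.
        Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
        MatOfDouble distCoeffs = new MatOfDouble(0, 0, 0, 0, 0);

        Mat rvec = new Mat();
        Mat tvec = new Mat();
        Calib3d.solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

        // With real inputs, compare the translation against the measured position instead.
        assertEquals(3, tvec.rows());
    }
}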
Do you think if there was a way to get the corners of both pieces of tape with limelight dual target that SolvePNP would get better results?
mjbergman, dual cross-hair mode is very useful if your limelight has to be mounted off-center on your robot. The way dual cross-hair mode works is that you can calibrate the cross-hair for two different distances. So you’d position your robot perfectly centered on the goal up close and calibrate once, then pull your robot out to as far away as you’d like to start tracking and calibrate again. An offset camera will have different calibration points for these two positions. As your robot drives closer, the cross-hair will automatically interpolate between these two points. We’ve been experimenting with a robot using an offset camera to guide it up to the goal with this method, with good results so far.
Hope this makes sense. We really need to write an article about this and other things for this year’s game, but we’ve been really busy with some code updates so far.
I heard about that. Ours will be mounted in the center this year, but it is something to think about in other years, or for other teams that add the camera as an afterthought.
Any updates on when the SolvePNP Limelight update is coming out? Thanks so much for everything you have done so far with the Limelight!
@Hjelstrom, just checking back in on when there might be an update. Also, do you think it will be possible to use the built-in solvepnp if the limelight is mounted sideways? I would assume we would just have to do our own transforms, but wanted to double check to see if there was any reason that wouldn’t make sense.
We decided to add a few more features to this release. We plan on releasing tonight!
@AlexDanielsen Your FOV is going to feel cramped if you mount at a 90 degree angle, and it is important to have the entire target in view for 3D/solvepnp routines. You will be able to make it work, but I would highly recommend mounting the camera normally if at all possible.
Thanks for the update and advice! I’ll talk with our hardware people, but their theme of the past month has been “there’s no space!” which is why we bought the limelight 2, so it could be mounted vertically in the little space left. Looking forward to the update!
@Bart_Kerfeld I had a question about your PointFinder: do the “top” and “bottom” of the contour mean that the top has the least Y value and the bottom has the greatest Y value, since the origin of the limelight image is the top left?
Correct.
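To illustrate that convention with a hypothetical helper (not the actual PointFinder code from the thread): with the image origin at the top left, sorting by y puts the “top” corners first, and the corners can then be returned in the same order as the object points above.

import java.util.Arrays;
import java.util.Comparator;
import org.opencv.core.Point;

public class CornerOrder {
    /** Orders 4 contour corners as bottom-right, bottom-left, top-left, top-right,
     *  matching the object point order used earlier in the thread. */
    public static Point[] order(Point[] corners) {
        Point[] byY = corners.clone();
        // The image origin is the top left, so a smaller y means higher in the frame.
        Arrays.sort(byY, Comparator.comparingDouble((Point p) -> p.y));
        Point topLeft     = byY[0].x < byY[1].x ? byY[0] : byY[1];
        Point topRight    = byY[0].x < byY[1].x ? byY[1] : byY[0];
        Point bottomLeft  = byY[2].x < byY[3].x ? byY[2] : byY[3];
        Point bottomRight = byY[2].x < byY[3].x ? byY[3] : byY[2];
        return new Point[] { bottomRight, bottomLeft, topLeft, topRight };
    }
}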
@Bart_Kerfeld @mjbergman92 @Strategos @rmaffeo @Karakorum
We released on-board solvepnp with more than 4 points this morning.
Awesome!! Thank you so much
I cannot wait to try it tomorrow. Will definitely have to test. I cannot contain my excitement!