Calculating Rotation Angle based on Camera Yaw

We have a Gloworm mounted on the front of our intake that gives us a Yaw value when it sees a power cell. During testing we had the robot rotate based on that value and noticed that it was consistently overshooting the ball.

What we believe is happening is that the camera is mounted 30 inches from the robot’s center of rotation, while the Yaw value assumes the rotation happens at the camera’s location. One of the students has worked out some formulas that I think will produce the correct number of degrees, which is great.

First, are we totally off on our thinking?

Second, does anyone know if either Photonlib or Limelight have any fancy features that handle this automatically?

I’m not sure about that first question, but last year I was in charge of programming the Limelight, and it was relatively simple to have the robot rotate and follow a power cell. The Limelight gives you the X and Y position of the ball; I fed the X position into a PID controller with a setpoint of 0, and it followed with no problem.
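
For reference, a minimal sketch of that approach, assuming a Limelight publishing to NetworkTables under the default "limelight" table; the kP gain is a placeholder you would tune for your drivetrain:

    // WPILib imports (paths shown for recent WPILib; older versions differ slightly):
    import edu.wpi.first.math.controller.PIDController;
    import edu.wpi.first.networktables.NetworkTableInstance;

    public class LimelightAim {
        // P-only controller on the horizontal offset; gain is a placeholder to tune
        private final PIDController turnController = new PIDController(0.03, 0.0, 0.0);

        public double getTurnCommand() {
            // tx: horizontal offset from crosshair to target, in degrees
            double tx = NetworkTableInstance.getDefault()
                    .getTable("limelight").getEntry("tx").getDouble(0.0);
            // Drive tx to zero so the robot stays pointed at the ball
            return turnController.calculate(tx, 0.0);
        }
    }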

Was your turning overshooting and then coming back or was it staying there and thinking it had “found” the ball?

I don’t think it would be feasible for the Limelight or PhotonVision to correct for this without a lot of input parameters, all specific to your robot. They are trying to solve the vision problem, and this really isn’t that.

We are only using the camera to get an initial Yaw and Pitch on the found power cell, calculating an angle of rotation, and rotating to that position. We are not continuously tracking the ball throughout the rotation.

I guess what I was asking is: did the turn stop at the commanded angle, or did it reach the setpoint and then carry past it on momentum?

The turn went to the commanded angle.

When you say the camera is mounted 30 inches from the center of rotation, do you mean the camera is on a corner of your robot? 30 inches seems like a lot for a normal-size robot. If so, the Limelight does have a mode that can correct for this by measuring the offset at two distances and interpolating in between.

For these challenges we decided to mount a Gloworm on the front of our intake, which sits about 10 inches outside the robot perimeter. Most of the robot weight is at the opposite end and we have a drop center. Thanks for the link to the Limelight document. If that is a good solution we might just swap the Limelight and Gloworm on the robot.

PhotonLib’s PhotonUtils will let you do the following:

    // Needs Translation2d, Rotation2d, and Units from WPILib,
    // plus PhotonUtils from PhotonLib (org.photonvision.PhotonUtils)
    var camHeight = 1.0;    // meters, camera lens height above the floor
    var targetHeight = 0.0; // meters; a power cell sits on the floor
    var camPitch = Units.degreesToRadians(0);
    var targetPitch = Units.degreesToRadians(target.getPitch()); // target comes from the PhotonCamera result
    var targetYaw = Rotation2d.fromDegrees(target.getYaw());     // likewise from the result

    // Floor distance from the camera to the ball
    var dist = PhotonUtils.calculateDistanceToTargetMeters(
            camHeight, targetHeight, camPitch, targetPitch);

    var camToBall = new Translation2d(dist, targetYaw); // vector from camera to ball
    var robotToCam = new Translation2d(x, y);           // camera's offset from robot center
    var robotToBall = robotToCam.plus(camToBall);       // vector from robot center to ball
    var angle = new Rotation2d(robotToBall.getX(), robotToBall.getY()); // atan2 of the components

Reference: Utility Class: Common Calculations - PhotonVision Docs


I’ll also add that PhotonVision has a dual crosshair mode that’s just like Limelight’s; we call it “Robot Offset” because we think that’s a less confusing name than “Crosshair Calibration.” This is documented here.

The other (IMO easier) way to do it is with the transformation chain Matt posted. You can think of the chain as robotToCamera * cameraToBall = robotToBall (note how the inner “camera” parts match and cancel; that’s how these transforms compose). Once you have the transformation from the robot to the ball, you have a right triangle whose legs you know the lengths of (the x and y components of robotToBall). The new Rotation2d(...) bit does the atan2 for you internally and gets the angle from that triangle.
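
To make the overshoot concrete, here’s a quick sanity check with made-up numbers: the camera 30 in (about 0.762 m) ahead of the center of rotation, and a ball seen 2 m from the camera at 20 degrees of yaw.

    // Illustration only; the 0.762 m offset, 2 m range, and 20 deg yaw are made up
    var robotToCam = new Translation2d(0.762, 0.0);
    var camToBall = new Translation2d(2.0, Rotation2d.fromDegrees(20.0));
    var robotToBall = robotToCam.plus(camToBall); // about (2.64, 0.68) meters
    var angle = new Rotation2d(robotToBall.getX(), robotToBall.getY());
    // angle is about 14.5 degrees: turning the full 20 degrees the camera
    // reported would overshoot, which matches the behavior described above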

Note: in Matt’s code you’ll need to get the targetPitch and targetYaw variables from your camera through PhotonLib.
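
A minimal sketch of that retrieval, assuming a camera named "gloworm" (the name is whatever you set in the PhotonVision UI):

    // import org.photonvision.PhotonCamera;
    PhotonCamera camera = new PhotonCamera("gloworm");

    var result = camera.getLatestResult();
    if (result.hasTargets()) {
        var target = result.getBestTarget();
        double targetPitchDeg = target.getPitch(); // degrees; convert to radians before use
        double targetYawDeg = target.getYaw();     // degrees
    }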

If this makes sense to you then we’ll probably go ahead and add it as an example to the PhotonVision docs. Let us know how it works out! As always, we’re happy to provide more real-time support on our Discord.


Thanks to both of you for these suggestions. I’ll get with the student (my son) who has been working on this and we’ll try it out. Great stuff.

I’m assuming the x and y is the offset distance of the camera in meters from the center of the robot, correct?

Correct. You can use different units as long as you’re consistent (robotToCam, camHeight, and targetHeight must all be in the same units).

I’m confused about how this works. How did you get the y (lateral) distance from the camera to the ball? You used targetYaw in the camToBall constructor, but that sounds like an angle, not a distance.

This explains it pretty well


That does, and I’m familiar with that method of estimating distance. In this example though, that method is only being used to find the x/longitudinal distance from the camera to the ball, dist. I’m wondering about the y/lateral distance.

Hmmm, here’s another picture to align with Matt’s code - does this help?
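
For anyone reading without the picture, the geometry it illustrates amounts to this decomposition (a sketch, not PhotonLib API; dist and yawRadians come from the earlier code):

    // Top-down view: dist is the hypotenuse, i.e. the straight-line floor
    // distance from camera to ball; yaw splits it into the two legs
    double xLeg = dist * Math.cos(yawRadians); // longitudinal (forward) leg
    double yLeg = dist * Math.sin(yawRadians); // lateral (sideways) leg
    // new Translation2d(dist, new Rotation2d(yawRadians)) performs exactly
    // this decomposition into (xLeg, yLeg)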


That makes sense. I’m not too familiar with PhotonVision and mistakenly assumed dist was the x leg rather than the hypotenuse, because I saw pitch was used to find it. That’s my bad; thanks for the help, all.
