Angle of bot in relation to peg

Hey!

I’m working on a vision project for my team and am using last year’s Steamworks challenge as a starting point. I’ve hit a roadblock finding the angle of my robot relative to the gear peg. I’ve been able to calculate both my distance from the peg and my lateral offset from it, but I haven’t been able to figure out the angle accurately. Our team has swerve drive, so knowing the angle matters, and we should be able to correct based on it.

Thanks

Our code has detailed comments on the math:


    public TargetInfo getTargetInfo()
    {
        TargetInfo targetInfo = null;
        Rect targetRect = getTargetRect();

        if (targetRect != null)
        {
            //
            // Physical target width:           W = 10 inches.
            // Physical target distance 1:      D1 = 20 inches.
            // Target pixel width at 20 inches: w1 = 115
            // Physical target distance 2:      D2 = 24 inches
            // Target pixel width at 24 inches: w2 = 96
            // Camera lens focal length:        f
            //    W/D1 = w1/f and W/D2 = w2/f
            // => f = w1*D1/W and f = w2*D2/W
            // => w1*D1/W = w2*D2/W
            // => w1*D1 = w2*D2 = PIXY_DISTANCE_SCALE = 2300
            //
            // Screen center X:                 Xs = 320/2 = 160
            // Target center X:                 Xt
            // Heading error:                   e = Xt - Xs
            // Turn angle:                      a
            //    tan(a) = e/f
            // => a = atan(e/f) and f = w1*D1/W
            // => a = atan((e*W)/(w1*D1))
            //
            double targetCenterX = targetRect.x + targetRect.width/2.0;
            double targetXDistance = (targetCenterX - RobotInfo.PIXYCAM_WIDTH/2.0)*TARGET_WIDTH_INCHES/targetRect.width;
            double targetYDistance = PIXY_DISTANCE_SCALE/targetRect.width;
            double targetAngle = Math.toDegrees(Math.atan(targetXDistance/targetYDistance));
            targetInfo = new TargetInfo(targetRect, targetXDistance, targetYDistance, targetAngle);

            if (debugEnabled)
            {
                robot.tracer.traceInfo(
                    moduleName, "###TargetInfo###: xDist=%.1f, yDist=%.1f, angle=%.1f",
                    targetXDistance, targetYDistance, targetAngle);
            }
        }

        if (targetFoundLED != null)
        {
            targetFoundLED.setState(targetInfo != null);
        }

        if (targetAlignedLED != null)
        {
            targetAlignedLED.setState(targetInfo != null && Math.abs(targetInfo.angle) <= 2.0);
        }

        return targetInfo;
    }   //getTargetInfo

The full repo can be accessed here:
https://github.com/trc492/Frc2017FirstSteamWorks

The key thing to measure is the aspect ratio of the reflective area of the tape. If you are square with the target, it will have a ratio similar to the published ratio. As you move to either side, the width will decrease relative to the height. If you’re a good bit off to one side, you can figure out whether you’re to the right or left by checking which side of the target is larger (presuming a basically rectangular target area) - the larger side is the nearer side.
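
As a rough sketch of that aspect-ratio idea (the method name, pixel inputs, and tape dimensions below are placeholders, not anything from the repo above):

    // Minimal sketch of the aspect-ratio skew estimate. The apparent width
    // compresses by roughly cos(skew) while the height does not, so the
    // observed/published aspect ratio approximates cos(skew).
    public static double estimateSkewDegrees(
        double observedWidthPx, double observedHeightPx,
        double leftEdgeHeightPx, double rightEdgeHeightPx)
    {
        final double PUBLISHED_RATIO = 10.0/5.0;    // placeholder width/height; use the game manual's numbers
        double ratio = (observedWidthPx/observedHeightPx)/PUBLISHED_RATIO;
        double skew = Math.toDegrees(Math.acos(Math.min(1.0, ratio)));
        // The taller edge is the nearer side; the sign convention is arbitrary
        // (negative here means the left side of the target is closer).
        return leftEdgeHeightPx > rightEdgeHeightPx? -skew: skew;
    }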

You shouldn’t need to use vision to determine the robot’s angle relative to the spring. As long as you know WHICH spring you are near, you should be able to calculate the angle from a gyro. Even if you don’t know which spring you are near, there are only three choices, so you should be able to determine it from your image and your gyro.
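
For concreteness, a minimal sketch of that, assuming the three pegs face field-relative headings of roughly 60, 0, and -60 degrees (verify against your own field and gyro conventions):

    // Hedged sketch: pick the peg heading nearest the gyro reading and
    // return the remaining error, i.e. how far off square you are.
    private static final double[] PEG_HEADINGS_DEG = {60.0, 0.0, -60.0};  // left, center, right (assumed)

    public static double angleToNearestPeg(double gyroHeadingDeg)
    {
        double bestError = Double.NaN;
        for (double peg: PEG_HEADINGS_DEG)
        {
            double error = Math.IEEEremainder(gyroHeadingDeg - peg, 360.0); // wrap to [-180, 180]
            if (Double.isNaN(bestError) || Math.abs(error) < Math.abs(bestError))
            {
                bestError = error;
            }
        }
        return bestError;   // turn by -bestError to square up with that peg
    }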

This is my best answer.
If you have used the target’s height to compute the distance to the target, and you are looking at the target mostly head on, begin by setting up a triangle with the target of width W as one side and the camera at the corner opposite that side. The angle between the two sides that meet at the camera corner is known to you and can be computed; call it A. Draw the distance to the target (D) as a line from the midpoint of the width side to the camera corner. You are looking for the angle between this line and the side of length W; call this angle theta. Finding theta is our goal.

Set up three equations, using the Law of Cosines for each of the three triangles you have made. Though we don’t know the lengths of the two outside legs of the largest triangle, we don’t need to: that gives three equations in three unknowns. Solve for theta, eliminating the two leg lengths.

The result I get is:
theta = acos( sqrt( [(D/W) + (W/(4D))]^2 - [((D/W) - (W/(4D)))/cos(A)]^2 ) )
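
As a sanity check on the algebra, here is the same result as a small Java function (a sketch, not battle-tested code; note theta comes out as 90 degrees when you are exactly head on, since it is measured against the target face rather than its normal):

    // Sketch of the law-of-cosines result above. D and W are in the same
    // units; A is in degrees (the angular width of the target as seen by
    // the camera). Returns NaN if measurement noise pushes the
    // intermediate values out of range.
    public static double pegAngleDegrees(double d, double w, double aDeg)
    {
        double sum = d/w + w/(4.0*d);                   // (D/W) + (W/(4D))
        double diff = (d/w - w/(4.0*d))/Math.cos(Math.toRadians(aDeg));
        return Math.toDegrees(Math.acos(Math.sqrt(sum*sum - diff*diff)));
    }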

As to which side of the peg you are on, GeeToo has a good point: with this year’s targets being two separate particles, the taller one is on the closer side, so the nearer side will appear bigger.

Actually, you would need to know a bit more than that, such as where, precisely, your robot is. That is rather difficult. It can be done, but what if a wheel slips? What if there’s a mechanical failure in your drivetrain? What if your robot was set up slightly wrong? That’s when vision is useful.

I read the original request as looking for the angle between the centerline of the robot and the centerline of the peg, for which you need to know your robot’s field-relative angle, but not precisely where it is. If your gyro is telling you that you are at 66deg relative to your starting position (which in this game is most likely parallel with the field), then you know you are at 6deg relative to the left peg, 66deg relative to the center peg, and 126deg relative to the right peg. If you can SEE a peg, then the odds are very high it’s the left peg, since few cameras have a FOV that would let them see an object at 66deg or 126deg.

Now, if the OP was not looking for the robot’s angle relative to the peg, but rather the offset angle of the center of the vision targets from the robot, that’s a different set of math; it’s available in countless CD threads and in many teams’ open-sourced vision code. Our function for that (getAngle) is available in our github.

One of the best resources on vision is 254’s intro.

This is not quite true. If you want to check the angle of the peg relative to the robot, chances are you are not checking it at your initial starting position. Your robot has probably moved forward some distance, and you are trying to figure out how much you should turn to “face the peg” after that. In theory, if you parked your robot precisely at a predetermined position and the robot moved forward precisely the commanded distance, you could calculate the angle you have to turn without even needing the camera. However, it is almost impossible to park the robot at precisely the same pre-determined position every time. In addition, sensors such as encoders have errors (e.g. wheel slipping, etc.), and your PID control or motion-profile drive will not stop precisely at the distance you asked for. So there are all these cumulative errors that your calculation has to deal with. Vision feedback is a way to make these errors not matter too much.

I am a bit late here, after a nice long CD break since Champs, but this still seems to be a very relevant topic as teams prepare for fall off-season events.

I certainly agree with BenBernard back in post #4: it is far easier to determine parallel vs. spin-left/spin-right based on a predetermined PEG/GYRO difference.

Yes, this is absolutely true: the auton() needs to know which side it’s approaching, and then it WILL know whether it is parallel or not (without any vision logic). BTW, this awareness can be built into teleop vision logic too.

The other big challenges remain; I think the second one below is exactly mikets’ point about where vision adds the most value.

  1. How close is the airship?

==> Use Vision-FOV distance calcs. However, as you get close you may need to ‘target’ the gap between contours rather than the contour heights, in case the contours slip outside of the image frame (see the sketch after this list).

==> Or you can look into the tinyLIDAR or equivalent options.

  2. Are we approaching head-on, vs. left-or-right and needing to strafe?

==> For this you already need to know your gyro-approach offset (post #4 again…), and calculate where the target center is vs. where it should be.
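
To make both items concrete, here is a hedged sketch building on the focal-length math in the original post. All the constants and inputs are hypothetical stand-ins: GAP_PIXEL_SCALE would be calibrated the same way as PIXY_DISTANCE_SCALE (measured pixel gap times known distance), and 230 is just w1*D1/W from the numbers quoted above.

    // Hedged sketch for the two items above; all names are hypothetical.
    static final double GAP_PIXEL_SCALE = 1800.0;   // placeholder: gapPixels*distanceInches, calibrated like PIXY_DISTANCE_SCALE
    static final double FOCAL_LENGTH_PX = 230.0;    // w1*D1/W = 115*20/10 from the original post
    static final double SCREEN_CENTER_X = 160.0;    // 320-pixel-wide image

    // (1) Distance from the gap between the two contour centers, which
    // stays in frame longer than the full contours do as you get close.
    public static double distanceFromGap(double leftCenterX, double rightCenterX)
    {
        return GAP_PIXEL_SCALE/(rightCenterX - leftCenterX);
    }

    // (2) Given the gyro-approach offset, the target center should sit at
    // Xs + f*tan(offset); any residual is lateral error to strafe out.
    public static double strafePixelError(double targetCenterX, double gyroOffsetDeg)
    {
        double expectedX = SCREEN_CENTER_X + FOCAL_LENGTH_PX*Math.tan(Math.toRadians(gyroOffsetDeg));
        return targetCenterX - expectedX;
    }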

Our team used a similar strategy and had a lookup table that found the nearest peg angle. We would then apply the vision algorithm with the robot head-on to the target, which increases the reliability of the vision algorithm because we did not have to account for the angle. The angle was obtained from a gyro and checked against the predicted angle of the robot from our swerve kinematics. All that was needed was a rough calculation of which way to move while turning so as to end up flush with the peg. This made sure we stayed locked on the target and did not lose it as the control loop turned the robot. All in all it was effective, but it was hastily written in the pits and could be improved greatly.
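
In case it helps anyone reproduce this, a minimal sketch of the lookup-plus-cross-check idea (not our actual pit code; the peg table and the 5-degree agreement band are assumptions):

    // Hedged sketch: snap to the nearest peg heading from a lookup table,
    // cross-checking the gyro against the heading predicted by the swerve
    // kinematics. The peg table and the 5-degree band are assumptions.
    static final double[] PEG_ANGLES_DEG = {60.0, 0.0, -60.0};

    public static double nearestPegAngle(double gyroDeg, double predictedDeg)
    {
        // Trust the gyro only while it agrees with the kinematics prediction.
        double heading =
            Math.abs(Math.IEEEremainder(gyroDeg - predictedDeg, 360.0)) < 5.0? gyroDeg: predictedDeg;

        double best = PEG_ANGLES_DEG[0];
        for (double peg: PEG_ANGLES_DEG)
        {
            if (Math.abs(Math.IEEEremainder(heading - peg, 360.0))
                < Math.abs(Math.IEEEremainder(heading - best, 360.0)))
            {
                best = peg;
            }
        }
        return best;    // the vision algorithm then runs assuming a head-on approach to this peg
    }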