Complicated turret feedforward

I think I understand your concern: if the bot wasn’t moving, the turret would not be able to move even if it were offset, since the pidControl would be limited to 3 volts. Honestly, I think your solution is a little overcomplicated (or maybe I’m just underthinking it). I would suggest trying to rewrite your code to make it a little simpler. There are two ideas I can think of. For positional control, try:

  1. Finding the angle the turret should go to (seems you already have a good start with your vision tracking math)
  2. Converting that into motor ticks (angle / 360 (or 2pi) * encoder_resolution)
  3. Passing that value into a positional control system (the talonfx has a built-in system for this; try taking a look at our code if you need some inspiration, either the climber.cpp in obob’s code or the positional control in swervemodule.cpp in any of our black widow repos). CTRE also has some decent documentation.

Make sure to add a ramp rate and limits so that you don’t break your turret (rough sketch below).
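
For illustration, here’s a rough sketch of what those three steps could look like with the Phoenix 5 TalonFX API. The gear ratio, CAN ID, gains, and limits are all made-up placeholders, and the exact config calls vary a bit between Phoenix versions, so treat this as a starting point rather than drop-in code:

import com.ctre.phoenix.motorcontrol.ControlMode;
import com.ctre.phoenix.motorcontrol.can.TalonFX;

public class Turret {
    private static final double kTicksPerRev = 2048;  // TalonFX integrated encoder resolution
    private static final double kGearRatio = 10.0;    // motor rotations per turret rotation (placeholder)
    private final TalonFX motor = new TalonFX(10);    // placeholder CAN ID

    public Turret() {
        motor.config_kP(0, 0.1);           // placeholder gain, tune on the real mechanism
        motor.configClosedloopRamp(0.2);   // ramp rate so the turret doesn't snap around
        motor.configForwardSoftLimitThreshold((int) angleToTicks(90));   // example soft limits
        motor.configReverseSoftLimitThreshold((int) angleToTicks(-90));
        motor.configForwardSoftLimitEnable(true);
        motor.configReverseSoftLimitEnable(true);
    }

    // step 2: turret angle (degrees) -> motor encoder ticks
    private double angleToTicks(double degrees) {
        return degrees / 360.0 * kTicksPerRev * kGearRatio;
    }

    // step 3: hand the setpoint to the Talon's built-in position loop
    public void setAngle(double degrees) {
        motor.set(ControlMode.Position, angleToTicks(degrees));
    }
}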

For velocity control, you could do:

  1. Finding the angle the turret should go to
  2. Using a PID controller to convert angle to velocity (either a wpilib one, sketched after this list, or something you implement yourself, like we did here):
    int error_theta = (theta - getAngle()).to<int>() % 360; // Get difference between old and new angle, wrapped to an equivalent value between -360 and 360
    if (error_theta < -180) error_theta += 360; // Ensure angle is between -180 and 180
    if (error_theta > 180) error_theta -= 360; // Optimizes angle if over 180
    if (std::abs(error_theta) < 5) error_theta = 0; // Dead-zone to prevent oscillation
    double p_rotation = error_theta * rot_p; // Scale the error by the proportional gain to get a turning speed
    if (std::abs(p_rotation) > max_rot_speed.value()) p_rotation = max_rot_speed.value() * ((p_rotation > 0) ? 1 : -1); // Constrains turn speed
  3. Converting to actual motor control (feed forward? talonfx velocity control loop?)
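
If you go with a wpilib PID controller for step 2, enableContinuousInput handles the angle-wrapping for you, so you don’t need the manual ±180 logic above. A minimal sketch (the gains, tolerance, and speed limit are placeholders):

import edu.wpi.first.math.MathUtil;
import edu.wpi.first.math.controller.PIDController;

public class TurretAimController {
    private final PIDController rotPid = new PIDController(0.05, 0.0, 0.0);  // placeholder kP

    public TurretAimController() {
        rotPid.enableContinuousInput(-180.0, 180.0);  // takes care of the +/-180 wrap
        rotPid.setTolerance(5.0);                     // used by atSetpoint(), e.g. to gate shooting
    }

    // returns a turret speed command (deg/s), clamped like max_rot_speed above
    public double calculate(double currentAngleDeg, double targetAngleDeg) {
        double speed = rotPid.calculate(currentAngleDeg, targetAngleDeg);
        return MathUtil.clamp(speed, -90.0, 90.0);    // placeholder max speed
    }
}

The output of calculate() here is what you’d feed into step 3, whether that ends up being a feedforward or the Talon’s velocity loop.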

Best of luck!

Actually I just realized I could just share our turret tracking code lol:

SetTarget/Reference is a REV thing but you could use position control for CTRE

Edit: This guide also looks pretty nice: Case Study: Aiming Using Vision — Limelight 1.0 documentation

Ah, yeah, my earlier post was in error.

You can indeed treat the position PID controller as outputting a reference velocity and cascade that to the feedforward. This changes the units accordingly, and the benefit is a correct application of the kS term in the case when the other feedforwards are zero or cancel each other out.
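
In code, that cascade might look something like this minimal sketch (WPILib PIDController plus SimpleMotorFeedforward; the gains and the motor controller are placeholders, with kS and kV coming from characterizing the turret):

import edu.wpi.first.math.controller.PIDController;
import edu.wpi.first.math.controller.SimpleMotorFeedforward;
import edu.wpi.first.wpilibj.motorcontrol.PWMSparkMax;

public class CascadedTurretController {
    private final PIDController positionPid = new PIDController(2.0, 0.0, 0.0);               // placeholder gains
    private final SimpleMotorFeedforward feedforward = new SimpleMotorFeedforward(0.5, 1.5);  // placeholder kS, kV
    private final PWMSparkMax turretMotor = new PWMSparkMax(0);                               // placeholder motor controller

    public void track(double measuredAngleRad, double targetAngleRad) {
        // the position PID output is treated as a reference velocity (rad/s)...
        double referenceVelocity = positionPid.calculate(measuredAngleRad, targetAngleRad);
        // ...so the feedforward applies kS * signum(v) + kV * v in consistent units,
        // and kS still contributes even when the other velocity terms cancel out
        turretMotor.setVoltage(feedforward.calculate(referenceVelocity));
    }
}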

Ok nice. Do you think tuning this feedforward accurately would correct for the problem of camera latency (described here) as long as I kept my kP low, or should I be implementing that separately?

If I am moving fast, my PID will effectively be fighting against the feedforward for those ~25 ms until I receive my next limelight image, because the PID wants to go back towards the old target while my feedforward “knows” the target is actually somewhere else and so is moving towards the actual target. Thoughts?

Feedforward might help the overall system response enough that latency compensation is not necessary, but I don’t think I can really predict whether that’ll be the case or not. You’ll have to try it and see!


Slightly off topic, but are you planning to use a Limelight to track the target for the whole match? In 2020, our local officials interpreted R.203.m to mean our Limelight could not stay illuminated for the whole match.

high intensity light sources used on the ROBOT (e.g. super bright LED sources marketed as ‘military grade’ or ‘self-defense’) may only be illuminated for a brief time while targeting and may need to be shrouded to prevent any exposure to participants. Complaints about the use of such light sources will be followed by re-inspection and possible disablement of the device.

I think the key here is “high intensity” - to me this reads as referring to lights with a blinding level of glare from a relatively wide angle. The references to “self-defense” lighting suggest it is the kind of brightness and field of view that can quickly hurt people and create lasting safety issues. The Limelight certainly produces a useful, bright light, but not on the same magnitude as these products.

In past years, most notably 2012, teams would use very high intensity lights to allow drivers to manually align their robots with goals. This rule was created to require teams to be able to switch these lights off.

That was the plan. I am, however, currently working on using odometry to tell the turret approximately what angle to go towards even with the limelight off. Then, when we are about to shoot, we can just turn it back on and adjust from there. So if I can get that working, we’ll just switch to that if the referees get mad at us.

I would have the same interpretation, in fact, I made the same argument on this thread during the 2020 build season. However, at the Grand Forks regional the inspector specifically made sure our Limelight turned off when not preparing to shoot.

Please post back on how well this goes. Our build team is still building our turret so we have not programmed it yet. We were thinking of doing exactly what you’re describing. We will have a Pigeon IMU on the turret and we can use its heading for position control. But we don’t know how much odometry will drift over the course of a match, so this might not work very well.

You can use the wpilib pose estimation classes to account for this somewhat. Basically they work like normal odometry but you can incorporate vision measurements into it to correct for the drift in the drivetrain measurements. Every time the limelight sees the target, it will reset the robot pose based on the distance from the target and all that. So you would only get drift if you had the limelight off or weren’t looking at the target.
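
Roughly, the pattern looks like this sketch (assuming a recent WPILib MecanumDrivePoseEstimator; the constructor arguments and standard-deviation tuning have changed between releases, and the wheel locations here are placeholders):

import edu.wpi.first.math.estimator.MecanumDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;
import edu.wpi.first.math.kinematics.MecanumDriveKinematics;
import edu.wpi.first.math.kinematics.MecanumDriveWheelPositions;
import edu.wpi.first.wpilibj.Timer;

public class PoseEstimationExample {
    // placeholder wheel locations (meters, relative to robot center)
    private final MecanumDriveKinematics kinematics = new MecanumDriveKinematics(
            new Translation2d(0.3, 0.3), new Translation2d(0.3, -0.3),
            new Translation2d(-0.3, 0.3), new Translation2d(-0.3, -0.3));

    private final MecanumDrivePoseEstimator estimator = new MecanumDrivePoseEstimator(
            kinematics, new Rotation2d(), new MecanumDriveWheelPositions(), new Pose2d());

    // call every loop with the latest gyro angle and wheel distances
    public Pose2d update(Rotation2d gyroAngle, MecanumDriveWheelPositions wheelPositions) {
        return estimator.update(gyroAngle, wheelPositions);
    }

    // call whenever the limelight sees the target, using the pose computed from it
    public void addVision(Pose2d visionPose, double latencySeconds) {
        estimator.addVisionMeasurement(visionPose, Timer.getFPGATimestamp() - latencySeconds);
    }
}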

I read about the pose estimator, but it requires you to estimate your pose based on the vision target. Do you have a strategy to estimate your pose by looking at a circular vision target that’s the same on all sides? I can see how this would have been useful in 2020 if you knew which alliance target you were looking at, but in 2022 the target looks the same no matter what angle you view it from. You can estimate your distance from the target, but that gives you a circular range, not a single point.


No irregularly shaped target required! This is the code I have so far, although I haven’t been able to test it or verify anything yet:

// distance: distance to the goal; gyroAngle: robot heading; turretAngle: turret angle
// relative to the robot; tx: limelight horizontal offset to the target (all in degrees)
public static Pose2d getGlobalPoseEstimation(double distance, double gyroAngle, double turretAngle, double tx) {
    // wpilib conventions: goal at the origin, x axis toward the opposing driver station
    return new Pose2d(
        distance * Math.cos(Units.degreesToRadians(gyroAngle + turretAngle - tx - 90)),
        distance * Math.sin(Units.degreesToRadians(gyroAngle + turretAngle - tx - 90)),
        Rotation2d.fromDegrees(gyroAngle));
}

Basically it calculates the angle between the robot and the global x axis (pointing towards the opposing team’s driver station) using a gyro, turret encoder, and the angle between the turret and the target (aka the limelight tx value). Then it uses this angle and distance like polar coordinates to get the x and y position of the robot, with the goal as the origin (0, 0).

Now that I look at it again, I’m not sure gyroAngle + turretAngle - tx - 90 is the correct combination of the angles, but you get the gist.

Edit: obviously this doesn’t fix the gyro drift issue because it is still using the gyro angle in the calculation, which is probably what you were worried about originally. But it does correct for any wheel slip you get, which I was mainly concerned about since we are doing mecanum this year.

On the gyro drift though, we are using a NavX and this says that we’ll get 1-2 degrees of drift over the course of a match. So let’s say that the turret encoder gets 1 degree of drift over the same time (I tested it and it seemed very accurate), meaning you’ll have up to ~3 degrees of drift in the angle calculation, assuming the turret encoder and gyro happen to drift in the same direction.

Let’s say you are 15 feet from the target; this 3 degrees would result in an average of about 6 inches of error in the pose x and y calculations. Maybe someone who knows more about the odometry could comment on the drift you would get using only that. Also, the pose error would decrease as you got closer to the target, and most teams seem to be shooting from up close.

Notably, I don’t think this drift would compound like it would using just drive odometry, because the next pose for the vision method is not directly based on the previous pose like it is for driving.

Edit2: I just did the calculus on the average pose x and y error you would get for any angle with an error of 3 degrees and I got:

average x error = average y error = 0.033330 * distance

where the units are the same as your distance units.

So at 15 feet, you get an average error of plus or minus 0.4999 feet, or ~6 inches. Seems my guesstimate was very good! Although remember that at some angles the error will be more, and at others less.
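
For anyone curious where that factor comes from: for a small angle error δ (in radians), the x error at viewing angle θ is roughly distance * |sin(θ)| * δ, and the average of |sin(θ)| over a full circle is 2/π. So:

average error ≈ (2/π) * δ * distance = 0.6366 * 0.05236 * distance ≈ 0.0333 * distance

with δ = 3° ≈ 0.05236 rad.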


I like the idea, post back once you try it out. We’re mounting a gyro on the turret, so that would simplify the calculation a little.

Don’t forget that your distance parameter needs to account for the fact that the vision target is a ring. If you use the target itself as (0, 0), it will move as you rotate around the target. You’d need to calculate the distance to the target and then add the radius of the vision target ring (4 ft 5⅜ in / 2).

Yeah sure, it will probably be a week or two though

Also, if you’re curious about how to approximately track the target when the limelight is not on, I just made this code:

public void setTurretMotorPosPID(Pose2d robotPose, double perpV, double angV) {
    double estimatedDistance = Constants.goalLocation.getDistance(robotPose.getTranslation());
    // velocity feedforwards: cancel out the robot's motion relative to the goal
    tangentialFeedforward = perpV / estimatedDistance; // perpV = robot velocity perpendicular to the goal
    rotationalFeedforward = -angV;                     // angV = robot angular velocity

    // field-relative angle from the robot to the goal
    double posSetpoint = Math.atan2(robotPose.getY() - Constants.goalLocation.getY(),
            robotPose.getX() - Constants.goalLocation.getX());
    motorTurret.setVoltage(simpleFeedforward.calculate(
            posPIDController.calculate(getTurretAngle(), posSetpoint)
            + tangentialFeedforward + rotationalFeedforward));
}

where

Translation2d goalLocation = new Translation2d(0, 0);

You can read more about the feedforward stuff if you scroll up

@Oblarg sorry to bother you for like the 10th time this week, but going back to the original point of the thread, do you think I would be better off with a motion profile + feedforward over the PID + feedforward for having the turret track the target while driving around? Do motion profiles theoretically even work when there is a constantly changing setpoint?

How do you know your robot velocity if you are on mecanum wheels? How are you getting tangential velocity?

Ignoring the fact that shooting while moving is only worth it if your robot is really good at everything else, a few questions on how fancy your controls really need to be.
Do you make every shot when the robot is static, from the ranges you’re looking at shooting from while moving? Adding fancier controls to a mechanically inaccurate system is not going to help.
Have you tried shooting while moving without compensating for robot velocity yet? If so, how fast did you move before you missed?

To simplify things, if you know the tangential velocity it may be easier to just do an interpolated lookup table for the turret position offset based on robot tangential velocity, and then calibrate your table by taking a bunch of shots. The theory is great, but in my experience the ideal math probably won’t actually make the ball go in the upper hub anyway.
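
If it helps, here’s a minimal sketch of that kind of table in plain Java (the class name and calibration points are made up; you’d fill in the real entries from test shots):

import java.util.TreeMap;

public class TurretOffsetTable {
    // tangential velocity (m/s) -> turret offset (degrees), calibrated from test shots
    private final TreeMap<Double, Double> table = new TreeMap<>();

    public TurretOffsetTable() {
        table.put(0.0, 0.0);   // placeholder calibration points
        table.put(1.0, 4.0);
        table.put(2.0, 9.0);
    }

    // linear interpolation between the two nearest calibrated velocities
    public double getOffset(double tangentialVelocity) {
        var floor = table.floorEntry(tangentialVelocity);
        var ceiling = table.ceilingEntry(tangentialVelocity);
        if (floor == null) return ceiling.getValue();    // below the lowest calibrated point
        if (ceiling == null) return floor.getValue();    // above the highest calibrated point
        if (floor.getKey().equals(ceiling.getKey())) return floor.getValue();
        double t = (tangentialVelocity - floor.getKey()) / (ceiling.getKey() - floor.getKey());
        return floor.getValue() + t * (ceiling.getValue() - floor.getValue());
    }
}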

Do you have a hood? Are you compensating for radial velocity as well?

If this is an exercise in understanding control loops and whatnot then you can safely ignore me.

using MecanumDriveKinematics

Doing a rotation of axes on the robot-relative velocities using the turret angle to the target
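
Put together, that could look something like this sketch (the wheel locations and the sign conventions are placeholders worth double-checking; turretToTargetRad would be the turret angle minus tx, converted to radians):

import edu.wpi.first.math.geometry.Translation2d;
import edu.wpi.first.math.kinematics.ChassisSpeeds;
import edu.wpi.first.math.kinematics.MecanumDriveKinematics;
import edu.wpi.first.math.kinematics.MecanumDriveWheelSpeeds;

public class TurretVelocityEstimator {
    // placeholder wheel locations (meters, relative to robot center)
    private final MecanumDriveKinematics kinematics = new MecanumDriveKinematics(
            new Translation2d(0.3, 0.3), new Translation2d(0.3, -0.3),
            new Translation2d(-0.3, 0.3), new Translation2d(-0.3, -0.3));

    // returns {radial, tangential} robot velocity relative to the target (m/s)
    public double[] getVelocities(MecanumDriveWheelSpeeds wheelSpeeds, double turretToTargetRad) {
        ChassisSpeeds speeds = kinematics.toChassisSpeeds(wheelSpeeds);
        // rotate the robot-relative velocity into a frame whose x axis points at the target
        double radial = speeds.vxMetersPerSecond * Math.cos(turretToTargetRad)
                + speeds.vyMetersPerSecond * Math.sin(turretToTargetRad);
        double tangential = -speeds.vxMetersPerSecond * Math.sin(turretToTargetRad)
                + speeds.vyMetersPerSecond * Math.cos(turretToTargetRad);
        return new double[] {radial, tangential};
    }
}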

We have tested prototype shooters, but we are still finishing up building the final one and I hope to be able to play with it tomorrow.

No, but it’s not going to work without compensating for it unless we are moving very slowly.

Let’s say the ball takes two seconds to reach the target, and that if we were not moving, the shot perfectly lands in the center of the ring. If the robot is moving just two feet per second, the ball is going to miss the center of the goal by 2 fps * 2 s = 4 ft (ignoring air resistance)

I have already tested the tangential feedforward and it works pretty well, but good idea. We’ll see if I need it.

We are using top and bottom flywheels, so we can control the amount of spin. We are gonna find what shooter velocities (top and bottom flywheels) work best for each distance and use an interpolating table

It isn’t

A motion profile is basically a guarantee that you don’t accidentally ask the turret to do something that it’s not capable of. It’s almost always going to result in better control.

What you probably want to do is create a new profile from the current measured position to the desired setpoint every iteration, execute one step of that profile, and repeat. There’s a WPILib example that does something similar (though for some reason it doesn’t use the current measured state…) that you can look at for a rough idea. Alternatively, you can use ProfiledPIDController, which does this internally.
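
A minimal sketch of the ProfiledPIDController route (the gains and constraints are placeholders; the constraints should be the turret’s real max velocity and acceleration):

import edu.wpi.first.math.controller.ProfiledPIDController;
import edu.wpi.first.math.trajectory.TrapezoidProfile;

public class ProfiledTurretExample {
    private final ProfiledPIDController controller = new ProfiledPIDController(
            3.0, 0.0, 0.0, new TrapezoidProfile.Constraints(4.0, 8.0));  // placeholder gains and rad/s, rad/s^2 limits

    public ProfiledTurretExample() {
        controller.enableContinuousInput(-Math.PI, Math.PI);  // handle the +/-180 degree wrap
    }

    // call every loop; the goal can change every iteration and the profile is regenerated internally
    public double calculate(double measuredAngleRad, double goalAngleRad) {
        return controller.calculate(measuredAngleRad, goalAngleRad);
    }
}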
