Feedforward for shooting while moving

I have been thinking a bit about moving while shooting and trying to get it functioning on our robot. I just had some questions.

Current Algorithm

The method we have had experience with most is predicting forwards in time based on the time taken to shoot the note and the pose in the future based on that time. The algorithm in pseudocode is basically:

running_pose = get current pose
repeat 5 times {
    current_robot_distance = get distance to speaker from running_pose
    future_time = get the time it takes to shoot a note from current_robot_distance
    next_robot_pose = future_time * (get current velocity) + (get current pose)
    running_pose = next_robot_pose
}
spinup and aim as if you were at running_pose
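
For reference, here is a rough Java sketch of the same loop. The helpers getCurrentPose(), getFieldRelativeVelocity(), distanceToSpeaker(), and noteFlightTime() are hypothetical stand-ins for whatever your drivetrain and interpolation code actually provide; this is just an illustration of the iteration, not tested code.

Pose2d runningPose = getCurrentPose();
// Field-relative translational velocity of the robot, in meters per second
Translation2d fieldVelocity = getFieldRelativeVelocity();

for (int i = 0; i < 5; i++) {
    // Distance from the current estimate of the future pose to the speaker
    double distance = distanceToSpeaker(runningPose);
    // Flight time of a note shot from that distance (e.g. from an interpolation table)
    double futureTime = noteFlightTime(distance);
    // Extrapolate the *current* pose forward by that flight time, assuming constant velocity
    runningPose = new Pose2d(
            getCurrentPose().getTranslation().plus(fieldVelocity.times(futureTime)),
            runningPose.getRotation());
}
// spin up and aim as if the robot were at runningPose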

I am pretty sure that this algorithm should work, as it basically converges to a pose at a time in the future where shooting from that far in the future should take that same amount of time.

Does this algorithm make sense?

Delay Issue

An issue we have had is that even if this algorithm works, our robot is pretty heavy and fairly underpowered acceleration-wise (for some reason we're running L4s on a max-weight robot), so without some sort of feedforward our rotation PID constantly lags behind the setpoint. Does anyone have a theoretical way of providing feedforward to the rotation controller so it doesn't lag as far behind?

2 Likes

After thinking about this a bit longer, a much simpler version of the feedforward I have in mind would be one that keeps your robot facing the speaker no matter what your translational speed inputs are.

Just using PID in response to your current angle to the speaker will always result in some delay, since the controller needs error to build up before it produces any output. This means you either get lag because your P is low, or instability because your P is high.

A feedforward for this would look like finding your radius from the speaker, and then finding your velocity perpendicular to that radius. Dividing that velocity by the radius should give you a number in radians per second which is essentially the rate at which your robot is rotating around the speaker. You could then plug that into the rotation for the robot as a rotation velocity feedforward, and then use a much smaller, stable PID loop.
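
Written out as a formula (with \vec{v} the field-relative robot velocity, \vec{r} the vector from the robot to the speaker, and \hat{t} the unit vector perpendicular to \vec{r}; these symbols are just my shorthand for the quantities described above), the feedforward term is roughly:

\omega_{ff} = \frac{v_\perp}{\lVert \vec{r} \rVert} = \frac{\vec{v} \cdot \hat{t}}{\lVert \vec{r} \rVert}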

Additionally, since angular velocity here is approximately proportional to velocity, you wouldn’t have to worry about angular acceleration being impossibly high compared to linear acceleration. (AKA it would probably lead to feasible goals.)

I’m not sure if there would be a great way to do this for shooting while moving where you explicitly need to aim at an offset from the goal. I guess if you were moving in a circle at exactly the same radius from the goal with no change in velocity it would work though.

It depends what kind of offset this is. Aim at a different point on the field? Just use that point instead of the speaker. Aim some angular offset away from facing the goal? Just add that to the setpoint.

We added this feed-forward to our chassis rotation controller to make up for that.

face to target

In brief, given the target’s position, the robot’s position, and its velocity, we calculate the rate of change of the robot’s desired rotation to maintain alignment with the target.

original source

/**
 * <h2>Calculates rotational correction speeds for a face-to-target request.</h2>
 *
 * <p>For a continuously changing target, feed-forward velocity is added to help the chassis stay aligned. This
 * feed-forward is based on the target's angular velocity relative to the robot, improving tracking accuracy.
 *
 * @param measuredSpeedsFieldRelative the measured chassis speeds, field-relative
 * @param robotPose the current pose of the robot as measured by odometry
 * @param targetPosition the target position to aim at
 * @return the calculated chassis angular velocity output, in radians/second
 */
private double calculateFaceToTarget(
        ChassisSpeeds measuredSpeedsFieldRelative,
        Pose2d robotPose,
        Translation2d targetPosition) {

    // Target velocity relative to the robot, in the field-origin frame
    final Translation2d targetMovingSpeed = new Translation2d(
            -measuredSpeedsFieldRelative.vxMetersPerSecond, 
            -measuredSpeedsFieldRelative.vyMetersPerSecond);
    
    // The rotation the robot should be facing to point at the target
    final Rotation2d targetedRotation = targetPosition
            .minus(robotPose.getTranslation())
            .getAngle();
    
    // the direction at which the target is moving
    final Rotation2d targetMovingDirection = targetMovingSpeed.getAngle();
    // the direction tangent to the line connecting the robot and the target;
    // target motion in this direction produces a positive rotation speed
    final Rotation2d positiveRotationTangentDirection = targetPosition
            .minus(robotPose.getTranslation())
            .getAngle()
            .rotateBy(Rotation2d.fromDegrees(90));

    final double tangentVelocity = targetMovingDirection
            .minus(positiveRotationTangentDirection)
            .getCos()
            * targetMovingSpeed.getNorm();

    final double distanceToTarget = targetPosition
            .minus(robotPose.getTranslation())
            .getNorm();

    final double feedForwardAngularVelocity = tangentVelocity / distanceToTarget;

    ... // other part of the code not shown
}
4 Likes

We encountered a similar issue in the past and resolved it by calculating the angular velocity of the chassis, as @catrix mentioned (a faster way to do the calculation is with a dot product). Additionally, we found that merely compensating for the angular velocity was insufficient. To improve our approach, we estimated the robot's future position for the next tick and calculated the velocity based on that estimate. Our results: https://youtu.be/wNvhfw16LfI?si=ZMIR-0AH3N8Su7zf
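
In case it helps, here is a minimal sketch of that dot-product form (not the poster's actual code; robotPose, measuredSpeedsFieldRelative, and targetPosition are assumed to be the same quantities as in the snippet above):

// Same tangential-velocity computation as above, but with plain vector arithmetic
// instead of Rotation2d objects.
double rx = targetPosition.getX() - robotPose.getX();
double ry = targetPosition.getY() - robotPose.getY();
// The target's apparent velocity relative to the robot is the negated robot velocity
double vRelX = -measuredSpeedsFieldRelative.vxMetersPerSecond;
double vRelY = -measuredSpeedsFieldRelative.vyMetersPerSecond;
// Dot the relative velocity with the unit tangent (-ry, rx)/|r|, then divide by |r|,
// which collapses to a 2D cross product divided by |r|^2
double distanceSquared = rx * rx + ry * ry;
double feedForwardAngularVelocity = (rx * vRelY - ry * vRelX) / distanceSquared;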

3 Likes

Those shots while turning look awesome. Is your codebase public by any chance?

We don’t do shooting on the move, so I’m here to learn more than anything, but…

Is it really sufficient just to calculate where you'll be at a future point in time (presumably when the note is released) and aim like that? That only accounts for a difference in position. Don't you also need to account for the velocity that gets added to the note? For example, moving towards the speaker and shooting from a given point could/should result in a different aim point than moving away from the speaker and shooting from the same point.

From what @Geoffrey8814 said, it sounds like they're doing both.

Correct. A moving robot imparts "additional" initial velocity to the projectile relative to a fixed target. However, all frames of reference are equivalent; in particular, a frame of reference in which the robot is stationary and the target is moving may be easier to think about. Be sure to account for how far the target moves during the projectile's time of flight.

We tried this back in 2022, and our biggest issue was that we had only a single camera, mounted on our turret. When aiming on the move, unless you are heading straight at the target, the turret is no longer pointing straight at the target, and the faster you go and the more perpendicular your travel, the more the turret is skewed away from it. We found that at high speeds there were situations where we could no longer get a good view of the target, so we would lose our position relative to it.

That same year, 1690 had a similar but far more effective system—my (possibly faulty) recollection is they somehow maintained knowledge of their position on the field w/o any cameras.

Over the summer, 1690 held two software sessions, in one of which they discussed at length their unique approach to on-the-fly shooting.

1 Like

What you describe isn't really a feedforward, but it is a valid way to adjust aiming for current robot motion. I'd advise replacing the hardcoded 5 iterations with iterating until the estimate changes by less than some threshold (with maybe a cap of 5 iterations). Someone on the FRC Discord worked out the stability criterion for this approach some years ago, and it was actually pretty robust.
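
A sketch of what that might look like, reusing the hypothetical helpers from the sketch in the first post (the 0.01 s tolerance is made up):

// Iterate until the flight-time estimate stops changing, with a hard cap as a safety net
final double CONVERGENCE_THRESHOLD_SECONDS = 0.01;
final int MAX_ITERATIONS = 5;

Pose2d runningPose = getCurrentPose();
double previousTime = 0.0;
for (int i = 0; i < MAX_ITERATIONS; i++) {
    double futureTime = noteFlightTime(distanceToSpeaker(runningPose));
    if (Math.abs(futureTime - previousTime) < CONVERGENCE_THRESHOLD_SECONDS) {
        break; // the estimate has converged
    }
    previousTime = futureTime;
    runningPose = new Pose2d(
            getCurrentPose().getTranslation().plus(getFieldRelativeVelocity().times(futureTime)),
            runningPose.getRotation());
}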

1 Like

Thank you for the recommendation of stopping once the estimate changes by less than a threshold! I'm pretty sure this method is stable as long as the average velocity of the projectile stays above the velocity of the robot.

Here’s my reasoning for why the method would be stable at least:

Since this is a repeated application of a function, you can use the Banach fixed point theorem: if applying the function to any two points (in our case, guessed times in the future) brings them closer together by at least a constant factor q < 1, you are guaranteed the iteration will eventually converge. The function we are applying (finding the distance t seconds in the future, then finding the time to shoot for that distance) can be represented as a composition f\circ g where

g(x) = \sqrt{(vx+b)^2+d^2} = the distance from the goal x seconds in the future, where v is the robot's speed, d is the closest distance between the goal and the line you are travelling along, and b is how far you currently are from that closest point

f(x) = how much time it takes to shoot a projectile to the goal from a distance of x.

According to the Banach fixed point theorem, as long as d((f \circ g)(x),(f \circ g)(y))\leq qd(x,y) where 0 \leq q < 1, repeatedly applying f \circ g should converge. In other words, if f \circ g is a contraction mapping, then repeated applications will converge.

I'm fairly sure that for an arbitrary differentiable function h with |h'(x)| \leq q for some 0 \leq q < 1, h must be a contraction mapping: by the mean value theorem, |h(x)-h(y)| = |h'(c)| \cdot |x-y| \leq q|x-y| for some c between x and y. This Stack Exchange post also makes me think this is true.

From the chain rule, (f \circ g)'(x) = f'(g(x))g'(x), so we just need to show that |f'(g(x))g'(x)| \leq q for some 0 \leq q < 1. Let's call this proposition A.

Splitting up the absolute value, we get |f'(g(x))||g'(x)| \leq q.

Upon some inspection with Desmos and Wolfram Alpha (and since g'(x) = \frac{v(vx+b)}{\sqrt{(vx+b)^2+d^2}}, whose second factor has magnitude at most 1), it is pretty clear that \lvert g'(x)\rvert \leq v.

From this, we can say that as long as |f'(g(x))| \cdot v \leq q, then A is true.

A set of functions f for which this always holds is all the functions f with |f'(x)| \leq \frac{1}{\ell} for some \ell > v. Plugging this into the previous inequality, |f'(g(x))| \cdot v \leq \frac{v}{\ell}, and since \ell > v we have \frac{v}{\ell} < 1, so we can take q = \frac{v}{\ell} and A is true.

A function f that satisfies the requirement is f(x) = \frac{x}{\ell}, which is literally just calculating the time it would take to shoot a projectile into the goal if it were going at a constant velocity \ell. Therefore, as long as your note velocity is faster than your robot speed, the algorithm should converge. The case where it does not converge would be the possibility that your robot is moving so fast away from the goal that there literally is no way to shoot fast enough for the note to move towards the goal.
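
To make the convergence rate concrete with made-up numbers: if the note leaves the shooter at roughly \ell = 10 m/s and the robot drives at v = 4 m/s, then

q = \frac{v}{\ell} = 0.4, \qquad 0.4^5 \approx 0.01

so each pass of the loop shrinks the error in the time estimate by at least a factor of 0.4, and after the 5 iterations in the original pseudocode it has shrunk by a factor of about 100.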

Sorry for the yap, I just thought this was interesting.

2 Likes

I apologize for the confusion. Our shooting-on-the-move algorithm is similar to Team 1690's, in that we subtract the robot's velocity from the desired velocity of the note. I believe the effectiveness of Team 1690's algorithm stems from their robust localization. They utilized a pinhole system, which eliminates tag ambiguity. In contrast, although we use four cameras, our localization is much slower due to tag ambiguity in the solvePnP algorithm.

1 Like

Unfortunately, our robot code is not publicly available; however, our custom vision system is open to the public:

We plan to release our robot code next year, along with some exciting things we are developing.

1 Like

I think there are two approaches to address that.

1. 1690’s approach

In their approach, the shooter optimization outputs a 3D vector giving the optimal note velocity for a static shot at the current position. We subtract the robot's 2D velocity from that 3D vector to get the optimal note velocity for an on-the-move shot. We then use this new vector to determine the desired robot yaw, shooter RPM, and shooter angle for the shot (see the sketch after this list).

2. The more common approach

Most teams’ shooter optimization doesn’t even know the exact note speed in m/s. Instead, we linearly interpolate flywheel RPM and shooter angle according to distance. So, a more straightforward approach would be multiplying the robot’s velocity by a small period (say, 0.5 seconds) and calculating shooter optimization from that future pose. That period should equal the note’s flight time and be approximately proportional to the distance. This approach is often called the “shooter look-ahead.”
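
As a rough illustration of both ideas (not either team's actual code: shotVectorForStaticShot() is a hypothetical helper, the 0.5 s look-ahead is a placeholder, and robotPose, speeds, and speakerPosition are assumed to be the robot pose, field-relative chassis speeds, and 3D target position):

// 1) Velocity subtraction (1690-style): take the note velocity vector a static shot from
//    the current pose would need, then subtract the robot's field-relative velocity.
Translation3d staticShotVelocity = shotVectorForStaticShot(robotPose, speakerPosition); // hypothetical
Translation3d robotVelocity = new Translation3d(
        speeds.vxMetersPerSecond, speeds.vyMetersPerSecond, 0.0);
Translation3d movingShotVelocity = staticShotVelocity.minus(robotVelocity);

// Desired yaw, shooter pitch, and note speed all fall out of the adjusted vector
Rotation2d desiredYaw = new Rotation2d(movingShotVelocity.getX(), movingShotVelocity.getY());
double desiredPitchRadians = Math.atan2(
        movingShotVelocity.getZ(), movingShotVelocity.toTranslation2d().getNorm());
double desiredNoteSpeed = movingShotVelocity.getNorm();

// 2) Look-ahead: evaluate the ordinary distance-based interpolation at the pose the robot
//    will occupy roughly one note-flight-time from now.
double lookAheadSeconds = 0.5; // should roughly match the note's flight time; tune per distance
Translation2d futurePosition = robotPose.getTranslation().plus(
        new Translation2d(speeds.vxMetersPerSecond, speeds.vyMetersPerSecond).times(lookAheadSeconds));
double distanceForInterpolation = futurePosition.getDistance(speakerPosition.toTranslation2d());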

3 Likes

As seen on Mythbusters: https://m.youtube.com/watch?v=ZH7GpYJoptU

1 Like

Yeah… that's pretty much it, except for the fact that the initial vector of the shot (before subtracting the robot's velocity) is not optimized per se. Because of the little roof the speaker had this year, the optimal speed of the note would be infinite, so we just pick a speed that is good enough and then generate a vector from that speed. But yeah, it's just subtracting our robot's velocity from the shot vector.

As for our 2022 algorithm, it is really two algorithms. For the radial axis we ran a shooting simulation, because our shooter put spin on the ball that we couldn't control (the bottom and top wheels were connected by gears). The spin changes the ball's trajectory in the air, so we can't just subtract the robot's velocity vector, since the spin is different at different RPMs; the simulation handles the radial axis.
Then on the tangential axis it's the same algorithm as this year: just subtract the robot's velocity from the shot velocity.

We don't like to deal with time: accounting for the time it takes the note to leave the robot or, even harder to determine, the time the note spends in the air.
So yes, as has been said in this thread, our robust localization allows us to be consistent, but the fact that we don't deal with time at all is also a big factor.

5 Likes

There is a way to determine it, though. We recorded a video of the robot shooting at 120fps. Then, we took the time between when the LED started to blink and when the note hit the speaker; that would be the “look-ahead” time. This isn’t NASA-level accuracy, but it’s sufficient for shooting at something from less than 4 meters away.

2 Likes