Turret Example Code

Does anyone have a good example of programming a turret in Java? I’m interested in learning how to program one for a possible off-season project. I would prefer something that uses Talons and maybe a Limelight, but anything would help my understanding.


254’s 2016 robot featured a turret with vision capabilities. No Limelight, but the principles are the same. The superstructure has the methods that use the turret (i.e., you tell the superstructure what you want it to do, and the superstructure tells the turret what it needs to do). They used Talon SRXs and an Android phone for vision.

A well-programmed turret, in my mind, would have the Talon’s Motion Magic set up for locking onto given angles, with the angle to the target calculated from the Limelight’s information (i.e., target x, target y). A simple PD, or even a plain P, controller can work just as well on the turret. Feedback from an encoder connected to the turret’s driving gearbox would be used to calculate the setpoints, and you’d have some conversion between encoder ticks and turret angle.
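That tick-to-angle conversion can be sketched in Java. The encoder resolution (4096 ticks per revolution) and the 10:1 reduction between encoder and turret used here are assumed values for illustration, not anything from the posts above:

```java
// Sketch of encoder-tick <-> turret-angle conversion for a turret,
// assuming a 4096-tick encoder and a hypothetical 10:1 reduction
// between the encoder shaft and the turret itself.
public class TurretUnits {
    static final double TICKS_PER_ENCODER_REV = 4096.0;
    static final double ENCODER_REVS_PER_TURRET_REV = 10.0; // assumed gearing
    static final double TICKS_PER_TURRET_DEGREE =
        TICKS_PER_ENCODER_REV * ENCODER_REVS_PER_TURRET_REV / 360.0;

    public static double ticksToDegrees(double ticks) {
        return ticks / TICKS_PER_TURRET_DEGREE;
    }

    public static double degreesToTicks(double degrees) {
        return degrees * TICKS_PER_TURRET_DEGREE;
    }

    // A Motion Magic setpoint built from the current turret position plus
    // the Limelight's horizontal offset to the target (tx), in degrees.
    public static double setpointTicks(double currentTicks, double limelightTx) {
        return currentTicks + degreesToTicks(limelightTx);
    }
}
```

Pushing `setpointTicks(...)` to the Talon in Motion Magic mode would then be the hardware-side step, which is omitted here.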



Motor output = PD(limelight error)

I’d start there and see how far it gets you.

I believe there are some example implementations on the limelight website as well.
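A minimal sketch of that "motor output = PD(limelight error)" idea, assuming the Limelight read and the motor write happen elsewhere; the gains here are placeholders, not tuned values:

```java
// Minimal PD controller on the Limelight's tx error. Reading tx and
// writing the motor are left to the caller; kP/kD are placeholder gains.
public class TurretPd {
    private final double kP, kD;
    private double lastError = 0.0;

    public TurretPd(double kP, double kD) {
        this.kP = kP;
        this.kD = kD;
    }

    // errorDegrees: Limelight tx (degrees to target); dt: loop period, seconds.
    // Returns a motor output clamped to [-1, 1].
    public double calculate(double errorDegrees, double dt) {
        double derivative = (errorDegrees - lastError) / dt;
        lastError = errorDegrees;
        double output = kP * errorDegrees + kD * derivative;
        return Math.max(-1.0, Math.min(1.0, output));
    }
}
```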


When you start on the turret, you have to decide if you want to have the limelight fixed on the robot, or if you want it to rotate with the turret. For our custom vision setup, we found that the camera was too unstable when mounted on the turret while our flywheel was on, so we went for a fixed spot on the robot.

Putting the camera/Limelight in a fixed spot on the robot is arguably more difficult programming-wise because you can’t just do motor output = PD(limelight error). If you went with a fixed mount, your turret would need an encoder, and you would have to determine the angle to the target from the Limelight image, which I believe there are already published ways to do.

So once you calculate the angle to the target relative to the robot, you just set your turret’s desired angle to the same thing, and then let a PID loop take over. Ideally you’d have something like this: turret.setDesiredAngle(vision.getAngleToTarget()).
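A hedged sketch of that flow, with illustrative names rather than a real API: the camera yaw offset accounts for the camera not pointing exactly along the robot’s forward axis, and a simple P loop drives the turret toward the desired robot-relative angle:

```java
// Fixed-camera aiming sketch. The camera reports the target's horizontal
// offset (tx); the camera's mounting yaw offset is an assumed parameter.
// The turret (with its own encoder) then chases the robot-relative angle.
public class FixedCameraAiming {
    // Angle to the target relative to the robot frame, in degrees.
    public static double angleToTargetRobotRelative(double cameraTxDegrees,
                                                    double cameraYawOffsetDegrees) {
        return cameraTxDegrees + cameraYawOffsetDegrees;
    }

    // P controller driving the turret's measured angle toward the desired one.
    public static double turretOutput(double desiredDeg, double measuredDeg,
                                      double kP) {
        double error = desiredDeg - measuredDeg;
        return Math.max(-1.0, Math.min(1.0, kP * error));
    }
}
```

In a robot program this would be the guts of something like the `turret.setDesiredAngle(vision.getAngleToTarget())` call described above.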

So then would you suggest writing it to move to a target angle, setting the reference as the current angle adjusted by the offset from vision processing, or trying to PD the vision angle to zero (as long as wiring limitations don’t get in the way)? I’m thinking the first would be better, because it leaves the option open for leading the target based on speed and makes things simpler in auto.


Did you balance your wheels?
How is the camera mounted, and what camera is it?
It may benefit from some isolation mounts.

Can confirm, definitely a problem with shooter-wheel-mount-sharing cameras. Still an open issue on ours. Certain wheel speeds induce some real funky looking patterns when they play with the rolling shutter on the camera itself.

We’ve been using Ninjaflex as part of the mount to help mitigate it, and it does help… though fully mitigating it is still an open action item.

Due to when we’re using our camera it currently doesn’t matter… but a “next-step” in performance is going to require mitigating it.

The most bang for your buck is going to be in eliminating the source of vibration rather than mitigating it.


+1. In the referenced thread there were indications that balancing a wheel is difficult. IMO it’s easier to balance a shooter wheel than it is to come up with a dampener so that you can live with vibration that you can eliminate by simply balancing a wheel.

It’s off season time, the time to come up with a nice, easy to make tool to balance shooter wheels from now 'til eternity.

We also had a tiny bit of latency on our vision setup, which I also believe would have thrown off our turret (or at least made it slower) if we put the camera on our turret. The advantage of putting the camera in a fixed location is that while your robot is close to stationary, you’re basically guaranteed very accurate vision results without worrying about latency, and it’s also easier to localize your odometry if you’re doing that as well. This is all doable by putting the camera on the turret and adding an encoder, but I think it’s easier and usually more stable to do in a fixed location.

If you can try to put a camera on your turret and it’s not unstable, go for it. If it’s unstable and you can convince the mechanical folks it’s worth trying to balance your flywheel, then maybe you could put it on the turret. For our team, I don’t think balancing our flywheel would have been worth the time it would have taken away from other things.

Does TalonSRX.getActiveTrajectoryVelocity() return the current target velocity of the Motion Magic control the Talon is running? Could I pass this into WPILib’s SimpleMotorFeedforward.calculate() and give the resulting value back to the Talon as an arbitrary feedforward?


As far as I can tell.

Do you mean pass the return value from getActiveTrajectoryVelocity() directly into the calculate method as a velocity?

In Motion Magic, the PID gains are applied to the current trajectory target position (with this corresponding getter), and the F gain is multiplied by the active trajectory velocity. So you can just load the F gain into the controller instead of manually pulling the current target, calculating an output, and feeding it back to the controller.

Also, it’s important to note that the velocity from the controller is in ticks/100ms and the arbitrary feedforward unit is [-1 to +1], while SimpleMotorFeedforward accepts a velocity and returns volts. All convertible units, but conversions you would still have to make.
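Those conversions might look like the following. This is a sketch under stated assumptions: that 1023 Talon output units correspond to a nominal 12 V, and that you know your degrees-per-tick factor; check the numbers against your own setup:

```java
// Hedged unit-conversion helpers between characterization units
// (volts per degree/sec), Talon native units (ticks/100ms, F gain in
// output units per tick/100ms), and arbitrary feedforward ([-1, 1]).
public class TalonGainConversion {
    static final double TALON_OUTPUT_UNITS = 1023.0; // full output
    static final double NOMINAL_VOLTAGE = 12.0;      // assumed battery nominal

    // Characterization kV (V per deg/s) -> Talon F gain.
    public static double kVToTalonF(double kVVoltsPerDegPerSec,
                                    double degreesPerTick) {
        // One tick per 100 ms is 10 ticks per second, so this is the
        // deg/s represented by one native velocity unit.
        double degPerSecPerNativeVel = degreesPerTick * 10.0;
        double voltsPerNativeVel = kVVoltsPerDegPerSec * degPerSecPerNativeVel;
        return voltsPerNativeVel * TALON_OUTPUT_UNITS / NOMINAL_VOLTAGE;
    }

    // Native velocity (ticks/100ms) -> deg/s, e.g. for feeding a
    // volts-based feedforward calculation.
    public static double nativeVelToDegPerSec(double ticksPer100ms,
                                              double degreesPerTick) {
        return ticksPer100ms * 10.0 * degreesPerTick;
    }

    // Volts -> arbitrary feedforward in [-1, 1].
    public static double voltsToArbFF(double volts) {
        return volts / NOMINAL_VOLTAGE;
    }
}
```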

And as a thread-wide reply, I believe the general concept of coding a turret should look like this:
output = PD(target_angle) + feedforward_for_drivebase_angular_velocity + feedforward_for_drivebase_to_target_velocity

Here target_angle is produced via vision or odometry, and the feedforwards are updated dynamically based on how fast the drivebase is turning (PID shouldn’t have to pick up the slack when the drivebase turns under the turret) and on the turret speed required when the robot is driving past the target. Extra internet points (and maybe in-game points) if you can lead your target_angle based on the above values to shoot while moving.
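That equation could be sketched as follows. The gains and both velocity inputs are placeholders, the sign convention for the chassis term is an assumption (turret counter-rotates against the chassis), and target leading is left out:

```java
// Sketch of: output = PD(angle error) + chassis-rotation feedforward
//                   + drive-past-target feedforward.
public class TurretController {
    private final double kP, kD, kFF;
    private double lastError = 0.0;

    public TurretController(double kP, double kD, double kFF) {
        this.kP = kP;
        this.kD = kD;
        this.kFF = kFF;
    }

    // angleErrorDeg:       target angle minus current turret angle
    // chassisOmegaDegPerS: drivebase angular velocity (turret must
    //                      counter-rotate at the same rate)
    // tangentialDegPerS:   angular rate of the target relative to the robot
    //                      as it drives past
    // dt:                  loop period, seconds
    public double calculate(double angleErrorDeg, double chassisOmegaDegPerS,
                            double tangentialDegPerS, double dt) {
        double derivative = (angleErrorDeg - lastError) / dt;
        lastError = angleErrorDeg;
        double pd = kP * angleErrorDeg + kD * derivative;
        double ff = kFF * (-chassisOmegaDegPerS + tangentialDegPerS);
        return pd + ff;
    }
}
```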

254 in 2019 accounted for the first two feedforwards, and referenced the third in this post.

I’ve been toying around with a turret in sim, and I don’t yet see the advantage of trapezoidal motion profile + arb_ff (for steady state velocity) over pure position PID+arb_ff (for steady state velocity). One thing I’d like to try is a motion profile with nonzero end velocity (must be done RIO side).
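For reference, the kind of trapezoidal profile that Motion Magic and WPILib's TrapezoidProfile generate can be computed by hand. This toy sketch assumes the move is long enough to actually reach cruise velocity (no triangular-profile case) and ends at zero velocity:

```java
// Toy trapezoidal velocity profile: constant accel up to maxVel, cruise,
// then constant decel to zero over the given total distance. Assumes the
// distance is long enough that cruise velocity is actually reached.
public class Trapezoid {
    public static double velocityAt(double t, double maxVel, double maxAccel,
                                    double distance) {
        double tAccel = maxVel / maxAccel;                 // ramp-up time
        double dAccel = 0.5 * maxAccel * tAccel * tAccel;  // ramp-up distance
        double dCruise = distance - 2.0 * dAccel;          // assumed >= 0
        double tCruise = dCruise / maxVel;
        if (t < tAccel) {
            return maxAccel * t;                           // accelerating
        }
        if (t < tAccel + tCruise) {
            return maxVel;                                 // cruising
        }
        // Decelerating (clamped to zero past the end of the profile).
        return Math.max(0.0, maxVel - maxAccel * (t - (tAccel + tCruise)));
    }
}
```

A nonzero end velocity, as mentioned above, would just change the final segment’s target from zero to the desired exit speed.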


Thanks for the detailed reply!

So this gain would be the kV value from the SimpleMotorFeedforward, right? Can I use a value I get from frc-characterization here, or will I need to find this value manually?

I think I understand how to find the drivebase angular velocity, but could you explain what is going on with the DB to target velocity? Then to calculate the arbitrary feed forward value to account for these, I would do a bunch of unit conversions but somewhere use kV(?) again to go from velocity to volts right?

With all of these feed forwards and things, do you recommend using a soft limit to protect the wiring, trying to limit the set point and feedforward to keep them within safe values, or both?

The advantage for my team would be simply learning how to use a trapezoidal motion profile. :wink:


As you drive past a target, its angle relative to the drivebase changes, so you have to account for that as well as the drivebase itself turning.
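That drive-past geometry can be written down numerically: for a fixed target, the target’s bearing from the robot changes at a rate given by the cross product of the robot-to-target vector and the robot’s velocity, divided by range squared. The frame convention used here is an assumption for illustration:

```java
// Bearing rate of a fixed target as seen from a moving robot. The inputs
// are in a shared field-oriented frame: (vx, vy) is the robot's velocity
// and (tx, ty) is the vector from robot to target.
public class DrivePastRate {
    // Rate of change of atan2(ty, tx), in rad/s, as the robot moves.
    // d(tx)/dt = -vx and d(ty)/dt = -vy, giving (ty*vx - tx*vy) / r^2.
    public static double bearingRateRadPerSec(double vx, double vy,
                                              double tx, double ty) {
        double rangeSq = tx * tx + ty * ty;
        return (ty * vx - tx * vy) / rangeSq;
    }
}
```

For example, a robot 2 m from the target, sliding sideways at 1 m/s, sees the bearing change at 0.5 rad/s; that rate (converted to the turret’s units) is the third feedforward term.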


The SimpleMotorFeedforward only holds gains, so I’d be hesitant to agree with that given the wording “from”.

The gains from the frc-characterization tool depend on the units you enter into it. You’ll need to convert those gains to the controller’s units. For example, volts/degree/sec for a turret or volts/meter/sec for a drivetrain would need to be converted to output units [-1023 to 1023] per tick/100ms for a Talon. (I believe the units for Spark Maxes are percent [-1 to 1] per rotation/minute, but I can’t find a source for that.)

Edit: I’ll mention that Kv is simple enough to calculate, and is a good starting point for tuning. 971 put out a video about system modeling where I believe an explanation of how to calculate Kv is included.

@Amicus1 got it right. The link in my previous post with 254’s code shows the tangential component.


Definitely both.

There are full examples here and tutorials here for trapezoidal motion profiles from WPILib.

Thank you so much for all your help! I can’t wait to get out of quarantine and see if my team and I can get an awesome turret up and running. I have a feeling it will be useful next season.


If this is your first turret, I don’t think you need nearly this complex a solution. Like Adam said, all you really need to do is enable Motion Magic on the SRX, feed the tx value from the Limelight as the current position, use 0 as the setpoint, and tune the P or P/D gains. No need to feedforward anything from the drivetrain.

Once you have this working, there are alternative things you can do to improve this, but it’s really just incremental improvements from there.


Maybe I’m overly ambitious because this would be our first turret, but I think we would start with a generic pid loop to get used to how a turret in general works, and then I hope we will have time to get some of these more advanced features working. I think we can get motion magic, and drivetrain feedforward working before 2021 kickoff if we start small and build up. Of course I’m also hoping we can get motion magic running on an elevator, learn enough about trajectory following to be competitive, and as part of that try to implement a limelight powered vision align that uses live trajectory generation to align with a vision target, all for the first time. Maybe I am a little ambitious. :wink:


Or there’s the MORE POWER approach. Target first, then spin up your shooter. Having 3 motors powering our shooter made it power up in a jiffy.

All of this sounds great, just keep the strategic design elements in mind. Specifically, make sure you prioritize all of these tasks. Consider how much effort each will take, how much of an on-the-field performance difference it makes, and how the tasks can build on each other.

For example, what’s the real-world effect of adding drivetrain feedforwards to adjust your turret output? It compensates for your drivetrain moving while you’re aiming. Strategically, do you foresee the need for aiming on the fly, or do you plan on mostly shooting from safe zones? If you’re shooting from safe zones, how much does the increased settling time of simple closed-loop control vs. following a motion profile actually matter? How do these choices impact the rest of your gameplay strategy? This especially ties into whether your robot is short or tall, where you plan on acquiring game pieces from, etc.

My point is, you’ll see a lot of cool technical things that people are doing across FIRST. I think equally important is understanding what the effort vs benefit ratio of these tasks are. Say you’re a 50th percentile robot and 254 is a 99th percentile robot. A lot of the things they do are motivated by chasing that last 1%. Don’t chase the 1% without doing the other 49% first.