Turret - Auto aim at target at all times

Hello, our team has been discussing the best way to keep our turret aimed in the general direction of the goal at all times. We have developed a list of potential solutions and were wondering if there’s a proven method that works better than the others.

  • Gyro on turret - Use the X,Y position of the robot and the X,Y position of the goal and calculate the angle to target. Once we are ready to shoot, use the limelight to lock onto the goal.
  • Potentiometer on turret spinning motor output - Calculate the angle to the target based on X,Y positions of the robot and goal, then turn to specific angle using potentiometer value to angle ratio. You would need to account for the chassis turning which would skew the angle that the potentiometer assumes we are at. Once we are ready to shoot, use the limelight to lock onto the goal.
  • Use only the Limelight and operator - Have one of our drivers manually keep the turret pointing in the general direction then use the Limelight to lock on once ready to shoot.
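The first two options reduce to the same field-geometry calculation. A minimal sketch in Java (all names are hypothetical; angles in radians): compute the field-relative bearing to the goal, then subtract the chassis heading so chassis rotation is accounted for.

```java
public class TurretAim {
    // Field-relative angle from the robot to the goal, measured from the +X axis.
    public static double fieldAngleToGoal(double robotX, double robotY,
                                          double goalX, double goalY) {
        return Math.atan2(goalY - robotY, goalX - robotX);
    }

    // Turret setpoint relative to the chassis: subtract the robot's heading
    // (from the chassis gyro) so turning the chassis does not skew the aim.
    public static double turretSetpoint(double robotX, double robotY,
                                        double robotHeading,
                                        double goalX, double goalY) {
        double angle = fieldAngleToGoal(robotX, robotY, goalX, goalY) - robotHeading;
        // Wrap to (-pi, pi] so the turret takes the short way around.
        return Math.atan2(Math.sin(angle), Math.cos(angle));
    }
}
```

The same setpoint math applies whether the turret angle itself is measured by a gyro or by a potentiometer/encoder; only the feedback sensor differs.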

Are other teams planning on doing something like this? Please let us know what your opinions are.

Thanks and good luck this year!


Gyros drift, remember that. Not insurmountable, but it is an issue to consider when you start talking about any sort of field-centric control.

You have several potential issues when trying to automate this type of control. First, there’s a question of speed. Which turns faster, your robot or your turret? It may prove difficult to keep up with the driving motions! Second, there’s the service loop - what kind of rotation is your turret going to have? Unless it’s infinite, you need to account for the possibility that you’ll get to one extreme and need to keep going but can’t - how do you get back around to the target in an automated way? Finally, there’s the question of power drain… a constantly moving turret is drawing down your battery… and most of that motion is going to be wasted. Only moving the turret near the end of your cycle, when you’re getting close to your shooting point, would save battery power.
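The service-loop problem above (a finite rotation range, and needing to get back around when you hit a hard stop) can be handled in software. A sketch, under the assumption that the turret has slightly more than 360 degrees of travel (all names and limits hypothetical): if a setpoint is past a hard stop, try the equivalent angle one full turn away, otherwise clamp.

```java
public class TurretLimits {
    private final double minRad;
    private final double maxRad;

    public TurretLimits(double minRad, double maxRad) {
        this.minRad = minRad;
        this.maxRad = maxRad;
    }

    // Map a desired angle onto the turret's physical travel. If the raw
    // setpoint is past a hard stop, use the equivalent angle one full
    // revolution away ("unwinding" back through the travel).
    public double resolve(double desiredRad) {
        if (desiredRad < minRad && desiredRad + 2 * Math.PI <= maxRad) {
            return desiredRad + 2 * Math.PI;
        }
        if (desiredRad > maxRad && desiredRad - 2 * Math.PI >= minRad) {
            return desiredRad - 2 * Math.PI;
        }
        // Otherwise clamp at the nearest stop.
        return Math.max(minRad, Math.min(maxRad, desiredRad));
    }
}
```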


Thanks for replying! We will consider these issues while making our decision.

I guess it’s possible teams don’t do the auto aiming at all times because of the complexity or because it’s not necessarily needed.

Do most teams just use a mix of driver control + vision processing for turret aiming?


We plan to put a gyro on the robot and accumulate position frequently to maintain an absolute (x, y) position; then, when we see a vision target, we will reset the gyro and position.

We will then set our turret’s desired rotation to point towards the power port target based on the position and orientation of the robot. The turret will have a high-resolution encoder on it. This has the advantage of working if we lose vision for a few seconds, but if we go without vision for long periods of time, the x, y position will likely drift off.

So basically, vision and absolute position to aim our turret.
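A self-contained sketch of this scheme (a real implementation would use WPILib’s odometry classes and an actual gyro; everything here is hypothetical): integrate drivetrain motion into an (x, y) estimate each loop, and snap the estimate back whenever vision provides a trusted fix.

```java
public class PoseEstimate {
    private double x, y, headingRad;

    // Dead-reckoning update: advance the pose by the distance the drivetrain
    // moved this loop, along the current gyro heading.
    public void update(double deltaDistance, double headingRad) {
        this.headingRad = headingRad;
        x += deltaDistance * Math.cos(headingRad);
        y += deltaDistance * Math.sin(headingRad);
    }

    // When the vision target is visible, overwrite the drifting estimate
    // with the camera-derived pose (re-zeroing any gyro offset upstream).
    public void resetFromVision(double visionX, double visionY, double visionHeadingRad) {
        x = visionX;
        y = visionY;
        headingRad = visionHeadingRad;
    }

    public double getX() { return x; }
    public double getY() { return y; }
}
```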

So, I am struggling a bit with the choices you have listed. In all cases, you seem to imply that you know the X,Y location of the robot on the field, so presumably you knew the starting location and you have good odometry (which would imply that you probably have a gyro on the chassis). If you know the X,Y position of the robot on the field and presumably the orientation of the robot (since you would need to know that to keep track of the X,Y position), then you can calculate the angle that the turret would need to be rotated to in order to be pointed toward the goal. If these assumptions are not true, then neither of your first two examples will work, since you don’t have the necessary inputs for either scheme.

I think what you are asking with your examples is: what is the best way to determine the angle the turret is pointing? Use a gyro on the turret itself, use an encoder to keep track of the turret rotation angle, or just have the drivers get close by visual feedback?

If I was trying to keep track of the turret position relative to the robot, I would use an encoder. For one thing, the encoder can be on the turret turning motor, so the signal from the encoder can be routed to the control system without the need to worry about running through your long flexible wire snake that is needed to provide wires to the bits that are mounted to the turret itself. The encoder will also be able to keep track of how close you are to the “end points” of the rotation and prevent you from over-rotating the turret and straining your wires.
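The encoder approach above amounts to a tick-to-angle conversion plus soft limits. A sketch (all numbers and names hypothetical; the tick count per turret revolution folds in the gear ratio between the motor and the turret):

```java
public class TurretEncoder {
    private final double ticksPerTurretRev; // encoder ticks per full turret revolution (gearing folded in)
    private final double minDeg;            // soft limits set just inside the hard stops
    private final double maxDeg;

    public TurretEncoder(double ticksPerTurretRev, double minDeg, double maxDeg) {
        this.ticksPerTurretRev = ticksPerTurretRev;
        this.minDeg = minDeg;
        this.maxDeg = maxDeg;
    }

    // Convert a raw encoder count into a turret angle in degrees.
    public double ticksToDegrees(double ticks) {
        return ticks / ticksPerTurretRev * 360.0;
    }

    // Soft-limit check: refuse setpoints that would strain the wire loop.
    public boolean withinLimits(double ticks) {
        double deg = ticksToDegrees(ticks);
        return deg >= minDeg && deg <= maxDeg;
    }
}
```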

But, you also missed one option, which is to let the Limelight keep track of the target and send the commands to the control system to turn the turret to keep it aligned with the target. In this case, you don’t need to know the X,Y of the robot at all. You just need to be able to see the target. The FOV of the camera is pretty wide, so you have a pretty wide angle range where you can see the target. If the target ever does fall outside of the FOV, you could put in a “seek” mode where it would slew the turret from one end of the travel to the other until it found the target. Of course it could find the opposing alliance’s target, in which case the driver would need to tell the turret to look the other way for the correct target. I seem to recall that several teams in 2016 had this type of system where they would seek the target using the vision system only.
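A sketch of this track-or-seek loop (the gain, speed, and names are hypothetical; on a real robot the target-visible flag and horizontal offset would come from the Limelight’s "tv" and "tx" NetworkTables entries):

```java
public class LimelightTurret {
    private static final double KP = 0.03;        // proportional gain on tx (tune for your turret)
    private static final double SEEK_SPEED = 0.25; // open-loop slew speed while searching
    private double seekDirection = 1.0;

    // Returns an open-loop turret motor command in [-1, 1].
    public double calculate(boolean targetVisible, double txDegrees, boolean atTravelEnd) {
        if (targetVisible) {
            // Drive tx (horizontal offset to the target, in degrees) toward zero.
            return -KP * txDegrees;
        }
        // No target: slew across the travel, reversing at each end.
        if (atTravelEnd) {
            seekDirection = -seekDirection;
        }
        return SEEK_SPEED * seekDirection;
    }
}
```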

Good luck!


We are using a Limelight that stays locked onto the target all the time. As we drive, it rotates to maintain line of sight to the target. A secondary driver is available to correct if necessary. Should we rotate too much, the turret spins around to lock back onto the target as well, so our wiring for the camera doesn’t get twisted or tangled.


Are you concerned that you may be required to turn off the Limelight’s LEDs because they are so bright? We were thinking about this method, but have decided to only use the Limelight immediately prior to shooting.


Yes, we do know our X,Y position.

This certainly seems easier than the other methods. We’ll take a look at this as well!

Would you mind explaining how you know this? Teams have been trying to get absolute position localization down for over a decade, and while there have been some promising developments, there’s nothing I know of that works reliably for the entire match. How have you solved this problem?


Currently we are tracking our odometry with help from some of the new WPILib classes. Our team has not run an entire match to see if our X,Y position remains accurate, but we have done some shorter tests and it seems solid (we’ll have to test more). It’s possible that the gyro drift makes it more and more inaccurate as the match goes on to a point where the X,Y position is not usable.

Here is the code:

If using the WPILib odometry classes proves to be inaccurate over time, our team did develop a solution that determines X,Y position over the summer. I’m assuming our code does a lot of the same things as the WPILib odometry classes and it may be susceptible to the same problems over time, but it seemed solid during our tests.

One of our programming students wrote the code, so I won’t be able to describe exactly how it determines the X,Y position, but here is the code. EDIT: I wanted to add that the code linked below does a lot more than just determining the X,Y position - sorry if it’s messy/hard to read!

From reading the replies in this thread, it sounds like we need to do more testing to determine that the position is accurate over a longer period of time! I will reply to this thread once we have done more testing!

Your odometry will drift to more than 5 inches of error after about 15 feet of distance travelled on a flat field. This was our experience with odometry last season using a nonlinear state estimator very similar to the one added to WPILib. You might get a little more or a little less range depending on how much your drivetrain scrubs and whether or not you characterize track width and wheel diameter instead of measuring, but things will not be accurate enough, especially if you drive over the field generator bumps.

Your best bet is retroreflective tape vision. It’s what the target is there for, and implementing it is trivial if you use a prepackaged solution like Chameleon vision.


I believe there is a rule about Limelights only being allowed for short bursts while shooting. You will not be allowed to keep them running for long periods due to blinding other competitors. See R8 and the blue box example. It sounds like high-intensity light sources could be interpreted to include Limelights.

The parenthetical in the blue box says “military grade or self defense”. I do not believe the Limelight LEDs qualify as that level of intensity. Those types of light sources are usually marked as dangerous, and Limelights are not marked that way (as far as I know).

Someone who plans to use a limelight might want to ask in the Q&A.

Our vision system last year used the green LED rings from Andymark. We had them turned on full time during matches. I’m not sure if we are planning to use the same thing this year or not for our vision system, but we may ask about those in the Q&A to see if they are legal.


This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.