Problems with Automatic Target Alignment

Hey everyone,
my team has been working on vision processing code to align the robot to the target automatically during auto and teleop.
The vision code gives us a number ranging from -1 to 1, where 0 is the center of the image. Using this number, we have tried to apply some sort of PID with an additional force to overcome static friction, but we couldn't make it work.
The force sent to the motor was either too weak, which didn't move the robot, or too strong, making it oscillate uncontrollably around the center.

So I have 2 questions:

  1. Is there a good method to overcome this problem according to your past experience?
  2. After calibrating the code to work at home, what is the best way to make sure it will work on the actual field? (With a brand new carpet, etc.)

Thank you!

PID uses the I term to push harder if the mechanism doesn’t move right away. Set the P constant just low enough to avoid oscillation, then crank up the I until it moves reliably. Use the D term to compensate for rotational momentum, so you won’t keep accelerating once it gets moving “fast enough,” and so the robot has a chance to slow down as it approaches the target.

You might have to do some back and forth adjustment of P and I, as they’re both contributing to control in the same direction, and once the robot starts moving they’re reinforcing each other.
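The loop described above can be sketched roughly as follows. This is a minimal Python illustration, not actual robot code (on a real FRC robot you would likely use WPILib's built-in PID support); all gain values and the static-friction "kick" threshold are made up for the example.

```python
class PID:
    """Toy PID controller with a static-friction feedforward term (kS).

    kp, ki, kd are the usual gains; ks is a constant output added in the
    direction of travel to break static friction. dt is the loop period.
    All values here are illustrative, not tuned for any real robot.
    """

    def __init__(self, kp, ki, kd, ks, dt=0.02):
        self.kp, self.ki, self.kd, self.ks = kp, ki, kd, ks
        self.dt = dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        # I term accumulates, so it keeps pushing harder if the robot
        # doesn't move right away.
        self.integral += error * self.dt
        # D term opposes rapid changes, damping the approach to the target.
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        output = self.kp * error + self.ki * self.integral + self.kd * deriv
        # Static-friction kick: a small constant push in the direction of
        # the error, skipped inside a tiny deadband so we don't jitter.
        if abs(error) > 0.01:
            output += self.ks * (1.0 if error > 0 else -1.0)
        # Clamp to the valid motor output range.
        return max(-1.0, min(1.0, output))
```

The deadband around zero matters: without it, the kS kick alone would make the robot chatter back and forth at the center.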

Watch out for how you’re closing the feedback loop, though. You can’t just use the vision target output as the feedback signal, because it will be delayed enough to make the robot’s turning oscillate even with a well-tuned PID controller. Instead, use a gyro to measure your current heading and use the vision system to decide what your desired heading should be. Only take a new vision reading once the robot’s direction has settled down, so you know you’re getting a value that reflects “now” instead of one that was derived while the robot was pointing somewhere else.
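That split between vision (setpoint) and gyro (feedback) can be sketched like this. It's a hypothetical Python illustration; the function name and the "settled" rate threshold are made up for the example.

```python
def update_setpoint(current_setpoint, gyro_heading, gyro_rate,
                    vision_offset_deg, settled_rate_deg_s=2.0):
    """Return the heading setpoint for the PID loop.

    gyro_heading / gyro_rate come from the gyro (fast, low-latency).
    vision_offset_deg is the target's angle relative to the robot,
    derived from the camera (slow, delayed).
    """
    if abs(gyro_rate) < settled_rate_deg_s:
        # Robot has settled: the delayed vision sample still reflects
        # roughly where we are pointing now, so accept it.
        return gyro_heading + vision_offset_deg
    # Still turning: the vision sample was taken while pointing elsewhere,
    # so keep steering toward the last good setpoint instead.
    return current_setpoint
```

The PID loop itself then runs against the gyro heading every cycle, so the fast inner loop never sees the camera's latency directly.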

I agree with Alan Anderson. I would make sure that the P term is large enough to avoid friction issues, while using D to stop oscillation. The I term is not strictly necessary; it can be left out or replaced with a second-derivative term.

Secondly, make sure that your code converts from a location on the image to an angle. If your PID code is based on angle, you will have to do some trigonometry.
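The image-to-angle trigonometry might look like the sketch below, assuming a simple pinhole camera model. The function name and the 54° horizontal field of view are just example values, not from the original post.

```python
import math

def offset_to_angle(x_norm, horizontal_fov_deg=54.0):
    """Convert a normalized image offset in [-1, 1] to a yaw angle in degrees.

    x_norm = 0 is the image center; +/-1 are the image edges.
    Assumes a pinhole camera; the default FOV is only an example value.
    """
    half_fov = math.radians(horizontal_fov_deg / 2.0)
    # The image plane is flat, so the mapping from pixel offset to angle
    # is atan, not a simple linear scale.
    return math.degrees(math.atan(x_norm * math.tan(half_fov)))
```

Note that a linear mapping (offset times half-FOV) is only a fair approximation near the center of the image; the error grows toward the edges.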

If you encounter difficulties with gyro drift, magnetometers can avoid some of those issues at the cost of some accuracy.

Finally, try to get a carpet similar to the actual field surface. Competitions usually provide some calibration time at the beginning; take advantage of it to tweak your constants.

The other thing to consider is how you tell the robot where to stop: you will never be able to make it stop dead center every time. You need an acceptable range to stop within, and you need to move the robot slowly enough that the camera’s refresh rate can keep up and catch that range. If you want to be more precise without taking more time, you can use multiple “stages”: the robot moves faster until it is within a wider range, then slows down to catch a smaller range.
We run PID control as well, but we run it separately from everything else so we can use it with any movements we make.
Our auton uses a “2 stage” vision targeting system that has been 100% reliable.
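The two-stage idea described above can be sketched as a simple speed schedule. This is a hypothetical Python illustration; the tolerance and speed values are invented for the example and would need tuning on a real robot.

```python
def turn_speed(error_deg):
    """Two-stage turn: fast while far from the target, slow for the
    final approach, zero inside the acceptable stopping range.

    All thresholds and outputs below are illustrative values.
    """
    COARSE_TOL = 10.0   # degrees: outside this, turn fast
    FINE_TOL = 1.5      # degrees: the acceptable stopping range
    FAST, SLOW = 0.6, 0.2  # motor output fractions

    if abs(error_deg) > COARSE_TOL:
        speed = FAST
    elif abs(error_deg) > FINE_TOL:
        # Slow enough that the camera's refresh rate can catch the
        # fine range before the robot overshoots it.
        speed = SLOW
    else:
        return 0.0  # inside the acceptable range: stop
    return speed if error_deg > 0 else -speed
```

More stages can be added the same way if a tighter final tolerance is needed without slowing the whole approach.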