I wrote a speed PID controller for the drivetrain of a Vex robot. Feedback comes from an encoder mounted to the motor, and the controller works pretty well at holding speed. However, the encoder has fairly low resolution and the speed-measurement/PID control loops run at a high frequency of 50 Hz, so the resolution of the speed measurements is limited to about 2 in/s. I’m wondering whether any techniques, such as applying a moving average to the inputs of the PID controller, could improve its accuracy without decreasing the measurement frequency. Another possibility, since the PID controller and the speed measurement are separate tasks, is to run the speed measurements at a lower frequency than the PID. That could cause its own problems, though, since feeding older data to the PID controller would probably compromise its accuracy.
The first thing I would explore is ways to get a higher resolution speed signal. I am not all that familiar with the Vex controller so I can’t speak to the ease of the following approach, but one way to solve this problem in general would be to measure the time between successive encoder pulses rather than counting pulses and dividing by unit time. Most timers have resolutions that far exceed the resolution that you are showing here.
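To sketch the general idea (the timer and interrupt names here are made up for illustration, not an actual Vex API): instead of counting pulses per fixed interval, record a timestamp on each pulse and compute speed from the period between them.

```c
/* Rough sketch only: assumes a hypothetical microsecond timer and an
 * interrupt that fires on every encoder pulse. Names and values are
 * placeholders, not a real Vex API. */
#include <stdint.h>

#define DIST_PER_PULSE_IN 0.04   /* inches of travel per encoder pulse (example value) */

static volatile uint32_t last_pulse_us   = 0;
static volatile double   speed_in_per_s  = 0.0;

/* Called from the encoder pulse interrupt with the current time in microseconds. */
void on_encoder_pulse(uint32_t now_us)
{
    uint32_t period_us = now_us - last_pulse_us;
    last_pulse_us = now_us;

    if (period_us > 0) {
        /* speed = distance per pulse / time between pulses */
        speed_in_per_s = DIST_PER_PULSE_IN / (period_us * 1e-6);
    }
}
```

With a microsecond-class timer, the speed resolution is then set by the timer rather than by how many counts happen to land in one 20 ms loop.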
The problem with measuring the elapsed time between consecutive encoder pulses is that manufacturing tolerances in the spacing of the pulses on the encoder wheel can result in an excessively noisy signal if you measure only a single pulse interval.
I have found that the most common source of this problem with encoders is that the A and B lines are not exactly 90 degrees out of phase while I am trying to measure the time between two consecutive transitions in “x4” mode (rising A -> falling B, for example). I have had good luck measuring successive transitions in “x1” mode (rising A -> rising A) because the disks themselves are usually pretty well made (at least in the case of US Digital S series and Grayhill 61/63 series optical encoders).
Worst case, you can average the last N measurements and trade off phase delay against smoothness.
Yes, the cross-channel tolerance seems to be larger than the same-channel tolerance.
It would be nice if the FPGA would do this for you, and allow you to specify a FIR boxcar average of the last N pulses. Or if that’s too much to ask, perhaps a simple low-pass IIR filter.
Otherwise, you’re stuck with grabbing N samples each 20ms (or whatever) apart.
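If you do end up doing it in software, a single-pole IIR low-pass on the measured speed is only a couple of lines; a rough sketch, with a made-up smoothing constant you’d have to tune for your loop:

```c
/* Rough sketch of a single-pole IIR low-pass filter on the measured speed.
 * ALPHA is an example value: closer to 1.0 means less smoothing and less lag,
 * closer to 0.0 means more smoothing and more lag. */
#define ALPHA 0.3

double filter_speed(double raw_speed)
{
    static double filtered = 0.0;
    filtered = ALPHA * raw_speed + (1.0 - ALPHA) * filtered;
    return filtered;
}
```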
Hey, Cal! There are several things you could try, including a moving average (which I did for my report). Lowering the update rate is another option. If you wanted to do something fun and educational, you could also research Kalman filters and implement one. They’re a bit heavier computationally but it would be a great exercise!
This is for drivetrain control, although if it is very successful, I may use it for other functions.
I’ll work on this over the next few days…
P.S: Kevin: Please upload your report to CD Media.
Edit: I took the average of all the speeds after the acceleration finished. It came out very close to the setpoint. I’m not sure whether it is necessary to further refine the PID. Time to see if I can write motion profiling code to go with it…
An analogy I have for that is Euler’s and Newton’s methods. You’ve learned in class that these are recursive; the input of the next “iteration” is the output of the current one. That’s how the Kalman filter works; it’s recursive. How I would describe it is that it’s a “weighted” average between the previous output and the new data gathered from the sensors.
The “a priori state” is the state before the measurement has taken place; the “a posteriori state” is the state after the measurement. So the a priori state of the current iteration comes from the a posteriori state of the previous iteration, and the a posteriori state of the current iteration is the latest “prediction”. That prediction is what you feed to your PID controller.
How you weight the average is based on the error of the system; whether your predictions are more accurate than the measurements from the sensors determines the weighting factor. Your initial prediction comes from another sensor or input; how I did it was just to use the relation between the PWM signal fed into the speed controller vs. the real RPM. Now, again, I probably butchered that whole idea of the filter, but it worked.
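To make that concrete, here is a very rough 1-D sketch of that weighted-average idea; the names and noise values are placeholders for illustration, not the actual code from my report:

```c
/* Rough 1-D Kalman-style filter: blend a model prediction with a measurement.
 * All values here are examples; tune q and r for your own system. */
typedef struct {
    double x;  /* a posteriori speed estimate */
    double p;  /* estimate variance */
    double q;  /* process noise: how much you distrust the prediction model */
    double r;  /* measurement noise: how much you distrust the encoder speed */
} Kalman1D;

/* Example initialization: Kalman1D k = {0.0, 1.0, 0.01, 0.5}; */
double kalman_update(Kalman1D *k, double predicted_speed, double measured_speed)
{
    /* Predict: a priori estimate from the model (e.g. the PWM-vs-RPM relation),
     * with the uncertainty grown by the process noise. */
    double x_prior = predicted_speed;
    double p_prior = k->p + k->q;

    /* Update: the gain is the weighting factor between prediction and measurement;
     * a large r (noisy sensor) pulls the estimate toward the model. */
    double gain = p_prior / (p_prior + k->r);
    k->x = x_prior + gain * (measured_speed - x_prior);
    k->p = (1.0 - gain) * p_prior;

    return k->x;  /* a posteriori estimate, fed to the PID controller */
}
```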
Now, I do not claim to know much about the Kalman filter; I only implemented a rudimentary one on the robot. I probably butchered my description, but I’m just trying to help. My implementation at least cleaned up the input significantly.
Read up on papers on the Kalman filter by searching Google Scholar; that helped tremendously compared to the Wikipedia page.
Gear the encoder so that it turns more per revolution of your drive wheels. You’ll get more counts per 50Hz loop, which will give you more resolution to work with.
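As a quick back-of-the-envelope (example numbers only, plug in your own wheel size and encoder counts): the speed you can resolve per loop is the distance per count times the loop rate, so spinning the encoder faster than the wheel divides it directly.

```c
/* Back-of-the-envelope speed-resolution estimate; all values are example numbers. */
#include <stdio.h>

int main(void)
{
    const double PI = 3.14159265358979;

    double wheel_diameter_in          = 4.0;    /* example wheel */
    double counts_per_rev             = 360.0;  /* example encoder resolution */
    double encoder_revs_per_wheel_rev = 1.0;    /* raise this by gearing the encoder up */
    double loop_hz                    = 50.0;

    double in_per_count    = (PI * wheel_diameter_in) /
                             (counts_per_rev * encoder_revs_per_wheel_rev);
    double speed_resolution = in_per_count * loop_hz;  /* in/s represented by one count per loop */

    printf("speed resolution: %.2f in/s per count\n", speed_resolution);
    return 0;
}
```

Gearing the encoder 3:1 faster than the wheel (encoder_revs_per_wheel_rev = 3.0) cuts that resolution figure to a third.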
Yes, you can try to overcome the lack of resolution to a point with software filters, but there’s only so much they can do. There’s a great adage when it comes to sensor data: garbage in, garbage out.
How much resolution are you trying to achieve here?
Are you using two wire encoders, or one wire? PIC or Cortex?
I’m using the Cortex and the integrated motor encoders on 393 motors internally geared for higher speed. Ideally, I would like resolution below 1 inch per second.
However, I added a weighted moving average filter to the inputs to the PID controller and it seems to make it behave much better.
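In general, a weighted moving average over the last few speed samples looks something like this; the window size and weights below are arbitrary example values, not necessarily what I settled on:

```c
/* Rough sketch of a weighted moving average over the last N speed samples.
 * Window size and weights are example values, with the newest sample weighted heaviest. */
#define WMA_N 4

double wma_filter(double new_speed)
{
    static double samples[WMA_N] = {0};
    static const double weights[WMA_N] = {0.4, 0.3, 0.2, 0.1}; /* newest .. oldest */

    /* Shift in the newest sample. */
    for (int i = WMA_N - 1; i > 0; --i)
        samples[i] = samples[i - 1];
    samples[0] = new_speed;

    double out = 0.0;
    for (int i = 0; i < WMA_N; ++i)
        out += weights[i] * samples[i];
    return out;  /* filtered speed, fed to the PID controller */
}
```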