Quote:
Originally Posted by Jared
How do you use integral control?
The standard way of achieving integral control that I've seen commonly used in practice is to add an additional state that's the integral of error of the variable you want to have zero steady-state error.
There are two drawbacks to this method:
-there's integral windup on a unit step
-previously, if I had a position/velocity trajectory, I could calculate what I wanted Kx to be, and used this as my reference input. As a result of using both desired position and velocity, I got really good reference tracking. With the integral control added, my reference is always the desired position, and I have no way to tell the controller my desired velocity.
Is there an easy way to be able to specify all the desired states and have the integral control like I described?
Is there a better way to add integral?
To start with, I'll caution you to first check whether you actually need integral. You can get pretty close without integral control, and integral adds all the issues that you've listed above. Below, I'm going to assume that you've made that assessment and integral *is* going to be worth the pain.
I've used 3 ways to do integral control over the years.
1) Augment the plant as you described. For an arm, you would add an "integral of position" state whose derivative is the position error, so the controller drives the steady-state error to zero.
2) Add an integrator to the output of your controller, and then estimate the control effort being applied. I've called this Delta U control before. The upside is that it doesn't have the windup issue you described above. The downside is that it is very confusing to work with.
3) Estimate a "voltage error" in your observer and compensate for it. This quantity is the difference between what you applied and what was observed to happen. To use it, you literally add it to your control output, and it'll converge. I'm currently using this as my primary method. The math to do that is as follows:
dx/dt = A x(t) + B (u(t) + u_error)
dx/dt = A x(t) + B u_error + B u(t)
We can then augment the state: x' = [x.T, u_error].T. The dynamics can then be rephrased as:
dx'/dt = [A, B; 0, 0] x'(t) + [B; 0] u(t)
This then sets your observer up to estimate both your state and the u_error term (assuming a SISO plant, but you can generalize to N inputs if you need to). I like to work out the physics in continuous time and then convert to discrete time.
You can then augment your controller similarly.
u = K (R - X) - u_error
This can be re-written as u = [K, 1] * (R' - X'), where R' is R augmented with a 0 for the goal u_error term.
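To make the augmentation concrete, here's a minimal numpy sketch of the augmented matrices and the controller identity above. All of the plant numbers and gains (the A, B, K values) are made up for illustration, not taken from any real robot.

```python
import numpy as np

# Made-up continuous-time plant: x = [position, velocity]
A = np.array([[0.0, 1.0],
              [0.0, -0.1]])   # illustrative damping
B = np.array([[0.0],
              [2.0]])         # illustrative input gain

n, m = A.shape[0], B.shape[1]

# Augment with the u_error state: x' = [x, u_error]
# dx'/dt = [A, B; 0, 0] x' + [B; 0] u
A_aug = np.block([[A, B],
                  [np.zeros((m, n)), np.zeros((m, m))]])
B_aug = np.vstack([B, np.zeros((m, m))])

# Suppose we already have a gain K for the unaugmented plant
K = np.array([[10.0, 3.0]])   # made-up gains

# Augmented controller: u = [K, 1] (R' - x'), R' padded with a 0 goal
K_aug = np.hstack([K, np.ones((m, m))])

x_aug = np.array([[0.0], [0.0], [0.5]])  # pretend u_error is 0.5
R_aug = np.array([[1.0], [0.0], [0.0]])  # goal u_error term is 0

u = K_aug @ (R_aug - x_aug)
# Identical to u = K (R - X) - u_error:
u_check = K @ (R_aug[:n] - x_aug[:n]) - x_aug[n:]
assert np.allclose(u, u_check)
```

The padded zero in R' is what makes the two forms identical: the extra column of K_aug multiplies (0 - u_error), which is exactly the -u_error term.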
We've now done 3 controllers this way this year (drivetrain, intake, and superstructure), and the results are pretty impressive. I was able to grab our drive base in heading control mode and pull on a corner. The robot held its heading very well while sliding sideways.
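For completeness, here's a sketch of the observer side of method 3: a simulated observer on a made-up double-integrator plant estimating a constant u_error. The plant numbers, timestep, observer gain L, and the forward-Euler discretization are all assumptions chosen for brevity; on a real robot you'd discretize exactly and place the observer poles deliberately.

```python
import numpy as np

dt = 0.02
# Continuous augmented dynamics for x' = [position, velocity, u_error]
# (this is [A, B; 0, 0] with the made-up A, B from a double integrator)
A_aug = np.array([[0.0, 1.0, 0.0],
                  [0.0, -0.1, 2.0],
                  [0.0, 0.0, 0.0]])
B_aug = np.array([[0.0], [2.0], [0.0]])
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])  # measure position and velocity

# Forward-Euler discretization (an assumption for brevity)
A_d = np.eye(3) + dt * A_aug
B_d = dt * B_aug

# Hand-picked observer gain that makes A_d - L C stable
L = np.array([[1.0, dt],
              [0.0, 1.0 - 0.1 * dt],
              [0.0, 5.0]])

x_true = np.array([[0.0], [0.0], [0.7]])  # true u_error = 0.7
x_hat = np.zeros((3, 1))                  # observer starts at zero

for _ in range(200):
    u = np.array([[0.0]])  # hold zero input; estimation still works
    y = C @ x_true
    x_hat = A_d @ x_hat + B_d @ u + L @ (y - C @ x_hat)
    x_true = A_d @ x_true + B_d @ u

u_error_hat = float(x_hat[2, 0])
# u_error_hat should have converged toward the true value of 0.7,
# ready to be added to the control output as described above
```

The estimation error evolves as e_{k+1} = (A_d - L C) e_k, so as long as that matrix is stable, the u_error estimate converges regardless of what u you apply.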