Full State Feedback Question
I had a question for some of the teams who have implemented full-state feedback for their robots:
How do you use integral control? The standard way I've seen used in practice is to add an additional state that is the integral of the error of the variable you want to have zero steady-state error. There are two drawbacks to this method:
- there's integral windup on a unit step
- previously, if I had a position/velocity trajectory, I could calculate what I wanted Kx to be and use that as my reference input. Because I used both desired position and velocity, I got really good reference tracking. With the integral control added, my reference is always just the desired position, and I have no way to tell the controller my desired velocity.
Is there an easy way to specify all the desired states and still have the integral control I described? Is there a better way to add integral action?
Re: Full State Feedback Question
What does your state space look like?
Problem #1 (integral windup on a unit step) is a fundamental limitation of linear feedback control; you can implement some sort of anti-windup mechanism if you want (as you would with PID), but it will require additional logic and make your closed-loop system nonlinear. A common way to do this is by preprocessing your input (e.g. calculating a new instantaneous goal that will saturate your output but won't result in excessive windup).

For problem #2, if you formulate your state space and controller correctly, you should see the velocity state of your output behave as desired (matching the velocity reference when there is no position error, and adjusting +/- depending on position tracking). However, this requires designing a MIMO controller, so you need to use a method suitable for doing so, since you need to choose a K matrix that makes tradeoffs between multiple objectives (LQR is one common approach). As a quick workaround, you can apply separate position and velocity FSF controllers and sum their outputs.
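For the LQR approach mentioned above, a minimal sketch of picking a K matrix with SciPy might look like the following. The plant and weighting matrices are made-up illustrative values, not from any specific robot in this thread.

```python
import numpy as np
from scipy import linalg

# Hypothetical continuous-time position/velocity plant, x = [position, velocity]:
# dx/dt = A x + B u.  These numbers are illustrative only.
A = np.array([[0.0, 1.0],
              [0.0, -2.0]])
B = np.array([[0.0],
              [10.0]])

# LQR weights: Q penalizes state error, R penalizes control effort.
Q = np.diag([100.0, 1.0])
R = np.array([[1.0]])

# Solve the continuous-time algebraic Riccati equation and form K = R^-1 B^T P.
P = linalg.solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

print(K)
```

With a reference that contains both the desired position and the desired velocity, the control law u = K (r - x) then trades off position and velocity tracking according to the weights in Q.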
Re: Full State Feedback Question
I've used 3 ways to do integral control over the years.

1) Augment the plant as you described. For an arm, you would add an "integral of position" state, which would do that.

2) Add an integrator to the output of your controller, and then estimate the control effort being applied. I've called this Delta U control before. The upside is that it doesn't have the windup issue you described above. The downside is that it is very confusing to work with.

3) Estimate a "voltage error" in your observer and compensate for it. This quantity is the difference between what you applied and what was observed to happen. To use it, you literally add it to your control output, and it'll converge. I'm currently using this as my primary method. The math to do that is as follows:

dx/dt = A x(t) + B (u(t) + u_error)
dx/dt = A x(t) + B u_error + B u(t)

We can then augment the state to be x' = [x.T, u_error].T, which lets this be rephrased as:

dx'/dt = [A, B; 0, 0] x'(t) + [B; 0] u(t)

This sets your observer up to estimate both your state and the u_error term (assuming a SISO system, but you can generalize to N dimensions if you need to). I like to work out the physics in continuous time and then convert to discrete time.

You can then augment your controller similarly:

u = K (R - X) - u_error

This can be re-written as

u = [K, 1] * (R' - X')

where R is augmented with a 0 for the goal u_error term.

We've now done 3 controllers this way this year (drivetrain, intake, and superstructure), and the results are pretty impressive. I was able to grab our drive base in heading control mode and pull on a corner. The robot held the heading very well while sliding sideways.
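A minimal sketch of the plant augmentation in method 3), assuming generic numpy arrays for A, B, and K; it only builds the augmented matrices and the control law, and leaves the observer design for the augmented plant (pole placement or a Kalman filter) to the reader. It is not taken from any team's code.

```python
import numpy as np

def augment_with_voltage_error(A, B):
    """Build the augmented plant over x' = [x, u_error] for a
    voltage-error-estimating observer: dx'/dt = A_aug x' + B_aug u."""
    n, m = B.shape
    A_aug = np.block([[A, B],
                      [np.zeros((m, n)), np.zeros((m, m))]])
    B_aug = np.vstack([B, np.zeros((m, m))])
    return A_aug, B_aug

def control(K, R, x_hat, u_error_hat):
    """u = K (R - X) - u_error, using the observer's voltage-error estimate."""
    return K @ (R - x_hat) - u_error_hat
```

An observer built on A_aug and B_aug drives the u_error estimate toward whatever constant input disturbance (e.g. an unmodeled load) is acting on the system, and subtracting it from the output cancels that disturbance.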
Re: Full State Feedback Question
U = K (R - X) + FF(R)
Re: Full State Feedback Question
I think I have things working now with the first method Austin described, but I plan to try the third method when I have a chance.
I have tried three controllers:

1) No integral control, plain full-state feedback. My states are position, velocity, and acceleration. I need the additional acceleration state because the time constant of my motor is significant for this application. I calculate my reference as K(1)*position_goal + K(2)*velocity_goal, where K is my vector of gains. This gives me the following: http://i.imgur.com/wJFlhnF.png which is exactly as Jared described: the velocity tracks too, but will deviate to get the position where it needs to be when it can't follow the trajectory exactly. Note that if I don't add the K(2)*velocity_goal term to my reference, the tracking lags behind and doesn't reach 1 exactly.

2) Integral control as Austin described in his method 1): http://i.imgur.com/rp53mAd.png This handles an extra load torque much better than the previous controller, but it doesn't track as well. My reference is simply my desired position and u = -(K*x + K_i*x_i), where K is my vector of gains (without the integrator gain), x is the state vector (without the integrator state), K_i is the integrator gain, and x_i is the integrator state.

3) Integral control like 2), but with a different u: u = -(K*(x - x_d) + K_i*x_i), where x_d is a vector of desired states (not including the integrator), which gives me this: http://i.imgur.com/tf0TdcT.png

For now, I've been able to get away without an observer and just do a bit of filtering on my velocity. It probably helps that I'm sampling at several kHz.
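A minimal sketch of controller 3) from this post, assuming a [position, velocity, acceleration] state; the gain values and sample period below are placeholders, not the poster's actual numbers.

```python
import numpy as np

# Placeholder gains and sample period (the post only says "several kHz").
K = np.array([10.0, 1.5, 0.05])   # gains on [position, velocity, acceleration]
K_i = 20.0                        # integrator gain
dt = 1.0 / 5000.0                 # sample period

x_i = 0.0                         # integrator state

def step(x, x_d):
    """x and x_d are the actual and desired [position, velocity, acceleration]."""
    global x_i
    x_i += (x[0] - x_d[0]) * dt            # integrate position error
    return -(K @ (x - x_d) + K_i * x_i)    # u = -(K (x - x_d) + K_i x_i)
```

The integrator only accumulates position error, so the full-state term still sees the desired velocity and acceleration, which is what lets this variant track the trajectory better than controller 2).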
Re: Full State Feedback Question
For a controller without integral, or of type 2) or 3), you need a feed-forwards term. You can see it in your third plot: there is a small steady state error (though much smaller than in your first plot). We can show this with the following math.

Suppose we are moving at some velocity Vel. From the motor equations, we know that under 0 torque, it takes volts = Vel / Kv to go that fast. So at steady state,

Vel / Kv = K (R - X)

If we are tracking perfectly, R - X = 0, and the only velocity for which that holds is 0. Therefore, we always need feed-forwards with a Delta U controller or a controller which estimates the disturbance voltage. A controller of type 1) won't have this problem, but it will have a time constant when the reference trajectory accelerates/decelerates.

Take a look at our intake code this year for an example of a controller of type 3): //y2016/control_loops/python/intake.py

You can play with it yourself if you want.
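A minimal sketch, assuming a [position, velocity] reference and a motor velocity constant Kv, of combining the control law quoted earlier (u = K (R - X) + FF(R)) with the voltage-error estimate from method 3). The function and variable names are illustrative and are not taken from the linked intake code.

```python
import numpy as np

def control_with_feedforward(K, R, x_hat, u_error_hat, Kv):
    """u = K (R - X) + FF(R) - u_error, where FF(R) = R_velocity / Kv is the
    steady-state voltage needed to sustain the reference velocity."""
    v_goal = R[1]              # reference velocity (state ordered [pos, vel])
    u_ff = v_goal / Kv         # feed-forwards voltage for that velocity
    return K @ (R - x_hat) + u_ff - u_error_hat
```

With the feed-forwards term carrying the Vel / Kv voltage, the feedback term K (R - X) only has to correct for tracking error, so R - X can actually stay near zero while moving.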