01-06-2014, 03:11
AustinSchuh
Registered User
FRC #0971 (Spartan Robotics) #254 (The Cheesy Poofs)
Team Role: Engineer
 
Join Date: Feb 2005
Rookie Year: 1999
Location: Los Altos, CA
Posts: 803
Re: 971's Control System

Quote:
Originally Posted by tStano
This sounds really cool, and it is very interesting to me, but due to the small issue of me still being in high school, I am very confused on many things. I tried Wikipedia, but it's not being a whole lot of help. I'm not attacking you for being too technical, I just would like to understand. So, here's some really dumb questions.
I'm a big fan of bringing people's knowledge up rather than dumbing down what we do. On 971, we prefer to teach students how things really work rather than simplifying things just enough that they are easy to learn.

Quote:
Originally Posted by tStano View Post
What is state control, and how does it replace PID? The wiki said that it involves a thing with a small number of states. I don't understand how "position" and its derivatives can be "states". It seems to me that each one of those could have near infinite states.
I like to think of the state as the minimum set of variables needed to describe what the plant (the thing we are controlling; the term comes from chemical plants) is doing. For something like a simple DC motor connected to a flywheel, that is the position and velocity of the flywheel. For something like our drivetrain this year, it is the distance traveled and the velocity of the left and right wheels.

Since robots work in discrete time, I like to do all my controls math in discrete time. I think that makes it easier to explain.

Let's define a simple system as follows.

x(n + 1) = a x(n) + u(n)

Let's let x(0) = 1 and u(n) = 0, and look at how the system responds.

x(1) = a x(0)
x(2) = a x(1) = a^2 x(0)
x(3) = a^3 x(0)
x(n) = a^n x(0)

We can notice something interesting here. If |a| < 1, the system converges to 0 and is stable.
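If you want to see that numerically, here's a minimal sketch in plain Python (my illustration, not code from our robot) that simulates x(n + 1) = a x(n) for a few values of a:

Code:
# Simulate x(n + 1) = a * x(n) with x(0) = 1 and u(n) = 0.
for a in [0.5, 1.0, 2.0]:
    x = 1.0
    for n in range(10):
        x = a * x
    print(f"a = {a}: x(10) = {x}")

a = 0.5 decays toward 0 (stable), a = 1.0 just sits at 1, and a = 2.0 blows up to 1024 (unstable).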

What if a = 2, but we get to choose u(n) = f(x(n))? For a CCDE (constant coefficient difference equation), LTI (defined below) means that the coefficients are constant, so let's pick the linear feedback u(n) = -k * x(n).

x(n + 1) = a x(n) - k * x(n) = (a - k) x(n)

Given our knowledge above, we can compute the set of all k's that make our system stable: we need |a - k| = |2 - k| < 1, so k is in (1, 3).
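Here's a quick numeric check of that claim, using the same made-up setup (a = 2, x(0) = 1):

Code:
# Closed loop: x(n + 1) = (a - k) * x(n) with a = 2.
a = 2.0
for k in [0.5, 1.5, 2.0, 3.5]:
    x = 1.0
    for n in range(20):
        x = (a - k) * x
    print(f"k = {k}: x(20) = {x}")

k = 1.5 and k = 2.0 converge toward 0, while k = 0.5 and k = 3.5 (outside of (1, 3)) diverge.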

Since life is always more fun when linear algebra shows up, let's let X be a vector instead of just a value, with A and B as matrices.

X(n + 1) = A * X(n) + B * U(n)

If we do the same trick as for the scalar above, we get the same result: we care that A^n decays to 0 as n -> inf. If we diagonalize the matrix (assuming A is diagonalizable), we can rewrite A^n as (P^-1 D P)^n = P^-1 D^n P. Since D is diagonal, D^n just raises each diagonal term to the nth power. This means that our system is stable if all the elements on the diagonal have a magnitude < 1. It turns out these values have a name: they are the eigenvalues of A. Therefore, we can say that if the eigenvalues of A are inside the unit circle, the system is stable.
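In code, that stability check is one line with numpy; the A matrix below is made up just to have something to check:

Code:
import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
eigenvalues = np.linalg.eigvals(A)
# Stable if and only if every eigenvalue is inside the unit circle.
print(np.all(np.abs(eigenvalues) < 1))  # True for this A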

Let's try designing a controller.

U(n) = K * (R(n) - X(n))

R(n) is our goal.

X(n + 1) = A X(n) + B K (R(n) - X(n))
X(n + 1) = (A - BK) X(n) + B K R(n)

So, we can use fun math to design a K such that the eigenvalues of A - BK are where we want them, and the system responds like we want it to. This is pretty awesome. We can finally model what our control loop is doing.
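That "fun math" goes by the name pole placement, and scipy can do it for you. A sketch, with made-up A, B, and pole locations (this isn't the exact flow of our design scripts, just the idea):

Code:
import numpy as np
from scipy.signal import place_poles

A = np.array([[1.0, 0.01],
              [0.0, 0.95]])
B = np.array([[0.0],
              [0.1]])
# Ask for the closed loop eigenvalues we want; get K back.
K = place_poles(A, B, [0.7, 0.8]).gain_matrix
print(np.linalg.eigvals(A - B @ K))  # ~[0.7, 0.8], right where we asked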

Unfortunately, as Kevin was talking about above, this assumes that we know the state of our system. X(n) is the state of our system after n timesteps.

Ah, but you say, we can determine the velocity by taking the change in position over the last cycle and dividing it by the time! Unfortunately, that computes the average velocity over the last cycle, not the current velocity. When you are moving really fast, that delay actually is a big deal. Instead, we can define another system, called an observer, whose whole job is to estimate the internal state.

Let me introduce another equation and variable. Let Y be the measurable output.

Y(n) = C X(n) + D U(n)

For most robot systems, D is 0, but I like to include it for completeness.

Let's define Xhat to be our estimate of the state, and Yhat(n) = C Xhat(n) + D U(n) to be the output that our estimate predicts.

Xhat(n + 1) = A Xhat(n) + B U(n) + L(Y(n) - Yhat(n))
Xhat(n + 1) = A Xhat(n) + B U(n) + L(Y(n) - C Xhat(n) - D U(n))

Let's try to prove that the observer converges, i.e. that the error E(n) = X(n) - Xhat(n) -> 0.

X(n + 1) - Xhat(n + 1) = A X(n) + B U(n) - (A Xhat(n) + B U(n) + L(Y(n) - C Xhat(n) - D U(n)))
E(n + 1) = A (X(n) - Xhat(n)) + L C Xhat(n) + L D U(n) - L C X(n) - L D U(n)
E(n + 1) = A (X(n) - Xhat(n)) - L C (X(n) - Xhat(n))
E(n + 1) = (A - LC) E(n)

Yay! This means that if the eigenvalues of A - LC are inside the unit circle, our observer converges on the actual internal system state. In practice, what this means is that we have another knob to tune.

You can think of this as 2 steps. Xhat(n + 1) = A Xhat(n) + B U(n) is our predict step. Notice that this uses our model, and updates the estimate given the current applied power. L (Y(n) - Yhat(n)) is our correct step. It corrects for an error between our estimate and what we just measured. If we set the eigenvalues of A - LC to be really fast (close to 0), we are trusting our measurement to be noise free, and if we set them to be slow, we are trusting our model to be accurate.
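By duality, placing the eigenvalues of A - LC is the same problem as placing the eigenvalues of A - BK on the transposed system, so the same pole placement tool works. A sketch, with made-up matrices, where we measure only the first state:

Code:
import numpy as np
from scipy.signal import place_poles

A = np.array([[1.0, 0.01],
              [0.0, 0.95]])
C = np.array([[1.0, 0.0]])
# Place the eigenvalues of A - L C via the transposed system.
L = place_poles(A.T, C.T, [0.3, 0.4]).gain_matrix.T
print(np.linalg.eigvals(A - L @ C))  # ~[0.3, 0.4]

Poles of 0.3 and 0.4 are faster than the 0.7 and 0.8 we gave the controller above, which amounts to trusting the sensor more than the model.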

If you have an nth order CCDE or an nth order differential equation, you can instead write it as n coupled first order equations. This property lets us express an arbitrarily complex system in the form shown above. I also like to design all our systems in continuous time and then use a function that I wrote to convert the nice, clean continuous time model to the discrete time version used above. Robots compute things at discrete times, so it isn't quite right to pretend that everything is continuous and use the continuous equations directly.
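scipy ships a function that does that conversion (the function in our code is our own, but the job is the same). A sketch on a made-up continuous model, a double integrator where position integrates velocity:

Code:
import numpy as np
from scipy.signal import cont2discrete

A_c = np.array([[0.0, 1.0],
                [0.0, 0.0]])
B_c = np.array([[0.0],
                [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
dt = 0.01  # 100 Hz control loop

# Zero-order hold: the applied input is held constant between timesteps.
A_d, B_d, C_d, D_d, _ = cont2discrete((A_c, B_c, C, D), dt, method='zoh')
print(A_d)  # [[1.  0.01], [0.  1.]]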

Quote:
Originally Posted by tStano
Does your assumption of an ideal motor cause problems? Also, can I get an explanation of that math in more layman's terms? I haven't taken calc yet, but I have a basic idea of what integrals and derivatives are.

Thank you so much! Or if it's too much trouble, let me know, and I'll do some more research.
You are going to have to do a bit more research, since it takes a couple of college courses to cover all of this stuff properly, but I'll happily give you enough info to help the information you find out there make more sense. Controls uses a lot of calculus.

For the most part, it doesn't cause many problems. Most of the non-ideal behavior of a motor in FRC is no longer worth worrying about. With the old Victors, 50% power would result in close to full speed, and that caused me all sorts of problems.

Controls guys love to throw around the term Linear Time Invariant (LTI). LTI means that the system is "linear" and "time invariant". Time invariant is easy to understand: if I apply u(t) to my system starting at t=1, I get the same response as if I apply u(t) starting at t=2, assuming the system is in the same initial state at the start time. Linear means that the following holds: F(2 * u(t)) = 2 * F(u(t)), and F(u1(t) + u2(t)) = F(u1(t)) + F(u2(t)). In practice, this means that your system can be defined as a set of linear ordinary differential equations.

When I design a control system, I try very, very hard to avoid doing non-LTI things. I don't like doing things like setting U(t) = x^2, since that isn't linear. (For fun, use my definitions above to show that it isn't LTI.) I also don't like dealing with integral windup, because the solutions to it aren't LTI and are therefore very hard to model properly.
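If you want to try that "for fun" exercise numerically, here's a sketch (made-up stable a): with u(n) = x(n)^2 in the loop, doubling the initial condition does not double the response, which it would for a linear system:

Code:
# x(n + 1) = a * x(n) + u(n) with the nonlinear u(n) = x(n)**2.
def response(x0, a=0.5, steps=5):
    x = x0
    for n in range(steps):
        x = a * x + x**2
    return x

print(2 * response(0.1))  # these two would match for a linear system...
print(response(2 * 0.1))  # ...but they don't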

For a DC motor, I like to use the following definition. My Kt and Kv may be inverted from what people typically use, but the math still works. You can think of a DC motor as a generator in series with a resistor. The torque is generated by the magnetic field and is proportional to the current through the coils. The BEMF (back-EMF) voltage is the voltage that the generator part is generating, which is proportional to the angular velocity. Let's put some math to this.

V is the voltage applied to the motor. I is the current through the motor. w is the angular velocity of the motor. J is the moment of inertia of the motor and its load. dw/dt is the derivative (rate of change) of the angular velocity, which is the angular acceleration of the motor.

V = I * R + Kv * w
torque = Kt * I

V = (torque / Kt) * R + Kv * w
torque = J * dw/dt

dw/dt = (V - Kv * w) * Kt / (R * J)

You probably won't recognize this yet, but it is a classic exponential decay. If we model the system as only having velocity and ignore position, it has 1 derivative and is therefore a first order system. This means that it has 1 state.
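A sketch of that response, integrating the equation above under a step input with made-up constants (not numbers from a real FRC motor):

Code:
# Euler-integrate dw/dt = (V - Kv * w) * Kt / (R * J) under a step input.
Kt, Kv, R, J = 0.02, 0.02, 0.1, 0.001  # made-up motor constants
V = 12.0   # step to full battery voltage
w = 0.0    # start at rest
dt = 0.001
for step in range(5000):
    w += (V - Kv * w) * Kt / (R * J) * dt
print(w)  # approaches the free speed V / Kv = 600 rad/s

The velocity rises and then levels off exponentially at V / Kv, which is exactly the first order behavior described above.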

Depending on how critical the response is, we'll either pull the mass/moment of inertia out of CAD (or guess), or we'll command the real robot to follow a step input (t > 0 ? 12.0 : 0.0) and record the response. We then pass that same input into our model, plot the model's output next to the real robot's response, and tweak the coefficients until the two match pretty well.
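As a sketch of that workflow (the log file name and format here are hypothetical):

Code:
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical log: one row per cycle, columns are time and velocity.
log = np.loadtxt("step_response_log.csv", delimiter=",")
t, measured_w = log[:, 0], log[:, 1]

Kt, Kv, R, J = 0.02, 0.02, 0.1, 0.001  # tweak until the curves line up
dt = t[1] - t[0]
w, model_w = 0.0, []
for ti in t:
    V = 12.0 if ti > 0 else 0.0  # the same step input the robot saw
    w += (V - Kv * w) * Kt / (R * J) * dt
    model_w.append(w)

plt.plot(t, measured_w, label="robot")
plt.plot(t, model_w, label="model")
plt.legend()
plt.show()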

I like to bring up a new system by limiting the applied power to something that I can overpower with my hands (+/- 2 volts was our claw limit this year), and then holding on and feeling the response. This makes it easy to check for sign errors, and to feel whether the response is smooth and stiff or jittery, without the robot thrashing or crashing itself. After it passes that check, I'll let it run free and slowly give it more and more power. Most of 971's systems can crash themselves faster than we could disable them, and even if we could disable one, it would still coast into the limit pretty hard.

I'd recommend that you pull down our 2013 code (or 254's code) and look at the Python used to design the control loops. Take a look at the indexer loop from 971's code from last year as a simple state feedback example. Play with it and learn how it works. Then try modeling something that was on your robot this year, and stabilize it.