02-06-2014, 16:12
NotInControl
Controls Engineer
AKA: Kevin
FRC #2168 (Aluminum Falcons)
Team Role: Engineer
 
Join Date: Oct 2011
Rookie Year: 2004
Location: Groton, CT
Posts: 261
Re: 971's Control System

Quote:
Originally Posted by tStano View Post
This sounds really cool, and it is very interesting to me, but due to the small issue of me still being in high school, I am very confused on many things. I tried wikipedia, but its not being of a whole lot of help. I'm not attacking you for being too technical, I just would like to understand. So, heres some really dumb questions.
I’ll try to help explain some of the underlying principles of control systems so that you can better understand the conversation and the research you dig up. I spent last year teaching undergraduate feedback controls at NYU, so being in high school doesn’t mean you have any less ability to learn this stuff.

Quote:
Originally Posted by tStano View Post
What is state control, and how does it replace PID? The wiki said that it involves a thing with a small number of states. I don't understand how "position" and its derivatives can be "states". It seems to me that each one of those could have near infinite states.
Before we jump into this stuff, let’s get some terminology down. It helps to understand what we are talking about and why. When a control engineer sets out to design a controller for a new system, the first thing they do is create a model of the system. Just as mechanical designers create CAD models of physical items to have a virtual representation they can test and manipulate, control guys do something very similar. It would be very expensive and time consuming to design a controller by testing it repeatedly on the actual hardware, and sometimes the hardware is not built or ready yet. The models we create are called dynamic models, and they use the laws of physics to mathematically represent the system we are trying to control. We refer to this model as the “plant” model. The plant is the system we are trying to control. It can be a DC motor and gearbox, a robotic manipulator, a circuit, or, in the medical field, even the reaction time of a pill. The model accounts for as much of the dynamics of the real physical system as possible, and we can use it to test various types of control algorithms before we decide on the best design.

For robotics, we are mainly interested in modeling and controlling dynamical systems. The term dynamical can be taken to mean a system which does not change instantaneously when acted on. For example, a ball (the system) when kicked (the action) doesn’t instantaneously move to the new location; it takes some time to move. The time it takes, how it moves (through the air, rolling on the ground, etc.), and the distance it moves are all considered the dynamics.

When we mathematically model a “plant” to create the “plant model”, we mathematically describe the motion of that system. Let’s work with an example to bring these ideas to life. Say we want to control the position of a projectile. A simple model would be projectile motion in two dimensions. If you remember from physics, the x and y positions of a projectile in two dimensions can be described as (assuming acceleration is constant):

x_final = x_initial + v_initial*t*cos(theta)
y_final = y_initial + v_initial*t*sin(theta) - (1/2)*g*t^2

Where theta is the launch angle, x_initial is the initial x displacement, y_initial is the initial y displacement, and v_initial is the initial velocity.

These equations tell you everything about the motion of the projectile (neglecting air resistance). If you know the starting x, y, theta, and velocity at time t = 0, you can determine the position of the projectile for any future time t > 0.

In order to solve the above, we need to solve both equations simultaneously. Together they are called a system of equations, and all dynamical systems can be modeled using a set of equations like this.
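To make this concrete, here is a minimal sketch in Python (all the numbers are made up for illustration) that plugs one set of initial conditions into those two equations and predicts the position at any later time:

Code:
import math

g = 9.8                   # gravitational acceleration, m/s^2
x0, y0 = 0.0, 0.0         # initial position, m (illustrative)
v0 = 10.0                 # initial speed, m/s (illustrative)
theta = math.radians(45)  # launch angle

def projectile_position(t):
    """Position of the projectile at time t (air resistance neglected)."""
    x = x0 + v0 * t * math.cos(theta)
    y = y0 + v0 * t * math.sin(theta) - 0.5 * g * t**2
    return x, y

# Knowing the state at t = 0 lets us predict any future time t > 0:
for t in (0.0, 0.5, 1.0):
    print(t, projectile_position(t))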

This leads us to the definition of “state” in control systems. The “state” of a dynamical system is the collection of all variables that completely characterize the motion of the system, enough to predict its future motion. This is very important in control systems because it means that once you know the state of a system, you can completely describe its behavior, and if you want to control a system, understanding how it behaves is a must. For every model there is a set of variables which, once you know all of them, lets you completely mathematically predict the behavior of the system. These variables fall into various categories: inputs (which you use to drive the system), initial conditions (which determine the resting state of the system at t = 0), and outputs (the response of the system based on the system dynamics).

From our example, the position of the ball (x and y location) and the velocity of the ball are all that we need to plug into the dynamic equations to solve them. In this example, position and velocity are called the state variables, because that’s all we need. (Theta is part of velocity, because velocity is a vector which has both magnitude and direction: the magnitude is the speed in m/s, and the direction is theta.)

For our example, you can assume velocity is an input, the initial x and y positions are initial conditions, and the final x and y locations are the outputs.

Depending on what you are trying to control or solve, the state variables can be different.

When we say “state space”, we mean the set of all possible states for that system. And when we use the term “state space representation”, we are saying: let’s re-write the dynamics model in a special form, as a set of inputs, outputs, and state variables.

A common class of dynamical system models are ordinary differential equations (ODEs). In simplest terms, a differential equation is any equation which contains the derivative of its own unknown. For example, x_dot = -a*x is a differential equation: it relates x to its own derivative. The term ordinary means it contains only one independent variable (time, for example). (This is as opposed to partial differential equations, which can contain more than one independent variable.)

You probably are already using ODEs in physics; you just don’t know it. If you remember from physics, the derivative of position is velocity, and the derivative of velocity is acceleration.

If we call position x, then velocity is the 1st derivative of x, and we can re-write velocity as x_dot (where the dot means 1st derivative and is a shorthand notation to make writing faster). Similarly, the 1st derivative of velocity is acceleration, and we can write that as v_dot. But we can go one step further. Acceleration is the 1st derivative of velocity, which in turn is just the 1st derivative of position. This means acceleration is the 2nd derivative of position, and we can write it as x_dotdot (where double dot means 2nd derivative; you can keep going with 3 dots, 4 dots, etc. to mean the 3rd, 4th derivative and so on if you need to).

Using this information, we can re-write the equations of projectile motion in this notation. With g = 9.8 m/s^2:

x_final = x_initial + x_initial_dot*t*cos(theta)
y_final = y_initial + x_initial_dot*t*sin(theta) - 4.9*t^2


So as you can see, we use ODEs to model dynamical systems. The state represents everything we need to know in order to predict how that system will behave. Now we write these equations in a special form called “state space representation”, and all this does is make the equations easier to work with when solving them simultaneously. Putting them in state space representation allows us to re-write them in matrix form. There is a branch of math called linear algebra which introduces matrices, and matrices make solving systems of equations, where several equations must be solved simultaneously, much easier. That’s why control guys use it. Austin did a good job explaining how to do this (by writing each equation as a 1st order ODE and then putting it in matrix form).
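To give you a feel for what that looks like, here is a rough sketch (a toy 1-D double integrator, not Austin’s exact formulation): pick the state vector as [position, velocity], so position_dot = velocity and velocity_dot = u (the input acceleration), which gives x_dot = A*x + B*u in matrix form:

Code:
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # dynamics matrix: position_dot = velocity
B = np.array([[0.0],
              [1.0]])        # input matrix: velocity_dot = u

x = np.array([[0.0],         # initial position
              [10.0]])       # initial velocity
u = np.array([[-9.8]])       # input: gravity as a constant acceleration

dt = 0.01
for _ in range(100):         # simulate 1 second with Euler integration
    x = x + (A @ x + B @ u) * dt

print("position, velocity after 1 s:", x.ravel())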

Once in state space form, your model is done and you can start using it to test different types of control algorithms. There is a special class of dynamic models called LTI: linear time-invariant models. Most systems can be approximated by making assumptions that let you model them as LTI systems. For example, to get the projectile motion equations above, we assumed acceleration is constant and ignored air drag. There are always assumptions you can make to simplify the model. The reason we like to simplify the model is that it makes the math easier to solve and we can get to a solution much faster. The downside is that as you make these assumptions, your mathematical model no longer truly represents the physical system. There is a happy medium where you make a “good enough” model: you make the right assumptions to ease calculations, but don’t go overboard and fail to capture the important dynamics of the system. That is the true art of dynamic system modeling.

The term LTI really just means this: if I have a system and just look at it from an input/output perspective (I put a signal in and observe the signal out), the system must have two properties:

First, the input/output relationship must not depend on time (time invariance). This means if I put a signal into a system and get one output, I must be able to come back tomorrow, the next day, or 10 years from now, put that exact signal in, and get the exact same output. If at some other time the same input produces a different output, then the system is not time invariant.

Second, if I have two different inputs, A and B, and add them together to create a new input C that is the sum of the two, then the output of the system for this new signal C must be the sum of the individual outputs of A and B had I run those signals through the system independently. This is called the superposition property.
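Here is a toy illustration of the superposition check in Python, with two made-up systems, one linear and one not:

Code:
def system_linear(u):
    return 2.0 * u       # scales its input: superposition holds

def system_nonlinear(u):
    return u ** 2        # squares its input: superposition fails

a, b = 3.0, 4.0
for system in (system_linear, system_nonlinear):
    lhs = system(a + b)              # response to the summed input C = A + B
    rhs = system(a) + system(b)      # sum of the individual responses
    print(system.__name__, "superposition holds:", lhs == rhs)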

If a system has these two properties, it is called an LTI system, and a world of cool techniques can be used to design controls for it. If not, then another, harder, more unforgiving world of non-linear techniques needs to be used. So most of the time we try to make the model LTI, and try to understand where the real system doesn’t fit the LTI model, because it makes life easier.


This brings me to control systems as a whole. The control system is another system that one uses to tame the plant and get it to operate the way one would like, automatically. Basically, we create a system that understands the behavior of our plant, and we use this system to provide the proper input to our plant to make it do what we want: having a DC motor maintain a constant speed, for example. The “control system” itself is just another algorithm, but this time we design its dynamics so that it takes an input and uses its output to drive the plant where we want it. The controller is a particular algorithm with parameters we can tweak to change its behavior. When we tweak these parameters, that’s when we say we are “tuning the gains” of the controller. As we modify these parameters, we are changing the controller’s dynamics to control our plant the way we want. For state space you tune the parameters using pole placement; for PID the parameters you tune are the proportional, integral, and derivative gains. Other controllers have other parameters you tune to modify the control system dynamics and get the proper response out of the plant.
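As a sketch of what tuning by pole placement can look like in practice, here is SciPy’s place_poles run on the toy double integrator from earlier (the desired pole locations are made up for illustration):

Code:
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Choosing where the closed-loop poles go is the "tuning": further left
# in the complex plane means a faster (but more demanding) response.
desired_poles = [-2.0, -3.0]
K = place_poles(A, B, desired_poles).gain_matrix

print("gains K =", K)
# With u = -K x the closed-loop dynamics are A - B K; check the poles:
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))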

There are many different types of control algorithms, and each has its pros and cons. A few control algorithms are PID, state feedback, LQR, lead-lag, H_infinity, MPC, bang-bang, take-back-half… the list goes on. Each of these has different ways of changing its dynamics, so that its output can be used to drive the plant in a known way and help you, the control engineer, achieve the desired effect. Most controllers (let’s call them control laws) fall into two categories: state feedback and output feedback.

State feedback is when you try to measure all of the states of the plant model (position, velocity, etc.), provide them to the controller, and use that information to create a state feedback controller.

Output feedback is when you measure only the output of the plant, for example the speed of a DC motor using a tachometer, or the position of an arm using a pot, and feed that back to the controller to create an output feedback controller.

Each of these methods has pros and cons. For example, with state space you may not always be able to measure all of the states, but as we said before, you need all of the states to determine the motion of the system, so sometimes you need to build an estimator, which estimates a particular state. With PID, for example, you need to accumulate the integral every time step, which introduces the integral windup problem, and the derivative term is very susceptible to noise. So if your sensor has noise in it, like a jumpy reading, then the control output will jump around too due to the derivative of that noise, and that is not good at all. All of these problems have solutions for both controllers; they just need to be addressed.
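To make those two PID problems concrete, here is a minimal PID sketch (the gains and limits are illustrative, not anyone’s production values) with one common fix for each: clamping the integral against windup, and low-pass filtering the derivative so sensor noise doesn’t jump straight through to the output:

Code:
class PID:
    def __init__(self, kp, ki, kd, dt, i_limit=1.0, d_alpha=0.9):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i_limit = i_limit    # anti-windup clamp on the integral
        self.d_alpha = d_alpha    # derivative low-pass filter constant
        self.integral = 0.0
        self.prev_error = 0.0
        self.d_filtered = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        # Integral term, clamped so it can't wind up without bound:
        self.integral += error * self.dt
        self.integral = max(-self.i_limit, min(self.i_limit, self.integral))
        # Derivative term, low-pass filtered to knock down sensor noise:
        d_raw = (error - self.prev_error) / self.dt
        self.d_filtered = (self.d_alpha * self.d_filtered
                           + (1.0 - self.d_alpha) * d_raw)
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * self.d_filtered)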

In addition, some of them scale very well to non-linear systems and others don’t. The state space controller presented is one which doesn’t extend well to non-linear systems; PID is an algorithm that extends a bit more easily. It is safe to say that all physical systems are non-linear, but you are trying to keep the system behaving in a linear, controllable way. Sometimes this is not possible and you need to venture out into the non-linear control world. I would tend to agree with Austin, however: for robotics, you probably never have to worry about the non-linear stuff and can keep all controllers linear, with linear plant models, and achieve the desired results.

This leads to the type of control you want to achieve; you heard me throw around the term "regulator" in a previous post. There are two major types of systems you may want to build: a regulator, which is a system where you just want to push the plant to another state, and a tracker, where you want the output to track the input. An example of a regulator is position control of a motor: if the motor is at rest and you want it to go to another position and stop, you are trying to push it to another state. An example of a tracker is speed control of a motor: you want the motor to track the input speed you are providing.
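A sketch of a regulator along those lines, reusing the toy double integrator and the pole-placement gains from earlier to push the state from rest to a goal position and hold it there:

Code:
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = place_poles(A, B, [-2.0, -3.0]).gain_matrix

x = np.array([[0.0], [0.0]])       # start at rest at position 0
x_goal = np.array([[1.0], [0.0]])  # goal: position 1, velocity 0

dt = 0.01
for _ in range(500):               # simulate 5 seconds
    u = -K @ (x - x_goal)          # state feedback drives the error to zero
    x = x + (A @ x + B @ u) * dt

print("final state:", x.ravel())   # should end up very close to the goal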

After you design the controller and simulate the input/output behavior, you extract the controller out of your model and implement it in software or hardware to drive your physical system. If your model was good enough, then the controller you designed in the simulation world should behave very similarly in the real world, with maybe minor tweaks. Typically, when you start programming the controller, it takes the form of a mathematical algorithm. You can convert it into a difference equation (like a differential equation, but with time discrete instead of continuous, because electronics all work in discrete time) and program that into software. Certain controllers you can implement in hardware: PID is famous because it is easy to understand, easy to write in software, and you can even build a hardware implementation using RC circuits. Most other controllers, just implement in an FPGA and you’re good to go!
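For example, a continuous integral term u_dot = ki*e becomes, with sample period dt and discrete time steps k = 0, 1, 2, ..., the difference equation u[k] = u[k-1] + ki*e[k]*dt, which is exactly the kind of update you run once per loop in robot code. A minimal sketch (the gain and loop period are made up):

Code:
ki, dt = 2.0, 0.02       # illustrative gain and 20 ms loop period

def make_integrator():
    u = 0.0              # u[k-1], the previous output
    def step(error):
        nonlocal u
        u = u + ki * error * dt   # u[k] = u[k-1] + ki*e[k]*dt
        return u
    return step

controller = make_integrator()
print(controller(1.0), controller(1.0))  # 0.04, then 0.08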

Hope this helps,
Kevin
__________________
Controls Engineer, Team 2168 - The Aluminum Falcons
[2016 Season] - World Championship Controls Award, District Controls Award, 3rd BlueBanner
-World Championship- #45 seed in Quals, World Championship Innovation in Controls Award - Curie
-NE Championship- #26 seed in Quals, winner(195,125,2168)
[2015 Season] - NE Championship Controls Award, 2nd Blue Banner
-NE Championship- #26 seed in Quals, NE Championship Innovation in Controls Award
-MA District Event- #17 seed in Quals, Winner(2168,3718,3146)
[2014 Season] - NE Championship Controls Award & Semi-finalists, District Controls Award, Creativity Award, & Finalists
-NE Championship- #36 seed in Quals, SemiFinalist(228,2168,3525), NE Championship Innovation in Controls Award
-RI District Event- #7 seed in Quals, Finalist(1519,2168,5163), Innovation in Controls Award
-Groton District Event- #9 seed in Quals, QuarterFinalist(2168, 125, 5112), Creativity Award
[2013 Season] - WPI Regional Winner - 1st Blue Banner

Last edited by NotInControl : 03-06-2014 at 14:14. Reason: clarified some of the sentences