#16
Re: 971's Control System
Quote:
Since robots work in discrete time, I like to do all my controls math in discrete time. I think that makes it easier to explain. Let's define a simple system as follows:

x(n + 1) = a x(n) + u(n)

Let's let x(0) = 1 and u(n) = 0, and look at how the system responds.

x(1) = a x(0)
x(2) = a x(1) = a^2 x(0)
x(3) = a^3 x(0)
x(n) = a^n x(0)

We can notice something interesting here: if |a| < 1, the system converges to 0 and is stable. What if a = 2, and we can define u(n) = f(x(n))? For a CCDE (constant coefficient difference equation), LTI (defined below) means that the coefficients are constant. Let's let u(n) = -k x(n):

x(n + 1) = a x(n) - k x(n) = (a - k) x(n)

Given our knowledge above, we can compute the set of all k's that make our system stable: k is in (1, 3).

Since life is always more fun when linear algebra shows up, let's let X be a vector instead of just a value:

X(n + 1) = A X(n) + B U(n)

If we do the same trick as for the scalar above, we get the same result: we care that A^n decays to 0 as n -> inf. If we diagonalize the matrix as A = P^-1 D P, then A^n = (P^-1 D P)^n = P^-1 D^n P. Since D is diagonal, D^n is just the diagonal terms raised to the nth power. This means that our system is stable if all the elements on the diagonal have a magnitude < 1. Turns out these values have a name: they are the eigenvalues of A. Therefore, we can say that if the eigenvalues of A are inside the unit circle, the system is stable.

Let's try designing a controller. Let U(n) = K (R(n) - X(n)), where R(n) is our goal.

X(n + 1) = A X(n) + B K (R(n) - X(n))
X(n + 1) = (A - BK) X(n) + B K R(n)

So, we can use fun math to design a K such that the eigenvalues of A - BK are where we want them, and the system responds like we want it to. This is pretty awesome: we can finally model what our control loop is doing. Unfortunately, as Kevin was talking about above, this assumes that we know the state of our system. X(n) is the state of our system at t = n timesteps.
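As a quick sketch of the scalar example (the function and numbers here are mine, not from 971's actual code), you can simulate the plant and the feedback law in a few lines of Python:

```python
# Simulate x(n+1) = a*x(n) + u(n) with state feedback u(n) = -k*x(n),
# i.e. the closed loop x(n+1) = (a - k)*x(n) from the post above.
def simulate(a, k, x0=1.0, steps=50):
    """Iterate the closed-loop system and return the final state."""
    x = x0
    for _ in range(steps):
        u = -k * x       # state feedback
        x = a * x + u    # plant update
    return x

open_loop = simulate(a=2.0, k=0.0, steps=10)   # no feedback: doubles each step
closed_loop = simulate(a=2.0, k=1.5)           # |a - k| = 0.5 < 1: converges
```

With k = 1.5, a - k = 0.5, so the closed loop decays geometrically even though the open-loop plant doubles every step.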
Ah, but you say, we can determine the velocity by taking the change in position over the last cycle and dividing by the time! Unfortunately, that computes the average velocity over the last cycle, not the current velocity. When you are moving really fast, that delay is a big deal. We can define another controller whose whole job is to estimate the internal state.

Let me introduce another equation and variable. Let Y be the measurable output:

Y(n) = C X(n) + D U(n)

For most robot systems, D is 0, but I like to include it for completeness. Let's define Xhat to be our estimate of the state:

Xhat(n + 1) = A Xhat(n) + B U(n) + L (Y(n) - Yhat(n))
Xhat(n + 1) = A Xhat(n) + B U(n) + L (Y(n) - C Xhat(n) - D U(n))

Let's try to prove that the observer converges, i.e. that the error E(n) = X(n) - Xhat(n) goes to 0:

X(n + 1) - Xhat(n + 1) = A X(n) + B U(n) - (A Xhat(n) + B U(n) + L (Y(n) - C Xhat(n) - D U(n)))
E(n + 1) = A (X(n) - Xhat(n)) + L C Xhat(n) + L D U(n) - L C X(n) - L D U(n)
E(n + 1) = A (X(n) - Xhat(n)) - L C (X(n) - Xhat(n))
E(n + 1) = (A - LC) E(n)

Yay! This means that if the eigenvalues of A - LC are inside the unit circle, our observer converges on the actual internal system state. In practice, what this means is that we have another knob to tune.

You can think of this as 2 steps. Xhat(n + 1) = A Xhat(n) + B U(n) is our predict step. Notice that this uses our model, and updates the estimate given the current applied power. L (Y(n) - Yhat(n)) is our correct step. It corrects for an error between our estimate and what we just measured. If we set the eigenvalues of A - LC to be really fast (close to 0), we are trusting our measurement to be noise free, and if we set them to be slow, we are trusting our model to be accurate.

If you have an nth-order CCDE, or nth-order differential equation, you can instead write it as n coupled first-order equations. This property lets us express an arbitrarily complex system in the form shown above.
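A scalar sketch of the predict/correct loop (all names and numbers here are invented for illustration):

```python
# Scalar observer: plant x(n+1) = a*x(n) + b*u(n), measurement y(n) = c*x(n).
a, b, c = 0.9, 1.0, 1.0
L_gain = 0.5                  # observer gain; error dynamics pole is a - L*c = 0.4

x, xhat = 1.0, 0.0            # true state and estimate start out different
for n in range(40):
    u = 0.0                   # no input; just watch the estimate converge
    y = c * x                 # what the sensor reports
    yhat = c * xhat
    xhat = a * xhat + b * u + L_gain * (y - yhat)   # predict + correct
    x = a * x + b * u                               # true plant update
error = x - xhat
```

The estimation error here decays as (a - L*c)^n = 0.4^n per step, exactly the E(n + 1) = (A - LC) E(n) result above.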
I also like to design all our systems in continuous time and then use a function that I wrote to convert the nice clean continuous time model to the discrete time version used above. Robots compute things at discrete times, and it isn't quite right to pretend that everything is continuous and use the continuous equations directly. Quote:
For the most part, it doesn't cause many problems. Most of the non-ideal behavior of a motor in FRC is no longer worth worrying about. With the old Victors, 50% power would result in close to full speed, and that caused me all sorts of problems.

Controls guys love to throw around the term Linear Time Invariant (LTI). LTI means that the system is "linear" and "time invariant". Time invariant is easy to understand: if I apply u(t) to my system starting at t = 1, I get the same response as if I apply u(t) starting at t = 2, assuming the system is in the same initial state at the start time. Linear means that the following holds: F(2 u(t)) = 2 F(u(t)), and F(u1(t) + u2(t)) = F(u1(t)) + F(u2(t)). In practice, this means that your system can be defined as a set of linear ordinary differential equations. When I design a control system, I try very, very hard to avoid doing non-LTI things. I don't like doing things like writing U(t) = x^2, since that isn't linear. (For fun, use my definitions above to show that that isn't LTI.) I don't like dealing with integral windup, since the solutions aren't LTI and are therefore very hard to model properly.

For a DC motor, I like to use the following definition. My Kt and Kv may be inverted from what people typically use, but the math still works. You can think of a DC motor as a generator in series with a resistor. The torque is generated by the magnetic field in the generator, which is proportional to the current through the coils. The BEMF voltage is the voltage that the generator is generating. Let's put some math to this. V is the voltage applied to the motor, I is the current through the motor, w is the angular velocity of the motor, J is the moment of inertia of the motor and its load, and dw/dt is the derivative (rate of change) of the angular velocity, which is the angular acceleration of the motor.
V = I R + Kv w
torque = Kt I
V = (torque / Kt) R + Kv w
torque = J dw/dt
dw/dt = (V - Kv w) Kt / (R J)

You probably won't recognize this, but this is a classic exponential decay. If we are modeling the system as only having velocity and ignoring position, it has 1 derivative and is therefore a first-order system. This means that it has 1 state.

Depending on how critical the response is, we'll either pull the mass/moment of inertia out of CAD (or guess), or we'll command the real robot to follow a step input (t > 0 ? 12.0 : 0.0) and record the response. We then pass that same input into our model, plot the model's response next to the real robot's response, and tweak the coefficients until the two match pretty well.

I like to bring up a new system by limiting the applied power to something that I can overpower with my hands (+-2 volts was our claw limit this year), and then I hold on and feel the response. This makes it easy to check for sign errors, and to feel whether the response is smooth and stiff or jittery, without the robot thrashing or crashing itself. After it passes that check, I'll let it run free and slowly give it more and more power. Most of 971's systems can crash themselves faster than we could disable them, and even if we could disable one, it would still coast into the limit pretty hard.

I'd recommend that you pull down our 2013 code (or 254's code) and look at the Python used to design the control loops. The indexer code from 971's robot last year is a good simple state feedback loop to start with. Play with it and learn how it works. Then, try modeling something that was on your robot this year, and stabilize that.
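As a rough sanity check of the motor model, here is a minimal Euler integration of dw/dt = (V - Kv w) Kt / (R J); the constants are made up for illustration, not measured from any real motor:

```python
# Euler-integrate the first-order motor velocity model from the post above.
# Kv, Kt, R, J are hypothetical values, not real motor constants.
Kv, Kt, R, J = 0.01, 0.02, 0.1, 0.005
dt = 0.001                      # 1 ms time step

def step_response(V, t_end):
    """Angular velocity after applying a constant voltage V for t_end seconds."""
    w = 0.0
    for _ in range(int(t_end / dt)):
        w += dt * (V - Kv * w) * Kt / (R * J)   # dw/dt from the model
    return w

# Time constant is R*J/(Kv*Kt) = 2.5 s; after 25 s we are essentially settled.
w_final = step_response(12.0, 25.0)             # steady state is V/Kv = 1200
```

The exponential decay toward the steady-state speed V/Kv is exactly the "classic exponential decay" shape described above.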
#17
Re: 971's Control System
Quote:
For Robotics, we are mainly interested in modeling and controlling dynamical systems. The term dynamical can be taken to mean a system which does not change instantaneously when acted on. For example, a ball (the system) when kicked (the action) doesn't instantaneously move to the new location; it takes some time to move. The time it takes, how it moves (through the air, rolling on the ground, etc.), and the distance it moves are all considered the dynamics. When we mathematically model a "plant" to create the "plant model", we mathematically describe the motion of that system.

Let's work with an example to bring these ideas to life. Say we want to control the position of a projectile. A simple model would be that of a projectile in two dimensions. If you remember from physics, the x and y position of a projectile in 2 dimensions can be described as (assuming acceleration is constant):

x_final = x_initial + v_initial*t*cos(theta)
y_final = y_initial + v_initial*t*sin(theta) - (1/2)*g*t^2

where theta is the launch angle, x_initial is the initial x displacement, y_initial is the initial y displacement, and v_initial is the initial velocity. These equations tell you everything about the motion of the projectile (neglecting air resistance). If you know the starting x, y, theta, and velocity at time t = 0, you can determine the position of the projectile for any future time t > 0. To do that, we need to solve both of the equations simultaneously. The above is called a system of equations, and all dynamical systems can be modeled using a set of equations like this. This leads us to the definition of "state" in control systems: the "state" of a dynamical system is the collection of all variables that completely characterize the motion of the system, allowing us to predict future motion.
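As a quick concrete check (the function and launch numbers are my own, purely for illustration), the projectile equations evaluate like this in Python:

```python
import math

G = 9.8  # m/s^2, gravitational acceleration

def projectile(x0, y0, v0, theta, t):
    """Position at time t under constant gravitational acceleration."""
    x = x0 + v0 * t * math.cos(theta)
    y = y0 + v0 * t * math.sin(theta) - 0.5 * G * t**2
    return x, y

# Launched from the origin at 10 m/s, 45 degrees, evaluated after 1 second:
x, y = projectile(0.0, 0.0, 10.0, math.pi / 4, 1.0)
```

Given the four initial values (starting position, speed, and angle), the position at any later t comes straight out of the model, which is exactly the "state" idea above.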
This is very important in control systems because what it means is: once you know the state of a system, you can completely describe all of its behavior, and if you want to control a system, understanding how it behaves is a must. For every model there is a set of variables which, once you know all of them, lets you completely mathematically predict the behavior of the system. These variables fall into various categories: inputs (which you use to drive the system), initial conditions (which determine the resting state of the system at t = 0), and outputs (the response of the system based on the system dynamics). From our example, the position of the ball (x and y location) and the velocity of the ball are all that we need to plug into the dynamic equations to solve them. Position and velocity are called the state variables, because that's all we need. (Theta is part of velocity, because velocity is a vector which has both magnitude and direction: the magnitude is the speed in m/s, and the direction is theta.) For our example, you can treat velocity as an input, the initial x and y positions as initial conditions, and the final x and y locations as the outputs. Depending on what you are trying to control or solve, the state variables can be different.

When we say "state space", that means the set of all possible states of the system. And when we use the term "state space representation", that is saying: let's re-write the dynamics model in a special form as a set of inputs, outputs, and state variables. A common class of dynamical system models are ordinary differential equations (ODEs). In simplest terms, a differential equation is any equation which contains its own derivative. The term ordinary means it only contains one independent variable (time, for example). (This is as opposed to partial differential equations, which can contain more than one independent variable.) You are probably already using ODEs in physics; you just don't know it.
If you remember from physics, the derivative of position is velocity, and the derivative of velocity is acceleration. If we call position x, then velocity is the 1st derivative of x, and we can write velocity as x_dot (where the dot means 1st derivative, and is shorthand to make writing faster). Similarly, the first derivative of velocity is acceleration, and we can write acceleration as v_dot. But we can go one step further. Acceleration is the first derivative of velocity, which in turn is the first derivative of position; this means acceleration is the second derivative of position, and we can write it as x_dotdot. (Double dot means second derivative. You can keep going with 3 dots, 4 dots, etc. to mean 3rd, 4th derivative and so on if you need to.)

Using this, we can re-write the equations of projectile motion as ODEs. With g = 9.8 m/s^2, so (1/2)g = 4.9:

x_final = x_initial + v_initial*t*cos(theta)
y_final = y_initial + v_initial*t*sin(theta) - 4.9t^2

So as you can see, we use ODEs to model dynamical systems. The state represents everything we need to know in order to predict how that system will behave, and we write these equations in a special form called "state space representation"; all this does is make the equations easier to work with when solving them simultaneously. Putting them in state space representation lets us re-write the equations in matrix form. There is a branch of math called linear algebra which introduces matrices, and matrices make solving systems of equations, where equations must be solved simultaneously, much easier. That's why controls guys use it. Austin did a good job explaining how to do this (by writing each equation as a 1st-order ODE and then putting it in matrix form). Once in state space form, your model is done and you can start using it to test different types of control algorithms. There is a special case of dynamic models called LTI.
This means linear time-invariant models. Most systems can be approximated as LTI by making assumptions. For example, to get the projectile motion equations above, we assumed acceleration is constant and ignored air drag. There are always assumptions you can make to simplify the model. The reason we like to simplify the model is that it makes the math easier to solve, so we can get to a solution much faster. The downside is that as you make these assumptions, your mathematical model no longer truly represents the physical system. There is a happy medium where you make a "good enough" model: the right assumptions to ease calculation, without going overboard and failing to capture the important dynamics of the system. That is the true art of dynamic system modeling.

The term LTI really just means this. Look at a system from an input/output perspective (I put a signal in and observe the signal out). The system must have two properties. First, the input/output relationship must not depend on time (time invariance): if I put a signal into a system and get one output, I must be able to come back tomorrow, the next day, or 10 years from now, put that exact signal in, and get the exact same output. If at some other time the same input ever produces a different output, the system is not time invariant. Second, if I have two different inputs, A and B, and add them together to create a new input C, then the output of the system for C must be the sum of the individual outputs for A and B had I run those signals through the system independently. This is called the superposition property. If a system has these two properties, it is called an LTI system, and a world of cool techniques can be used to design controls for it. If not, then a harder, more unforgiving world of non-linear techniques must be used.
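A toy numeric check of the superposition property (both example systems are invented): a pure gain passes, while a squaring system fails.

```python
# Superposition check: F(u1 + u2) must equal F(u1) + F(u2).
def linear_sys(u):        # y = 3u, a linear system
    return [3.0 * x for x in u]

def square_sys(u):        # y = u^2, not linear
    return [x * x for x in u]

u1, u2 = [1.0, 2.0], [3.0, 4.0]
combined = [a + b for a, b in zip(u1, u2)]

lin_holds = linear_sys(combined) == [a + b for a, b in zip(linear_sys(u1), linear_sys(u2))]
sq_holds = square_sys(combined) == [a + b for a, b in zip(square_sys(u1), square_sys(u2))]
```

The squaring system fails because (u1 + u2)^2 is not u1^2 + u2^2, which is the same reason Austin's U(t) = x^2 example above isn't LTI.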
So most of the time we try to make the model LTI, and try to understand where the real system doesn't fit the LTI model, because LTI makes life easy.

This brings me to control systems as a whole. The control system is another system that one uses to tame the plant and get it to operate the way one would like, automatically. Basically, we create a system that understands the behavior of our plant, and we use this system to provide the proper input to our plant to make it do what we want: having a DC motor maintain a constant speed, for example. The "control system" itself is just another algorithm, but this one we design ourselves to take an input and use its output to drive the plant to where we want it. The controller is a particular algorithm with parameters we can tweak to change its behavior. When we tweak these parameters, that's when we say we are "tuning the gains" of the controller. As we modify these parameters, we are changing the controller's dynamics to control our plant the way we want. For state space you tune the parameters using pole placement; for PID the parameters you tune are the proportional, integral, and derivative gains. Other controllers have other parameters you tune to modify the control system dynamics and get the proper response out of the plant.

There are many different types of control algorithms, each with pros and cons. A few are PID, state feedback, LQR, lead-lag, H-infinity, MPC, bang-bang, take-back-half, etc. The list goes on. Each of these has different ways of changing its dynamics so that its output can be used to drive the plant in a known way and help you, the control engineer, achieve the desired effect. Most controllers (let's call them control laws) fall into two categories: state feedback and output feedback.
State feedback is when you try to measure all of the states of the plant model (position, velocity, etc.) and provide them to the controller, creating a state feedback controller. Output feedback is when you measure only the output of the plant (for example, the speed of a DC motor using a tachometer, or the position of an arm using a pot) and feed that back to the controller, creating an output feedback controller. Each of these methods has pros and cons. With state feedback, for example, you may not always be able to measure all of the states, but as we said before, you need all of the states to determine the motion of the system, so sometimes you need an estimator, which estimates a particular state. With PID, you need to compute the integral every iteration, which introduces the integral windup problem, and the derivative term is very susceptible to noise: if your sensor reading jumps around, the control output will jump around too, due to the derivative of that noise, and that is not good at all. All of these problems have solutions for both controllers; they just need to be addressed.

In addition, some controllers scale well to non-linear systems, and others don't. The state space controller presented is one which doesn't expand well to non-linear systems; PID is an algorithm that expands a bit more easily. It is safe to say that all physical systems are non-linear, but you are trying to keep the system behaving in a linear, controllable way. Sometimes this is not possible and you need to venture out into the non-linear control world. I would tend to agree with Austin, however: for Robotics, you probably never have to worry about the non-linear stuff, and can keep all controllers linear with linear plant models and achieve the desired results.
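To make the PID points concrete, here is a minimal difference-equation PID sketch; the gains, plant, and time step are invented for illustration, not anyone's real tuning:

```python
# A toy discrete PID controller. The integral term accumulates every
# iteration (the source of windup) and the derivative term differences
# successive errors (the source of noise sensitivity).
class DiscretePID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, goal, measurement):
        error = goal - measurement
        self.integral += error * self.dt                   # windup lives here
        derivative = (error - self.prev_error) / self.dt   # amplifies sensor noise
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Drive a trivial integrator plant x(n+1) = x(n) + dt*u toward a goal of 1.0:
pid = DiscretePID(kp=2.0, ki=0.5, kd=0.0, dt=0.01)
x = 0.0
for _ in range(2000):
    x += 0.01 * pid.update(1.0, x)
```

With noisy measurements, that derivative line is where the jumpy sensor readings would get amplified straight into the output, as described above.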
Then this leads to the type of control you want to achieve. You heard me throw around the term "regulator" in a previous post. There are two major types of system you may want: a regulator, where you just want to push the system to another state, and a tracker, where you want the output to track the input. An example of a regulator is position control of a motor: if the motor is at rest and you want it to go to another position and stop, you are trying to push it to another state. An example of a tracker is speed control of a motor: you want the motor to track the input speed you are providing.

After you design the controller and simulate the input/output behavior, you extract the controller from your model and implement it in software or hardware to drive your physical system. If your model was good enough, the controller you designed in the simulation world should behave very similarly in the real world, with maybe minor tweaks. Typically, when you start programming the controller, it takes the form of a mathematical algorithm. You can convert it into a difference equation (like a differential equation, but with time being discrete instead of continuous, because electronics work in discrete time) and program that into software. Certain controllers you can also implement in hardware; PID is famous because it is easy to understand, easy to write in software, and you can even build a hardware implementation using RC circuits. For most other controllers, just implement them in software or an FPGA and you're good to go!

Hope this helps,
Kevin

Last edited by NotInControl : 03-06-2014 at 14:14. Reason: clarified some of the sentences
#18
Re: 971's Control System
Okay! Thank you so much, guys. I really appreciate all the effort that clearly went into those posts, and I hope it can help many students to come. Definitely got me interested in a more softwarey path in school again.
NotInControl, your post really gives a good overview without me having to worry so much about the math I don't understand. That was super helpful; I understand what I'm doing much better now. Austin, rereading your post after NotInControl's really gave me some awesome insight. It gives a good overview of the complexity involved in designing a system. I'll look into some math, and I'm sure that will help my learning process with all this controls stuff.

So, a few more questions, which require an oversimplified explanation of my understanding:
1. You make a mathematical model of your system using all relevant states. It needs to be linear and time invariant or the math is even harder.
2. You use this to test your code, which uses the variables you defined to achieve an output. You tune constants here. This is where I have questions: do you create some kind of testing code? Is hardware involved (sensors, or even mock-ups of robot parts with similar moments of inertia)?
3. Then you go to the real robot, and hopefully it's close enough to your model, and everything is peachy.
#19
Re: 971's Control System
Quote:
For my example above, lets design a controller and write a test for it. The plant is x(n + 1) = 2 x(n) + u(n), and we are trying to stabilize it to 0. Code:
#include <gtest/gtest.h>
class Plant {
public:
Plant(double x0) : x_(x0) {}
void Iterate(double u) {
x_ = x_ * 2.0 + u;
}
double x() const { return x_; }
private:
double x_ = 0;
};
double Controller(double x) {
return -x * 1.5;
}
TEST(Stabilize, StabilizeToZero) {
Plant p(1);
for (int i = 0; i < 100; ++i) {
double u = Controller(p.x());
p.Iterate(u);
}
EXPECT_NEAR(0, p.x(), 0.001);
}
int main(int argc, char **argv) {
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
For a more complicated plant, you could imagine simulating other simple sensors. A hall effect sensor on our claw can be simulated as something that turns on when the claw is between 2 angles. Tests are also great for checking that other simple functions do what you expect them to do. Wrote a nice function to constrain your goal? Write a test to make sure that it handles all the corner cases correctly. Once you write a test and automate it, it lives forever and will continue to verify that your code works as expected. Quote:
As long as you don't crank up the gains in your simulation, the gain and phase margin of your loops should be large enough that they will be stable. The gain margin is the amount of gain error (the robot responding with more or less torque than expected, for example) that you can tolerate before going unstable. The phase margin is the amount of delay error that you can tolerate before going unstable. The initial tunings are normally not bad, but I'm a perfectionist. For an extra half hour of work, the tuning of the loops can be significantly improved over the initial tuning. I watch and listen to the physical response and look for overshoot, slow response, oscillation, etc., and tune them out by adjusting the time constants. It also helps to record a step response and plot it. After having tuned enough loops over the years, you get a feeling for what speed poles work for FRC robots, which helps your initial guess. FSFB (full state feedback) gives you knobs in both the observer, to filter out noise, and the controller, to control your response.
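The simulated hall-effect sensor Austin mentions earlier could be sketched like this (the trigger angles are made up):

```python
# A simulated hall-effect sensor: reads True only while the mechanism's
# angle is between two (hypothetical) trigger angles.
HALL_LOWER = 0.2   # radians; invented values for illustration
HALL_UPPER = 0.4

def simulated_hall(angle):
    """True when the simulated magnet is in front of the sensor."""
    return HALL_LOWER <= angle <= HALL_UPPER

# Below the window, inside it, and past it:
readings = [simulated_hall(a) for a in (0.0, 0.3, 0.5)]
```

In a test, you would sweep the simulated mechanism through its range and assert that zeroing logic reacts to the sensor edge where you expect.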
#20
Re: 971's Control System
Quote:
The reason LTI models were so important was that back then, they didn't have dedicated computers to crunch all the numbers and solve all of the systems of equations. The first control systems were mechanical devices, like the first governors for car engines; they didn't rely on circuits at all. Keeping the dynamics linear and the number of equations low meant people could solve them by hand or use simple calculators. This is why the majority of the studies and tools in control systems to date are based on LTI models; that's all most people could work with. Very complicated systems were broken down using multiple simple LTI models. Today there is a branch called modern control systems, which ventures into designing more robust, sophisticated controllers, because we have new technology at our disposal and computers which can handle very complicated equations for us. It just turns out that the people from the 40's onward were very smart, and the tools and principles they developed for LTI systems can still be used today with high confidence for most applications.

Today we have computers which can solve differential equations for us, and software applications to help us design control systems. Just like CAD people have SolidWorks to create models, controls guys have a software package called Matlab/Simulink, which (among its many specialties) can be used to design control systems with an add-on called the Control System Toolbox. This toolbox lets you enter the state space model of your plant, graph its behavior, add simulated control systems, test them out, and design the controller virtually before you settle on a design. You use the simulation world to test/tune your controller before you enter the software world and program it.
Once you are happy with the controller in the virtual space, you can do one of two things: extract your final control parameters from the model, convert the controller to difference equations, and code it up to drive your physical system; or use another toolbox in Matlab to generate the code for you, though this may lead to more trouble than good if you are not completely comfortable with auto-generated code. Believe it or not, a majority of industries use auto-generated code from Matlab or other software packages. I am old school in this fashion and like to hand-write all of my own control algorithms.

Today we use these tools to develop richer models of plants. For example, for my PhD I am working on an autonomous car controller. The plant model for this system and all of its dynamics is 16 equations that need to be solved simultaneously. The controllers in airplanes used to automatically take off, land, and fly are orders of magnitude more complex. These models are highly non-linear, and without the help of these software packages, attempting to hand-calculate them would be too difficult for anyone to try. Using these software tools today allows us to be better control engineers.

There are many different ways you can approach the steps of designing a control system. What I like the students on my team to do is create a model of the plant and use Matlab/Simulink to test the plant and develop a controller for the system. They use Matlab to test/tune the system until I am happy with the response. After we are done, we take the parameters we settled on in the simulation world and use them to code a software implementation of the controller, in Java or C++ for example. We run that on our robot and then slowly tweak the parameters in code to get the "perfect" results. Those steps are where control engineers differ.
Some like to use tools like Matlab, some like to do hand calculations, and some like to write their own calculators. There is a whole field of different ways you can approach control system design. It appears Austin and his team have designed a lot of cool tools to help them achieve the controls they want for their robot, and have a very well-established programming and control model. Below is a high-level view of how I approach control system design for robotics:

1. Model the system dynamics by hand, as you stated before. (I write out the equations.)
2. Re-create the model in Matlab/Simulink, and use it to design a controller and test the overall system in a virtual world. This requires no hardware at all, just the software, your knowledge of control systems, and the dynamics of your system.
3. Program the control system to live on the RIO. I first test the controller code independently, so that I know the algorithm works as intended. (In the end it is just a function or a class which takes in an input and spits out an output used to drive motors or something.) For example, I test to make sure the controller code outputs what it is supposed to when I give it a particular known input. Once I get the bugs out, I move to the next step.
4. Test that the sensors on the robot are working, give me the signals I expect, and match the units I am assuming in my control algorithms.
5. Run the controller code on the robot and allow it to control the hardware, keeping my hand near the disable button in case things get out of control. This step has to be done carefully, because you are trusting your algorithm, and if it's written incorrectly, you can start to break things on the bot.
6. Assuming things in step 5 are fine, tweak the gains ever so slightly in the code to get a faster response, reduce overshoot, etc., and make the overall performance better.
7. Done! Sit back and marvel at your creation! Try to win some banners, or bring home some control awards!
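Step 3, checking the controller code against known inputs before it ever touches hardware, might look like this (the proportional control law and the expected values are my own toy example, not Kevin's actual code):

```python
# A toy proportional control law, checked off-robot against known inputs.
def controller(goal, measurement, kp=4.0):
    """u = kp * (goal - measurement): push the plant toward the goal."""
    return kp * (goal - measurement)

# Known input -> expected output, verified before the code ever runs a motor:
checks = [
    (controller(1.0, 0.0), 4.0),    # full error, full correction
    (controller(1.0, 1.0), 0.0),    # at the goal, zero output
    (controller(0.0, 0.5), -2.0),   # past the goal, output reverses sign
]
ok = all(abs(got - want) < 1e-9 for got, want in checks)
```

Catching a sign error here costs nothing; catching it in step 5, with the robot live, can break a mechanism.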
Hope this helps, feel free to ask any other questions.

Kevin

Last edited by NotInControl : 03-06-2014 at 15:43.
#21
Re: 971's Control System
Quote:
#22
Re: 971's Control System
I found a detailed set of video lectures on this topic: http://www.telerobotics.utah.edu/ind...teSpaceControl
Also, not that I would condone this, but if you google the name of the class's text, "Linear System Theory and Design, Third Edition, Chi-Tsong Chen", a pdf download may be available. Can't wait to see your code, and your electrical system at IRI.

Last edited by Bryce Paputa : 03-06-2014 at 16:09.
#23
Re: 971's Control System
A couple corrections/clarifications. Sorry for taking so long; I didn't realize more stuff got posted (still learning how to use CD)...
Quote:
Also, despite using UDP, we still have issues with stale values, because the bridge queues packets going between wired ports when it has trouble with the wireless communications (sounds weird, but it happens). Quote:
Quote:
Hitting hard stops isn't as big of a deal because we always make sure to zero at low enough power that it can stall for several seconds without breaking anything (until somebody disables it). However, it is still nice to not have to worry about what position things start in. Quote:
Quote:
#24
Re: 971's Control System
To be even more pedantic, I've seen the BBB also not come up on the field...
#25
Re: 971's Control System
Awesome! Thank you so much guys. I've heard the words "Matlab and Simulink" thrown around, but I never really understood what they are. That's really cool. Controls sounds like a much more interesting field to me now. I hope I get the opportunity to do something like this next year, but even if not, this is a really great learning experience for me. Thanks!
#26
Re: 971's Control System
I'm not sure if I missed it, but I have one basic question:
How do you determine the model of the motors and other devices which you have to purchase? To be clear, I'm wondering about the equipment which you aren't designing, such as CIMs. |
#27
Quote:
The long answer: A DC motor is an electro-mechanical device which converts electrical current to mechanical torque. In order to fully model the DC motor you need to model the dynamics of both the mechanical side and the electrical side of the motor. What links the electrical model to the mechanical model is current: the torque provided out of the motor is proportional to the armature current, which is determined by the electrical model and the input voltage. The electrical side of the DC motor can be idealized as a simple RL circuit. The mechanical side can be idealized as a spinning inertial load. You can also expand your model to include a gearbox on the mechanical side of your model.

When you create the mathematical model, which is just a set of math equations, you create it generically, so it is not tied to a specific design, but rather based on certain parameters which you can measure from your design and plug in. Similar to the example I explained above about the projectile in two dimensions: the math equations are based on mass. They are not tied to a specific object; you can plug in the mass for any object and have a working model of that object under the same assumptions. This makes a robust model you can re-use over and over again by just tweaking the parameters to match your new physical system. We do something very similar for the DC motor model. The most basic models of DC motors all yield the same results and are based on the following physical parameters:

Mechanical parameters:
- J: moment of inertia of the rotor (kg·m^2)
- b: motor viscous friction constant (N·m·s)
- Kt: motor torque constant (N·m/A)

Electrical parameters:
- Ke: electromotive force constant (V/(rad/s))
- R: electric resistance (Ohm)
- L: electric inductance (Henry)

This model can be used to model any DC motor under the same simplified assumptions.
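The idealized model above boils down to two coupled first-order equations: for the electrical side, di/dt = (V - R*i - Ke*omega) / L, and for the mechanical side, domega/dt = (Kt*i - b*omega) / J. As a sketch of how the parameters plug in (my illustration, not Kevin's whitepaper code — the parameter values below are placeholders, not measured CIM specs), a simple forward-Euler simulation:

```python
# Sketch: simulating the idealized DC motor model (RL circuit driving a
# spinning inertia) with forward-Euler integration. Parameter values are
# illustrative placeholders, not measured specs for any real motor.

def simulate_dc_motor(V, t_end, dt=1e-4,
                      J=5e-5, b=1e-5, Kt=0.018, Ke=0.018, R=0.09, L=5e-5):
    """Return (omega, i) after t_end seconds of constant applied voltage V.

    State equations:
        di/dt     = (V - R*i - Ke*omega) / L   # electrical (RL) side
        domega/dt = (Kt*i - b*omega) / J       # mechanical (inertia) side
    """
    omega = 0.0  # shaft speed, rad/s
    i = 0.0      # armature current, A
    t = 0.0
    while t < t_end:
        di = (V - R * i - Ke * omega) / L
        domega = (Kt * i - b * omega) / J
        i += di * dt
        omega += domega * dt
        t += dt
    return omega, i

# Apply 12 V and let it spin up to (near) steady state:
omega, i = simulate_dc_motor(V=12.0, t_end=0.5)
```

To add a gearbox, as mentioned above, you would divide the output speed (and multiply the output torque) by the gear ratio. Note the small dt: the electrical pole of a motor is fast, and a forward-Euler step larger than that time constant will blow up numerically.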
If you want to apply this to a CIM motor, you can pull all of these values from the spec sheet, or measure them directly from the motors in hand, and plug them into the model. Once you have those parameters, you have a model of your CIM motor system. If you want to add a model of a gearbox, simply add a proportionality constant which modifies the output speed/torque by the gearbox ratio.

Here is where I get most of my specs for FRC motors from; I usually measure the R and L parameters with a handheld meter: http://banebots.com/p/M4-R0062-12

Also, here is a rudimentary explanation of the math to create a DC motor model from MathWorks: http://www.mathworks.com/help/contro...r-control.html

I have a whitepaper I made a few years back which goes into further detail about modeling a DC motor with a gearbox and dynamical load, and accounting for torsional stiffness, load dynamics, yada yada. I'll try to find it, polish it, and upload it to Chief.

I hope this helps,
Kevin

Edit: I wish there were a way to write proper equations on Chief Delphi; I could just write out the math behind these explanations.

Last edited by NotInControl : 11-06-2014 at 00:58. Reason: Typo... Thanks Ether |
#28
Re: 971's Control System
That does help, and I look forward to your white paper. Thank you. Gathering the physical parameters is the part I've been trying to figure out, since I don't have access to a system analyzer like I did in college.
#29
Re: 971's Control System
Quote:
The techniques I have used in the past to successfully characterize different motors are largely based on the procedures documented here. It's helpful when no spec sheet exists or you want true system performance. http://support.ctc-control.com/custo...Parameters.pdf Last edited by NotInControl : 10-06-2014 at 20:08. |
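As a complement to procedures like the one linked, a common back-of-the-envelope approach (my sketch, not the linked document's method) backs R, Kt, and Ke out of four spec-sheet numbers — stall current, stall torque, free current, and free speed — using the steady-state motor equations V = I*R + Ke*omega and torque = Kt*I:

```python
# Sketch: estimating idealized DC motor constants from spec-sheet data.
# At stall, omega = 0, so V = I_stall * R and tau_stall = Kt * I_stall.
# At free speed, V = I_free * R + Ke * omega_free.
import math

def estimate_motor_constants(v_nom, stall_current, stall_torque,
                             free_current, free_speed):
    """Return (R, Kt, Ke). free_speed in rad/s, torque in N*m, current in A."""
    R = v_nom / stall_current                      # ohms
    Kt = stall_torque / stall_current              # N*m per amp
    Ke = (v_nom - free_current * R) / free_speed   # V per (rad/s)
    return R, Kt, Ke

# Commonly quoted CIM numbers at 12 V (double-check against the datasheet):
R, Kt, Ke = estimate_motor_constants(
    v_nom=12.0,
    stall_current=131.0,                  # A
    stall_torque=2.41,                    # N*m
    free_current=2.7,                     # A
    free_speed=5330 * 2 * math.pi / 60,   # 5330 rpm -> rad/s
)
# R comes out near 0.09 ohm and Kt near 0.018 N*m/A.
```

A useful sanity check: for an ideal motor, Kt and Ke are numerically equal in SI units, so a large gap between the two estimates usually means one of the spec numbers (or your unit conversion) is off.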
#30
Re: 971's Control System
Also...
In the past I have used a CAN Jaguar to measure the motor current draw while measuring speed of the shaft using an encoder which assisted in creating a realistic model. With the new RoboRio and CAN PDP, I am sure I will be doing much more of this in the coming seasons. Just this past weekend we were successfully able to read current draw of a Talon driving a CIM under load from the new PDP board over CAN in Java and C++. Regards, Kevin |