971's Control System

In March I had the pleasure of being a Robot Inspector at the Sacramento Regional. One of the robots I inspected was 971's Mammoth!

Let's just say they do things a bit differently than your standard team when it comes to controlling the robot. Sure, they have the requisite cRIO, DSC, PDB, radio, etc., but that's where it ends. They also have custom-built circuit boards, custom sensors that you can now buy, and I believe offboard processor(s), just to name a few.

I’m inviting Austin Schuh, and anyone else associated with 971, to “open the Kimono” a little and let the rest of CD learn from their innovative approach to controlling a robot.


First question:

Austin mentioned the use of the SPI bus.

What data did it carry? Where was it sourced, and where did it end up?

Second question:

Mammoth’s shooter system had two moving assemblies: Upper jaw with roller, Lower jaw with rotating “tusks”.

Were they controlled independently and synchronised, or did one move with respect to the other?

My question is: What is the purpose of the BeagleBone cape? I am guessing that it's to raise the voltage of the DIO from 3.3 to 5 volts? Did I see correctly that there is a gyro on that board too? Lastly, for the record, what was the final count of sensors on Mammoth?

Looks like I am gonna buy a bunch of hall effect sensors, very cool.

Here’s a bunch of details about our control system. It’s kind of a lot, but I’ve read enough CD threads to know that somebody’s going to want each detail (and it’s fun to share), so here’s my attempt to write it all down at once:

The cape handles reading all of our sensors (except the pressure switch, because of the rules :frowning: ) and providing power for the BeagleBone Black. It gets regulated 12V from an external boost/buck converter. We have an MCU on the cape (an STM32F2XX) for counting encoder ticks and capturing encoder values on the edges of digital input signals. It also handles analog inputs and offloads integrating the gyro, which gives us a more robust integration. The MCU uses SPI to talk to the ADC and the gyro (an ADXRS453).

The MCU sends all of the data to the BeagleBone Black over TTL serial ~500 times a second. This year we upgraded from USB to serial with a custom resyncing protocol, so any missed data won't cause the long dropouts we were seeing with USB, and there's no USB cable to come unplugged or break. We used the STM32F2XX primarily because it can decode a lot of the encoders in hardware, which lets us handle more counts than interrupts would. We handle the rest of the encoders and edge captures with interrupts.

We had reliability issues with our custom sensor-reading board last year, and the BBB is electrically fragile, so we designed in a lot of protection against electrical noise. We have differential line receivers (AM26LV32EIPWR) on all of the digital inputs and op-amp buffers on all of the analog ones. We also used a separate ADC (vs the ones in the MCU), and all of the logic exposed to the robot (line receivers, ADC, and the power pins on the inputs) is powered by a 5V buck regulator separate from the MCU, gyro, and BBB. This means that noise injected on the power or signal lines for any sensor doesn't affect the BBB power, and any shorts with the sensors don't take the BBB or MCU down.
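The exact frame format isn't something we've written up here, but as a rough sketch of how a resyncing serial protocol can work (the magic byte, frame length, and checksum below are all made up, not our real format):

import struct

MAGIC = 0xA5
FRAME_LEN = 16  # fixed payload size, assumed for this sketch

def checksum(payload):
    return sum(payload) & 0xFF

def encode(payload):
    assert len(payload) == FRAME_LEN
    return struct.pack('B', MAGIC) + payload + struct.pack('B', checksum(payload))

def decode_stream(buf):
    # Yield valid payloads, hunting for the next magic byte after any garbage.
    while len(buf) >= FRAME_LEN + 2:
        if buf[0] != MAGIC:
            buf.pop(0)  # resync: discard one byte and try again
            continue
        payload = bytes(buf[1:1 + FRAME_LEN])
        if buf[1 + FRAME_LEN] == checksum(payload):
            del buf[:FRAME_LEN + 2]
            yield payload  # one good sensor frame
        else:
            buf.pop(0)  # corrupted frame; keep hunting

The point is that a single corrupted byte costs you at most one frame, instead of the long dropouts we saw with USB.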

Overall sensor count: 5 encoders, 10 digital hall effects, 1 sonar, 1 PWM pulse width from the sidecar, 4 analog hall effects, 1 potentiometer, and 1 battery voltage monitor. The 5 encoders are all quadrature Grayhill 63Rnnn's (2 drivetrain, 2 claws, and 1 shooter). The 10 digital hall effects are designed by 971 and sold by WCP (3 for each claw and 4 on the shooter). The sonar was a clever idea for catching that didn't pan out and was removed.

We have one of the PWM outputs from the Digital Sidecar attached to our custom electronics so we can detect when the cRIO stops sending motor outputs; we use that to update observers and to keep our shooter from timing out. We have found that the cRIO disables outputs mostly due to network problems, and this happens fairly often for periods short enough that the drivers don't notice. This is a problem because the software can't tell if something like our shooter is stuck and needs to be killed, or the motors just aren't on. That was an especially big issue on the shooter, where we have a timeout on loading to avoid burning up the motors if it gets stuck.

The 4 analog hall effects are on the drivetrain gearboxes so we can see where the dogs are and automate our shifting nicely (2 per gearbox because the range of 1 wasn't large enough). The battery voltage monitor is also used for that, so we can open-loop speed match both sides of the gearboxes accurately, and so we can correlate problems with the robot with the battery voltage. The potentiometer is for switching between auto modes.

The claws are driven completely independently. Each one has one CIM, an encoder, and 3 hall effect sensors for zeroing. The 3 sensors are at the two limits, plus one in the middle so we can zero quickly and accurately. If you watch videos, both claws twitch right after auto as the robot rezeros. We choose to rezero at the beginning of teleop so that any manual zeroing problems from auto mode won't last through teleop. When the robot is being set up, we look for edges on the hall effect sensors to calibrate the robot. Austin always moves the claws past one of the edges while setting the robot up before the match. There's a lot of cool math and code behind making them quickly go where we want and not drop the ball. James is going to do a post explaining more of that.
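A sketch of the zeroing math (names and numbers here are made up; the real edge capture happens in the MCU firmware): the MCU latches the raw encoder count at the rising edge, and since we know the angle where each magnet sits, one latched count gives the offset from counts to angle.

import math

RADIANS_PER_COUNT = 2 * math.pi / (256 * 4)  # example: 256 CPR quadrature encoder

# Surveyed angle of each hall sensor's rising edge, in radians (made up).
EDGE_ANGLES = {'lower_limit': 0.02, 'middle': 1.57, 'upper_limit': 3.05}

class Zeroer:
    def __init__(self):
        self.offset = None  # unknown until we see an edge

    def on_edge(self, sensor_name, latched_count):
        # angle = counts * scale + offset, so offset = known_angle - counts * scale.
        self.offset = EDGE_ANGLES[sensor_name] - latched_count * RADIANS_PER_COUNT

    def angle(self, current_count):
        if self.offset is None:
            return None  # not zeroed yet; move until a sensor edge is seen
        return current_count * RADIANS_PER_COUNT + self.offset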


Brian covered the more electrical side of things; because of the question about our control loops, I’ll explain a few of the general ideas here, but I have to go off to a concert soon, so I don’t have much time to get into the details.
Basically, for all of our control loops, we use a state-space representation of our various systems (since motors are so very conveniently linear and time-invariant) using feedback from our encoders. This relies on an equation:
dx/dt = Ax + Bu
Where x is the current state of the motor/system (angle and angular velocity), u is the voltage(s) applied to the motor (we have multiple motors in the drivetrain and claw). A and B are constants which have to do with the physical characteristics of the system. This entire system can be discretized into:
x[k + 1] = A_discrete * x[k] + B_discrete * u[k]
Using this state-space representation, we use a controller u = K(R - x), where R is the goal state and K is a constant matrix which we can adjust to tune the poles of the system. There is also an observer for taking in sensor feedback.
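To make that concrete, here's a minimal sketch of discretizing a model and running u = K(R - x) on it (the plant numbers and gains are made up for illustration; our real models come out of our Python design scripts):

import numpy as np
from scipy.signal import cont2discrete

# Example second-order plant: state = [position, velocity], input = voltage.
A = np.array([[0.0, 1.0], [0.0, -5.0]])
B = np.array([[0.0], [10.0]])
dt = 0.01  # 100 Hz loop

A_d, B_d, *_ = cont2discrete((A, B, np.eye(2), np.zeros((2, 1))), dt)

K = np.array([[20.0, 2.0]])   # placeholder gains; designed via pole placement
R = np.array([[1.0], [0.0]])  # goal: position 1, velocity 0

x = np.zeros((2, 1))
for _ in range(200):
    u = K @ (R - x)        # u = K(R - x)
    x = A_d @ x + B_d @ u  # x[k + 1] = A_discrete x[k] + B_discrete u[k]
print(x.ravel())  # ends up near [1, 0]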

In the claw, by carefully choosing a u such that the two inputs control separation of the two claw halves and position of the bottom half separately (so u isn’t actually the two voltages applied to the motors… it is just a simple transform away :slight_smile: ), we can then place certain constraints on K such that certain values in the matrix must be zero. This allows us to control separation of the claws and position of the bottom half of the claw independently.

In order to deal with the caps on voltage (we only have plus or minus 12 volts available to us in a match; realistically, less), we create constraints in a 2-dimensional space where the two dimensions are the top and bottom claw voltages. We then transform these (linear) constraints into a space where we have separation and position of the claw as the two dimensions. If our controller is asking for voltages outside of the realistic range, we then find the best point in this space to prioritize reducing separation error such that, if the claws open or close a bit, they quickly go back to normal and don’t drop the ball. This is also used in our drivetrain to ensure constant-radius turns. More details will be forthcoming. That was the 15 minutes of writing explanation.
Edit: I wasn’t very clear, but everything in here is a matrix.
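A much-simplified sketch of that prioritization (our real version uses proper set math from MPC; the transform T and the search here are made up for illustration):

import numpy as np

T = np.array([[0.5, 1.0],
              [-0.5, 1.0]])  # hypothetical map from [u_sep, u_pos] to motor volts
V_MAX = 12.0

def saturate(u_sep, u_pos):
    # Clip to the +/-12 V box, giving up position authority before separation.
    for scale in np.linspace(1.0, 0.0, 101):
        v = T @ np.array([u_sep, u_pos * scale])
        if np.max(np.abs(v)) <= V_MAX:
            return v
    # Even with zero position effort the separation request is too big,
    # so scale the whole command down instead.
    v = T @ np.array([u_sep, 0.0])
    return v * (V_MAX / np.max(np.abs(v)))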


What are your typical control loop rates, and what states are you generally tracking in your formulation (position/velocity/acceleration)? In the case of the higher-order derivatives, are you filtering the measurements at all (in your observers or otherwise)?

I understand state space control, but have found getting a smooth velocity/acceleration signal to be one of the "dark arts" when I have played with it in the past.

We do a couple of things that are unique to 971, at least among the teams that we interact with frequently. We only use encoders, because we can get significantly lower noise and more accurate signals from them. The noise in a potentiometer plus the noise in the ADC is comparable to the angle that we adjust the claw by for shots. If you work out the math, moving the ball down by 4", 20 feet back, is 0.015 radians. That isn't very much when your claw moves close to 5.5 radians. We can also pull a cleaner velocity estimate out of an encoder, and use that to tighten our loops up. Using encoders does mean that we don't have an absolute sensor, so we don't know what 'zero' is until we go through a homing sequence. We choose to use hall effect sensors for this purpose, and register interrupt routines which capture the encoder value on the rising edge of the hall effect sensor. We were seeing somewhere around 0.001 radians of repeatability this year using hall effect sensors. We use the same trick for calibrating the pusher in our shooter.

We haven't had a single mechanical sensor on our robot since 2011. We only use optical sensors, sonar sensors, hall effect sensors, encoders, and potentiometers. We were finding that the mechanical sensors were being triggered by vibration, and that was causing us lots of problems.

Every match, we log somewhere around ~100 MB of data on the BBB. This lets us do lots of post match failure analysis. If our driver tells us that when he did X, 35 seconds into the match, or right at the end of the match, or …, the robot did Y instead of what it should have, we can go back in the logs and figure out exactly why it was misbehaving. This also means that most times when the robot misbehaves, we only have to observe the failure once before we have a pretty good idea about what happened and can deploy a fix to prevent it from happening again. This has saved us many times.


We run our control loops at 100 Hz, and our data comes in at 500 Hz. I was going to run that through a Kalman filter at that rate, but nothing really needs anything that overkill, and I never got around to implementing it.

Our drivetrain is 1 control loop. Our states are [left position; left velocity; right position; right velocity].
Our shooter has 3 states: [observed voltage, position, velocity]. Our shooter knows whether the spring is attached to the pusher or not, and gain schedules the shooter model and gains accordingly. The origin of the shooter is internally set to the virtual zero-length position of the spring. I really dislike implementing integral control and dealing with integral windup, so I like to implement things using what my professor called Delta U control. The trick is to add an integrator to the plant (so that you model it as taking in changes in power), and then have the observer estimate the actual power being applied. This means that when the system is responding like it is supposed to, the integral term doesn't wind up. (There's a small sketch of this augmentation below.)
Our claw has 4 states and 1 control loop: [bottom position, bottom velocity, separation position, separation velocity]. This lets us control what we actually care about.
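A small sketch of the Delta U augmentation in matrix form (generic A and B, not our shooter model):

import numpy as np

# From x(n + 1) = A x(n) + B u(n) and u(n + 1) = u(n) + du(n), the augmented
# state is [x; u] and the new input is du, the change in voltage.
def delta_u_augment(A, B):
    n, m = B.shape
    A_aug = np.block([[A, B],
                      [np.zeros((m, n)), np.eye(m)]])
    B_aug = np.vstack([np.zeros((n, m)), np.eye(m)])
    return A_aug, B_aug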

I have found that to get clean velocity estimates, you need to slow down the observer and trust the model more. An observer with really slow poles decays error in the estimate very slowly; an observer with really fast poles decays error in the estimate very quickly. The end result is that noise in the input signal gets amplified when passed through an observer with fast poles, and filtered when passed through an observer with slow poles. If you formulate the problem with a Kalman filter instead and look at the gain matrix that the Kalman filter is using, you can see how increasing the noise in the input signals slows down the poles.

Right off the bat: immediately after watching your release video, Mammoth became my favorite robot this season.

With the performance to boot. As a control engineer, it is very interesting to see all the cool things you guys do.

I did have a couple of questions of my own I am hoping someone on 971 can answer.

I think the most obvious first question is:

  1. What are some of the main reasons your team chose to use the BeagleBone to read all the sensors vs using the cRIO?

  2. Does the cRIO do any processing? If so, what? (aside from the compressor)

  3. How do you handle graceful shutdown of the bone? Or do you not care? In particular, do you take any measures to protect the filesystem on an unplanned shutdown, since you are writing to the filesystem as well (I assume this because it was mentioned the bone writes a log file)?

  4. What is the communication protocol used between the BeagleBone and the cRIO?

  5. Do you have vision processing on the BeagleBone as well?

  6. Do I understand correctly that all of your PID loops are on the BeagleBone and not run on the cRIO?

  7. What language do you run on the bone? What language do you run on the cRIO?

  8. What Linux distro are you running on the bone?

We used a BeagleBone White this year for vision processing only. All other sensors (3 encoders, 1 pot, 2 limit switches, and 2 analog IR sensors) were connected to the cRIO. This gave us 20 fps with 100 ms of lag, well within our requirements for vision detection.

We have a custom fault-tolerant TCP link between the bone and cRIO such that if the link ever goes down, it is displayed on our custom dashboard, and the robot continues to operate without a camera. If the link comes back up, the server and clients reset, and comms are re-established automatically.

This system worked flawlessly through our 3 in-season competitions. We had a problem at our first off-season event 2 weekends ago, though: the filesystem on the SD card became corrupt and would crash on startup. This is because we mount the filesystem read/write and do nothing to prevent ungraceful shutdowns. The quick fix was to buy a new SD card and put a clean image on it, and then it worked like a champ.

I have planned countermeasures to prevent this failure from happening in the future; I would just like to know your experience with these devices as well.

Thanks in advance,
Kevin

We do a lot with edge capturing. When a hall effect sensor triggers, we want to know the encoder value at the edge. We use this for zeroing our claws and shooter, used it to locate the frisbees in our helix, etc, etc. This is a perfect application for an interrupt. Unfortunately, if you try to trigger an interrupt on the cRIO, you will randomly reboot it. We tried that in 2012, and spent multiple matches dead (or rebooting in a loop) on the field. We had our problems escalated with FIRST, and were pretty much ignored. We decided that we would never do that again, and brought everything back under our own control.

We ran a FitPC with a custom USB board for sensors in 2012 and 2013, and had problems with both the PC and the USB link. We took a leap this year and chose a BBB with a serial link to a custom cape that handles all the encoder decoding; kernel context switches are really expensive and wouldn't have let the BBB decode any reasonable number of counts/sec on its own.

It does nothing other than listen to the BBB for motor control packets and run the compressor.

We haven’t had any corruption. Modern file systems are quite reliable and have good journaling.

A UDP packet with all the motor values. If I remember right, the protocol is pretty flexible and essentially says 'talon on port 1 -> 50%, spike on port 5 -> fwd, solenoid 3 -> on' as a list. UDP is the right answer here since we don't want stale values to arrive, and would rather drop packets than get them too late.
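A hypothetical encoding of that kind of packet (the real wire format is in our released code; this just illustrates the 'device, port, value' list idea):

import socket
import struct

TALON, SPIKE, SOLENOID = 0, 1, 2  # made-up device type codes

def encode(commands):
    # commands: list of (device_type, port, value) tuples.
    out = struct.pack('!B', len(commands))
    for dev, port, value in commands:
        out += struct.pack('!BBf', dev, port, value)
    return out

packet = encode([(TALON, 1, 0.5), (SPIKE, 5, 1.0), (SOLENOID, 3, 1.0)])
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ('10.9.71.2', 1140))  # address and port made up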

Nope. We are running it at like 80% CPU utilization right now without vision.

PID! We have been running full state feedback since 2012, and haven’t looked back. All of our logic runs on the BBB. We updated cRIO code once or twice this year when a new WPILib update came out.

C++ on both.

Debian Wheezy.

We have at least 2 partitions on the BBB. We put all our logs on one partition, and the root filesystem on another. We try not to write to the root partition, so there is little to no risk of corruption. I'd recommend putting /var/ and wherever you put your logs on a separate partition from your executables. Brian can chime in on a bit more of the particulars of our current setup.

We have had a bit of trouble getting the BBB to come up reliably. It seems like the NIC doesn’t always boot reliably, and sometimes the processor doesn’t boot either. Every match when I boot the robot up on the field, I watch all the light sequences and make sure that the NIC lights are flashing and the CPU lights are flashing correctly. I’ve had to do a robot reboot once or twice on the field to fix it.

If you have any hard controls questions, those would also be fun to answer. I think this robot has loops more sophisticated than ones I have written for work.


We typically start writing control loops and state machines before the hardware is done. We do this by doing test driven development. Since all our state feedback loops require a model, we hook up the code that will actually run on the robot (no modifications required) to the model, and add in a bunch of verification to check for collisions. We then run a bunch of tests to verify that, for example, our shooter will shoot and reload correctly, our claws will zero without intersecting from a bunch of different positions, etc. Only once our code passes a bunch of tests do we put it on the real hardware and let it rip. This means that we don’t break our nice new hardware during robot bring up, and we have a lot more confidence in the code that we write. It also means that we can write more complicated code and quickly get it to work.

Thanks for the responses.

I was trying to keep this conversation at the high school level, but I am intrigued by your response, and here are some of the control-centric questions I have. Hopefully the responses will help someone else as well.

Your approach obviously works for you and it sounds pretty awesome. But I am curious, could you not achieve the same effect using an absolute encoder, or an incremental encoder with a z (index) signal for homing? This should also allow you to get rid of your hall effect sensors.

This is interesting to me. If I recall correctly, the Cheesy Poofs also use FSFB for their drivetrain control (or at least that is what I remember from their 2013 code release). I'd like to preface this by saying that I have played with state feedback controllers a lot, but never used one in a real application, for various reasons, which is where my line of questioning comes from.

  1. State feedback out of the box does not change the type of the system. As a result, it is typically only useful for regulator-type systems that do not need to track their inputs, unless the algorithm is modified. I assume your control loops need to track a step response or something similar, so how do you modify the state feedback controller to get near-zero steady-state error? I am familiar with either modifying the SFB controller with an integrator, or using the Nbar function to scale the reference input. What do you do to overcome this?

  2. When dealing with a full state feedback controller you must provide all of the states. Typically for robotics the states are position, velocity, and/or acceleration. Unless you use accelerometers, you are almost always guaranteed to need an estimator as part of your control loop to provide the unmeasured state. This adds uncertainty. What type of estimator do you typically use, and how much error does it add to your control loops? This is the reason I typically use output feedback instead of state feedback: I can almost always guarantee that I can measure the output, so I don't have to use an estimator.

  3. I assume you are designing your state feedback controllers on an LTI plant model of your system. Have you noticed whether the range of operation you need out of your system stays within the linear range described by the LTI model? We use MATLAB SimMechanics and import SolidWorks models of our robot systems, with inertia properties, directly into Simulink to design our controllers. The imported models are always non-linear (mainly due to friction), so we try to cover these cases with gain scheduling or other control mechanisms that handle most of the non-linear system. This method helps us see the non-linearity and try to account for it in our controllers. How do you ensure your controller, designed on an LTI plant model, can scale to the non-linear system of your bot? Or is the LTI controller good enough that you don't care?

  4. I assume you use MATLAB/Simulink to perform pole placement, but what exactly do you code? I assume you transform the u = Kx + r equation into a difference equation and calculate the matrices each control loop. Do you program more than this? What do you do to control the output of the controller for your application (saturate to +1 or -1 for driving motor controllers, for example)? We run a custom PID controller written in Java which takes care of integral windup and derivative noise filtering, and allows for gain scheduling, a feedforward term, and output scaling to drive our motor controllers.

I typically use PID for most things, although I have run LQR and MPC controllers in practice. It’s pretty cool to see you guys run SFB controllers for your bot, although PID is just a special case of the state space controller. Do you plan to share your code eventually, or possibly provide excerpts of your control design (i.e state space models, or difference equations)?

The problem we noticed wasn't with the filesystem itself, but rather with the life of the SD card and bad sectors on the flash. Enough writes and improper shutdowns actually damaged sectors on our SD card in ways journaling could not repair. The card only lasted about a month and a half with moderate use. I replaced the card with a backup image to get the system back to normal. I was wondering if you had any similar experience, but it doesn't sound like it. We plan to make our filesystem read-only in the future, and then write data to a different partition. I can't imagine that is a solution for you; it looks like you need the filesystem to be read/write so you can write to the file that holds your GPIO pins on the bone, correct? Is that not on the root filesystem?

Thanks,
Kevin

I am not personally that familiar with the use of index signals on encoders, but several potential issues could arise:
-The majority (or all?) of indexed encoders will give you one or more index signals per rotation. Most of our encoders move through more than a single rotation over their full range of motion. This means that we would only know for sure that we were in one of a few possible places.
-With the hall effect sensors, it is easy to put sensors near or at the limits of the appendage or device. This means that, when zeroing, we are guaranteed not to run into a hard stop.
-If, for whatever reason, the encoder were to slip, then we would have to re-do the zeroing calibration. If we did not notice this before a match, we could easily spend a whole match missing shots by slight amounts or dropping the ball or the such.

If I understand your question correctly, you are asking how we deal with situations where it may be necessary to apply a non-zero voltage to the motors in order to hold the goal state (I'm afraid that my vocabulary in this area is a bit behind my knowledge of how to make our systems work). The way we deal with this is the "Delta U" controller that Austin mentioned earlier: we add the voltage of the motors as a state and make the change in voltage the input. This allows us to deal with situations where we need a non-zero steady-state voltage.

Well, the actual states in our state matrix are just position and velocity (these are all that are necessary for dealing with an ideal motor). We run an observer such that:
If y = Cx + Du is the output of the system (the encoder readings), and x is [[position], [velocity]], then C is [[1, 0]] (and D is zero). This allows us to set up an estimate of x, x_hat:
x_hat(n + 1) = A x_hat(n) + B u(n) + L * (y(n) - y_hat(n))
x_hat(n + 1) = A x_hat(n) + B u(n) + L * (y(n) - C x_hat(n) - D u(n))
Nothing too unusual; a reasonably standard state observer. By placing the poles of A - LC, we can control the aggressiveness of the observer.

Assuming LTI has been good enough for our applications (on our shooter this year, we gain-scheduled to use a different controller depending on whether it was pushing the springs or not, but within either state, assuming LTI was sufficient).

We've been using Python for pole placement (using Slycot).
We generally get a voltage out of our controller (as mentioned earlier, this is not always what u is), and if the voltage the controller wants to apply is too large, we saturate it. Scaling for the actual PWM outputs is dealt with separately (because Talons are conveniently linear, this is just a matter of scaling; pre-2013, with Victors, we had fitted functions to deal with the non-linearity). We do not currently add a feedforward value to u in our implementations; using the aforementioned "Delta U" controller deals with this in cases where we might need it (such as when pulling the springs on our shooter back).

Once our website comes back up, you can find our released code from just before this season started (it should come up just by searching “released code” or the such). We will probably be releasing more documentation regarding our controls at some point, but that is contingent on someone going to the trouble of writing it all up and then editing it to make sure that it is well enough written for release. Hopefully we get something out by the end of the summer, but don’t hold your breath.

I’m not really the person to talk to about this (Austin or Brian can tell you more). I never recall corruption of the SD card causing us issues when driving the robot (and certainly never in any matches), but at one point at Worlds, we noticed some of our logs had gotten corrupted so we replaced the BeagleBone (and SD card) in the robot anyways, just to be safe.

Hopefully I’ve answered your questions; if I missed the point of any of your questions or if you have any more, feel free to ask and one of us should respond.

This sounds really cool, and it is very interesting to me, but due to the small issue of me still being in high school, I am very confused on many things. I tried Wikipedia, but it's not being a whole lot of help. I'm not attacking you for being too technical; I just would like to understand. So, here are some really dumb questions.

What is state control, and how does it replace PID? The wiki said that it involves a thing with a small number of states. I don’t understand how “position” and its derivatives can be “states”. It seems to me that each one of those could have near infinite states.

I don't understand network protocols; could you explain in further detail how that works between the cRIO and BBB?

The digital hall effect sensors: for how long do they output "seen a magnet"? I suppose I'm asking about their sensitivity.

Does your assumption of an ideal motor cause problems? Also, can I get an explanation of that math in more layman’s terms? I haven’t taken calc yet, but I have a basic idea of what like integrals and derivatives are.

Thank you so much! Or if it's too much trouble, let me know, and I'll do some more research.

Funny that you should say that. One of my high school kids had already started responding to you before I got home today :slight_smile: On 971, I prefer to drag the students up rather than drag the design or math down. I'm always amazed by what they can learn.

For everyone else, we’ll happily try to give you some pointers about how all this works and how we do it. Keep asking questions and we’ll keep trying to answer. I’ve taught FSFB to enough students (James, for example, though after the basics, he kept learning on his own) that I’ve figured out a reasonably good way to teach it.

James partially addressed this. I really really really don’t like integral control. It is a significant amount of work to do right. For most, if not all, of our systems, I try it first without integral control and see if I can make the controller stiff enough to keep the error within acceptable bounds. Turns out that was good enough for our drivetrain and claw this year. This year, James did the modeling for all of our systems, and I tuned the final constants. He did a great job.

I've done integral control multiple ways with FSFB. My favorite is to add an integrator to the input of the plant, and then observe the applied voltage in the observer. This unfortunately puts one of the poles for the controller in the observer. On the other hand, if the system is responding exactly as expected, the commanded voltage will be the same as the observed voltage, and you won't get any integral windup. I find that it works really well. One of my professors in an MPC class called this trick Delta U control. The other classic trick is to integrate the state that you want to have zero steady-state error, and stabilize that. You can model that into your plant. That has windup problems with a step input, which I dislike.

The states we always use are position and velocity. You are correct that an estimator is needed. If tuned improperly, it can add a significant amount of noise to the velocity estimate. If tuned correctly, the estimate is quite good and gives us lots of freedom in what we do. Our observer poles this year were something like 0.05±0.01j for our claw (dt = 0.01s). That's pretty fast. It helps that our sensors are very carefully designed into the system, and we work hard to keep backlash and other non-LTI behavior out of the system.

State feedback gives us enough knobs to make our controllers aggressive enough, and robust enough to noise, that this hasn't been an issue. I'm sure we could do better if we spent more time on it, but honestly it has been good enough that it hasn't been worth messing with. As James said, we gain schedule where it makes the most sense or where we actually notice a problem. We gain scheduled on the number of frisbees in our indexer in 2013. Controllers are remarkably robust, too. I think one of the reasons that we are able to run controllers as aggressive as we do is that the observer is so good at filtering out the noise from our sensors and keeping our velocity estimates clean and undelayed.

SI units all the way. We convert everything to radians, seconds, meters, volts, amps, … We output voltage from our controllers, convert that to ±1, and then saturate it. Starting this year, we have been using set theory from MPC to deal with saturation in systems with more than 1 input, like our claw.

The controller is of the form U = K(R - X). Substituting into the plant gives the closed loop (z is the z-transform variable):
z X = (A - BK) X + BK R

The place function (we re-implemented it in python) takes the A matrix, B matrix and the desired poles, and places the eigenvalues of A-BK where we want them.
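If you want to play with this without Slycot, scipy ships an equivalent; a sketch with example matrices (not one of our real models):

import numpy as np
from scipy.signal import place_poles

A_d = np.array([[1.0, 0.00975], [0.0, 0.951]])  # example 100 Hz motor model
B_d = np.array([[0.0005], [0.0975]])
C = np.array([[1.0, 0.0]])  # we measure position only

# Controller: put the eigenvalues of A - BK at the desired spots.
K = place_poles(A_d, B_d, [0.85, 0.90]).gain_matrix

# Observer: place the eigenvalues of A - LC by duality (use the transposes).
L = place_poles(A_d.T, C.T, [0.30, 0.35]).gain_matrix.T

print(np.abs(np.linalg.eigvals(A_d - B_d @ K)))  # prints 0.85 and 0.90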

I'm probably not orthodox here, but I pretty much lump LQR and FSFB in the same bucket. They are 2 different ways of placing your poles, with different problems and insights. We ran LQR on our DT this year because we couldn't figure out how to get Python to place the left and right poles in the same spot with direct pole placement. I tend to tune with LQR and then print the poles out to see if they are where I want them to be.

On top of the code that we have released, every now and then, some of our controls code gets ported to Java and shows up on a local blue robot. 254 has been running our flywheel controller from 2012 since 2012 on all their bots. The awesome part was that all they had to do this year was to get a step response, re-tune the model, and it worked perfectly. I don’t think they ran our drivetrain controller in 2013, but I could be wrong. I wrote the first FSFB drivetrain controller with one of 254’s students in 2011.

We do all of our controls design first in Python where the cost of iteration is cheap, and then port that to C++. If all you want to understand is our controls, start by reading the Python. Take the time to read and understand our claw controller. It is my favorite loop of all of them.

I’m not sure what our release plan is, but since you’ve expressed interest, I’ll see if we can release earlier rather than wait like we have in the past.

We must have gotten lucky. Sorry to hear that you had troubles.

GPIO is handled by sending all that information to the BBB over the serial connection from our cape. Regardless, GPIO on the BBB is done through /sys/, which is a sysfs filesystem. Kernel context switches are killer on a system like this, so we work hard to minimize them.

Hope that helps.

I'm a big fan of bringing people's knowledge up rather than dumbing what we do down. On 971, we prefer to teach students how things really work rather than simplifying things until they are easy to learn.

I like to think about the state as the minimum set of variables used to describe what the plant (thing we are controlling, comes from Chemical Plant) is doing. For something like a simple DC motor connected to a flywheel, this is the position and velocity of the flywheel. For something like our drivetrain this year, that would be the distance traveled and velocity of the left and right wheels.

Since robots work in discrete time, I like to do all my controls math in discrete time. I think that makes it easier to explain.

Let's define a simple system as follows.

x(n + 1) = a x(n) + u(n)

Let's let x(0) = 1 and u(n) = 0, and look at how the system responds.

x(1) = a * x(0)
x(2) = a * x(1) = a * a * x(0)
x(3) = a^3 x(0)
x(n) = a^n x(0)

We can notice something interesting here. If |a| < 1, the system converges to 0 and is stable.

What if a = 2, and we can define u(n) = f(x(n))? For a CCDE (constant coefficient difference equation), LTI (defined below) means that the coefficients are constant. Let's let u(n) = -k * x(n).

x(n + 1) = a x(n) - k * x(n) = (a - k) x(n)

Given our knowledge above, we can compute the set of all k's that make our system stable: k is in (1, 3).

Since life is always more fun when linear algebra shows up, let's let X be a vector instead of just a scalar.

X(n + 1) = A * X(n) + B * U(n)

If we do the same trick as for the scalar above, we get the same result. This means that we care that A^n decays to 0 as n -> inf. If we diagonalize the matrix as A = P^-1 D P, we can rewrite A^n as (P^-1 D P)^n = P^-1 D^n P. Since D is diagonal, D^n is just the diagonal terms ^n. This means that our system is stable if all the elements on the diagonal have a magnitude < 1. Turns out these values have a name! They are the eigenvalues of A. Therefore, we can say that if the eigenvalues of A are inside the unit circle, the system is stable.
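You can check the eigenvalue condition numerically; a quick sketch with an example matrix:

import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
print(np.abs(np.linalg.eigvals(A)))              # [0.9 0.8], all < 1
print(np.all(np.abs(np.linalg.eigvals(A)) < 1))  # True: stable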

Let's try designing a controller.

U(n) = K * (R(n) - X(n))

R(n) is our goal.

X(n + 1) = A X(n) + B K (R(n) - X(n))
X(n + 1) = (A - BK) X(n) + B K R(n)

So, we can use fun math to design a K such that the eigenvalues of A - BK are where we want them, and the system responds like we want it to. This is pretty awesome. We can finally model what our control loop is doing.

Unfortunately, as Kevin was talking about above, this assumes that we know the state of our system. X(n) is the state of our system at t=n timesteps.

Ah, but you say, we can determine the velocity by taking the change in position over the last cycle and dividing by the time! Unfortunately, that computes the average velocity over the last cycle, not the current velocity. When you are moving really fast, that delay actually is a big deal. We can define another controller whose whole job is to estimate the internal state.

Let me introduce another equation and variable. Let Y be the measurable output.

Y(n) = C X(n) + D U(n)

For most robot systems, D is 0, but I like to include it for completeness.

Let's define Xhat to be our estimate of the state.

Xhat(n + 1) = A Xhat(n) + B U(n) + L(Y(n) - Yhat(n))
Xhat(n + 1) = A Xhat(n) + B U(n) + L(Y(n) - C Xhat(n) - D U(n))

Let's try to prove that the observer converges, i.e. X(n) - Xhat(n) -> 0.

X(n + 1) - Xhat(n + 1) = A X(n) + B U(n) - (A Xhat(n) + B U(n) + L(Y(n) - C Xhat(n) - D U(n)))
E(n + 1) = A (X(n) - Xhat(n)) + L C Xhat(n) + L D U(n) - L C X(n) - L D U(n)
E(n + 1) = A (X(n) - Xhat(n)) - L C (X(n) - Xhat(n))
E(n + 1) = (A - LC) E(n)

Yay! This means that if the eigenvalues of A - LC are inside the unit circle, our observer converges on the actual internal system state. In practice, what this means is that we have another knob to tune. You can think of this as 2 steps. Xhat(n + 1) = A Xhat(n) + B U(n) is our predict step. Notice that this uses our model, and updates the estimate given the current applied power. L (Y(n) - Yhat(n)) is our correct step. It corrects for an error between our estimate and what we just measured. If we set the eigenvalues of A - LC to be really fast (close to 0), we are trusting our measurement to be noise free, and if we set them to be slow, we are trusting our model to be accurate.
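Here's what one observer step looks like in code (example matrices and a made-up L; design L with pole placement as above):

import numpy as np

A = np.array([[1.0, 0.00975], [0.0, 0.951]])  # example discrete plant
B = np.array([[0.0005], [0.0975]])
C = np.array([[1.0, 0.0]])  # we measure position only
L_gain = np.array([[0.5], [5.0]])  # made up; design via pole placement

def observe_step(x_hat, y, u):
    predict = A @ x_hat + B * u         # advance the model with the applied power
    correct = L_gain * (y - C @ x_hat)  # nudge the estimate toward the measurement
    return predict + correct

x_hat = np.zeros((2, 1))
x_hat = observe_step(x_hat, y=0.01, u=6.0)  # one 10 ms step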

If you have an nth-order CCDE, or an nth-order differential equation, you can instead write it as n coupled first-order equations. This property lets us express an arbitrarily complex system in the form shown above. I also like to design all our systems in continuous time and then use a function that I wrote to convert the nice clean continuous-time model to the discrete-time version used above. Robots compute things at discrete times, and it isn't quite right to pretend that everything is really continuous and use the continuous equations instead.

You are going to have to do a bit more research, since it does take a couple college courses to cover all of this stuff properly, but I’ll happily give you enough info to help make the information that you will find out there make more sense. Controls uses a lot of calculus.

For the most part, it doesn’t cause many problems. Most of the non-ideal behavior on a motor in FRC is no longer worth worrying about. With the old Victors, 50% power would result in close to full speed, and that caused me all sorts of problems.

Controls guys love to throw around the term Linear Time Invariant (LTI). LTI means that the system is "linear" and "time invariant". Time invariant is easy to understand: if I apply u(t) to my system starting at t=1, I get the same response as if I apply u(t) starting at t=2, assuming the system is in the same initial state at the start time. Linear means that the following holds: F(2 * u(t)) = 2 * F(u(t)), and F(u1(t) + u2(t)) = F(u1(t)) + F(u2(t)). In practice, this means that your system can be defined as a set of ordinary differential equations.

When I design a control system, I try very very hard to avoid doing non-LTI things. I don't like doing things like writing U(t) = x^2, since that isn't linear. (For fun, use my definitions above to show that that isn't LTI :)) I don't like dealing with integral windup, since it is very hard to properly model the solutions, as they aren't LTI.

For a DC motor, I like to use the following definition. My Kt and Kv may be inverted from what people typically use, but the math still works… You can think of a DC motor as a generator in series with a resistor. The torque is generated by the magnetic field in the generator, which is proportional to the current through the coils. The BEMF voltage is the voltage that the generator is generating. Let's put some math to this.

V is the voltage applied to the motor. I is the current through the motor. w is the angular velocity of the motor. J is the moment of inertia of the motor and its load. dw/dt is the derivative (rate of change) of the angular velocity, which is the angular acceleration of the motor.

V = I * R + Kv * w
torque = Kt * I

V = torque / Kt * R + Kv * w
torque = J * dw/dt

dw/dt = (V - Kv * w) * Kt / (R * J)

You probably won’t recognize this, but this is a classic exponential decay. If we are modeling the system as only having velocity and ignoring position, it has 1 derivative, and is therefore a first order system. This means that it has 1 state.
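To put that model in the state-space form from earlier, with position added as a second state (constants below are examples, not a real motor's):

import numpy as np

Kt = 0.02   # torque constant, example value
Kv = 0.02   # back-EMF constant, example value
R = 0.08    # winding resistance, example value
J = 0.001   # moment of inertia of motor plus load, example value

# State x = [position, velocity], input u = voltage:
#   d(position)/dt = velocity
#   d(velocity)/dt = (V - Kv * w) * Kt / (R * J)
A = np.array([[0.0, 1.0],
              [0.0, -Kv * Kt / (R * J)]])
B = np.array([[0.0],
              [Kt / (R * J)]])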

Depending on how critical the response is, we'll either pull the mass/moment of inertia out of CAD (or guess), or we'll command the real robot to follow a step input (t > 0 ? 12.0 : 0.0) and record the response. We then pass that same input into our model, and plot that next to the real robot response. We then tweak the coefficients until the responses match pretty well.
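A sketch of that verification step, reusing A and B from the sketch above (the log file name and format are hypothetical):

import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import cont2discrete

dt = 0.01
A_d, B_d, *_ = cont2discrete((A, B, np.eye(2), np.zeros((2, 1))), dt)

logged = np.loadtxt('step_response.csv', delimiter=',')  # [time, velocity] rows

x = np.zeros((2, 1))
sim = []
for _ in range(len(logged)):
    x = A_d @ x + B_d * 12.0  # same 12 volt step the robot saw
    sim.append(x[1, 0])

plt.plot(logged[:, 0], logged[:, 1], label='robot')
plt.plot(logged[:, 0], sim, label='model')
plt.xlabel('time (s)')
plt.ylabel('velocity (rad/s)')
plt.legend()
plt.show()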

I like to bring up a new system by limiting the applied power to something that I can overpower with my hands (± 2 volts was our claw limit this year), and then I hold on and feel the response. This makes it easy to check for sign errors and to feel if the response is smooth and stiff or jittery without the robot thrashing or crashing itself. After it passes that check, I’ll let it run free and slowly give it more and more power. Most of 971’s systems can crash themselves faster than we could disable them, and even if we could disable one, it would still coast into the limit pretty hard.

I'd recommend that you pull down our 2013 code (or 254's code) and look at the Python used to design the control loops. I'd recommend taking a look at the indexer loop from our 2013 code as a simple state feedback loop. Play with it and learn how it works. Then, try modeling something that was on your robot this year, and stabilize it.


I'll try to help explain some of the underlying principles of control systems, so that you can better understand the conversation and the research you come up with. I spent last year teaching undergraduate feedback controls at NYU, so being in high school doesn't mean you have any less ability to learn this stuff.

Before we jump into this stuff, let's get some terminology down. To aid your understanding, it is helpful to know what we are talking about and why. When a control engineer sets off to design a controller for a new system, the first thing they do is create a model of the system, similar to the way mechanical designers create CAD models of physical items to have a virtual representation they can test, manipulate, etc. It would be very expensive and time consuming to design a controller by testing it on the actual hardware, and sometimes the hardware is not built or ready yet. The models we create are called dynamic models, and they use the laws of physics to mathematically represent the system we are trying to control. We refer to this model as the "plant" model. The plant is the system we are trying to control. This can be a DC motor and gearbox, a robotic manipulator, a circuit, or even, in the medical field, the reaction time of a pill. The model accounts for as much of the dynamics of the real physical system as possible, and we can use it to test various types of control algorithms before we decide on the best design.

For robotics, we are mainly interested in modeling and controlling dynamical systems. The term dynamical can be taken to mean a system which does not change instantaneously when acted on. For example, a ball (the system) when kicked (the action) doesn't instantaneously move to the new location; it takes some time to move. The time it takes, how it moves (through the air, rolling on the ground, etc.), and the distance it moves are all considered the dynamics.

When we mathematically model a "plant" to create the "plant model", we mathematically describe the motion of that system. Let's work with an example to bring these ideas to life. Say we want to control the position of a projectile. A simple model of a projectile is 2-dimensional projectile motion. If you remember from physics, the x and y position of a projectile in 2 dimensions can be described as (assuming acceleration is constant):

x_final = x_initial + v_initial * t * cos(theta)
y_final = y_initial + v_initial * t * sin(theta) - (1/2) * g * t^2

Where theta is the launch angle, x_initial is the initial x displacement, y_initial is the initial y displacement, and v_initial is the initial velocity.

These equations tell you everything about the motion of the projectile (neglecting air resistance). If you know the starting x, y, theta, and velocity at time = 0, you can determine the position of the projectile for any future time t > 0.

In order to solve the above, we need to solve both equations simultaneously. The above is called a system of equations, and all dynamical systems can be modeled using a set of equations like this.

This leads us to the definition of "state" in control systems. The "state" of a dynamical system is the collection of all variables that completely characterize the motion of the system, letting you predict future motion. This is very important in control systems because what it means is: once you know the state of a system, you can completely describe all of its behavior, and if you want to control something, understanding how it behaves is a must. For every model, there is a set of variables which, once you know all of them, lets you completely mathematically predict the behavior of the system. These variables fall into various categories: inputs (which you use to drive the system), initial conditions (which determine the resting state of the system at t=0), and outputs (the response of the system based on the system dynamics).

From our example, the position of the ball (x and y location) and the velocity of the ball are all that we need to plug into the dynamic equations to solve them. In this example, position and velocity are called the state variables, because that's all we need. (Theta is a part of velocity, because velocity is a vector which has both magnitude and direction. The magnitude is the speed in m/s; the direction is theta.)

For our example, you can assume velocity is an input, the initial x and y positions are initial conditions, and the final x and y locations are the outputs.

Depending on what you are trying to control or solve, the state variables can be different.

When we say "state space", we mean the set of all possible states for that system. And when we use the term "state space representation", that is saying: let's re-write the dynamics model in a special form as a set of inputs, outputs, and state variables.

A common class of dynamical system models are ordinary differential equations (ODEs). In simplest terms, a differential equation is any equation which contains its own derivative. The term ordinary means it only contains one independent variable (time, for example). (This is as opposed to partial differential equations, which can contain more than one independent variable.)

You probably are already using ODEs in physics; you just don't know it. If you remember from physics, the derivative of position is velocity, and the derivative of velocity is acceleration.

If we call position x, then velocity is the 1st derivative of x, and we can re-write velocity as x_dot (where the dot means 1st derivative, and is a shorthand notation to make writing faster). Similarly, the first derivative of velocity is acceleration, and we could write acceleration as v_dot. But we can go even one step further. Acceleration is the first derivative of velocity, which in turn is just the first derivative of position. This means acceleration is the second derivative of position, and we can write it as x_dotdot (where double dot means second derivative; you can keep going with 3 dots, 4 dots, etc. to mean 3rd, 4th derivative and so on if you need to).

Using this information, we can re-write the equations of projectile motion as ODEs. With g = 9.8 m/s^2:

x_final = x_initial + x_initial_dot * t * cos(theta)
y_final = y_initial + x_initial_dot * t * sin(theta) - 4.9 * t^2

So as you can see, we use ODEs to model dynamical systems. The state represents everything we need to know in order to predict how that system will behave. Now we write these equations in a special form called "state space representation", and all this does is make it easier to work with these equations when solving them simultaneously. Putting them in state space representation allows us to re-write these equations in matrix form. There is a part of math called linear algebra which introduces matrices, and matrices make solving systems of equations, where you need to solve equations simultaneously, much easier. That's why control guys use it. Austin did a good job explaining how to do this (by writing each equation as a 1st-order ODE and then putting it in matrix form).

Once in state space form, your model is done and you can start using it to test different types of control algorithms. There is a special case of dynamic models called LTI: linear time-invariant models. Most systems can be approximated by making assumptions to model the system as an LTI system. For example, to get the projectile motion equations above, we assumed acceleration is constant and ignored air drag. There are always assumptions you can make to simplify the model. The reason we like to simplify the model is that it makes the math easier to solve and we can get to a solution much faster. The downside is that, as you make these assumptions, your mathematical model no longer truly represents the physical system. There is a medium where you can make a "good enough" model: you make the right assumptions to ease calculations, but don't go overboard and fail to capture the important dynamics of the system. That is the true art of dynamic system modeling.

The term LTI really just means this: take a system and look at it from an input/output perspective (I put a signal in, and observe the signal out). The system must have two properties:

The first: the input/output relationship must not depend on time (time invariance). This means if I put a signal into a system and get one output, I must be able to come back tomorrow, the next day, or 10 years from now, put that exact signal in, and get the exact same output. If there is ever another time where the output is different, then the system is not time invariant.

The second: if I have two different inputs, A and B, and add them together to create a new input C that is the summation of both, then the output of the system for this new signal C must be the summation of the individual outputs of A and B had I run those signals through the system independently. This is called the superposition property.

If a system has these two properties, it is called an LTI system, and a world of cool techniques can be used to design controls for it. If not, then another, harder, unforgiving world of non-linear techniques needs to be used. So most of the time we try to make the model LTI, and try to understand when the system doesn't fit the LTI model, because that makes life easy.

This brings me to control systems as a whole. The control system is another system that one uses to tame the plant and get it to operate the way one would like, automatically. Basically, we create a system that understands the behavior of our plant, and we use this system to provide the proper input to our plant to make it do what we want: having a DC motor maintain a constant speed, for example. The "control system" itself is just another algorithm, but we design its dynamics to take an input and use the output to drive the plant where we want it. The controller is a particular algorithm with parameters we can tweak to change its behavior. When we tweak these parameters, that's when we say we are "tuning the gains" of the controller. As we modify these parameters, we are changing the controller's dynamics to control our plant the way we want. For state space, you tune the parameters using pole placement; for PID, the parameters you tune are the proportional, integral, and derivative gains. Other controllers have other parameters you tune to modify the control system dynamics and get the proper response out of the plant.

There are many different types of control algorithms; each has its pros and cons. A few are PID, state feedback, LQR, lead-lag, H-infinity, MPC, bang-bang, take-back-half, etc. The list goes on. Each of these has different ways of changing its dynamics, so that its output can be used to drive the plant in a known way and help you, the control engineer, achieve the desired effect. Most controllers (let's call them control laws) fall into two categories: state feedback and output feedback.

State feedback is when you try to measure all of the states of the plant model (position, velocity, etc.) and provide them to the controller, using that information to create a state feedback controller.

Output feedback is when you measure the output of the plant and use only that information (for example, the speed of a DC motor using a tachometer, or the position of an arm using a pot), feeding that back to create an output feedback controller.

Each of these methods has pros and cons. For example, with state feedback you may not always be able to measure all of the states, but as we said before, you need all of the states to determine the motion of the system, so sometimes you need to build an estimator, which estimates a particular state. With PID, for example, you need to calculate the integral every timestep, which introduces the integral windup problem, and the derivative term is very susceptible to noise. So if your sensor is noisy, like a jumping reading from a sensor, then the control output will jump around too, due to the derivative noise, and that is not good at all. All of these problems have solutions for both controllers; they just need to be addressed.

In addition, some of them scale very well to non-linear systems; others don't. The state space controller presented is one which doesn't expand well to non-linear systems; PID is an algorithm that expands a bit more easily. It is safe to say that all physical systems are non-linear, but you are trying to keep the system behaving in a linear, controllable way. Sometimes this is not possible, and you need to venture out into the non-linear control world. I would tend to agree with Austin, however: for FRC robotics, you will probably never have to worry about the non-linear stuff, and can keep all controllers linear, with linear plant models, and achieve the desired results.

Then this leads to the type of control you want to achieve; you heard me throw around the term "regulator"-type system in a previous post. There are two major types of systems you may want: a regulator, which is a system that you just want to push to another state, and a tracker, in which you want the output to track the input. An example of a regulator system is position control of a motor: if the motor is at rest and you want it to go to another position and stop, you are trying to push it to another state. An example of a tracker system is speed control of a motor: you want the motor to track the input speed you are providing.

After you design the controller and simulate the input/output behavior, you extract the controller out of your model and implement it in software or hardware to drive your physical system. If your model was good enough, then the controller you designed in the simulation world should behave very similarly in the real world, with maybe minor tweaks. Typically, when you start programming the controller, it takes the form of a mathematical algorithm. You can convert it into a difference equation (like a differential equation, but with time discrete instead of continuous, because electronics all work in discrete time) and program that into software. Certain controllers you can implement in hardware; PID is famous because it is easy to understand, easy to write in software, and you can even build a hardware implementation using RC circuits. For most other controllers, just implement them in an FPGA and you're good to go!

Hope this helps,
Kevin

Okay! Thank you so much, guys. I really appreciate all the effort that clearly went into those posts, and I hope it can help many students to come. Definitely got me interested in a more softwarey path in school again.

NotInControl, your post gives a really good overview without me having to worry so much about the math I don’t understand. That was super helpful; I understand what I’m doing much better now.
Austin, rereading your post after NotInControl’s gave me some awesome insight. It really shows the complexity involved in designing a system. I’ll look into some math, and I’m sure that will help my learning process with all this controls stuff.

So, a few more questions, which require an oversimplified explanation of my understanding:
1. You make a mathematical model of your system using all relevant states. It needs to be linear and time invariant or the math is even harder.
2. You use this to test your code, which uses the variables you defined to achieve an output, and you tune constants here. This is where I have questions: do you create some kind of testing code? Is hardware (sensors, or even mock-ups of robot parts with similar moments?) involved?
3. Then you go to the real robot, and hopefully it’s close enough to your model, and everything is peachy.

Yes.

I do all my testing purely in software. I have found that if tests aren’t automated and easy to run, they won’t get run. I’m incredibly lazy…

For my example above, let’s design a controller and write a test for it. The plant is x(n + 1) = 2 x(n) + u(n), and we are trying to stabilize it to 0.


#include <gtest/gtest.h>

// Simulated plant from above: x(n+1) = 2 x(n) + u(n). With no control
// input, x doubles every step (the open loop is unstable).
class Plant {
 public:
  Plant(double x0) : x_(x0) {}
  void Iterate(double u) {
    x_ = x_ * 2.0 + u;
  }
  double x() const { return x_; }

 private:
  double x_ = 0;
};

// Control law u = -1.5 x: the closed loop becomes x(n+1) = 0.5 x(n),
// which decays toward zero every step.
double Controller(double x) {
  return -x * 1.5;
}

TEST(Stabilize, StabilizeToZero) {
  Plant p(1);
  for (int i = 0; i < 100; ++i) {
    double u = Controller(p.x());
    p.Iterate(u);
  }
  EXPECT_NEAR(0, p.x(), 0.001);
}

int main(int argc, char **argv) {
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}

The Controller function (or class) is then used unmodified on the real robot. The only thing that you need to change is how you measure your sensor value to feed it to the function, and how you get the output to the real robot.

For a more complicated plant, you could imagine simulating other simple sensors. A hall effect sensor on our claw can be simulated as something that turns on when the claw is between 2 angles.
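A simulated version of that sensor could be as simple as this sketch (the class name and the trigger angles are made up):

// Simulated digital hall effect: reads "on" whenever the claw angle is
// between the two trigger angles, like the real sensor on the robot.
class SimulatedHallEffect {
 public:
  SimulatedHallEffect(double lower_angle, double upper_angle)
      : lower_(lower_angle), upper_(upper_angle) {}
  bool Get(double claw_angle) const {
    return claw_angle >= lower_ && claw_angle <= upper_;
  }

 private:
  double lower_, upper_;
};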

Tests are also great for verifying that other simple functions do what you expect them to do. Wrote a nice function to constrain your goal? Write a test to make sure that it handles all the corner cases correctly. Once you write and automate it, it lives forever and will continue to verify that your code works as expected.
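For instance, a corner-case test for a hypothetical ConstrainGoal() helper that clamps a goal to a range might look like this:

// Hypothetical helper that clamps a goal to [min, max].
double ConstrainGoal(double goal, double min, double max) {
  return goal < min ? min : (goal > max ? max : goal);
}

TEST(ConstrainGoal, CornerCases) {
  EXPECT_EQ(0.0, ConstrainGoal(-1.0, 0.0, 5.0));  // below the range
  EXPECT_EQ(5.0, ConstrainGoal(10.0, 0.0, 5.0));  // above the range
  EXPECT_EQ(3.0, ConstrainGoal(3.0, 0.0, 5.0));   // inside the range
  EXPECT_EQ(0.0, ConstrainGoal(0.0, 0.0, 5.0));   // exactly at a limit
}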

Yes.

As long as you don’t crank up the gains in your simulation, the gain and phase margin of your loops should be large enough that it will be stable. The gain margin is the amount of gain error (robot responds with more or less torque than expected, for example) that you can tolerate before going unstable. The phase margin is the amount of delay error that you can tolerate before going unstable.
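You can even check gain margin in the toy example above with a test: scale how hard the plant responds to the control input and verify the loop still converges. With u = -1.5 x, the closed loop is x(n+1) = (2 - 1.5 g) x(n), which stays stable for any gain error g between 2/3 and 2. A sketch reusing the Controller from above:

TEST(Stabilize, ToleratesGainError) {
  // The plant responds with 30% more "torque" than the model expected.
  const double gain_error = 1.3;
  double x = 1.0;
  for (int i = 0; i < 100; ++i) {
    x = 2.0 * x + gain_error * Controller(x);
  }
  // Still converges: |2 - 1.5 * 1.3| = 0.05, well inside the unit circle.
  EXPECT_NEAR(0, x, 0.001);
}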

The initial tunings are normally not bad, but I’m a perfectionist. For an extra half hour of work, the tuning of the loops can be significantly improved over the initial tuning. I watch and listen to the physical response, look for overshoot, slow response, oscillation, etc., and tune them out by adjusting the time constants. It also helps to record a step response and plot it. After having tuned enough loops over the years, you get a feeling for what speed of poles works for FRC robots, which helps your initial guess. FSFB (full state feedback) gives you knobs in both the observer, to filter out noise, and the controller, to shape your response.

Not a problem. Hope it helps.

I am going to tweak your list a little bit, but first I want you to understand why LTI is so important. The field of control systems is pretty new; it wasn’t until the 1940s that major breakthroughs in control algorithms started to appear.

The reason LTI models were so important is that back then, there were no dedicated computers to crunch all the numbers and solve all of the systems of equations. The first control systems were mechanical devices, like early engine governors; they didn’t rely on circuits at all. Keeping the dynamics linear and the number of equations low meant people could solve them by hand or with simple calculators. This is why the majority of control systems studies and tools to date are based on LTI models: that’s all most people could work with. Very complicated systems were broken down into multiple simple LTI models.

Today there is a newer branch of the field, modern control, which ventures into designing more robust, sophisticated controllers, because we now have the technology and the computers to crunch very complicated equations for us. It just turns out that the people from the 1940s onward were very smart, and the tools and principles they developed for LTI systems can still be used today with high confidence for most applications.

Today we have computers which can solve differential equations for us, and software applications to help us design control systems. Just as CAD people have SolidWorks to create models, controls people have a software package called Matlab/Simulink which, among its many specialties, can be used to design control systems with an add-on called the Control System Toolbox. This toolbox lets you enter the state space model of your plant, graph its behavior, add simulated control systems, test them out, and design the controller virtually before you settle on a design. You use the simulation world to test/tune your controller before you enter the software world and program it.

Once you are happy with the controller in the virtual space, you can do one of two things: extract your final control parameters from the model, convert the controller to difference equations, and code it up to drive your physical system, or use another Matlab toolbox to generate the code for you… though this may lead to more trouble than good if you are not completely comfortable with auto-generated code. Believe it or not, a majority of industry runs auto-generated code from Matlab or other software packages. I am old school in this fashion and like to hand-write all of my own control algorithms.

Today we use these tools to develop richer plant models. For example, for my PhD I am working on an autonomous car controller. The plant model for this system and all of its dynamics is 16 equations that need to be solved simultaneously. The controllers in airplanes that automatically take off, land, and fly are orders of magnitude beyond that. These models are highly non-linear, and without the help of these software packages, attempting to calculate them by hand would be too difficult for anyone to try.

Using these software tools today allows us to be better control engineers.
There are many different ways you can approach the steps by which you design a control system.

What I like the students on my team to do is create a model of the plant and use Matlab/Simulink to test the plant and develop a controller for the system. They use Matlab to test/tune the system until I am happy with the response.
After we are done, we take the parameters we settled on in the simulation world and use them in a software implementation of the controller, in Java or C++ for example. We run that on our robot and then slowly tweak the parameters in code to get closer to the “perfect” result.

Those steps above are where control engineers differ. Some like to use tools like Matlab, some like to do hand calculations, some like to write their own calculators; there is a whole spectrum of ways to approach control system design. It appears Austin and his team have built a lot of cool tools to achieve the control they want on their robot and have a very well established programming and controls workflow. Below is a high-level view of how I approach control system design for robotics:

  1. Model the system dynamics by hand, as you stated before. (I write out the equations.)

  2. Re-create the model in Matlab/Simulink and use it to design a controller and test the overall system in a virtual world. This requires no hardware at all, just the software, your knowledge of control systems, and the dynamics of your system.

  3. Program the control system to live on the RIO. I first test the controller code independently so that I know the algorithm works as intended. (In the end it is just a function or a class which takes an input and spits out an output used to drive motors or something.) For example, I test to make sure the controller code outputs what it is supposed to when I give it a particular known input. Once I get the bugs out, I move to the next step.

  4. Test that the sensors on the robot are working, give me the signals I expect, and match the units I am assuming in my control algorithms (see the sketch after this list).

  5. Run the controller code on the robot and allow it to control the hardware, keeping my hand near the disable button in case things get out of control :slight_smile: This step has to be done carefully, because you are trusting your algorithm, and if it’s written incorrectly, you can start to break things on the bot.

  6. Assuming things in step 5 are fine, tweak the gains ever so slightly in the code to get a faster response, reduce overshoot, etc., and make the overall performance better.

  7. Done! Sit back and marvel at your creation! Try to win some banners, or bring home some control awards!
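As a small example of the unit check in step 4, here is a hypothetical conversion helper and a test for it; the counts-per-revolution and gear ratio are made-up numbers:

#include <cmath>

// Hypothetical conversion from encoder ticks to output shaft radians,
// assuming a 256 count/rev encoder read in 4x quadrature and a 2:1 reduction.
double TicksToRadians(int ticks) {
  const double kCountsPerRev = 256.0 * 4.0;
  const double kGearRatio = 2.0;
  return ticks / kCountsPerRev / kGearRatio * 2.0 * M_PI;
}

TEST(Sensors, EncoderUnits) {
  // One full encoder revolution should be half an output shaft revolution.
  EXPECT_NEAR(M_PI, TicksToRadians(1024), 1e-9);
}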

Hope this helps, feel free to ask any other questions,
Kevin