paper: Practical Guide to State-space Control

Thread created automatically to discuss a document in CD-Media.

Practical Guide to State-space Control
by: calcmogul

This book is intended to introduce FRC students to the broader field of control theory. The end goal is teaching students enough that they can make informed decisions regarding control system design trade-offs.

This book is updated regularly, and the versions uploaded here are low resolution due to Chief Delphi being unable to process them. The latest high resolution version can be downloaded from

state-space-guide.pdf (5.37 MB)
state-space-guide-lowres.pdf (13.1 MB)


I figured I’d finally post what I’ve been working on for about a year. This book is intended to introduce FRC students to the broader field of control theory. The end goal is teaching students enough that they can make informed decisions regarding control system design trade-offs. This book focuses on modern control and state-space controllers because the roboRIO has the computational resources for them and they generalize nicely to MIMO control systems. Also, model-based feedback controllers can be tuned weeks before the robot goes into the bag (or before the robot is even built). This also allows electrical/mechanical testing to be decoupled from software testing. This book covers other fun stuff too, like stochastic control theory, since it complements the state observer material. See the preface for more on why I wrote the book and chapter 0 for notes to the reader.

The book’s source is at and the Python examples are at It isn’t done yet (see the “future improvements” section of the readme), but it might be far enough along to help some people. If nothing else, this book has been a teaching aid for me to teach my students this stuff for the 2019 season. Of course, please let me know if any of the content is inaccurate. Better ways of explaining things are appreciated.

Alpha testing

I recently alpha tested this on my students to get some feedback on it. At the first build session, I spent two hours teaching my students some basic Laplace domain stuff and linear algebra. The linear algebra was basically just following the 3Blue1Brown video series on it. At the second build session, I spent about five minutes answering questions about the previous session, then I spent the remaining two hours teaching them about state-space representation, closed-loop controller/observer design, and various details surrounding that. I briefly walked them through an FRC subsystem derivation toward the end. My students actually read parts of the book between lectures, so that was helpful in making the build sessions more productive. At the third build session, we experimented with the example scripts in the frccontrol package and got matplotlib running on WSL. Now, my students are writing subsystems that use the files generated by the scripts. They opted not to use the C++ example subsystems so they could get experience doing it from scratch. More power to them, I guess.

WPILib support

I’ve been pushing for model-based control in WPILib for a year now, but it’s taking a lot of time to put all the pieces together: documentation and teaching materials (this book), library support, tutorials… WPILib wants “easy as possible” before we’re locked into an API, and I don’t think it’s quite there yet. Feedback is appreciated. :slight_smile: The main concern people have raised is people getting scared away by linear algebra, so I tried to abstract that away where it made sense. C++ examples are in the WPILib state-space PR in the wpilibcExamples folder. The model-based control API is stable at this point, so writing unit tests and testing it on a real system is the next step. I’m holding off on motion profile stuff for multi-state models until Pathfinderv2’s API is deemed stable.

The Java version will get written after I figure out how to wrap the C++ classes with JNI. A JNI for Eigen like has 30% overhead for 10x10 dense multiplication, so wrapping at the class level sounds better. The examples I wrote only touch vectors for handling references and measurements, so Java might be able to use 1D arrays of doubles instead of Eigen. To support gain scheduling though, I’ll probably have to make gain scheduling setup happen at the coefficient generation stage in Python. The files Python generates will probably have to be below the JNI boundary anyway to avoid wrapping Eigen classes, so putting all that in the same place makes sense.


As a programmer who is currently learning about state space control, I am very grateful for this resource. I will definitely read this paper in the next few days and learn a lot from it.

$@#$@#$@#$@#, this is amazing! I’ve been trying to get more people into modern and model based control for a while, and was never able to find an intro a fraction as useful and well written as this. I’m shocked at how high quality and well put together you’ve managed to make the guide. Thank you for spending time making this great resource! I have a feeling I’m going to learn some stuff as well :wink:

Do you accept pull requests for any improvements or expansions or anything?

Please tell me this is real and I am not dreaming.

This is an amazing resource! Thank you so much for creating it! If you are a fearless individual, look at the FRC discord programming channel and you will see how much we appreciate this resource. I will be spending much time digging into this.

Beautiful work, beautiful design even. :ahh:

For the record I’d totally buy this if you release a hardcover version :wink:

1 Like

Thanks! :slight_smile: Pull requests for additions and corrections to the book are welcome. The README also has a list of todo items toward the bottom that I haven’t gotten to yet. The Python scripts mentioned toward the end of the book reside in

Not all heroes wear capes, some are just named Tyler Veness. Quickly skimming through the paper, it looks so well-written and I can’t wait to study up on it. In terms of control theory, what did your team use last year, and what are you hoping your students can achieve next build season? I’d imagine a huge learning curve but I’m glad you’re spending the offseason with dedicated students that are willing to learn!

We did standard PID stuff last year with heuristic trial-and-error tuning. For our elevator, we used a PID controller with a constant feedforward for gravity. For auton driving, we used two PID controllers, one for forward driving with averaged left and right encoder readings for feedback, and one for heading angle with a gyro for feedback. The outputs of the two were summed and applied to the left and right sides.

We had to write our own controller for this because WPILib’s PID controller only works for single-input, single-output systems while the drivetrain thing we were doing was multiple-input multiple-output. It worked OK, but even with a practice robot, we didn’t have any time to test more complex auton modes because we kept chasing hardware issues whenever the software subteam was given the robot.
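For what it’s worth, that summed two-loop scheme fits in a few lines of Python. This is a hypothetical sketch, not our actual robot code; the `PID` class and `drive_outputs` helper are invented for the example:

```python
class PID:
    """Textbook positional PID controller with a fixed timestep."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def calculate(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def drive_outputs(distance_pid, heading_pid, distance_ref, heading_ref,
                  left_encoder, right_encoder, gyro_angle):
    """Mix two SISO loops into a two-output drivetrain command."""
    # Forward loop: feedback is the averaged left/right encoder distance.
    forward = distance_pid.calculate(
        distance_ref, (left_encoder + right_encoder) / 2.0)
    # Heading loop: feedback is the gyro angle.
    turn = heading_pid.calculate(heading_ref, gyro_angle)
    # Summing/differencing applies both loop outputs to both sides.
    return forward + turn, forward - turn
```

The MIMO coupling is the last line: each loop’s output touches both wheels, which is exactly what a single SISO PID controller object can’t express on its own.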

For now, we’re just trying to implement state-space controllers in last year’s robot code and write unit tests against it with Google Test. The idea is that by modeling the robot and writing tests in this way, we can verify our software works independently from the hardware. It should also help cut down on the amount of trial-and-error tuning we have to do. The goal for next year is to have all the subsystems that use feedback use a state-space controller, but we’ll see how far we get. We use GradleRIO, but it doesn’t really have a way to build and run tests that we know of. WPILib is working on this for 2019.
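To make the “test against a model” idea concrete, here’s a minimal Python sketch of the approach (our actual tests are C++ with Google Test against the generated gains; the double-integrator “elevator” plant and the gains below are made up for illustration):

```python
def simulate_closed_loop(kp, kd, reference, steps, dt=0.02):
    """Run a PD control law against a simulated double-integrator plant.

    No hardware involved: the "plant" is two lines of Euler integration,
    so this check can run in CI long before the robot exists.
    """
    position, velocity = 0.0, 0.0
    for _ in range(steps):
        u = kp * (reference - position) - kd * velocity  # control law under test
        velocity += u * dt   # plant model: acceleration = u
        position += velocity * dt
    return position


# The "unit test": assert the closed loop settles at the reference.
assert abs(simulate_closed_loop(4.0, 2.0, 1.0, 1000) - 1.0) < 0.05
```

If the control law has a sign error or unstable gains, the assertion fails on a developer machine instead of on the practice field, which is the whole point.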

This is the single most helpful resource I’ve found on Chief Delphi. Great work!

I would totally buy this. Everything about this resource is absolutely beautiful. Thank you so much for all your hard work and sharing this with the community!

As someone who is decidedly not a programmer but has looked into control methods before, I appreciate that you actually define the variables; it’s remarkable to me how many resources out there just assume you already know what all the random letters in a formula stand for.

Wow. I’ve been trying to figure out ways to get more advanced in control systems for a while now (to no avail). At first glance, this seems like everything I didn’t know I needed. Thank you so much! I’m super excited to jump into this.

Thanks for posting this!
This will be a great tool when trying to teach control theory.
I think there are some places that could use some more explanation for a high school student. Most of this is a refresher for me, and some of the material was still a little dense.
Hopefully I’ll get some time to submit some of the typos I’ve found.

Again, great work on this so far!

Yea, that’s the place where this needs the most work (it’s the first thing in the “future improvements” part of the readme, actually). I’ve been making corrections and additions throughout the week, and I just pushed my latest todo list. The biggest outstanding issue for me right now is rewriting the introduction to the Laplace domain.

Thank you. This is an amazing resource.

I’ve been trying to digest this since last August. This is brain candy. Thank you so much.

Yes please! As a mechanical engineer who took little to no classes about transfer functions, that Laplace section was a smack in the face. The information there is not explained fully enough.

I get that Laplace transforms turn functions of time into functions of frequency, which somehow manages to turn hard ODEs into algebra problems, but that’s where my knowledge ends. I like to see proofs of things and see where and how people like Laplace came up with these formulas, because it isn’t immediately obvious to take a Laplace transform if you’ve never heard of it. I guess I could just google the derivation for the Laplace transform integral, but for the sake of completeness I would much appreciate some background behind why the Laplace transform integral formula is the way it is and how exactly it works. Kind of like a gut intuition or something. Also, I’ve been able to make progress with just getting the ODE of my system from physics and converting it to state-space form just fine, so I don’t understand why we need to know about transfer functions for control theory. Is it just another way of solving for the poles or eigenvalues, or are there inherent benefits to both state-space modeling and transfer function modeling?

The “gut intuition” you’re looking for is the notion of a projection.

It will help if you draw pictures while going through the following explanation:

Imagine two-dimensional Euclidean space, \mathbb{R}^2 (i.e. the standard x-y plane). Ordinarily, we notate points in this plane by their components in the set of basis vectors \{i, j\}, where i is the unit vector in the +x direction, and j is the unit vector in the +y direction.

How do we find the coordinates of a given vector v in this basis? Well, so long as the basis is orthogonal (i.e., the basis vectors are at right angles to each other), we simply take the orthogonal projection of v onto i and j. Intuitively, this means finding “the amount of v that points in the direction of i or j.” More formally, we can calculate it with the dot-product - the projection of v onto any other vector w is equal to \frac{v \cdot w}{|w|}. (Since i and j are unit vectors we see simply that the coordinates of v are v \cdot i and v \cdot j.) (We can also see that “orthogonal” can be defined as “has zero dot product.”)

But we can do this same process to find the coordinates of v in any orthogonal basis. For example, imagine the basis \{i+j, i-j\} - the coordinates in this basis are given by \frac{v \cdot (i + j)}{\sqrt{2}} and \frac{v \cdot (i - j)}{\sqrt{2}}. Let us now “unwrap” the formula for dot product, and look a bit more closely:

\frac{v \cdot (i + j)}{\sqrt{2}} = \frac{1}{\sqrt{2}}\sum_{n}{v_{n}(i+j)_{n}}

So, what have we really done to change coordinates? We expanded both v and i+j in a basis, multiplied their components, and added them up.
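You can check all of the above numerically in a few lines. This is my own illustration (plain Python, names are mine), using v = (3, 1):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def coordinate(v, w):
    """Scalar projection of v onto w: (v . w) / |w|."""
    return dot(v, w) / math.sqrt(dot(w, w))

v = (3.0, 1.0)

# Standard basis {i, j}: the coordinates are just v . i and v . j.
assert coordinate(v, (1.0, 0.0)) == 3.0
assert coordinate(v, (0.0, 1.0)) == 1.0

# Rotated orthogonal basis {i+j, i-j}.
a = coordinate(v, (1.0, 1.0))   # (3 + 1)/sqrt(2)
b = coordinate(v, (1.0, -1.0))  # (3 - 1)/sqrt(2)

# Recombining along the unit vectors of the new basis recovers v exactly.
u1 = (1 / math.sqrt(2), 1 / math.sqrt(2))
u2 = (1 / math.sqrt(2), -1 / math.sqrt(2))
recovered = tuple(a * p + b * q for p, q in zip(u1, u2))
assert all(abs(r - x) < 1e-12 for r, x in zip(recovered, v))
```

The final assertion is the payoff: projecting onto an orthogonal basis loses nothing, and the coordinates rebuild the original vector.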

Now, the previous example was only a change of coordinates in a finite-dimensional vector space.
However, as we will see, the core idea does not change much when we move to more-complicated structures. Observe the formula for the Fourier transform:

\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)\ e^{-2\pi i x \xi}\,dx, where \xi \in \mathbb{R}

What we need to see, here, is that this is fundamentally the same formula that we had before. f(x) has taken the place of v_{n}, e^{-2\pi i x \xi} has taken the place of (i+j)_{n}, and the sum over n has turned into an integral over dx, but the underlying concept is precisely the same. To change coordinates in a function space, we simply take the orthogonal projection onto our new basis functions. In the case of the Fourier transform, the function basis is the family of functions of the form f(x) = e^{-2\pi i x \xi} for \xi \in \mathbb{R}. Since these functions are oscillatory at a frequency determined by \xi, we can think of this as a “frequency basis.”
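You can watch this “projection onto the frequency basis” happen numerically. The snippet below is my own illustration (an arbitrary Gaussian-windowed 5 Hz cosine): discretizing the integral turns it into exactly the dot-product form above, and the magnitude of the result peaks at \xi = 5:

```python
import cmath
import math

def fourier_coefficient(f, xi, x0=-10.0, x1=10.0, n=20000):
    """Riemann-sum approximation of the Fourier integral: a big dot
    product between f and the basis function e^{-2 pi i x xi}."""
    dx = (x1 - x0) / n
    return sum(f(x0 + k * dx) * cmath.exp(-2j * math.pi * (x0 + k * dx) * xi)
               for k in range(n)) * dx

# A 5 Hz cosine with a Gaussian window so the integral converges.
f = lambda x: math.cos(2 * math.pi * 5 * x) * math.exp(-x * x)

at_5 = abs(fourier_coefficient(f, 5.0))  # large: f "points along" this basis function
at_2 = abs(fourier_coefficient(f, 2.0))  # nearly zero: f is almost orthogonal to it
assert at_5 > 100 * at_2
```

Just as v · i picks out how much of v points along i, this sum picks out how much of f oscillates at frequency \xi.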

Now, the Laplace transform is somewhat more complicated - as it turns out, the Fourier basis is orthogonal, and so the analogy to the simpler vector space holds almost-precisely. The Laplace basis is not orthogonal, and so we can’t interpret it strictly as a change of coordinates in the traditional sense. However, the intuition is much the same: we are projecting our original function onto the functions of our new basis set:

F(s) =\int_0^\infty f(t)e^{-st} \, dt, where s \in \mathbb{C}

Here, it becomes obvious that the Laplace transform is a generalization of the Fourier transform, in that the basis family is strictly larger (we have allowed the “frequency” parameter to take complex values, as opposed to merely real values). The upshot of this is that the Laplace basis contains functions that grow and decay, while the Fourier basis does not.
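Numerically, the Laplace version looks nearly identical; only the basis function and integration limits change. Here is a sketch (my own example, using f(t) = e^{-t} and checking against its textbook transform 1/(s + 1)):

```python
import cmath
import math

def laplace(f, s, t_max=50.0, n=50000):
    """Riemann-sum approximation of the one-sided Laplace transform of f at s."""
    dt = t_max / n
    return sum(f(k * dt) * cmath.exp(-s * k * dt) for k in range(n)) * dt

f = lambda t: math.exp(-t)  # known transform: 1/(s + 1)

# Real s: projection onto a purely decaying basis function.
assert abs(laplace(f, 2.0) - 1.0 / 3.0) < 0.01
# Complex s: decay plus oscillation -- the Fourier part lives in Im(s).
assert abs(laplace(f, 1.0 + 3.0j) - 1.0 / (2.0 + 3.0j)) < 0.01
```

Convergence is the practical payoff of the larger basis: as long as Re(s) exceeds the growth rate of f, the integrand decays, so the Laplace transform handles growing signals that the Fourier integral cannot.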

I hope this clears it up a bit.