I know that for general applications in FRC, a PID controller is more than enough. That being said, it got me curious: what other control loops are there besides a PID and a bang-bang controller?
Well, the main other one that comes to mind is Logic Control, or Ladder Logic Control. Traditionally, this was a “hardware” solution using a series of interconnected relays to control systems with what is called Ladder Logic.
In modern times, Ladder Logic has become a programming concept commonly used when programming PLCs (Programmable Logic Controllers). Referred to as rule-based languages, these are often graphical in nature, where you diagram your logic. This is very similar to what LabVIEW and the cRIO/RoboRIO are meant to do, or be able to do, though LabVIEW is MUCH broader in scope and the RIOs are much more powerful than the few PLCs I’ve seen, so you do not necessarily need to use ladder logic with them.
Do be mindful that I don’t actually know much about Ladder Logic, and may have misrepresented some of what I said above. I only know what I was exposed to interning for ~5 weeks at a manufacturing plant my Senior year of High School ~7 years ago before they moved me to a more “fitting” internship for my later goals in life, and later in college helping with another group’s Senior Capstone Project.
I’m sure there are others, but here’s one I remember reading about:
State-space control! It’s based on the idea that if you know the physics of your system and can predict how it’ll react to a given input, then you can also tune the system in a way that’s analogous to, but more general than, tuning PID controllers to provide the response you want. It requires knowing or estimating the “state” of the system being controlled, aka all the variables required to tell you its future state.
For a typical DC motor, there are 3 states: position, velocity, and current. However, for FRC motors, the inductance is very much negligible, so current changes “instantaneously”, meaning we only need to consider position and velocity. A SS controller gives you an input to apply that is (generally) a linear combination of the errors in the states. In this sense, PD for controlling motor position is a state-space controller, since you know the position and velocity states and set the voltage to be kP*(position error) + kD*(velocity error). A PID controller essentially adds another state to the system: the integral of position (aka absement). The benefit of using a state-space method is that it is much more general than PID. For example, on a differential drive base, the motors on each side are not independent and do end up exerting forces on each other. By properly modeling this, a state-space controller can find voltages to apply to both sides, each a linear combination of both sides’ velocities rather than just one, that give an acceptable and tunable response.
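To make the “PD is state feedback” point concrete, here’s a tiny Python sketch (the gains and setpoint are made up purely for illustration):

```python
import numpy as np

# Hypothetical gains and goal, not tuned for any real mechanism.
kP, kD = 12.0, 0.5
reference = np.array([1.0, 0.0])  # [position, velocity] goal: stop at 1.0

def pd_as_state_feedback(x):
    """PD position control written as state feedback u = K (r - x)."""
    K = np.array([kP, kD])      # gain row vector
    return K @ (reference - x)  # voltage command

x = np.array([0.0, 0.0])        # at rest, away from the goal
u = pd_as_state_feedback(x)     # kP * 1.0 + kD * 0.0 = 12.0
```

The “D” term is just proportional feedback on the velocity state, which is why the same math generalizes to systems with more states.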
Another benefit is that there are many ways to tune a SS controller, as opposed to the guess-and-check of PID. Once you know how the system works physically, you can take your model into another program like MATLAB and tune the controller there, where hardware cannot break. Pole placement lets you specify the system’s “poles”, which basically describe how quickly the error falls. LQR (linear quadratic regulator) finds an optimal balance between error and control effort (so you get where you want to go but don’t drain your battery; you might’ve seen “LQR” already in WPI’s characterization tool, where you can specify a maximum error and voltage and it’ll calculate the optimal P or PD constants using LQR). MPC (model predictive control) lets you put more constraints on the system. In general, state-space is more robust and adaptable to any problem, whereas PID really fails once you have more than 2 states or 1 input.
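For example, here’s roughly what designing an LQR gain looks like in Python with SciPy. The model and the Bryson’s-rule weights below are made-up numbers, not any particular robot:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical discrete motor model, x = [position; velocity], dt = 20 ms.
dt = 0.02
A = np.array([[1.0, dt], [0.0, 0.98]])
B = np.array([[0.0], [dt]])

# Bryson's rule: weight each quantity by 1/(max acceptable value)^2.
Q = np.diag([1 / 0.05**2, 1 / 1.0**2])  # 5 cm position, 1 m/s velocity tolerance
R = np.array([[1 / 12.0**2]])           # 12 V input tolerance

# Solve the discrete algebraic Riccati equation, then form the gain
# K = (R + B'PB)^-1 B'PA, used as u = K (r - x).
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
```

Loosening the R weight trades battery for speed; tightening Q trades input effort for tracking, which is the “optimal balance” knob mentioned above.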
I’d recommend the Controls Engineering in FRC book to learn more about it. It is very math intensive and very much overkill, but still really interesting. I’ve been designing a state-space system for our drivetrain these past few days, since we couldn’t get PID to reliably control the drive base and the robot turns quite a lot even in open loop. State-space would account for that well.
You may be interested in my team’s 10-state MIMO drivetrain controller.
We also had a custom trajectory constraint that used the drivetrain model’s linear subspace (voltage input, velocity output) to compute min and max dx/dt.
https://github.com/frc3512/Robot-2020/blob/master/thirdparty/include/frc/trajectory/constraint/DrivetrainVelocitySystemConstraint.h
https://github.com/frc3512/Robot-2020/blob/master/thirdparty/cpp/frc/trajectory/constraint/DrivetrainVelocitySystemConstraint.cpp
I’ll be updating the book in a few days with this 10-state model. I just need to take care of some content reorganization first (you can see some of it committed on GitHub already).
Oh that’s interesting. If I may ask, what made you guys use voltage error? We’ve just been considering the effects of voltage error as noise to the system. I look forward to seeing the update!
P.S. Thanks for writing the book! It’s been a great reference to learn more about control theory and how to implement it specifically for FRC!
Assuming voltage error is noise implies it’s zero-mean. That hasn’t been our experience.
We had two main sources of modeling error throwing off our state estimate: battery voltage drop reducing the motors’ control authority, and turning scrub. The main benefit of U error estimation is that once you’ve estimated the errors, you can include them when making future predictions. This makes the predictions more accurate, so the measurements don’t have to correct as much after the fact. We separated the two sources of error into different estimates because they are linearly independent effects with different weights and units. We initially didn’t have an angular error term, and it led to oscillations in the error estimate.
It’s worth noting that starting with an accurate model really helps cut down on oscillations caused by U error estimation. If the model consistently predicts it will go faster when applying an input, it’ll overshoot the true state. U error will drag it back down and converge to zero, but the next prediction adds to U error again. We saw it ping-ponging between no U error and a lot, which caused chattering. Characterizing our drivetrain properly fixed this. The system ID stuff in our state-space lib uses a Kv and Ka for linear movement and another pair for angular movement (drive in a straight line for linear and spin in place for angular). We used frc-characterization to get Kv for both, but we had to tweak the Kas in a Kalman filter sim against real data, because frc-char doesn’t sample fast enough to capture our acceleration dynamics; its Ka results were usually just noise. It missed them because it only samples every 20 ms. NT has a lower limit of 10 ms, so we’ll need to rework frc-char to support 5 ms. I’m looking forward to when we can just let the tool do it and not have to stir the data around with a stick ourselves.
We could have added the U error estimates to the controller, but we didn’t notice much of a performance difference. Large voltage errors were usually caused by actuator saturation. U error estimation would say “throw more voltage at the problem to compensate”, but there’s no more to give. U error estimation mainly helps drive our model.
Thanks.
Do you know of any other resources for learning or clarifying some of what that book teaches? I’ve been trying to understand it for several months now but I often get confused on some of the vocabulary or notation used. I’m taking a course on Differential Equations and Linear Algebra so that should help a bit, but I was wondering if you had any suggestions on how to better utilize the book.
Huh okay, that makes sense. When updating your robot’s position, do you use the measured voltage or the voltage you tried to apply, corrected with the error?
Do you know how frc-characterization calculates acceleration? As I mentioned, our robot’s drivetrain is very mechanically biased, so I decided to account for it by assuming the two sides have different constants. Because everything good in life is linear, we have x[k + 1] = Ax[k] + Bu[k] where x = [left velocity, right velocity]’. Then I ran a linear regression on the raw JSON file it outputs to find A and B, without assuming they were diagonal. Then I converted the system to continuous time to sanity-check the constants, which also lets you calculate the theoretical accelerations. If characterization calculates a by doing dv/dt, then maybe this method would be more reliable?
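For what it’s worth, here’s a sketch of that regression in Python, with synthetic data standing in for the JSON log (the “true” A and B are made up just to show that the fit recovers them):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "true" coupled left/right velocity system, standing in for the log.
A_true = np.array([[0.95, 0.02], [0.01, 0.93]])
B_true = np.array([[0.10, 0.01], [0.01, 0.10]])

# Simulate a recorded run of x[k+1] = A x[k] + B u[k].
N = 500
X = np.zeros((N, 2))
U = rng.uniform(-12, 12, size=(N, 2))  # random voltage commands
for k in range(N - 1):
    X[k + 1] = A_true @ X[k] + B_true @ U[k]

# Least squares: x[k+1]' = [x[k]' u[k]'] [A'; B'], no diagonal assumption.
Phi = np.hstack([X[:-1], U[:-1]])  # (N-1) x 4 regressor matrix
Theta, *_ = np.linalg.lstsq(Phi, X[1:], rcond=None)
A_fit, B_fit = Theta[:2].T, Theta[2:].T
```

With real data the measurements are noisy, so the fit won’t be exact like it is here, but the stacked-regressor setup is the same.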
I found that complementing it with reading Wikipedia articles (it’s reliable, I swear!) and online PDFs, especially from MIT’s OpenCourseWare and whatever textbooks you can find, worked really well. I do think control theory is one of those topics where getting the info in many different ways helps to solidify it. I believe MATLAB also made some video lessons about state-space control? I’ve seen their series on MPC and it was pretty good, so the state-space series probably is too.
File an issue on GitHub with any sections or vocab that was confusing and I’ll work on fixing them: Issues · calcmogul/controls-engineering-in-frc · GitHub
We don’t measure voltage. From the encoder position and heading measurements, U error estimation estimates the difference between the desired voltage and a hypothetical voltage that makes the model match the observed behavior. We add that difference to the model prediction to make it follow the observed behavior more closely.
It doesn’t calculate acceleration explicitly. It does a linear multiple regression on position if I recall correctly. @Oblarg can expand on that.
If your drivetrain doesn’t drive straight, I strongly recommend fixing your drivetrain. We’ve spent months working around a mechanical problem with controls, and in the end, fixing the problem in hardware saved everyone a lot of time and frustration (in this case, it was backlash in a four-bar lift causing oscillation). There’s a time and place for fancy controls, but try to avoid fixing a hardware problem in software.
It could be something like chain tensioning or gearbox friction, for what it’s worth. This kind of thing is much easier to address in hardware, so neither the feedforward nor the feedback should need to care.
Doing the model in the discrete domain isn’t desirable because the sample period of the model on the real robot isn’t constant due to scheduling jitter. At 200Hz, this is very noticeable and will inject a lot of noise into your state estimate (think 1ms of jitter for every 5ms). Discretizing your continuous model on-the-fly addresses this.
By the way, theorem 15.2.1 in section 15.2 of the book has the model we used to convert the two Kvs and two Kas to a state-space model. We used https://github.com/frc3512/Robot-2020/blob/master/tools/characterize_with_ekf.py to find Ka for linear and angular. You can also replay recorded voltage inputs and tweak Ka until it follows the measurements well in sim.
Wait, but everything is a software problem!!! In all seriousness, our meetings have been cancelled due to health concerns, so we can’t work on the robot right now. So I’ll keep designing the controller, since I can do that from home. Plus, it’s a learning experience; we’ve only ever used PID on the drive base before.
Our Kalman filter hasn’t been finalized yet, but I’ve been testing it in MATLAB at 50Hz. Maybe we can up the speed later, but it’s given good performance. And we only get Limelight data at 90Hz, so at 200Hz we’d be using old measurements, though to be fair at that speed it’s not really a problem! If we do increase it, then it’ll be pretty easy to account for by discretizing on the fly as you said; we already store the continuous-time constants in the code. We’re using an unscented Kalman filter and transforming the sigma points with an ODE solver, so we already use continuous-time dynamics - all it would take is making dt variable and measuring it. Accurately getting the continuous system is the hard part, and as you said, the way we did it isn’t very rigorous. But we did get an R^2 of like 0.999, so it’s probably fine. I’ll take a look at how frc-char is doing it and explore other more rigorous approaches if we want a lower period. Maybe some sort of gradient descent to find continuous A and B that get discretized between time steps?
Actually, measurement delay is a much bigger problem than the sample rate, because the vision pipeline latency is nonnegligible. If you’re incorporating measurements from 200ms ago, you’re going to incur a lot of noise in your state estimate at the very least, if not severe oscillation in your target tracking. This can also quickly lead to controller instability if your LQR gains are too aggressive (they tend to be). To address this, you have to keep a backlog of states, inputs, and error covariances, find the entry corresponding to the vision measurement’s timestamp, apply the correction, then replay the inputs and measurements that came after it in the backlog up to the current time. We call that latency compensation.
Alternatively, you could reduce your controller’s gains so the latency has less of an impact, but that only works to a point, and it has lower performance than is possible if you incorporate your measurements at the correct timestep.
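For anyone curious, here’s a deliberately tiny 1-state Python sketch of the replay idea. A real implementation would also store and replay error covariances; the “model” here is just x += u*dt, and the names are made up:

```python
from collections import deque

DT = 0.005  # hypothetical 5 ms loop period

class LatencyCompensatedFilter:
    """Toy 1-state filter: predict x += u*dt, replay on delayed measurements."""

    def __init__(self, depth=100):
        self.x = 0.0
        self.backlog = deque(maxlen=depth)  # (timestamp, x_before, u)

    def predict(self, t, u):
        self.backlog.append((t, self.x, u))
        self.x += u * DT

    def correct_delayed(self, t_meas, z, gain=1.0):
        # Find the backlog entry closest to the measurement's timestamp.
        idx = min(range(len(self.backlog)),
                  key=lambda i: abs(self.backlog[i][0] - t_meas))
        _, x0, _ = self.backlog[idx]
        x = x0 + gain * (z - x0)  # apply the correction in the past
        # Replay the stored inputs from that point up to the present.
        for i in range(idx, len(self.backlog)):
            x += self.backlog[i][2] * DT
        self.x = x
```

With gain=1 and this noiseless toy model, a delayed but exact measurement snaps the current estimate back onto the truth after replay, which is the whole point of the technique.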
You may be interested to know that a team of students on the FRC Discord (@thatmattguy, @pietroglyph, and @Prateek_M) and I are working on C++ and Java libraries for state-space control targeting the 2021 season.
It includes support for:
- On-the-fly discretization of linear models
- Numerical integration of nonlinear models via Runge-Kutta 4th order
- Linearization via numerical Jacobians
- Linear-Quadratic Regulators (the DARE is solved online in the constructor)
- Kalman filters (the linear kind)
- Extended Kalman filters
- Unscented Kalman filters
- Adapters for system identification results from frc-characterization
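For reference, the RK4 integration in that list is only a few lines. Here’s a generic single-step implementation in Python:

```python
import math

def rk4(f, x, u, dt):
    """One step of 4th-order Runge-Kutta for dx/dt = f(x, u)."""
    k1 = f(x, u)
    k2 = f(x + dt / 2 * k1, u)
    k3 = f(x + dt / 2 * k2, u)
    k4 = f(x + dt * k3, u)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Sanity check: dx/dt = -x has the exact solution x(t) = e^{-t}.
x, dt = 1.0, 0.01
for _ in range(100):  # integrate out to t = 1
    x = rk4(lambda x, u: -x, x, None, dt)
```

Because the global error is O(dt^4), this handles nonlinear models well at RIO-scale timesteps, and passing the measured dt each iteration is what makes the on-the-fly discretization work.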
The RIO is fast enough for all of that, so the idea is you never have to export gains from MATLAB or Python, and you can do all your sims in unit tests. That’s what FRC team 3512 did for 2020 because we got the competition robot to ourselves one day before competition. The only thing that didn’t work was what we didn’t have time to unit test, which was our autonomous mode state machines.
Most of the development right now is on documentation, tutorials, and latency-compensated odometry-vision fusion classes, which use a Kalman filter at their core. We figure teams won’t use this stuff much unless we can provide abstractions for common use-cases.
You can actually be much more efficient than that if the model is linear.
e^{\begin{bmatrix}\mathbf{A}_c & \mathbf{B}_c \\ \mathbf{0} & \mathbf{0}\end{bmatrix} T} = \begin{bmatrix}\mathbf{A}_d & \mathbf{B}_d \\ \mathbf{0} & \mathbf{I}\end{bmatrix} discretizes an (\mathbf{A}_c, \mathbf{B}_c) pair to a timestep T. You could invert this to obtain the continuous matrices from discrete ones.
\begin{bmatrix}\mathbf{A}_c & \mathbf{B}_c \\ \mathbf{0} & \mathbf{0}\end{bmatrix} = \frac{1}{T} \log\left(\begin{bmatrix}\mathbf{A}_d & \mathbf{B}_d \\ \mathbf{0} & \mathbf{I}\end{bmatrix}\right)
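In Python, that block-matrix trick is a few lines with SciPy’s expm and logm (the continuous model below is made up for illustration):

```python
import numpy as np
from scipy.linalg import expm, logm

# Made-up continuous model: dx/dt = Ac x + Bc u, x = [position; velocity].
Ac = np.array([[0.0, 1.0], [0.0, -5.0]])
Bc = np.array([[0.0], [2.0]])
T = 0.02  # 20 ms timestep

# Discretize by exponentiating the block matrix [[Ac, Bc], [0, 0]] * T.
M = np.zeros((3, 3))
M[:2, :2], M[:2, 2:] = Ac, Bc
Md = expm(M * T)
Ad, Bd = Md[:2, :2], Md[:2, 2:]

# Invert with the matrix logarithm to recover (Ac, Bc) from (Ad, Bd).
Mc = logm(Md).real / T
Ac_back, Bc_back = Mc[:2, :2], Mc[:2, 2:]
```

The round trip is exact (up to numerical precision) as long as the model is linear, which is exactly the limitation pointed out below for nonlinear drivetrain models.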
You did mention you were working on a drivetrain model though, which would imply it’s nonlinear. If you’re using range measurements from the Limelight, the measurement model may be nonlinear as well, so using a UKF may make sense. However, if the model really is nonlinear, you can’t use the approach above with the matrix logarithm and would be better off with a more general numerical differentiation strategy.
Instead of doing any of that though, you should really just use a continuous model and discretize based on the measured timestep with Runge-Kutta. The RIO is fundamentally a discrete system, so undiscretizing and rediscretizing at a different timestep is a waste of resources.
Yeah, that’s one reason I didn’t want to design the estimator at anything above 50Hz, it’s a lot of hassle getting latency compensation implemented properly.
Yes I am very interested to know that! I’ve been struggling to make a Java library for state space control for my team to use this whole season, and it would be great to see built in support from WPILib. Is there any way I could get involved with this? I’d love to contribute!
The equations for x and y add the nonlinearity, which is why I’m just considering the velocity terms and then adding x’ = v cos theta and y’ = v sin theta afterwards (already using Runge-Kutta to update these). The method you described is exactly how I’m solving for the continuous system given Ad and Bd. But what I’m saying is that between samples, x(t + dt) = Ad(dt) x(t) + Bd(dt) u(t), where Ad and Bd are calculated using your first equation and dt may not be the same between samples. So you could estimate Ac and Bc by doing a linear regression that assumes a constant sample time, converting the result to continuous time, and then correcting it with gradient descent that doesn’t assume a constant sample time by discretizing between samples with the individual dts… if that made any sense.
Hi, I’m one of the students working on the State-space library that’s hopefully getting merged for 2021. If you want to chat I’m pretty active on the FRC Discord (https://discord.gg/frc) in #programming-discussion, I’m ThatMattGuy#5462. We can find something for you to help with if you’re interested.
Our experiences have been the opposite. We’ve seen good performance gains from using the voltage error in our controllers as well as our model. It does a pretty reasonable job of estimating friction and other disturbance forces, while having reasonable behavior when saturated. We added it in 2016 to deal with all the friction and error we were seeing with pneumatic tires. All our controllers since then have used it.
We are using the following definition of voltage_error for a SISO model. It scales up as one would expect.
Original system:
X(n + 1) = A X(n) + B U(n)
States → [position; velocity]
Augmented system:
States → [position; velocity; voltage_error]
X(n + 1) = A X(n) + B voltage_error(n) + B U(n)
voltage_error(n + 1) = voltage_error(n)
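In code, that augmentation is just stacking the matrices. Here’s a Python sketch with a made-up SISO model; a Kalman filter run on the augmented system is what actually estimates voltage_error from the encoder measurements:

```python
import numpy as np

# Hypothetical SISO model: x = [position; velocity], one voltage input.
dt = 0.005
A = np.array([[1.0, dt], [0.0, 0.95]])
B = np.array([[0.0], [dt]])

# Augment with a constant voltage_error state, matching the equations above:
#   x_aug = [position; velocity; voltage_error]
#   X(n+1) = A X(n) + B voltage_error(n) + B U(n)
#   voltage_error(n+1) = voltage_error(n)
A_aug = np.block([[A, B], [np.zeros((1, 2)), np.eye(1)]])
B_aug = np.vstack([B, np.zeros((1, 1))])
```

The disturbance enters through the same B column as the input, so the observer attributes any unmodeled force to an equivalent voltage, which is what makes it usable as a feedforward correction.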
The left/right voltage error and angle error worked fine in unit tests with simulated disturbances. We were seeing the real system lag behind the model, then the u error estimate would spike, then we’d apply it to the input in the next timestep, then the u error estimate would go back down to zero, then the model would lag behind again, then repeat. It caused lots of drivetrain chattering. We did recharacterize our drivetrain and feedback control improved, so it could have just been a really bad model instead of U error in particular.
The oscillations could have also been scheduling issues or input delays we weren’t accounting for. For scheduling, we used a modified version of the WPILib notifier class with its internal thread set to an RT priority of 50 (fyi, WPILib gets a signal from the FPGA on the next wakeup, then pings a condition variable the user’s thread is waiting on). We used to run each controller in a separate RT thread, but we saw weird periodic blips in the error covariance that I assume were caused by them being woken up in the same stretch of time and fighting the scheduler for resources. Moving them all into a single RT thread which runs them serially fixed that. It also helped with determinism, because our turret needed the predicted drivetrain state when calculating the next turret heading reference; running them serially ensured the data was always from the correct timestep, because separate threads can be scheduled out of order. I guess we just need a good way to deterministically schedule tasks.