Control Systems

Hi, I am a programmer for team 2468, and have recently been getting into control system theory. We use PID for all of our subsystems, and it has worked well for us, but I know that 971 is known for their control systems, and had a couple questions for @AustinSchuh about their control system. I wanted to know what kind of control system y’all used(SISO or MIMO), and the advantages to it. I also would like to know what the inputs and outputs are, as well as the states for a certain subsystem on your robot. Would love to hear back from any team, not just 971.

Very similar to an ongoing topic, which has received a post in the last hour and several today:

We’ve used a bunch of controller and observer types over the years. I’ll try to list them all out, and some of their properties, but you are going to have to ask some clarifying questions.

For terminology, MIMO and SISO are properties of the system being controlled (the plant), not of the controller.

95% of our controllers are statespace controllers. We rarely use PID anymore, though for ideal brushed DC motors, PD controllers are equivalent to statespace controllers. The larger benefits come from the smoother and more accurate state estimates enabled by having a statespace model and using it to build various Kalman filters and observers.

We’ve done SISO and MIMO direct pole placement controllers, and MIMO and SISO LQR controllers. Our current drivetrain spline following controller is a MIMO controller which we redesign every control loop cycle around the current operating point. Our 2018 arm was also an LQR controller which was re-linearized every control loop cycle.

Somewhere around 2016, we discovered that for MIMO systems, direct pole placement was resulting in non-symmetric controllers. The left and right sides had different gains, which resulted in the robot not driving straight. After that, we’ve been using LQR controllers for everything but flywheels, where the 1 state system is simple enough that it doesn’t matter. I’m a big fan of optimal controllers.

We’ve dabbled with MPC controllers, but haven’t gotten one working well enough to run on the robot. We haven’t done anything with sliding mode controllers either.

For us, the benefit is faster and more precise control, at a higher CPU and cognitive cost. PID has 3 knobs. A comparable SS controller with a matching Kalman filter will have 6 knobs for a mass hooked to a motor, which represents most SISO FRC systems. The math is significantly harder and takes much more understanding to use well. You also can’t use the built-in PID controllers on the Talons and Falcons and such.

Our left/right position controller for our drivetrain is a 4 state LQR controller paired with a 7 state Kalman filter. The controller states are [left position; left velocity; right position; right velocity]. The KF states are [left position; left velocity; right position; right velocity; angular_wheel_slip; left_voltage_error; right_voltage_error].

Our elevator controller from last year is pretty representative. It was a 2 state LQR controller paired with a 3 state Kalman filter. The controller states are [position; velocity] and the KF states are [position; velocity; voltage_error].
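
If it helps to see what that looks like in practice, here is a minimal sketch (not 971’s code) of a 2 state elevator-style LQR in Python, with made-up constants in place of real motor and mechanism parameters:

```python
# Minimal sketch of a 2-state LQR for a mass driven by a motor (elevator-like).
# All physical constants here are made-up placeholders, not 971's numbers.
import numpy as np
from scipy.linalg import expm, solve_discrete_are

# Continuous model, x = [position; velocity], u = [voltage]:
#   xdot = A x + B u
A = np.array([[0.0, 1.0],
              [0.0, -8.0]])   # placeholder back-EMF damping term
B = np.array([[0.0],
              [2.0]])         # placeholder acceleration per volt

dt = 0.005  # 200 Hz control loop

# Zero-order-hold discretization: expm([[A, B], [0, 0]] * dt) = [[Ad, Bd], [0, I]]
M = expm(np.block([[A, B], [np.zeros((1, 3))]]) * dt)
Ad, Bd = M[:2, :2], M[:2, 2:]

# LQR weights (Bryson's rule): tolerate ~2 cm / 0.4 m/s of error, 12 V of effort.
Q = np.diag([1.0 / 0.02**2, 1.0 / 0.4**2])
R = np.array([[1.0 / 12.0**2]])

# Solve the discrete algebraic Riccati equation and back out the gain.
S = solve_discrete_are(Ad, Bd, Q, R)
K = np.linalg.solve(R + Bd.T @ S @ Bd, Bd.T @ S @ Ad)

def control(r, x):
    """u = K (r - x), clamped to the battery voltage."""
    return np.clip(K @ (r - x), -12.0, 12.0)
```

The Q and R weights are some of the extra “knobs” mentioned above: they trade off state error against control effort.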

The almost more important question is what observers we have used and currently use.

We use linear Kalman filters for most systems. We’ve been adding hybrid Kalman filters to the mix (they don’t assume that dt is constant), and we’ve also been using extended Kalman filters to estimate on-field locations. We are adding a down estimator this year built using an unscented Kalman filter. For everything but a drivetrain, a standard Kalman filter should be good enough.
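
For the hybrid variety, the basic trick is to keep the model in continuous time and rediscretize it with whatever dt actually elapsed. A rough sketch of a predict step that does this (illustrative only, not 971’s implementation):

```python
import numpy as np
from scipy.linalg import expm

def discretize(A_c, B_c, dt):
    """Zero-order-hold discretization of xdot = A_c x + B_c u over a step dt."""
    n, m = A_c.shape[0], B_c.shape[1]
    M = expm(np.block([[A_c, B_c],
                       [np.zeros((m, n + m))]]) * dt)
    return M[:n, :n], M[:n, n:]  # A_d, B_d

def hybrid_predict(x, P, u, A_c, B_c, Q_c, dt):
    """Kalman filter predict step that takes the measured dt instead of assuming it."""
    A_d, B_d = discretize(A_c, B_c, dt)
    Q_d = Q_c * dt  # crude first-order approximation of the discretized process noise
    x = A_d @ x + B_d @ u
    P = A_d @ P @ A_d.T + Q_d
    return x, P
```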


And for a team just starting to get into more advanced control systems, do you have any advice on how we should get started implementing these controllers? Also, would you say the step up from PID to LQR is worth the time invested in upgrading our systems?

I am certainly no expert on the topic but, like you, have also been very interested in control systems. I recently found this YouTube series, and so far it has been one of the only good resources on the topic that I have been able to follow. This textbook on controls written for FRC students has also proven very useful.


@AustinSchuh I have done a bit more research on control systems and have gotten a lot more confident in my understanding of the models, but I still don’t know how the A and B matrices are formed. From the videos I have watched, they depend on the plant you are using. Obviously we are using DC motors, but I haven’t found the specifics on what’s inside the matrices. Also, for the u vector, do you only factor in voltage, or other things such as torque? Also, what do you mean when you say you re-linearized the controller every control loop cycle? From the examples you gave, you factor in voltage. Have you had issues with voltage drop-off, and does the model account for that well?

The down estimator in your 2019 code used a linear state and observation model. What nonlinear components did you add to improve performance?

Chapter 14 “Model examples” of https://tavsys.net/controls-in-frc shows how to derive a model for a DC motor, elevator, single-jointed arm, flywheel, and drivetrain using sum of forces and sum of torques. Chapter 8 “State-space applications” shows how to convert those into state-space notation. (The physics derivations come later because they tend to bog down lectures with my students when the overall goal is to teach them the controls aspects and why they should care.) In https://www.youtube.com/watch?v=RLrZzSpHP4E, Austin walks through the derivation for an elevator on a whiteboard. I’ll walk through state-space notation for a pendulum next, since it’s related to your next question.
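
As a rough preview of where the elevator derivation lands (ignoring gravity, and deferring to the chapter for the details and sign conventions), with gear ratio G, motor torque constant K_t, velocity constant K_v, motor resistance R, drum radius r, carriage mass m, and applied voltage V:

\dot{x} = v
\dot{v} = -\frac{G^2 K_t}{R r^2 m K_v} v + \frac{G K_t}{R r m} V

so with state [position; velocity] and input [voltage],

\mathbf{A} = \begin{bmatrix} 0 & 1 \\ 0 & -\frac{G^2 K_t}{R r^2 m K_v} \end{bmatrix} \qquad \mathbf{B} = \begin{bmatrix} 0 \\ \frac{G K_t}{R r m} \end{bmatrix}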

The underlying model for a double-jointed arm is nonlinear due to gravity torque. I’m not sure what 971’s double-jointed arm model was in 2018, but the model for a single-jointed arm under gravity (aka a pendulum) should get the point across.

\ddot{\theta} = -\frac{g}{l}\sin\theta

where \theta is the angle of the pendulum and l is the length of the pendulum.

Since state-space representation only allows first derivatives, higher-order derivatives should be broken up into separate states. We’ll reassign \dot{\theta} to be \omega so the derivatives are easier to keep straight for state-space representation.

\dot{\omega} = -\frac{g}{l}\sin\theta

Now separate the states.

\dot{\theta} = \omega
\dot{\omega} = -\frac{g}{l} \sin\theta

State-space notation requires the model to be a linear combination of the states and inputs (addition and multiplication by constants). Since this model is nonlinear on account of the sine function, we should linearize it to fit into state-space notation. This finds a tangent line to the nonlinear dynamics. The Taylor series is a way to approximate arbitrary functions with polynomials. Recall y = mx + b is a polynomial, so we can use the Taylor series to linearize.
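
Concretely, the first-order Taylor approximation of a function f around an operating point x_0 is

f(x) \approx f(x_0) + \frac{df}{dx}\bigg|_{x_0} (x - x_0)

which is a line through the operating point with slope equal to the derivative there. For multivariable models, the same idea evaluated at the operating point gives the Jacobian matrices that become \mathbf{A} and \mathbf{B}.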

The Taylor series expansion for \sin\theta around \theta = 0 is \theta - \frac{1}{6} \theta^3 + \frac{1}{120} \theta^5 - \ldots. We’ll take just the first-order term \theta to obtain a linear function.

\dot{\theta} = \omega
\dot{\omega} = -\frac{g}{l} \theta

Now write the model in state-space representation. We’ll write out the system of equations with the zero coefficients included explicitly to assist with this.

\dot{\theta} = \;\;\;\,0 \theta + 1 \omega
\dot{\omega} = -\frac{g}{l} \theta + 0 \omega

Factor out \theta and \omega into a column vector.

\dot{ \begin{bmatrix} \theta \\ \omega \end{bmatrix}} = \begin{bmatrix} 0 & 1 \\ -\frac{g}{l} & 0 \end{bmatrix} \begin{bmatrix} \theta \\ \omega \end{bmatrix}

This model has no input, of course. You could add an angular acceleration term (proportional to torque) as an input.
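
For example, treating the pendulum as a point mass m on the end of the rod and applying a torque \tau at the pivot, the input enters through the rotational inertia ml^2:

\dot{ \begin{bmatrix} \theta \\ \omega \end{bmatrix}} = \begin{bmatrix} 0 & 1 \\ -\frac{g}{l} & 0 \end{bmatrix} \begin{bmatrix} \theta \\ \omega \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{1}{ml^2} \end{bmatrix} \tau

For a motor-driven joint, you would then write \tau in terms of applied voltage, which is where the motor constants end up in \mathbf{B}.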

Assume you apply, say, 5V to an elevator. The model may say it moves some amount, but in reality it moves less than that due to friction, battery voltage sag, gravity, etc. The voltage error state estimates the hypothetical extra voltage you’d have to apply to make the model match reality.

\mathbf{x}_{k+1} = \mathbf{A}\mathbf{x}_k + \mathbf{B}u_{error,k} + \mathbf{B}\mathbf{u}_k

You can augment your state vector \mathbf{x} and move the first \mathbf{B} term into your \mathbf{A} matrix.

\mathbf{x}_{aug,k} = \begin{bmatrix}\mathbf{x} \\ u_{error,k}\end{bmatrix}
\mathbf{x}_{aug,k+1} = \begin{bmatrix}\mathbf{A} & \mathbf{B} \\ \mathbf{0} & \mathbf{0}\end{bmatrix} \mathbf{x}_{aug,k} + \begin{bmatrix}\mathbf{B} \\ \mathbf{0}\end{bmatrix}\mathbf{u}_k

Note how the third state u_{error} has no dynamics. The state estimator chooses a u_{error} at every timestep that minimizes the difference between the estimated outputs (that is, what the outputs should be based on your state estimate), and the measured outputs.

You can subtract u_{error} from your controller output to counteract the disturbance.
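
If it helps to see the shapes, here is a minimal Python sketch of that augmentation (illustrative only, not anyone’s actual code). One common discrete-time realization is to model u_{error} as a constant disturbance, i.e. put an identity in the bottom-right block so the estimate carries over between prediction steps:

```python
import numpy as np

def augment(A, B, C):
    """Append an input-error state to a discrete model x_{k+1} = A x_k + B u_k, y = C x."""
    n, m = A.shape[0], B.shape[1]
    A_aug = np.block([[A, B],
                      [np.zeros((m, n)), np.eye(m)]])   # error state holds its value
    B_aug = np.vstack([B, np.zeros((m, m))])
    C_aug = np.hstack([C, np.zeros((C.shape[0], m))])   # error isn't measured directly
    return A_aug, B_aug, C_aug

# A Kalman filter running on (A_aug, B_aug, C_aug) estimates
# x_aug = [x; u_error]. The controller then subtracts the estimated error:
#   u = K @ (r - x_aug[:n]) - x_aug[n:]
```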


As the driver drove over the platform with one wheel, the robot would move in 3D space in a way that wasn’t represented in our linear model. It would turn while at an angle to the ground, and we would integrate that incorrectly.

We implemented http://kodlab.seas.upenn.edu/uploads/Arun/UKFpaper.pdf, an unscented Kalman filter based on quaternions. So far, it has been pretty good.

