Auto Tuning PID

Found a thread from 2011 discussing a PID auto-tuning wizard for the cRIO (Autotuning PID loops tuning). Does this kind of thing exist currently? It would be nice to be able to set your robot in the middle of the floor and have it run a routine that automatically tunes the PID for the drivetrain, or for any of the mechanisms where you need PID tuned. Would that even be feasible?

This may be of help, especially with drivetrains. There is a new tool being made for this called sysid.

frc-characterization exists, which is really nice for the cases it can calculate gains for

sniped by mdurrani834 once again

Disclaimer: The below is my “working thoughts” on this topic. I’m going to ramble about it a bit. Maybe it will be enlightening.

Within numerical analysis, there is a concept called the “norm” of a function. A norm measures how much that function differs from 0. There are many different norms: the 1-norm, the 2-norm, …, the ∞-norm. The choice depends on the application.

By subtracting two functions f(x) and g(x) from each other, you get a difference function whose norm tells you how similar f(x) and g(x) are.
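
As a rough illustration, here's a sketch in pure Python that approximates the 1-, 2-, and ∞-norms of a difference function by sampling on an interval (the functions and interval are just made-up examples):

```python
import math

def diff_norms(f, g, a, b, n=1000):
    """Approximate the 1-, 2-, and ∞-norms of f - g on [a, b] by sampling."""
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    d = [f(x) - g(x) for x in xs]
    one = sum(abs(v) for v in d) * h            # ≈ ∫|f - g| dx
    two = math.sqrt(sum(v * v for v in d) * h)  # ≈ sqrt(∫(f - g)² dx)
    inf = max(abs(v) for v in d)                # ≈ sup |f - g|
    return one, two, inf

# Example: how far is sin(x) from the line y = x on [0, 1]?
one, two, inf = diff_norms(math.sin, lambda x: x, 0.0, 1.0)
```

Note the three norms give different numbers for the same pair of functions, which is why the choice depends on the application.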

The point of motion profiling is to get a robot’s movement function to match that of a desired function. Thus, you could write a program that tried to adjust parameters P, I, and D to minimize the difference norm.
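
To make that concrete, here's a toy sketch of the idea: simulate a made-up mass-with-drag plant under a PD controller, score each gain pair by the squared tracking error (a 2-norm), and brute-force search a coarse gain grid. Every constant here is invented for illustration; real mechanisms are rarely this well-behaved.

```python
def tracking_cost(kp, kd, setpoint=1.0, dt=0.005, steps=1000):
    """Integral of squared tracking error for a PD controller driving a
    toy mass-with-drag model to a step setpoint (made-up constants)."""
    pos, vel, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        u = kp * (setpoint - pos) + kd * (0.0 - vel)
        u = max(-12.0, min(12.0, u))   # voltage saturation
        acc = 4.0 * u - 6.0 * vel      # invented 1/Ka and Kv/Ka terms
        pos += vel * dt
        vel += acc * dt
        cost += (setpoint - pos) ** 2 * dt
    return cost

# Naive "auto-tune": exhaustively search a coarse grid of gains
cost, kp, kd = min(((tracking_cost(kp, kd), kp, kd)
                    for kp in range(1, 60, 2)
                    for kd in range(0, 10)),
                   key=lambda t: t[0])
```

A real auto-tuner would use something smarter than brute force, but the structure is the same: define a norm on the error, then search the gain space for a minimizer.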

I would still suggest using the tool suggested by @mdurrani834

Just tossing optimization at an arbitrary function usually fails, or at least doesn’t generalize; problems must be carefully constructed to be convex. MPC and NN-based control in particular need careful formulation to apply, because physical systems are often nonconvex or difficult to fit models to.

Do you have any sources that I and others could learn more on this topic?

Also, for all practical purposes, aren’t most PIDs in FRC tuned by arbitrarily changing the tuning parameters until the response “looks right”?

Here’s something to start with.

An example of a convex optimization problem is LQR (essentially a matrix of P controllers mapping each state error to an input). The optimization problem is defined as

\begin{align*} \min\limits_u &\int\limits_0^\infty (x^\mathsf{T}Qx + u^\mathsf{T}Ru)\,dt \\ \text{subject to } &\frac{dx}{dt} = Ax + Bu \end{align*}

That is, find the controller u that minimizes the sum of squares of the error and the control input, subject to the dynamics of your system; Q and R are weights that tell the optimization problem how much you care about each state and input.

x^\mathsf{T}x is how you square vectors. The integral is a sum of squares, which is a quadratic form. Quadratics are convex, so a minimum exists (I’m ignoring all the rigorous proofs here for clarity). The minimum is found by taking the partial derivative of the cost function with respect to u, then solving for u. There are other complications involving the constraints that I’ll ignore here. You can read this for more.

The solution ends up being u = -Kx where K is a constant matrix of gains. Essentially, a bunch of proportional controllers whose outputs are added together.

An example system would be an elevator. It has position and velocity states and a voltage input. The model from sysid would be

x = \begin{bmatrix}position \\ velocity\end{bmatrix} \quad u = \begin{bmatrix}voltage\end{bmatrix}
\frac{dx}{dt} = \begin{bmatrix} 0 & 1 \\ 0 & -\frac{K_v}{K_a} \end{bmatrix} \begin{bmatrix} position \\ velocity \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{1}{K_a} \end{bmatrix} \begin{bmatrix} voltage \end{bmatrix}

The controller you get is u = -\begin{bmatrix}K_p & K_d\end{bmatrix}\begin{bmatrix}position \\ velocity\end{bmatrix}, matching u = -Kx above. K_p and K_d are the proportional and derivative gains you’re familiar with from PID.

Note that all of this assumes you’re driving the state to zero, but you can easily drive it to wherever by adding the setpoints for each state (so you plug in position error and velocity error to the controller instead of position and velocity).
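
For anyone who wants to see the numbers fall out, here's a sketch of the discrete-time version of this computation in pure Python: discretize the elevator model above (with made-up K_v and K_a values and a crude Euler step), then iterate the discrete Riccati recursion until it converges to get K. This is roughly what an LQR solver does internally, modulo a proper discretization method.

```python
def matmul(A, B):
    """Multiply two small matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def msub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Hypothetical elevator feedforward constants (not from a real robot)
Kv, Ka = 2.0, 0.5
dt = 0.02  # 20 ms control loop

# Discretize dx/dt = Ax + Bu with a simple Euler step:
# x[k+1] = (I + A·dt)·x[k] + B·dt·u[k]
A = [[1.0, dt],
     [0.0, 1.0 - (Kv / Ka) * dt]]
B = [[0.0],
     [dt / Ka]]

# Bryson's-rule-style weights: tolerate roughly 0.02 m of position error,
# 0.4 m/s of velocity error, and up to 12 V of input
Q = [[1 / 0.02**2, 0.0],
     [0.0, 1 / 0.4**2]]
R = [[1 / 12.0**2]]

# Iterate the discrete-time Riccati recursion until the cost-to-go matrix P converges
P = [row[:] for row in Q]
K = [[0.0, 0.0]]
for _ in range(100000):
    BtP = matmul(transpose(B), P)              # 1x2
    S = R[0][0] + matmul(BtP, B)[0][0]         # scalar, since u is 1-D
    K = [[v / S for v in matmul(BtP, A)[0]]]   # K = (R + BᵀPB)⁻¹ BᵀPA
    P_new = madd(Q, matmul(transpose(A), matmul(P, msub(A, matmul(B, K)))))
    done = max(abs(p - q) for rp, rq in zip(P, P_new) for p, q in zip(rp, rq)) < 1e-9
    P = P_new
    if done:
        break

Kp, Kd = K[0]
```

With the setpoint trick above, the resulting control law is u = Kp·(position_ref − position) + Kd·(velocity_ref − velocity).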

Also, \frac{dx}{dt} = Ax + Bu is a linear system. If you have a nonlinear system like a drivetrain in field coordinates, you can do what these guys did.

Most, yes. WPILib does have a LinearQuadraticRegulator class you can play with though. You give it your system (possibly from characterization gains) and state/input tolerances, then it generates controller gains internally. You use it just like PIDController after that. There’s an example here.


The sysid tool really is quite good, and the gains often don’t require much, if any, adjustment (assuming, of course, you’ve entered them in the right units, which is still lamentably difficult to do).

For almost all practical purposes.

If the “success/failure” criterion for the controller is winning matches, there are a lot of correct answers. Enough that even high schoolers can find a correct answer with fairly minimal prompting or advice.

However, if the goal is exploring how to do things “right”, this thread has the answers.

FWIW, outside of academia, few people care about doing things right.

Yep. Few controls jobs do anything beyond manually tuned PID control on a PLC. Employees are a company’s biggest expense, and controls engineers are even more expensive than your average software engineer. It’s cheaper for the company to use the simplest controller they can make and have software developers or field technicians do the tuning.

With that said, there’s a few companies making self-driving cars, and those need fancy math.


Awesome, thanks, this is what I was suspecting, but I really wanted someone to say it and validate my thoughts.

@calcmogul Thanks for all the resources. I’m currently still in the “get the basics down right” phase of programming project management, but I’m very excited to dive into advanced controls once I’m able to.

Also, since Tyler is allergic to self-promotion: his book doesn’t have a lot of tutorial content, but it is a great reference and a guide to further exploration.

