# Whitepaper: Swerve Drive Skew and Second Order Kinematics

Team 449 made our first swerve drive a few months ago, and one of the first things we noticed is that when driving straight and spinning at the same time, the robot always skews in the direction of rotation. I’m sure many of you have also encountered this problem - if you’re wondering whether it’s happening because of an error in your code, the short answer is no.

We spent a while trying to debug this issue, and through asking online we settled on the theory that constant translation + rotation requires time-varying module velocities and headings, but the first-order kinematics don’t account for these time variations.

I put together a simple simulator to attempt to reproduce the skew and validate this theory. The model has a simulation interval for handling physics and a loop interval representing the robot controller. It doesn’t use any feed-forward-based model of how the modules respond to updated setpoints, but it can vary the delay between when a linear velocity is commanded and when it is actually assigned to the modules, and can independently vary the delay between when a heading is commanded and when it is assigned.

The following animation shows four different robot simulations using the first order kinematics: one with no delays, one with only velocity delay, one with only angular delay, and one with both delays. The simulated robots are otherwise identical and are commanded to move to the right at 1m/s and rotate at 1 rad/s. The goal/theoretically perfect pose is shown dashed in orange, while the simulated robot pose is shown in blue.

You can see that skewing occurs in all four cases, with both types of delays causing additional skew. Even in the no-delay case (representing perfect, instantaneous swerve modules), some skew still occurs - this is due to the discretization of the model.
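To see where the no-delay skew comes from, here is a minimal sketch (this is my own toy Euler integration, not the OP’s simulator, and it assumes perfect, instantaneous modules): the robot-relative command computed at the start of each control period goes stale as the heading advances within the period.

```python
import math

def simulate_skew(dt=0.02, steps=50, vx=1.0, vy=0.0, omega=1.0, substeps=100):
    """Toy model: each control period, the field command is converted to a
    robot-relative velocity once; the modules then hold it perfectly while
    the robot keeps rotating, which is where the discretization skew enters."""
    x = y = theta = 0.0
    for _ in range(steps):
        # controller: field -> robot frame using the heading at loop start
        bvx = math.cos(-theta) * vx - math.sin(-theta) * vy
        bvy = math.sin(-theta) * vx + math.cos(-theta) * vy
        # physics: integrate finely while the body velocity is held fixed
        h = dt / substeps
        for _ in range(substeps):
            x += (math.cos(theta) * bvx - math.sin(theta) * bvy) * h
            y += (math.sin(theta) * bvx + math.cos(theta) * bvy) * h
            theta += omega * h
    return x, y, theta

# Commanded: 1 m/s to the right, 1 rad/s spin, for 1 s. The ideal endpoint
# is (1, 0); the simulated robot drifts laterally even with ideal modules.
```

With a 50 Hz loop this drifts roughly a centimeter per meter traveled, which is the pure-discretization skew the no-delay case shows.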

I then solved the second order kinematic equations (derivation in PDF below) and implemented those into the model. This allows the module controller to set the linear acceleration and angular velocity needed to cancel the effect of discretization, and is also a more general solution where you can command specific instantaneous linear and angular robot accelerations in addition to the first-order linear and angular robot velocities.
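For a feel of what the second-order kinematics involve, here is a sketch of the standard rigid-body result (my own notation and function name, not necessarily matching the PDF): module acceleration is a + α×r − ω²r, and the wheel acceleration and module steering rate fall out of the chain rule on the module speed and heading.

```python
import math

def second_order_module_state(vx, vy, omega, ax, ay, alpha, rx, ry):
    """First- and second-order state of a module at (rx, ry) from the robot
    center, all in the robot frame. Rigid body relations:
    v_m = v + w x r,  a_m = a + alpha x r - w^2 r.
    Assumes the module speed is nonzero (heading is undefined otherwise)."""
    vmx = vx - omega * ry
    vmy = vy + omega * rx
    amx = ax - alpha * ry - omega**2 * rx
    amy = ay + alpha * rx - omega**2 * ry
    speed = math.hypot(vmx, vmy)
    heading = math.atan2(vmy, vmx)
    # chain rule on speed = |v_m| and heading = atan2(vmy, vmx)
    accel = (vmx * amx + vmy * amy) / speed            # wheel acceleration
    heading_rate = (vmx * amy - vmy * amx) / speed**2  # module steering rate
    return speed, heading, accel, heading_rate
```

Note that even with zero robot acceleration, translating while spinning demands nonzero wheel acceleration and steering rate from each module, which is exactly the effect the first-order kinematics ignore.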

The following animation shows the four simulations from the previous animation, alongside the four second-order models given the same inputs. The second-order model tracks perfectly in the no-delay case, but it still skews in the other cases that have delay, though not quite as much as the first-order model. It’s also worth noting that the second-order model does not have the same direction wobble as the first order model does.

Finally, adding a simple P-loop on top of this causes all of the models to track well, though the first-order model still has some wobble.

I’m not sure if the second order model is worth implementing on a competition robot, but based on the results of these simulations I think it might be worth a try. In addition to helping reduce skew in teleop, it could also be used to feed the robot linear and angular accelerations from an auto routine, which is currently not possible (as far as I know).

The python simulation can be found here.

And the second order kinematic derivation and equations can be found in the attached PDF. Please let me know if you find any errors or have any questions or insights.

Swerve_Drive_Second_Order_Kinematics.pdf (163.7 KB)

104 Likes

Wow this is some good work. It would take someone more ambitious than I to update the SwerveDriveKinematics but this is FRC and a lot of people love the challenge.

8 Likes

This is super cool and useful for making autos more accurate. Thank you for sharing! If we have time to implement this, we will.

5 Likes

Fantastic work. The discretization error alone is interesting, but the interactions with response delay and feedback are super enlightening.

It seems integrating a reference pose with the full dynamics (implicitly or explicitly) and performing feedback on that pose should have very real gains for most teams.

14 Likes

We also noticed this skew this season and implemented a quick solution to deal with it in 2022.

We still used first order (velocity) kinematics, but instead of commanding the instantaneous velocity the driver sticks (or autonomous routine) was asking for, we looked forward one control loop (10ms for us) to figure out where we wanted to be, then backed out a twist command from that. In other words, in order to drive forward in X with positive rotation, we’d actually want to command a small negative Y velocity as well:

```java
Pose2d robot_pose_vel = new Pose2d(
    mPeriodicIO.des_chassis_speeds.vxMetersPerSecond * Constants.kLooperDt,
    mPeriodicIO.des_chassis_speeds.vyMetersPerSecond * Constants.kLooperDt,
    Rotation2d.fromRadians(mPeriodicIO.des_chassis_speeds.omegaRadiansPerSecond * Constants.kLooperDt));
Twist2d twist_vel = Pose2d.log(robot_pose_vel);
ChassisSpeeds updated_chassis_speeds = new ChassisSpeeds(
    twist_vel.dx / Constants.kLooperDt,
    twist_vel.dy / Constants.kLooperDt,
    twist_vel.dtheta / Constants.kLooperDt);
```


This worked well and reduced our error substantially, but there was still non-zero translational error without a closed loop term. This is probably because of additional unmodeled delays in the actuation of our swerve modules.
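For reference, here is a Python sketch of the same look-ahead-and-back-out idea (function and variable names are mine, not 254’s), with the SE(2) pose log written out in closed form rather than going through a geometry library:

```python
import math

def discretize(vx, vy, omega, dt):
    """Back out the chassis speeds to command now so that integrating them
    (while the robot rotates) lands on the desired pose one loop ahead.
    This is the closed-form SE(2) log of the pose (vx*dt, vy*dt, omega*dt)."""
    dtheta = omega * dt
    if abs(dtheta) < 1e-9:
        a = 1.0 - dtheta * dtheta / 12.0   # series for (t/2)/tan(t/2)
    else:
        a = (dtheta / 2.0) / math.tan(dtheta / 2.0)
    b = dtheta / 2.0
    dx, dy = vx * dt, vy * dt
    # inverse of the pose exponential applied to the target displacement
    return ((a * dx + b * dy) / dt, (a * dy - b * dx) / dt, omega)
```

For a forward command with positive rotation this produces the small negative Y velocity described above: for example, `discretize(1.0, 0.0, 1.0, 0.02)` commands roughly (1.0, −0.01, 1.0).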

41 Likes

This is awesome work, we have always known this to be a symptom of swerve but never looked into it this deeply. Will definitely look into this and review with my programming team, thanks!

2 Likes

This is great! We recently finished our pre-season swerve drive and we’ve noticed many of these little quirks. For us, unless implemented into wpilib we’ll just live with it for now, but you have a data proven model that can spread awareness to those much smarter than I. Thank you!!

2 Likes

Jared - I wasn’t actually aware of the pose exponential, so thank you for bringing that up. This led to another investigation with a pretty long explanation, but I think it’s worth reading.

From what I understand, the pose exponential allows us to map constant robot v_x, v_y, omega to a real-world path. Fortunately for us, constant module velocities + headings yield constant robot v_x, v_y, and omega, which works great for the first order.

I wanted to see if I could find the equivalent of a pose exponential for the second-order solution, where constant robot a_x, a_y, and alpha would be used to solve a real-world path. This is doable, but its use for swerve depends on whether or not constant module accelerations + angular velocities yield constant robot a_x, a_y, and alpha.

To investigate this, I updated the plotting tool to be able to show the individual module velocities plus the lines perpendicular to them. The plot gets a little cluttered, but the goal is that I can model a second-order robot traveling with a given initial v_x, v_y, omega, a_x, a_y, alpha and see if the module linear accelerations + angular velocities are constant during the time step.

I first tested it with the first-order model. This robot is told to move right at 1 m/s, and rotate starting at 0 rad/s but with angular acceleration = 0.3 rad/s². It also only updates its modules once per second. It’s a little noisy, but the arc it traces each second is the pose exponential that your Twist2d usage would correct for. What’s most important to note here is that the lines perpendicular to the module velocities all intersect at one point, meaning the swerve drive acts as a rigid body and the modules don’t fight each other (which is an assumption of the kinematics).

This next animation is of a second-order robot with the same inputs. Here, the perpendicular lines start each 1 second interval intersecting at one point, but then deviate from there. This means that the modules are at some points fighting each other, which means that giving each module a constant angular velocity leads to violations in our kinematic assumptions, at least for large time steps.

I’m not completely sure what this means for the viability of using second order kinematics, but hopefully someone with more controls knowledge than I have could make good use of it. It still seems to be an improvement over the first-order for small time steps, but I would need to see empirical validation on a real robot to actually have confidence that it’s generally better and/or more reliable than the first-order kinematics.

9 Likes

Very interesting!

As I understand it right now the PID is using an ideal velocity and angular measurement. I wonder how the response would change if you added measurement delay and some random error.

Just as an overarching question, or rather a clarification: you’re suggesting that to have perfect or near-perfect rotation while strafing, you would need a second-order kinematics model that gets updated similarly to the first-order kinematics model that is standard for current swerve usage? And then on top of that, we need some control system to correct for any discrepancies in the kinematics? I may be completely off, but I’m looking for some clarification so I know where to start in terms of programming such a thing. I’m also wondering if you could provide some clarification on the statement

swerve constant translation + rotation requires time-varying module velocities and headings, and the kinematics don’t include these time variations in velocities and headings.

Is this referring to the fact that every module is accelerating and decelerating throughout time to maintain a certain robot velocity?

1 Like

We ran swerve for the first time this year and didn’t notice this at all. We used SDS modules with Falcon 500s on both drive and steer, and we offloaded everything we could onto the Falcon hardware, so the PID position loop for steering and the PID velocity loop for driving ran at the hardware rate of 1 ms. The module turn rate was lightning fast; you could hardly tell whether they were taking the shortest path or the long way around when turning.

Admittedly, the RIO is still running 10 ms loops on the module state calculation side of things, so a discretization error is being introduced there, but I feel the effect would be better compensated for with a fudge factor than with the extra processing of calculating higher-order Jacobians and Taylor series expansions.

You would apply this fudge factor in a direction perpendicular to the demanded direction of travel, with a sign that counters the effect of the spin.

To calculate the fudge factor, divide the rotation rate by the linear velocity and multiply it by a number that you keep adjusting until you get it right. You could even adjust this calculation to give you a heading correction angle, and then rotate the heading vector of the joystick (or the path follower in auto) by the calculated correction amount, and hey presto, it drives in a straight line again.
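As a sketch of what that recipe might look like in code (the function name and the gain `k` are hypothetical, and the exact functional form of the correction would need to be tuned on a real robot, as described):

```python
import math

def fudge_command(vx, vy, omega, k):
    """Rotate the commanded translation by a small heading correction that
    counters the spin. k is the 'adjust until it drives straight' knob
    (hypothetical name); the correction angle is proportional to rotation
    rate over linear speed, per the recipe above."""
    speed = math.hypot(vx, vy)
    if speed < 1e-9:
        return vx, vy  # no translation, nothing to correct
    angle = -k * omega / speed
    return (math.cos(angle) * vx - math.sin(angle) * vy,
            math.sin(angle) * vx + math.cos(angle) * vy)
```

Because it is a pure rotation of the command, the requested speed is preserved; only the direction gets nudged against the spin.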

I know fudge factors are a bit “but that’s not exact”, but let’s admit they get the job done and they are used a lot in industry. For example, how many teams out there adjusted their trackwidth or wheel circumference when they were trying to track the perfect motion profile on their tank drives?

On another point, how many teams have high school kids that understand what a Jacobian even is?

There’s no need to fudge this empirically; the single shooting problem is solvable in closed-form for the simple case of tracking a heading while spinning with first-order kinematics.

Including module response delay gets a bit trickier, but is probably doable with some optimization trickery.

2 Likes

These are all super interesting results! It makes sense that \dot{\theta}_m and \dot{v}_m would vary with time even in your toy example of translating in +X with positive rotation. If they didn’t, then \dot{v}_m = C \neq 0 would imply that the module velocity is forever increasing or decreasing, which isn’t really possible.

I still think what you’ve described is pretty valuable. Even if you only consider robot velocity setpoints and ignore robot acceleration, if you compute a fixed \dot{\theta}_m and \dot{v}_m for a control loop period and track some setpoints from the (Euler) integrated profiles \theta_m+\dot{\theta}_mt and v_m+\dot{v}_mt over the period then this should already be much better for these translate & spin maneuvers where \dot{v}_m \neq 0 and \dot{\theta}_m \neq 0.

I would also be curious how it stacks up against Jared’s pose exponential method. I suspect that because of the implementation details (you can use feedforwards on \dot{\theta}_m and \dot{v}_m) and smoothness (I think the pose exponential math relies on being able to step-change \theta_m and v_m), it could be better in practice even if Euler integration isn’t perfect. You could also use higher-order numerical integration (or maybe there’s a closed-form solution, but I’m not sure on that one), or just roll out the robot pose over the control period and compute \theta_m(t) and v_m(t) to get some setpoints.

Anyways really curious to see where this leads!

@Henry_Mahnke I started working on the second order kinematics to see if they would improve the robot control. My main reason for thinking they would is that even a constant robot first-order state (constant linear + rotational velocity) can require higher-order behavior from the modules (in constant translation + rotation, each module’s velocity is sinusoidal-ish and its heading is also nonconstant, see below), unlike with something like a differential drive where you can achieve any first-order robot state using just first-order states on the left/right.

Another possible use case for the second order kinematics is to allow for commanding desired second-order robot states appropriately - I’ve seen some teams that have attempted to do this, but made the (somewhat reasonable) implicit assumption that the equations for accelerations are the same as the equations for velocities, which in this paper I’ve shown is incorrect.

After posting this, I was introduced to team 2363’s trajectory optimization work. I was under the impression that it only solved for the first-order inputs to the robot, but the optimization also uses a physics model to compute the forces needed at each module. Combining the instant velocity and force at each module can yield the voltage needed at that module in any given instant, which in effect would give the same result as using the second order kinematics (which would instead use the acceleration + velocity → voltage). While 2363 didn’t directly use second-order kinematics, this gives me hope that second-order kinematics could be used to provide a more robust open-loop baseline for the robot in both auto and teleop, similar to what the 2017 drivetrain characterization did.

I’ve so far been a little disappointed in the simulated results and would not recommend widespread adoption of second order kinematics until they get empirically validated (which 449 is working on). I posted this in the meantime in the hope that someone who knows more than I do could build further off of it.

@warren.reynolds1 I’m not sure how to multi-reply so I’m tagging you instead. For reference, here’s the skewing issue we’ve been observing. Driver is trying to drive parallel to the length of the carpet, but rotating sends the robot way off-track.

I’m perfectly fine with fudge factors, but I think it’s good to see if there are ways to improve the baseline and reduce the amount of work the fudge factor (or preferably a closed-loop controller) has to do. This is a similar idea to what the drivetrain characterization did in 2017, improving the open-loop baseline to the point where the closed-loop just has to apply some finishing touches.

Another concern I had with the fudge factor approach is that the fudge factor is a function of your modules’ response times, so any changes to the robot or to the PID constants would require remeasuring the fudge factors. I’m not sure if this is a common experience, but I’ve seen 449 constantly mess with PID constants.

The fudge factor would also probably be a function of your desired linear + angular velocities, and I don’t have a robot on hand to see what sort of relationship might be there (linear? arctan?).

Computationally, the second order kinematics for constant inputs (like in teleop) are only 8 multiplications per module, and 16 per module for second-order inputs (which we might want in auto), so I’m not too worried about the computational complexity. It’s well within the same order of magnitude of the first order kinematics.

Finally, I don’t think it’s fair to limit the theory available to just what high school students know (not to mention that WPILib and SysId already include things that I still don’t understand even in college lol). All of the derivations in the second order kinematics echo things that I learned in high school in the curvilinear motion unit in calculus (and could even have acted as a real-world connection to the theory I was learning in math class), and even beyond that, I’ve spoken to several high school students recently who are more than capable of keeping up with this sort of math and more.

8 Likes

You can think of the result of the pose exponential as being exactly the “fudge factor” you’d need to apply to correct for drift in a swerve system with perfect tracking and no delays - it just compensates for the fact that it’s a discrete control system so a single constant setpoint has to be commanded per control cycle (whereas ideally the setpoint would be moving continuously in time).

If you wanted an additional “fudge factor” to account for delays in tracking you could certainly do that too, though I’d probably explore using feedforward terms in the control loops before adding fudging to compensate for delay.

In practice I think it’s a question of the implementation details of your low-level control loops. We like to use the Falcon internal control loops because they operate at 1 ms with negligible delay, and we cannot get anywhere close to that in a control loop on the roboRIO due to CAN (latency and bandwidth) and CPU (both raw CPU usage and timing jitter - we use Java) limits. As of 2022, this generally means running a velocity loop (drive) or position loop (steering) on the Falcons and then a slower (~100 Hz) loop on the roboRIO to compute the kinematics and send out setpoints. Acceleration loops (e.g. time-varying velocity setpoints) aren’t a thing on Falcons as of 2022 unless you use the trajectory API, which requires very precise timing on the roboRIO side to avoid under-runs while also being responsive to new inputs.

4 Likes

I hadn’t looked at that API for a long time, but I see what you mean on timing. It would be a nice enhancement if the user could buffer a series of indexed/timestamped trajectory points to the CTRE device (e.g. 2x the number of points needed to cover one RIO control loop) and then overwrite the device buffer every RIO control loop using the indices for matching. That could open the door for a lot of nice control despite the RIO timing jitter.

Cool plot, this illustrates your conclusion very nicely, and the module speed graph reminded me a lot of a trochoid. I think the position of the module over time might even be one.

A somewhat anecdotal data point: with ideal module response (basically just feeding the chassis speeds back into the odometry), I found that adding (rotVel * dt)/2 to the pose heading when converting to field relative eliminated the skew (rather, reduced it down to 1 femtometer per second, which I’m blaming on timing jitter). Since the heading is the integral of the rotational velocity, its average over the loop period is the starting heading plus (rotVel * dt)/2, so I think this math checks out, but I don’t know how it interplays with discretization. This creates ideal chassis speeds but does nothing to address actual module delay. So it’s kind of a fudge factor, but it’s basically factoring rotational velocity into the field-relative conversion.
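A minimal sketch of that tweak (my own names, ideal modules assumed): the field-to-robot conversion uses the mid-period heading instead of the heading at the start of the loop, and the lateral drift all but disappears.

```python
import math

def field_to_robot(vx, vy, heading, omega=0.0, dt=0.02, corrected=True):
    """Convert a field-relative command to the robot frame. With
    corrected=True, use the mid-period heading (heading + omega*dt/2)
    rather than the heading at the start of the loop."""
    th = heading + (omega * dt / 2.0 if corrected else 0.0)
    return (math.cos(-th) * vx - math.sin(-th) * vy,
            math.sin(-th) * vx + math.cos(-th) * vy)

def lateral_drift(corrected, dt=0.02, steps=50, substeps=200, omega=1.0):
    """Drive right at 1 m/s while spinning at 1 rad/s with ideal modules;
    return how far off the straight-line path the robot ends up."""
    x = y = th = 0.0
    for _ in range(steps):
        bvx, bvy = field_to_robot(1.0, 0.0, th, omega, dt, corrected)
        h = dt / substeps
        for _ in range(substeps):
            x += (math.cos(th) * bvx - math.sin(th) * bvy) * h
            y += (math.sin(th) * bvx + math.cos(th) * bvy) * h
            th += omega * h
    return abs(y)
```

In this toy model the half-rotation offset makes the per-loop displacement symmetric about the desired direction, so the lateral error cancels to high order.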

1 Like

The reason for this is that the pose exponential inverse operation sets

v_x = \frac{1}{\left(\frac{\sin(\theta)}{\theta}\right)^2+\left(\frac{1-\cos(\theta)}{\theta}\right)^2}\left(v_{x0}\frac{\sin(\theta)}{\theta} + v_{y0}\frac{1-\cos(\theta)}{\theta}\right)
v_y = \frac{1}{\left(\frac{\sin(\theta)}{\theta}\right)^2+\left(\frac{1-\cos(\theta)}{\theta}\right)^2}\left(-v_{x0}\frac{1-\cos(\theta)}{\theta} + v_{y0}\frac{\sin(\theta)}{\theta}\right)

Where \theta = \omega\cdot dt and v_{x0}, v_{y0} are your desired velocities, which simplifies to:

v_x = \frac{\theta}{2(1-\cos(\theta))}(v_{x0}\sin(\theta) + v_{y0}(1-\cos(\theta)))
v_y =\frac{\theta}{2(1-\cos(\theta))}(-v_{x0}(1-\cos(\theta)) + v_{y0}\sin(\theta))

then to:

v_x = (v_{x0}\frac{\theta\sin(\theta)}{2(1-\cos(\theta))} + v_{y0}\frac{\theta}{2})
v_y =(-v_{x0}\frac{\theta}{2} + v_{y0}\frac{\theta\sin(\theta)}{2(1-\cos(\theta))})

For small \theta, these are approximately equal to
v_x = v_{x0}\cos(-\theta/2) - v_{y0}\sin(-\theta/2)
v_y =v_{x0}\sin(-\theta/2) + v_{y0}\cos(-\theta/2)

Or a rotation of -\theta/2.
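The small-\theta claim is easy to check numerically. Here is a sketch using the simplified form above (function names are mine; \theta = \omega \cdot dt):

```python
import math

def exp_inverse(vx0, vy0, theta):
    """Exact pose-exponential inverse, in the simplified form above."""
    k = theta / (2.0 * (1.0 - math.cos(theta)))
    vx = k * (vx0 * math.sin(theta) + vy0 * (1.0 - math.cos(theta)))
    vy = k * (-vx0 * (1.0 - math.cos(theta)) + vy0 * math.sin(theta))
    return vx, vy

def half_angle_rotation(vx0, vy0, theta):
    """Small-angle approximation: rotate the command by -theta/2."""
    c, s = math.cos(-theta / 2.0), math.sin(-theta / 2.0)
    return c * vx0 - s * vy0, s * vx0 + c * vy0
```

At a typical 50 Hz loop with 1 rad/s of rotation (\theta = 0.02), the two agree to a few parts in 10^5, and both apply the small opposite-Y correction discussed above.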

10 Likes

Disclaimer: This post was written by a human and is prone to error. Take the following information with a grain of salt (also feel free to point out any mistakes!)

No one asked but…here are the equations for the following picture posted by OP:

The graphs appear to be for the module positioned at (x, y) = (1, 1) relative to the center of the robot. We can either first convert the field-relative robot velocities to robot-relative ones and then find that specific module’s robot-relative velocity, or the other way around. I’ll do it the first way so I can write more LaTeX.

So we have

\begin{pmatrix}v_{mx}\\v_{my}\end{pmatrix}= \begin{pmatrix}1&0&-r_y\\0&1&r_x\end{pmatrix} \begin{pmatrix}v_{x}\\v_{y}\\\omega\end{pmatrix}

(for context, check the OP’s paper for the variable naming convention)
Since \omega is constant in our scenario we can do

\begin{pmatrix}r_{x}\\r_{y}\end{pmatrix}= \begin{pmatrix}\cos(\omega t)&-\sin(\omega t)\\\sin(\omega t)&\cos(\omega t)\end{pmatrix} \begin{pmatrix}r_{0x}\\r_{0y}\end{pmatrix}

where (r_{0x}, r_{0y}) is the initial position of the module relative to the center of the robot and t is the time elapsed since the motion began. So now

v_{mx}(t)= v_x-\omega(r_{0x}\sin(\omega t)+r_{0y}\cos(\omega t))\\ v_{my}(t)= v_y+\omega(r_{0x}\cos(\omega t)-r_{0y}\sin(\omega t))

To make the sinusoidal shape of these functions more apparent, we use some trigonometric magic to get

v_{mx}(t)= v_x-\omega \sqrt{r_{0x}^2+r_{0y}^2}\cos(\omega t-\arctan(r_{0x},r_{0y}))\\ v_{my}(t)= v_y+\omega \sqrt{r_{0x}^2+r_{0y}^2}\cos(\omega t-\arctan(-r_{0y},r_{0x}))

Now what we have is the field-relative velocity of the module, so we still have to left-multiply by the rotation matrix to get the robot-relative velocity of the module. However, this doesn’t matter when we are trying to calculate the magnitude of the module velocity. Furthermore, we can subtract the heading of the robot to get the robot-relative heading of the module.

|\vec{v}_m(t)|=\sqrt{v_{mx}^2(t)+v_{my}^2(t)}\\ \theta(t)=\arctan(v_{my}(t),v_{mx}(t))-\omega t\pmod{2\pi}

Where |\vec{v}_m(t)| is the module speed and \theta(t) is the heading over time. Here are the spectacular graphs if you wanna take a look at them more closely : (Swerve Module Speed and Heading | Desmos).
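For anyone who wants to check the trig magic against the component form, here is a small sketch (same symbols as the equations above, function names mine; the amplitude-phase version is written out for v_{mx}):

```python
import math

def module_velocity(vx, vy, omega, r0x, r0y, t):
    """Field-frame module velocity from the component equations above."""
    vmx = vx - omega * (r0x * math.sin(omega * t) + r0y * math.cos(omega * t))
    vmy = vy + omega * (r0x * math.cos(omega * t) - r0y * math.sin(omega * t))
    return vmx, vmy

def vmx_amplitude_form(vx, omega, r0x, r0y, t):
    """Same v_mx via the amplitude-phase ('trig magic') form."""
    amp = omega * math.hypot(r0x, r0y)
    return vx - amp * math.cos(omega * t - math.atan2(r0x, r0y))
```

As a spot check, for the module at (1, 1) with v = (1, 0) and omega = 1, the two forms agree for any t, and at t = 0 the module moves straight up at speed 1.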
Here is the test code if you wanna see the graphs that way (run it in the OP’s project): test.py (1.7 KB)
And here they are one last time if you’re in a hurry:

Also, AFAIK I don’t think there is any way to pass a constant acceleration to the motor controllers. I tried to calculate the change in position of the robot using the average velocities of the modules, though I didn’t get too far in interpreting the result. Did anyone figure out an implementation yet? Post a video of it running on a robot too, if you don’t mind!

8 Likes

Also, afaik i dont think there is anyway to pass a constant acceleration to the motor controllers.

The closest thing I can think of is to include the desired acceleration in the feed forward, though that isn’t actually a constant acceleration, since acceleration goes down as velocity goes up. It might be close enough, and you could maybe also calculate what initial acceleration to command so that over the time step, the velocity ends up changing by the desired amount.

I tried to calculate the change in position of the robot using average velocities for the modules, though I didn’t get too far in interpreting the result.

I was wondering how someone might go about this, but then I found out that WPILib 2023 is updating the pose estimator to use the change in position of the modules over each time step rather than the velocity, since that is much more resistant to sensor delay and noise. I haven’t looked into their implementation yet, but it might be worth studying.

2 Likes