Drivebase Simulation Example with TalonSRX Encoders

Hi everyone!

I have been trying to modify the WPILib simulation tutorial to get it to work with encoders plugged into the TalonSRX instead of the roboRIO. I am having some trouble: right now the simulated robot teleports around in autonomous and does not follow the path from the original tutorial. I'm still pretty new to autonomous as well as simulation, so please forgive me if it's something obvious, but I've hit the "let's run it again without changing anything and see if it magically works" phase and figured I could use some help.

My guess is that it is a unit-conversion error or something with the update frequency of the Talons not matching the roboRIO.
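Since a unit conversion is one of the suspects: the Talon SRX reports velocity in encoder ticks per 100 ms, while the WPILib drivetrain sim works in meters per second, so a missing factor of 10 (or a wrong ticks-per-revolution value) is a common culprit. A minimal sketch of the conversions, assuming a 4096-tick CTRE Mag Encoder mounted 1:1 on a 6-inch wheel (swap in your own robot's constants):

```java
public class TalonUnits {
    // Assumed values for illustration: CTRE Mag Encoder (4096 ticks/rev)
    // on a 6-inch (0.1524 m) wheel, encoder mounted 1:1 on the wheel shaft.
    static final double TICKS_PER_REV = 4096.0;
    static final double WHEEL_DIAMETER_METERS = 0.1524;
    static final double METERS_PER_TICK =
        (Math.PI * WHEEL_DIAMETER_METERS) / TICKS_PER_REV;

    // Talon SRX native velocity units are ticks per 100 ms, not ticks per second.
    static double metersPerSecToTicksPer100ms(double mps) {
        return mps / METERS_PER_TICK / 10.0;
    }

    static double ticksPer100msToMetersPerSec(double ticksPer100ms) {
        return ticksPer100ms * METERS_PER_TICK * 10.0;
    }

    public static void main(String[] args) {
        double nativeVel = metersPerSecToTicksPer100ms(2.0); // ~1711 ticks/100ms
        System.out.println(nativeVel);
        System.out.println(ticksPer100msToMetersPerSec(nativeVel)); // round-trips to 2.0
    }
}
```

If you skip the `/ 10.0`, commanded and measured velocities disagree by exactly 10x, which produces exactly the kind of wild trajectory-following you're describing.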

Code: GitHub - frc6995/NOMAD-Base-2020 at krenDrivebaseSim
See robotContainer, subsystems/drivebaseS, and both constants files ending in ‘sim’

Thanks in advance for the help!



I am not by any means an expert at this yet, but I am a little further along than you. Here is the code I have been working on.

It doesn't look like you are using the new CTRE simulation support; here is their example:


From some quick debugging, it looks like the sensor inputs are being delayed from the simulator into the Talon SRX sim collection. Here, I plotted the velocity coming from the simulator vs. the velocity from getSelectedSensorVelocity():

You can see that the Talon SRX is reporting a significantly delayed velocity measurement, causing the instability in the trajectory tracking. I’ve tried setting multiple status frame periods but I haven’t had any luck so far. The only way I got good trajectory tracking was when I used velocities directly from m_drivetrainSimulator instead of the Talon SRX but this is less than ideal and not the intended way to use the simulation framework.

The CTRE PhysicsSim class is intended to roughly mimic the response of a motor connected to a load, for basic testing. It is not intended to be used alongside DifferentialDrivetrainSim (or any other sim physics class), which will actually produce accurate positions and velocities based on a physics model of the system.
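For anyone following along, the intended data flow is roughly: read the applied output voltage from each Talon's sim collection, feed it to DifferentialDrivetrainSim, then write the resulting positions and velocities back as quadrature ticks. A sketch of a `simulationPeriodic()` method against the Phoenix 5 and WPILib APIs (method names are from those libraries, but the inversion handling and `METERS_PER_TICK` constant are assumptions for your robot; this fragment only compiles inside a robot project with both vendordeps):

```java
// Sketch: wiring DifferentialDrivetrainSim to TalonSRXSimCollection objects.
// m_leftSim / m_rightSim come from talon.getSimCollection();
// METERS_PER_TICK is an assumed wheel/encoder conversion constant.
@Override
public void simulationPeriodic() {
    // 1. Drive the physics model with the voltages the Talons are applying.
    m_drivetrainSimulator.setInputs(
        m_leftSim.getMotorOutputLeadVoltage(),
        -m_rightSim.getMotorOutputLeadVoltage()); // assumes right side inverted
    m_drivetrainSimulator.update(0.02);

    // 2. Write simulated state back into the Talons' quadrature encoders:
    //    positions in raw ticks, velocities in ticks per 100 ms.
    m_leftSim.setQuadratureRawPosition(
        (int) (m_drivetrainSimulator.getLeftPositionMeters() / METERS_PER_TICK));
    m_leftSim.setQuadratureVelocity(
        (int) (m_drivetrainSimulator.getLeftVelocityMetersPerSecond()
               / METERS_PER_TICK / 10.0));
    m_rightSim.setQuadratureRawPosition(
        (int) (-m_drivetrainSimulator.getRightPositionMeters() / METERS_PER_TICK));
    m_rightSim.setQuadratureVelocity(
        (int) (-m_drivetrainSimulator.getRightVelocityMetersPerSecond()
               / METERS_PER_TICK / 10.0));

    // 3. Model battery voltage sag so closed-loop behavior is realistic.
    m_leftSim.setBusVoltage(RobotController.getBatteryVoltage());
    m_rightSim.setBusVoltage(RobotController.getBatteryVoltage());
}
```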

I saw the same thing when I was doing my wrappers. The sim code doesn't seem to be deterministic, which made it very hard to test; I had to put sleeps in my unit tests to get things working. I also saw cases where the left and right sides wouldn't even be in sync, so running differentialDrive.arcadeDrive(1, 0) wouldn't drive perfectly straight (with the noise matrix turned off).

My assumption is that each controller is running in its own thread, so race conditions will cause your robot to do weird things.

I’m able to reproduce what you have in your plots.
I don’t have a solution currently, but we’re looking into it.


Right, so the root cause is the velocity filtering that happens in hardware:

Normally this isn’t something easily observable because you’d have to plot the raw Talon sensor over the wire against the value from the CAN bus.

You can change the filtering configs either through the API as normal or in Tuner (localhost) while running sim. If you’re simulating sensor noise you may want to keep the filtering (which is why it exists for real sensors in the first place).
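To see why that filtering shows up as delay, here is a toy model in plain Java (not the Phoenix API) of a rolling-average velocity filter responding to a step input: with a 64-sample window the reported velocity lags roughly half the window behind the true value, while a window of 1 tracks the step immediately. The window sizes are illustrative, chosen to mirror the Talon's default-style configuration.

```java
public class VelocityFilterDemo {
    // Rolling average over `window` samples of a unit velocity step at t = 0,
    // starting from rest (buffer of zeros). Returns the filtered output.
    static double[] filterStep(int window, int samples) {
        double[] buf = new double[window]; // all zeros: robot initially at rest
        double[] out = new double[samples];
        double sum = 0.0;
        int idx = 0;
        for (int t = 0; t < samples; t++) {
            double input = 1.0;        // true velocity steps to 1.0 at t = 0
            sum += input - buf[idx];   // slide the window forward one sample
            buf[idx] = input;
            idx = (idx + 1) % window;
            out[t] = sum / window;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] wide = filterStep(64, 100);
        double[] narrow = filterStep(1, 100);
        System.out.println(narrow[0]); // 1.0 — window of 1 tracks immediately
        System.out.println(wide[31]);  // 0.5 — window of 64 is still ramping
        System.out.println(wide[63]);  // 1.0 — only settles after 64 samples
    }
}
```

With the real hardware sampling every millisecond, a 64-sample window adds on the order of tens of milliseconds of group delay, which is plenty to destabilize an aggressive trajectory-tracking loop.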


What was your solution? I tried changing the window size and sample period, but I couldn't get the trajectory following as smooth as the original WPILib example.


Changing the filtering configs should improve the behavior greatly; however, I'm still able to reproduce a small delay with the window size and sample period both set to 1. We're looking into what's causing this so it can be fixed.
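For reference, a sketch of the two relevant Phoenix 5 config calls (method and enum names are from CTRE's API; verify against your Phoenix version). Shrinking both settings minimizes the measurement delay at the cost of a noisier velocity signal:

```java
// Measure velocity over a 1 ms period instead of the default 100 ms,
// and disable the rolling average (window of 1 sample).
talon.configVelocityMeasurementPeriod(VelocityMeasPeriod.Period_1Ms);
talon.configVelocityMeasurementWindow(1);
```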


As a heads up, we’ve cut a development release that has the fix for this and another minor issue that was reported.

To use this dev build you can do an online vendordep install from within vscode. The json url for the build is:

In general, our dev release jsons moving forward will be located here:


Hi. I'm a controls mentor for Team 2377, C Company. We worked all weekend on getting simulation running with CTRE Mag encoders and a NavX gyro so we can work on the challenges without physical access to the robot. This thread was extremely helpful, so I thought I'd add a sample of code that seems to work pretty well, in case it helps anyone else working through this. The motion is still a little rough and we can probably change the window size and sample period to get a smoother output, but it's a start. Thanks to all who posted here.
