Units for Spark MAX + NEO encoder and state-space control Kalman filter correction

I was following the WPILib state-space tutorial for flywheels, but I’m confused about the units the Spark MAX + NEO encoder returns and the units the Kalman filter correction takes.

Code from tutorial:

// Correct our Kalman filter's state vector estimate with encoder data.
m_loop.correct(VecBuilder.fill(m_encoder.getRate()));

I am unfamiliar with the encoder.getRate() method. Is it comparable to sparkMax.getEncoder().getVelocity()? I believe the Spark MAX defaults to rev/s. What sort of unit conversion do I need?

Modified code from Outtake.java:

// Correct our Kalman filter's state vector estimate with encoder data.
            .getVelocity())); // .getRate() // TODO: What unit?  This might be in rev/s

According to REV’s Java API Docs, the RelativeEncoder class returns RPM (revolutions per minute) by default for getVelocity(), but you can supply a conversion factor to modify the returned units.

I’m not super familiar with all the state-space and Kalman filter options in WPILib, but the full example provided in the docs seems to have everything in radians per second.

If you’d like the Spark MAX encoder to return rad/s, you can set the conversion factor to 0.10472 (that is, 2π/60), or just multiply the RPM by that same factor, which I found using Convert revolution per minute to radian per second - Conversion of Measurement Units.
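
To make where that 0.10472 comes from explicit, here is a minimal plain-Java sketch (no WPILib or REVLib dependency; the class and method names are my own) of the RPM-to-rad/s conversion:

```java
public class FlywheelUnits {
    // One revolution is 2*pi radians and one minute is 60 seconds,
    // so rad/s = RPM * 2*pi / 60 ≈ RPM * 0.10472.
    public static final double RPM_TO_RAD_PER_SEC = 2.0 * Math.PI / 60.0;

    /** Converts a Spark MAX getVelocity() reading (RPM) to rad/s. */
    public static double rpmToRadPerSec(double rpm) {
        return rpm * RPM_TO_RAD_PER_SEC;
    }

    public static void main(String[] args) {
        // 60 RPM is one revolution per second, i.e. 2*pi rad/s.
        System.out.println(rpmToRadPerSec(60.0)); // ≈ 6.2832
    }
}
```

If I remember the REVLib API right, you can either pass this factor to setVelocityConversionFactor() on the encoder or apply it yourself to each getVelocity() reading; both give the same rad/s result.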


I’ve been using WPILib Units class for conversions instead of actual conversion factors, although I suppose that’s more stuff to import.

Using the Units class is the way to go about it. It makes the code much more readable than just having a “magic” constant there.

For the actual topic of discussion, I am not sure the built-in encoder on the NEO is going to work well for state-space velocity control because of the 120 ms measurement delay built into it. I’ve actually been wondering about it myself. Would the Kalman filter for it work well if you just gave the velocity measurement a large standard deviation @calcmogul ?

Not really. You have to mostly trust the measurements over the model because balls going through the shooter is an unmodeled disturbance. If you trust the model, the velocity estimate won’t drop nearly as much as it should.
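
A toy scalar Kalman filter illustrates the point (plain Java with made-up numbers, not WPILib's LinearSystemLoop): when a ball suddenly drags the flywheel from 100 rad/s down to 80 rad/s, a filter with a small measurement variance tracks the dip, while one with a large measurement variance keeps believing the model's "nothing changed" prediction:

```java
public class KalmanTrustDemo {
    /** Scalar Kalman filter: predict with a constant-velocity model, then correct. */
    public static double[] simulate(double measurementVariance, int steps) {
        double q = 0.01;     // process noise variance (how much we distrust the model)
        double xHat = 100.0; // initial velocity estimate, rad/s
        double p = 1.0;      // estimate variance
        double[] history = new double[steps];
        for (int k = 0; k < steps; k++) {
            double measured = 80.0; // sensor sees the dip: a ball is in the shooter
            p += q;                 // predict: the model says "nothing changed"
            double gain = p / (p + measurementVariance);
            xHat += gain * (measured - xHat); // correct toward the measurement
            p *= (1.0 - gain);
            history[k] = xHat;
        }
        return history;
    }

    public static void main(String[] args) {
        // Small R: the filter trusts the sensor and tracks the dip quickly.
        System.out.println(simulate(0.1, 5)[4]);
        // Large R: the filter trusts the model and barely moves off 100 rad/s.
        System.out.println(simulate(100.0, 5)[4]);
    }
}
```

After five updates the low-variance filter sits near 80 rad/s while the high-variance one has barely moved off 100, which is exactly the "velocity estimate won't drop nearly as much as it should" failure mode.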

Ok, that makes sense. Would manually estimating the velocity using the position data to avoid the delay work? Or are the NEO encoders and state space basically incompatible?

I’ve seen that workaround work OK on the FRC Discord. You’ll have just the 20 ms CAN status frame delay instead of that plus the filtering delay.

Ok and what method to find the velocity should you use? I feel like

velocity = (position - prevPosition) / 0.020

would be really noisy, right? Is there a simple way that’s better than this?

3512 (my former team) used that method with a quadrature encoder plugged into the roboRIO, and did the sampling on a real-time thread to reduce noise from a varying sample period. There was also a 4-tap moving average to filter out quantization noise. It worked well.

Did the quadrature encoder you were using have a velocity delay or why did you have to use your own filter?

I’m not too familiar with threads. Does this just mean making a separate thread from the rest of the robot, so that if something else on the robot is taking a long time it won’t affect your velocity calculation?

So, calculating the velocities during the last 4 loop periods and then averaging them all together? Did you test different numbers of periods to average and find that 4 worked the best, or what?

No. That’s why we used a quadrature encoder plugged into the roboRIO, and ran our controls on the roboRIO. The filter was just to smooth out quantization noise. The position has a finite resolution, so the velocity will develop noise on the order of distance per pulse / sample period.
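
To put a rough number on that noise floor, here's the "distance per pulse / sample period" arithmetic in plain Java (the 512 CPR encoder and 5 ms period come from this thread; the 4x quadrature decoding factor is my assumption about the roboRIO's default):

```java
public class QuantizationNoise {
    /**
     * Velocity noise floor (rev/s) of a backward difference: one encoder
     * count of position error spread over one sample period.
     */
    public static double noiseFloorRevPerSec(int countsPerRev, double periodSec) {
        double distancePerPulse = 1.0 / countsPerRev; // revolutions per count
        return distancePerPulse / periodSec;
    }

    public static void main(String[] args) {
        // 512 CPR quadrature -> 2048 counts/rev after 4x decoding, 5 ms sampling.
        double noise = noiseFloorRevPerSec(2048, 0.005);
        System.out.println(noise + " rev/s"); // ~0.098 rev/s, i.e. roughly 5.9 RPM
    }
}
```

Doubling the counts per revolution halves this floor, which is the lagless fix mentioned below about picking a higher-resolution encoder.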

No, we just made the main robot thread real-time. It requires being more careful, but it made things more deterministic and less noisy, which let us satisfy our flywheel performance requirements.

Yea. Just use LinearFilter::MovingAverage().

I tuned it in simulation since the only noise there is quantization noise introduced by the HAL encoder count. Four taps was the minimum that filtered out the noise. Any more would just introduce more measurement delay (a 4-tap moving average sampled every 5 ms has (4 - 1) / 2 * 5 ms = 7.5 ms of measurement delay). It’s a tradeoff.
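
For reference, here's a plain-Java sketch of that whole pipeline: a backward difference over a fixed sample period followed by a 4-tap moving average. WPILib's LinearFilter.movingAverage() does the filtering part for you; this hand-rolled version just shows what it computes.

```java
import java.util.ArrayDeque;

public class VelocityEstimator {
    private final double periodSec;
    private final int taps;
    private final ArrayDeque<Double> window = new ArrayDeque<>();
    private double prevPosition;
    private boolean hasPrev = false;

    public VelocityEstimator(double periodSec, int taps) {
        this.periodSec = periodSec;
        this.taps = taps;
    }

    /** Feeds one position sample; returns the filtered velocity estimate. */
    public double update(double position) {
        // Backward difference: velocity = (position - prevPosition) / period.
        double rawVelocity = hasPrev ? (position - prevPosition) / periodSec : 0.0;
        prevPosition = position;
        hasPrev = true;

        // Moving average over the last `taps` raw estimates.
        window.addLast(rawVelocity);
        if (window.size() > taps) {
            window.removeFirst();
        }
        double sum = 0.0;
        for (double v : window) {
            sum += v;
        }
        return sum / window.size();
    }

    public static void main(String[] args) {
        // Constant velocity of 0.2 units/s sampled every 5 ms.
        VelocityEstimator est = new VelocityEstimator(0.005, 4);
        double v = 0.0;
        for (int k = 0; k <= 20; k++) {
            v = est.update(0.001 * k); // position advances 0.001 units per sample
        }
        System.out.println(v); // converges to ~0.2 once the window fills
    }
}
```

On constant-velocity input the estimate converges one window after startup, at the cost of the (taps - 1) / 2 * period measurement delay described above.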

The lagless way to reduce the quantization noise would be using a higher count per revolution encoder. Ours was set to 512, so in retrospect, that could have been increased to 2048. I just didn’t think of that until just now :sweat_smile: (https://www.digikey.com/en/products/detail/cui-devices/AMT103-V/827016). The upper limit is determined by how fast the FPGA can capture the encoder edges.

I suggest doing your encoder conversions in your code, not on the SparkMax. That way you can actually see what you are doing and comment it. Our programmer found that the internal conversion was broken. Your mileage may vary on that!!! I can’t claim direct knowledge or even certainty there…

How does this work? Would the effects be that different than just recording the time that I sampled the encoder at?

Do I just use TimedRobot's addPeriodic() method to sample every 5 ms? I am using command-based if that matters

We saw way more noise even though we were sampling the time. My theory is the OS was interrupting the robot program between when the encoder was sampled and when the time was sampled, so they didn’t match up. When the thread is RT, it won’t get preempted until it decides to stop running, or a higher priority thread becomes ready to run.
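
Here's a back-of-the-envelope illustration of that theory in plain Java (the 1 ms preemption is an assumed figure, just to show the scale of the effect): if the OS preempts between the encoder read and the clock read, the recorded Δt no longer matches the position delta you divide by.

```java
public class JitterDemo {
    /**
     * Velocity estimate when the recorded timestamp is off by jitterSec:
     * the position delta covers periodSec, but we divide by periodSec + jitterSec.
     */
    public static double estimateWithJitter(double trueVelocity, double periodSec,
                                            double jitterSec) {
        double positionDelta = trueVelocity * periodSec;
        return positionDelta / (periodSec + jitterSec);
    }

    public static void main(String[] args) {
        double v = 100.0; // true velocity, e.g. rad/s
        // 1 ms of preemption between the encoder read and the clock read
        // makes a 5 ms sample period look like 6 ms:
        System.out.println(estimateWithJitter(v, 0.005, 0.001)); // ~83.3, about a 17% error
    }
}
```

A single millisecond of preemption corrupts the estimate by double digits, which is why pinning the sample loop so it can't be interrupted mid-sample helps so much.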

Yea, that’s the easiest way to do it that doesn’t require multithreading. Java can’t hit 5 ms timing like C++ can, though. Maybe 10 ms.

Dang. Can you point me in the direction of a guide to make the thread real-time? Or some example code? Is it just using Notifier? Thanks

You can use Threads (WPILib API 2022.4.1) to make the main robot thread real-time. Be very careful though. If you write indefinitely blocking code by accident, your roboRIO will lock up and you’ll have to boot into safe mode to delete the robot binary.

Try to fix your mechanical problems in hardware before resorting to fancy software. 3512 had to use software because the pandemic severely limited their production capacity, so prototyping a shooter that was easier to control wasn’t an option.

No, it’s not using Notifier. 3512 did cooperative multitasking on the one main robot thread using TimesliceRobot (WPILib API 2022.4.1) (well, the prototype version that eventually became WPILib’s TimesliceRobot). It’s a convenience class built on top of TimedRobot.addPeriodic().

Could you elaborate on a shooter that’s “easier to control”?

I think our shooter design is decent, I’m just trying to get away with using the NEO internal encoder for the flywheels so that we don’t have to wire external encoders all the way to our turret

One with more mass will accelerate slower, which means measurement latency will affect it less, and the sample rate can be lower.
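
That's just α = τ/J: for a fixed motor torque, doubling the moment of inertia halves the angular acceleration, so the speed changes less between samples and latency hurts less. A plain-Java sketch with made-up torque and inertia numbers:

```java
public class FlywheelAccel {
    /** Angular acceleration (rad/s^2) from Newton's second law for rotation. */
    public static double angularAccel(double torqueNm, double inertiaKgM2) {
        return torqueNm / inertiaKgM2;
    }

    public static void main(String[] args) {
        double torque = 2.5;   // assumed available motor torque, N*m (illustrative only)
        double lightJ = 0.002; // low-inertia wheel, kg*m^2
        double heavyJ = 0.004; // twice the inertia
        System.out.println(angularAccel(torque, lightJ)); // 1250 rad/s^2
        // Half the acceleration: per-20ms speed changes are smaller,
        // so measurement latency matters less.
        System.out.println(angularAccel(torque, heavyJ)); // 625 rad/s^2
    }
}
```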

Yeah, I figured. We wanted to have a low moment of inertia on our shooter so that we could change speed really fast to spit out the other team’s ball relatively slowly.

Sorry for all the questions, but do you think I would get decent results with just the 4-tap moving average sampled every 10 ms and then fed into the state space flywheel, without having to do all the complicated thread stuff?

I don’t know. It depends on your particular robot.