Recently I’ve been trying to get our through bore encoders to work with robot characterization, despite knowing they produce a lot of noise (and therefore bad data), and I’ve been having an increasing amount of trouble. I’ve changed the encoding type to k1X and changed samplesToAverage to 7, 25, and 50, only to find the same issues arising and the data looking like the attached image. We’ve even tried changing the mode of the encoders to incremental, and we can’t seem to understand how it works. Is there any way to get these encoders to produce at least reasonable data for characterization, or would it just be easier to change out the encoders altogether?
Could you post your characterization data JSON? I want to see if the new OLS algorithm (Use new OLS algorithm for all but single-jointed arm by calcmogul · Pull Request #65 · wpilibsuite/sysid · GitHub) does any better, because the velocity data looks clean enough.
This is news to me, which through bore encoder has a lot of noise? Is this the SRX mag encoder or the REV one?
Not denying it, just want to know if this is something to look out for.
This is with the REV through bore encoder.
Yes, here is the JSON file. I have to use a Google Drive link since Chief Delphi doesn’t support JSON files.
Here you go!
We’re working on a new characterization tool with a better fitting algorithm. I’ve found that setting the motion threshold kinda high helps get rid of a lot of noise that biases the fit.
Thanks so much. So the values for Ks/Kv/Ka in your analysis should be good to use? Your new analysis tool did seem to eliminate the vertical dots/noise that we saw earlier on the quasistatic charts. Do you have an explanation for those? Thanks again for the assist on this.
Eric (Team #1164 programming mentor)
Yea. The gains move a bit when I lower the motion threshold more, but not by much.
We saw that with sysid as well, and in that case, it was caused by not trimming the ramp-up to the max acceleration well enough. The (admittedly noisy) acceleration is from a first-order numerical differentiation of the velocity data, and we use that to figure out what to trim. We’re still looking into ways to filter that data.
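For context, the acceleration used there is just a finite difference of adjacent velocity samples. A minimal sketch (placeholder data and names, not the actual sysid code):

```python
import numpy as np

# Placeholder velocity samples (units/s) at a fixed 20 ms period
T = 0.02
velocity = np.array([0.00, 0.08, 0.30, 0.65, 1.05, 1.40])

# First-order numerical differentiation of the velocity data.
# Any measurement noise in velocity gets amplified here, which is why
# the acceleration estimate looks so noisy.
acceleration = np.diff(velocity) / T

# One crude way to pick a trim point: keep only the data at or after the
# sample where the estimated acceleration peaks.
trim_index = int(np.argmax(acceleration))
```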
This is due to a well-known property of linear regressions. Increasing the motion threshold filters out the data points with the greatest (fractional) uncertainty in the independent variable. It’s important to recall that in the regression we actually run, voltage is the dependent variable, and our noisy values (velocity and acceleration) are the independent variables.
You can also improve the fit by trimming, from the acceleration data, the long stretch of time during which the robot is just running at max speed (either by running the test again and stopping the robot sooner, or else by manually removing the extraneous data). Perhaps we should add a feature for this? The purpose of the dynamic test is to examine the robot’s performance while accelerating; the longer it sits at max speed, the more regression dilution the measurement noise will introduce.
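Something like this rough sketch is what I mean by the motion threshold and the trimming (placeholder data and made-up thresholds, not actual tool code):

```python
import numpy as np

# Placeholder dynamic-test data: 2 s of samples at 20 ms, step voltage input
time = np.linspace(0.0, 2.0, 101)
voltage = np.full_like(time, 7.0)
velocity = 3.0 * (1.0 - np.exp(-3.0 * time))  # idealized step response

# 1) Motion threshold: drop samples where the mechanism is barely moving,
#    since those have the largest fractional uncertainty in velocity.
motion_threshold = 0.1  # units/s, tune per mechanism
moving = np.abs(velocity) > motion_threshold

# 2) Trim the steady-state tail: once the robot is essentially at max speed,
#    the samples add measurement noise but no acceleration information.
still_accelerating = velocity < 0.95 * velocity.max()

keep = moving & still_accelerating
time, voltage, velocity = time[keep], voltage[keep], velocity[keep]
```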
More stuff that I should probably add to the docs…
I saw similar-ish bad acceleration data when trying to characterize a Romi. Not surprisingly, it accelerates really quickly, so I could see it being hard to get good data.
Is SysID available for testing outside of the WPILib Dev group?
The flip-side to this is that if it accelerates sufficiently fast, a K_a of 0 will likely work just fine, for the same reason that it’s hard to measure K_a in the first place.
We’ve made some advances in that area recently. The new OLS algorithm we’re evaluating is more robust to this and performs the velocity and acceleration fit simultaneously. It’s fitting this equation
x_{k+1} = e^{AT} x_k + A^{-1} \left(e^{AT} - 1\right) B u_k - \frac{K_s}{K_a} A^{-1} \left(e^{AT} - 1\right) sgn(x_k)
where A = -\frac{K_v}{K_a}, B = \frac{1}{K_a}, and T is the sample period.
OLS is fitting an equation of the form x_{k+1} = \alpha x_k + \beta u_k + \gamma sgn(x_k).
The three fitted coefficients give a system of three equations that can be solved for K_s, K_v, and K_a:
K_s = -\frac{\gamma}{\beta}, K_v = \frac{1 - \alpha}{\beta}, and K_a = \frac{(\alpha - 1) T}{\beta\ln\alpha}.
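To make that concrete, here’s a minimal sketch of the solve (illustrative only, not the attached flywheel_ols.py):

```python
import numpy as np

def new_ols_gains(velocity, voltage, T):
    """Estimate K_s, K_v, K_a from evenly sampled velocity/voltage data.

    Fits x_{k+1} = alpha*x_k + beta*u_k + gamma*sgn(x_k) by ordinary least
    squares, then backs out the gains. Assumes a constant sample period T.
    """
    x_k = velocity[:-1]
    x_k1 = velocity[1:]
    u_k = voltage[:-1]

    # Independent variables of the regression
    X = np.column_stack((x_k, u_k, np.sign(x_k)))
    alpha, beta, gamma = np.linalg.lstsq(X, x_k1, rcond=None)[0]

    # Recover the feedforward gains from the fitted coefficients
    K_s = -gamma / beta
    K_v = (1 - alpha) / beta
    K_a = (alpha - 1) * T / (beta * np.log(alpha))
    return K_s, K_v, K_a
```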
Here’s an example for some of 3512’s flywheel data. It performs better because it’s fitting the exponential decay behavior directly.
Flywheel characterization.csv.txt (41.1 KB) flywheel_ols.py.txt (2.8 KB)
Here’s a whitepaper and PR for it. One downside is it assumes the measurement period is somewhat constant for the K_a computation. However, I’ve seen that for systems with fast dynamics like that flywheel, inconsistent scheduling causes lots of velocity measurement noise anyway, which broke the old OLS algorithm even worse since the acceleration computation just amplifies the noise.
Here’s how a fit for u = K_a a + K_v v + K_s sgn(v) does for the same data.
flywheel_ols_old.py.txt (2.7 KB)
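For reference, that’s a plain multiple regression with voltage as the dependent variable; roughly this (placeholder data, not the attached script):

```python
import numpy as np

# Placeholder velocity (v), acceleration (a), and voltage (u) samples
v = np.array([0.4, 0.9, 1.6, 2.1, 2.4])
a = np.array([4.2, 3.9, 3.1, 2.2, 1.5])
u = np.array([3.0, 3.5, 4.0, 4.5, 5.0])

# Fit u = K_a*a + K_v*v + K_s*sgn(v) by ordinary least squares; here the
# noisy quantities (v and a) sit on the independent-variable side.
X = np.column_stack((a, v, np.sign(v)))
K_a, K_v, K_s = np.linalg.lstsq(X, u, rcond=None)[0]
```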
For feedforward, yes, but definitely not for feedback. A K_a of zero implies the input maps directly to the target angular velocity, so the optimal feedback gain is also zero. This makes the system unable to react to disturbances.
The old OLS model does this, too (it’s a multiple regression), but modeling the transient behavior is super nifty. I’m impressed.
I guess I meant it uses data from one run instead of separate fast and slow ramping runs. But yeah, the tool just sticks the two datasets together for the OLS anyway.
Even if the system is fully determined with only one run, it’s good to run both a quasistatic and a dynamic test to sample the velocity-acceleration plane along two different lines. Sampling only along one line is going to be much less reliable, I think.
Yea. The new tool still does both tests because more data can’t hurt.
Yea, you can clone it from GitHub (wpilibsuite/sysid: System identification for robot mechanisms) and play around with it. The new OLS algorithm hasn’t been merged yet because the macOS integration tests fail due to the CI machines having horrible OS scheduling (on the order of 100s of milliseconds of jitter); the new algorithm is more sensitive to that than the old one.
Now that we have started plugging these values into RamseteCommand, we realized that we did the characterization wrong: we did it in inches rather than meters, which is what’s needed. Is there any way to convert the constants you gave us into meters, or could you run a new JSON file through the OLS algorithm? Here is the JSON if it is easier to do it that way. Thank you in advance.
If you know what units they are, like V/(m/s), you can easily convert to another distance unit. Here are the gains for your new JSON, though.
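For example, with placeholder numbers just to show the scaling (these gains are made up, not yours):

```python
INCHES_PER_METER = 39.3701

# Gains characterized with distance in inches (placeholder values)
kS_in = 0.5    # V
kV_in = 0.05   # V/(in/s)
kA_in = 0.01   # V/(in/s^2)

# 1 m/s = 39.3701 in/s, so the velocity and acceleration gains scale up
# by that factor; kS is just volts and doesn't change.
kS_m = kS_in
kV_m = kV_in * INCHES_PER_METER   # V/(m/s)
kA_m = kA_in * INCHES_PER_METER   # V/(m/s^2)
```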
Thanks so much for the very quick response and assist!