Question about AnalogInput/oversampling

Hi CD,

Quick question: How does the AnalogInput object’s averaging work?

I get that the setAverageBits() method sets the number of samples to average, but I can’t figure out what setOverSampleBits() does. I’ve dug through the AnalogInput ScreenSteps page, but the explanation isn’t really doing it for me.

If it helps, we are using a Sharp IR sensor for distance measurements, and we have some noise in the readings that could easily be filtered out.

You can find your answer in the WPILib code (specifically the docblock comment above the class definition). If you want to dig further, you can find the C++ code that AnalogJNI calls into in the results of this search.
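While you’re digging, here’s roughly how the two knobs look in robot code. This is a sketch using the standard WPILib Java AnalogInput API; the channel number and bit counts are made-up examples, not a recommendation:

```java
import edu.wpi.first.wpilibj.AnalogInput;

// Sharp IR sensor on analog channel 0 (channel and bit counts are examples).
AnalogInput ir = new AnalogInput(0);

// Oversampling: the FPGA sums 2^4 = 16 raw samples per returned value,
// so getValue()/getAverageValue() grow by a factor of 16 (extra resolution).
ir.setOversampleBits(4);

// Averaging: the FPGA then averages 2^4 = 16 of those oversampled values,
// smoothing noise without changing the range.
ir.setAverageBits(4);

// Read the filtered result; getAverageVoltage() scales back to volts.
double volts = ir.getAverageVoltage();
```

Note the order: per the docs, oversampling is applied first in the FPGA, then averaging runs on the oversampled results.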

Hopefully this helps.

I’d be interested to hear an “FRC-friendly” explanation as well.

I know there are at least two sample rates involved - the 500 kS/s (samples per second) rate of the ADC itself, and the (presumably) 20 ms period (50 Hz) at which the code reads the value of the voltage.

Oversampling appears to be additive (the ADC/FPGA just adds up extra samples in between queries from the main software). In my mind, averaging at this 500 kS/s rate should accomplish the same thing, neglecting possible rounding effects from integer math.

However, if averaging is done at the 50 Hz period of the code’s reads (or any other period), then the documentation comments make sense - at least in how they describe the signal becoming more stable but changing more slowly (since a rolling-average filter acts like a low-pass filter, sorta).
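The sum-vs-divide distinction can be shown with a toy model. This assumes (per my reading of the docs) that both features operate on 2^bits consecutive raw ADC readings; the raw values below are made up:

```java
// Toy model of the FPGA's oversample vs. average behavior.
public class OversampleDemo {
    // Oversampling: the FPGA *sums* 2^bits raw samples, so the result's
    // range grows by a factor of 2^bits (extra effective resolution).
    static int oversample(int[] raw) {
        int sum = 0;
        for (int s : raw) sum += s;
        return sum;
    }

    // Averaging: sum the samples, then divide back down, so the range
    // stays the same; integer division throws away the extra bits.
    static int average(int[] raw) {
        return oversample(raw) / raw.length;
    }

    public static void main(String[] args) {
        // Four (2^2) noisy 12-bit readings around a "true" value of ~2000.
        int[] raw = {1998, 2003, 1999, 2002};
        System.out.println("oversampled: " + oversample(raw)); // 8002
        System.out.println("averaged:    " + average(raw));    // 2000
    }
}
```

So oversampling preserves the extra low-order information (at the cost of a larger value range), while averaging discards it in exchange for a stable value in the original range.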

EDIT: Fletch FTW with the key comment to explain it.
Still not clear on how/why to use each, though. Historically we haven’t needed anything near even the 50 Hz sample rate of the software, so I’ve never dug into it…

Sounds like the answer in your case is to set the number of average bits as high as you can while still keeping the response time reasonable?
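A back-of-envelope check on “as high as you can”: assuming the quoted 500 kS/s ADC rate and that each returned result consumes 2^(averageBits + oversampleBits) raw samples (my understanding of how the FPGA pipeline stacks; the numbers are illustrative, and the channel may get less than the full ADC rate if other channels are in use):

```java
// Estimate how long one averaged result takes to accumulate.
public class ResponseTime {
    static double updatePeriodMs(int averageBits, int oversampleBits, double sampleRateHz) {
        // Each result consumes 2^(averageBits + oversampleBits) raw samples.
        long samplesPerResult = 1L << (averageBits + oversampleBits);
        return 1000.0 * samplesPerResult / sampleRateHz;
    }

    public static void main(String[] args) {
        // How many bits before a result takes longer than the 20 ms loop?
        for (int bits = 0; bits <= 14; bits += 2) {
            System.out.printf("averageBits=%2d -> %.3f ms per result%n",
                    bits, updatePeriodMs(bits, 0, 500_000.0));
        }
    }
}
```

Under those assumptions, 2^13 samples take about 16.4 ms, so roughly 13 total bits is the ceiling before a result takes longer than one 50 Hz loop iteration to accumulate.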