Quote:
Originally Posted by jhersh
Another thing you might try that has been very effective for me is oversampling the value before accumulating. Oversampling is much like averaging, except that you don't divide. It's easiest (just like averaging) if you work with powers of 2. Let's say you choose to add 4 bits to your measurements by oversampling. That means that for each sample you want to accumulate, you need to actually acquire 2^4 or 16 samples. By doing this, your center value that you subtract off of each value before accumulating can be more accurate.
Joe,
I agree that oversampling to increase the resolution could be a good way to reduce the drift in Gohan22's integration. Integrating a 1-bit error in an 8-bit signal is much, much worse than integrating a 1-bit error in a 12-bit signal.
That said, your math is a little off, Joe. Acquiring more resolution from a given signal is a process of oversampling and decimation. The decimation part is important, since half of the extra bits you pick up are still just noise. Using your method, accumulating 16 samples does grow the accumulated total by 4 bits over the original value, but the last 2 of those bits are still just so much noise.
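To put numbers on it: summing 16 eight-bit readings gives you a 12-bit total (0 to 4080), but only 2 of those 4 extra bits carry real information. Shift the total right by 2 and you're left with an honest 10-bit result, which is all 16 samples can actually buy you.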
To actually oversample a signal to increase resolution, you need to sample at 4^n times your original rate, accumulate the values you're sampling, then right-shift the result by n bits. To get 4 more bits of resolution, you need to sample 256 times faster. There are some really important caveats here, though.
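Just to make that recipe concrete, here's a minimal sketch in C. It's not tied to any particular chip; read_adc() is a made-up placeholder for whatever routine actually returns one raw conversion, and the accumulator is assumed wide enough to hold 4^n samples.
Code:
#include <stdint.h>

/* Placeholder for the real ADC read routine; returns one raw sample. */
extern uint16_t read_adc(void);

/* Gain extra_bits of resolution: take 4^extra_bits samples, accumulate
 * them, then right-shift the sum by extra_bits (the decimation step). */
uint32_t oversample(uint8_t extra_bits)
{
    uint32_t count = 1UL << (2u * extra_bits);   /* 4^n samples */
    uint32_t sum = 0;

    for (uint32_t i = 0; i < count; i++) {
        sum += read_adc();
    }
    return sum >> extra_bits;
}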
First, your signal HAS to be noisy for oversampling to work. If Gohan22's signal from an 8-bit ADC is nice and stable, then oversampling won't get him anything. If your signal is stable at 127 when the real value is 127.4, then adding 127 a bunch of times isn't going to get you any more information. So oddly enough, I'd prefer Gohan22's slightly noisy 10-bit ADC result, as there's more information to be extracted there.
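For example, sixteen stable readings of 127 sum to 2032; shift that right by 2 and you get 508 every single time, when the scaled-up answer you were hoping for is closer to 510 (127.4 * 4 = 509.6). No amount of repetition will recover that missing fraction.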
The other caveats are a little more subtle. The noise has to be random in nature, or at least appear so to your ADC. If some of the noise is from a timer that increases your reading every 3rd sample, you'll get skewed results. Also, the signal should be relatively stable within your oversampling window. Similar to the last point, if the signal is meaningfully fluctuating during the oversampling window, those meaningful fluctuations are going to disturb your average. Plus, if it's a meaningful fluctuation, you're sampling below the Nyquist rate and you'll end up with an aliased signal anyway.
Finally, if your oversampled and decimated signal is still noisy, you can always further decimate it until it stops being so. You can also sample even faster to make up for these extra decimations and maintain your target sampling rate.
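One way to do that extra smoothing, building on the oversample() sketch above, is to average 2^m of the decimated readings. That's ordinary averaging, so it knocks down residual noise without claiming any more resolution.
Code:
/* Average 2^avg_shift decimated readings from oversample() above.
 * Plain averaging: smooths noise, adds no extra bits of resolution. */
uint32_t oversample_smoothed(uint8_t extra_bits, uint8_t avg_shift)
{
    uint32_t sum = 0;

    for (uint32_t i = 0; i < (1UL << avg_shift); i++) {
        sum += oversample(extra_bits);
    }
    return sum >> avg_shift;
}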
So! In conclusion to a long-winded post, I think Gohan22 should:
- Dump the 8-bit ADC and continue with the 10-bit.
- Determine what the likely bandwidth of his signal is.
- Plan to sample at no less than 2 * that bandwidth.
- Oversample his signal as much as feasible. I'd think +2 bits of resolution is reasonable, plus a little additional oversampling to average out the noise if it's still necessary. (A rough sketch of what that might look like follows this list.)
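To put rough numbers on that last item, here's a hypothetical use of the oversample() sketch above, purely as an illustration and not a drop-in driver: with the 10-bit ADC and +2 bits, each output value costs 4^2 = 16 raw conversions, so the 12-bit readings come out at 1/16th of the raw sample rate.
Code:
/* Hypothetical use of oversample() for the 10-bit case: +2 bits means
 * 16 raw conversions per reading and a 12-bit result. */
void take_reading(void)
{
    uint32_t position = oversample(2);   /* ~12-bit reading from the 10-bit ADC */

    /* ...subtract the center value and accumulate it, as in the original
     * integration loop... */
    (void)position;   /* placeholder so this sketch stands on its own */
}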