Quote:
Originally Posted by JesseK
Hmm. Is there a way to experimentally determine the gyro drift as a function of time (or amount of degrees already turned...) and use that as an error adjustment during a match? Or is the drift non-constant?
Well, it's tricky.
Integration drift arises because we are integrating a noisy signal. In other words, every time we read signal X, we are actually reading a random variable which consists of the true X, plus or minus a noise term with a given mean and variance.
If X_observed is what we see, the effect is:
X_observed = X_actual + noise, where the noise has some mean and variance (usually modeled as a Gaussian distribution)
Over time, all these small errors result in a "random walk" in position - you can't recover exactly what the noise was on each reading, but you can estimate the uncertainty of your current position estimate.
In other words, you can estimate (with reasonably high confidence) the variance of your current position reading, but you won't know where inside that error bound you actually are.
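To make that concrete, here's a rough Python sketch (not from anything above - the sample rate and noise level are made-up numbers) that integrates a simulated stationary gyro and shows the heading wandering away from zero:

import numpy as np

# Made-up numbers for illustration: 100 Hz gyro, 0.5 deg/s rate noise.
dt = 0.01            # seconds between samples
noise_std = 0.5      # deg/s standard deviation of the noise
n_samples = 6000     # one minute of data

true_rate = np.zeros(n_samples)   # robot sitting perfectly still
measured_rate = true_rate + np.random.normal(0.0, noise_std, n_samples)

heading_estimate = np.cumsum(measured_rate) * dt   # naive integration
print("Heading error after 60 s: %.2f deg" % heading_estimate[-1])

# The error bound is predictable even though the error itself isn't:
# the std-dev of the sum of N noise samples is noise_std * sqrt(N).
print("Predicted 1-sigma bound: %.2f deg" % (noise_std * dt * np.sqrt(n_samples)))

Run it a few times and the final error comes out different every run - that's the random walk. The bound is the part you can predict; where you land inside it is the part you can't.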
In industry, we often try to augment fast measurement sources that drift (like gyros and accelerometers that get integrated) with slower, but stable, measurement sources (like compasses and GPS). Then, you get the best of both worlds.
(Though getting a compass to work on a metal FIRST robot is a challenge in itself).
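For what it's worth, the simplest version of that fusion is a complementary filter: integrate the gyro every loop for responsiveness, then nudge the result toward the compass (or any other absolute heading source) so the drift can't accumulate. A minimal Python sketch, with made-up weights and placeholder sensor functions (read_gyro_rate and read_compass aren't from any particular library):

def fuse_heading(gyro_heading_deg, compass_heading_deg, gyro_weight=0.98):
    # Complementary filter: trust the gyro short-term, let the slow but
    # stable compass bleed the accumulated drift back out. The 0.98/0.02
    # split is an illustrative guess - tune it per robot. Angle wrap-around
    # at 0/360 degrees is ignored here to keep the sketch short.
    return gyro_weight * gyro_heading_deg + (1.0 - gyro_weight) * compass_heading_deg

# Hypothetical control-loop usage:
# heading += read_gyro_rate() * dt                  # fast, but drifts
# heading = fuse_heading(heading, read_compass())   # slow, stable correction

A Kalman filter does the same job with the weights derived from the sensor variances instead of hand-tuned, which is usually what the gyro + GPS fusion in industry looks like.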