We’ve been trying to get the accelerometer up and running, and we can get some fairly accurate data, but the kickback from stopping seems to far exceed the acceleration that got the accelerometer moving in the first place.

For example, let’s say we move it at -2 ft/s (i.e., 2 ft/s backwards): it gives an approximately correct reading, but when we let go, the velocity suddenly launches up to 0.5 ft/s forwards. The more it moves around, the farther the integrated velocity ends up from 0.

So I did a bit of investigating and found that when the sensor was tipped at a 90-degree tilt to the ground, one side gave an mV reading (with the 2500 mV offset subtracted) of 327, and the other -317. I thought this might be a bias error (maybe the zero point was a little off 2500 mV?), so I used the gyro bias-finding calculation and looked at the raw output again. Even at its best, it didn’t give the same magnitude for forward and backward.

So we’re wondering whether there’s some good method for eliminating the error. Currently we’ve added a deadzone: if the accelerometer reads zero and the velocity is less than 0.5 ft/s, we just zero the velocity. Of course this makes the sensor less accurate, and if our robot is moving at a constant velocity it will cause errors when it stops.
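One refinement on the single-sample deadzone is a zero-velocity update (ZUPT): only clamp the velocity after the acceleration has stayed inside the noise band for many consecutive samples, instead of the instant it reads zero once. A sketch, with thresholds that are purely assumptions:

```c
#include <math.h>

#define ACCEL_NOISE_BAND 0.2  /* ft/s^2, assumed sensor noise floor */
#define STILL_SAMPLES    50   /* ~50 ms at 1 kHz, assumed */

typedef struct {
    double v;         /* integrated velocity, ft/s */
    int still_count;  /* consecutive samples inside the noise band */
} vel_state;

void vel_update(vel_state *s, double accel, double dt) {
    s->v += accel * dt;
    if (fabs(accel) < ACCEL_NOISE_BAND) {
        /* Only zero the velocity after a sustained quiet period,
           so one noisy zero-reading doesn't clamp a moving robot. */
        if (++s->still_count >= STILL_SAMPLES) {
            s->v = 0.0;
        }
    } else {
        s->still_count = 0;
    }
}
```

The honest caveat: an accelerometer alone can’t distinguish rest from constant-velocity coasting, since both read near-zero acceleration, so this still has the failure mode you describe. In practice people fix that by fusing in a second velocity reference such as wheel encoders.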

The best I could do was make the integration happen as often as possible, so I modified some of the accelerometer code so that the ADC interrupt performs the integration as well.
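If the integration is happening in the ADC interrupt anyway, trapezoidal integration (averaging the current and previous samples) is nearly free and cuts the discretization error compared to the rectangular rule. A sketch of the idea; `adc_isr` and `TIMER_DT` are hypothetical names, not from our actual code, and note this does nothing about bias, which still has to be calibrated out first:

```c
static double prev_accel = 0.0;          /* previous bias-corrected sample */
static volatile double velocity = 0.0;   /* ft/s, read by the main loop */

#define TIMER_DT 0.001  /* assumed 1 kHz ADC interrupt rate */

/* Called from the ADC interrupt with a bias-corrected sample in ft/s^2.
   Trapezoidal rule: area of each timestep is the average of the two
   endpoint samples times dt. */
void adc_isr(double accel_sample) {
    velocity += 0.5 * (accel_sample + prev_accel) * TIMER_DT;
    prev_accel = accel_sample;
}
```

The trapezoidal rule is exact for linearly ramping acceleration, which is close to what a robot does when speeding up or slowing down, so the stop/start transients should integrate more cleanly.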

Here is the code.

Any suggestions would be greatly appreciated.