Integrating the Accelerometer

I was wondering if anyone has had success integrating the accelerometer output to obtain velocity.

For programming this in LabVIEW, does anyone know of any ways to maintain a consistent dt? Or any way to smooth out inconsistencies in the accelerometer output?
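Since LabVIEW is graphical, here's the same idea sketched in Java: run at a fixed dt, smooth the raw readings with a simple exponential filter, and integrate with the trapezoidal rule. All of the names and constants here are hypothetical, just to show the structure.

```java
// Sketch: fixed-dt trapezoidal integration of accelerometer samples,
// with simple exponential smoothing to knock down sensor noise.
public class AccelIntegrator {
    private final double dt;        // loop period in seconds, e.g. 0.02
    private final double alpha;     // smoothing factor in (0, 1]; 1 = no smoothing
    private double filtered = 0.0;  // low-pass-filtered acceleration
    private double prevAccel = 0.0; // previous filtered sample
    private double velocity = 0.0;  // integrated velocity

    public AccelIntegrator(double dt, double alpha) {
        this.dt = dt;
        this.alpha = alpha;
    }

    // Call once per loop with the raw accelerometer reading (m/s^2).
    public double update(double rawAccel) {
        filtered = alpha * rawAccel + (1.0 - alpha) * filtered;
        // Trapezoidal rule: average the last two samples before integrating.
        velocity += 0.5 * (filtered + prevAccel) * dt;
        prevAccel = filtered;
        return velocity;
    }
}
```

The important part is that `dt` is the loop's commanded period, not a measured timestamp, which is why a Timed Structure (or any fixed-rate loop) matters here.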


For maintaining a constant period I’d suggest using a Timed Structure.

Here’s a brief white paper illustrating the differences between the various ways of timing a loop:

Can I ask how you are going to use the velocity?

Is it for a transmission, or autonomous, or something else?

Thanks Mark, that’s really helpful.

It would be for autonomous.

You can use the gyro library to take advantage of the integration built into the FPGA.

Actually, would it be too ridiculous to double integrate to get displacement? Or would too much error rack up?

It depends on the accuracy you want. Within a foot or two, I suspect it may be okay; anything tighter, probably not.

Can I ask how well you did in autonomous in '10?

Right now I’m assuming your idea is to tell the robot to drive forward 10 ft rather than the typical drive forward 10 seconds.

I’ve tried double integration of accelerometer data for my job (design engineer at an automobile company). The integration was done after data acquisition, not on the fly. I got very inconsistent results depending on the size of any zero-offsets. Consequently, for measuring small distances (at 60 Hz or higher) I tend to go more for string potentiometers or laser gauging. I was trying to measure small displacements, though, so the prior poster may be spot-on for distances on the scale of 1 foot.
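To put a number on why those zero-offsets hurt so much: a constant bias b in the acceleration reading integrates to a velocity error of b·t and a position error of ½·b·t², so the error grows quadratically with time. A quick sketch (the bias value is illustrative, not measured):

```java
// Sketch: position error accumulated from a constant accelerometer
// zero-offset after double integration. Error = 0.5 * bias * t^2.
public class DriftDemo {
    public static double positionError(double biasMetersPerSec2, double seconds) {
        return 0.5 * biasMetersPerSec2 * seconds * seconds;
    }

    public static void main(String[] args) {
        // A 0.05 m/s^2 offset (roughly 0.5% of g) over a 15-second
        // autonomous period:
        System.out.println(positionError(0.05, 15.0)); // ~5.6 m of drift
    }
}
```

Even a bias well below the sensor's noise floor can swamp a foot-scale measurement over a full autonomous period.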

For measuring the larger distances of this game, you may want to go for an infrared or ultrasonic rangefinder. Those are allowed, while lasers are not.

For autonomous last year we simply used dead reckoning. We tuned it in using trial and error. The controller is programmed to apply “X” volts of power to the drive motors for “Y” seconds. This is easiest for driving straight ahead, but you could program some turns in too.

For measuring distance driving forward, you will likely get far better data with less work by using encoders on the drive gearboxes than you will from double integration of accelerometer data.

Assuming you are driving with wheels, encoders can get you almost exactly where you want to go. They measure the exact rotational travel the wheels move, and from there you can calculate your exact displacement and velocity. They (along with the gyro) are my favorite autonomous navigation sensors.
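For a conventional drive, the encoder-to-distance math is straightforward: convert counts to wheel revolutions, multiply by the wheel circumference, and divide distance deltas by dt for velocity. A minimal sketch, where `countsPerRev` and the wheel diameter are hypothetical values for your particular gearbox and wheel:

```java
// Sketch: converting encoder counts into distance and velocity for a
// plain (non-mecanum) drive with no wheel slip.
public class EncoderMath {
    // Distance traveled = revolutions * circumference.
    public static double distanceMeters(int counts, int countsPerRev,
                                        double wheelDiameterMeters) {
        double revolutions = (double) counts / countsPerRev;
        return revolutions * Math.PI * wheelDiameterMeters;
    }

    // Velocity from the change in distance over one loop period.
    public static double velocity(double deltaDistanceMeters, double dtSeconds) {
        return deltaDistanceMeters / dtSeconds;
    }
}
```

Because the encoder measures rotation directly, there is no integration drift: the error stays bounded by wheel slip and resolution rather than growing with time.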

Edit: Beaten to it.

Or use the camera so location is never an issue. Encoders and gyros require correct orientation of the robot from the start. This year there is nothing to square your robot against. For this game, I think that a compass may be helpful to zero the gyro at the start. Then encoders should be sufficient for driving straight for a certain distance.

Thanks for the feedback.

As far as encoders…I don’t really know how well those will do because we are using mecanum.

FYI, dead reckoning includes navigation using any measurements that are relative (i.e. encoders and gyros). See here.

A new feature of the CAN Jaguar will make the approach you used last year more repeatable than it used to be, without the dependency on wiring and mechanically attaching encoders. The mode is called “Voltage” mode in the CANJaguar implementation. It allows you to specify the actual voltage you want to output to the motors, instead of a percentage of the input voltage (the way PWM works). This means that even as your battery voltage varies with less-charged or older batteries, the Jaguar will compensate for the input voltage and keep the output the same each time. It also means you should avoid requesting voltages from the Jaguar that will only exist on a fully charged battery (such as 12.5 V).

You can also use this mode with RobotDrive by configuring the MaxOutput parameter.
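The idea behind voltage compensation can be sketched in a few lines. To be clear, this is not the actual Jaguar firmware or the WPILib API, just the underlying concept: scale the output duty cycle by the measured battery voltage so the motor sees the same effective volts regardless of battery state.

```java
// Concept sketch of "Voltage" mode: duty cycle = desired volts divided
// by the measured bus voltage, clamped to the valid output range.
public class VoltageCompensation {
    public static double dutyCycle(double desiredVolts, double batteryVolts) {
        double d = desiredVolts / batteryVolts;
        return Math.max(-1.0, Math.min(1.0, d)); // clamp to [-1, 1]
    }
}
```

For example, asking for 6 V yields a 50% duty cycle on a full 12 V battery but a larger duty cycle as the battery sags, which is exactly why the motor's behavior stays consistent between runs.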


I have the same question. Will encoders work for measuring distance while using mecanum wheels?

I’m also worried about having the robot correctly oriented so that gyro angle zero is exactly perpendicular to the front of the field. Otherwise, we wouldn’t get perfect results… is this going to be a huge problem for autonomous?

They can work, but they are much less tolerant of things like the robot not sitting flat, lumps in the carpet, etc. Your driver will naturally compensate for these things, but the sensor just measures what’s really happening.

Also note that the encoder is measuring the wheel rotations, which for mecanum wheels does not directly translate into distance = circumference * rotations. You will have to invert the mecanum control equations to make sense of the 4 wheel position channels.
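To illustrate what inverting those equations looks like, here is a sketch using one common mecanum roller convention. The signs depend on how your rollers are mounted, so treat these as a starting point to verify against your robot, not a drop-in solution:

```java
// Sketch: recovering robot-frame motion from the four mecanum wheel
// distances (front-left, front-right, rear-left, rear-right), assuming
// a standard 45-degree roller arrangement and no wheel slip.
public class MecanumOdometry {
    // Forward distance of the robot center: the average of all four wheels.
    public static double forward(double fl, double fr, double rl, double rr) {
        return (fl + fr + rl + rr) / 4.0;
    }

    // Sideways (strafe) distance, positive to the left in this convention.
    public static double strafe(double fl, double fr, double rl, double rr) {
        return (-fl + fr + rl - rr) / 4.0;
    }
}
```

Note the caveat in the post above: mecanum rollers slip by design, so this math is only as good as the wheels' grip, which is part of why follower omni-wheels are attractive.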

It would probably be easier to put encoders on two unpowered omni-wheels that are 90 degrees from each other.

Something similar to