The other day I was doing some research on the Kinect sensor and found out that it has a 3-axis accelerometer in it. I’m thinking it would be a neat project we could discuss/accomplish in this thread: use the accelerometer data to calculate the robot’s “x” movement and “z” movement on the field.

Since we don’t really need “y” values, we can toss them out. We can use a time interval of about 100ms to calculate distance traveled with this equation: D = (1/2)at^2, where *a* is the acceleration and *t* is the time interval. After that, the only problem would be displaying a small diagram on the LV dashboard that shows an aerial view of the field and a scale-model robot that changes position every 100ms on the field diagram, based on the change in distance of the “x” and “z” values.

… and that formula is valid only if “a” is constant over the time interval.

Hate to be a party pooper, but… Due to the double integration, tiny errors in “a” will rapidly accumulate so that the computed position will quickly diverge from the true position.

Trapezoidal integration will help somewhat: Given t, x, v, and a at some point in time, and a[sub]new[/sub] at some later point in time t[sub]new[/sub], proceed as follows:

dt = t[sub]new[/sub] - t;
v[sub]new[/sub] = v + dt*(a[sub]new[/sub]+a)/2;
x[sub]new[/sub] = x + dt*(v[sub]new[/sub]+v)/2;
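As a quick sanity check, here is that update rule in Python, run against a hypothetical constant-acceleration stream (the 20ms step size and the data are made up for illustration):

```python
# Sketch of the trapezoidal update above, using invented sample data.
def trapezoid_step(t, x, v, a, t_new, a_new):
    """One trapezoidal-integration step: acceleration -> velocity -> position."""
    dt = t_new - t
    v_new = v + dt * (a_new + a) / 2.0
    x_new = x + dt * (v_new + v) / 2.0
    return x_new, v_new

# Example: constant 1 m/s^2 acceleration sampled every 20 ms for 1 s.
t, x, v, a = 0.0, 0.0, 0.0, 1.0
for i in range(1, 51):
    t_new, a_new = i * 0.02, 1.0
    x, v = trapezoid_step(t, x, v, a, t_new, a_new)
    t, a = t_new, a_new

print(round(x, 4), round(v, 4))   # ~0.5 m and 1.0 m/s, matching D = (1/2)at^2
```

For constant acceleration the trapezoidal rule is exact, which is why this matches the closed-form answer; with real, noisy sensor data it will not.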

Great idea,
but how is the Kinect related to this?
Can’t it be done with the KOP accelerometer too?

BTW in what units is the output of the accelerometer? m/s^2, etc…

We use the Kinect for vision processing due to its depth capabilities. The KoP accelerometer should work too! You can read about what data the Kinect gives here.

So, theoretically, could you accurately plot a robot’s location on the field using the equations you gave? And what other variables would I need to get as inputs besides time and acceleration?

What if the time variable was such a very short amount of time, like 50ms, and you just took the average acceleration over that time period?

Shorter integration times are generally better. You will still see some drift due to the double integration and the fact that the measured accel has error.

…and you just took the average acceleration over that time period?

That’s what trapezoidal integration tries to do.

Play around with it in a spreadsheet or Maxima or Octave or SciLab and see for yourself.

If you want to play with this, it may be worth checking to see if your laptop has an accelerometer in it. I know that my last Mac laptop did. It was used to park the hard drive before it landed during a fall.

If so, you can read that accelerometer in LV and play a bit.

I looked on the Apple App Store and there are a number of free apps that let you see the accelerometer data and let you see their interpretation of the data.

As for using this on the robot, the key cases to think about are the ones where the floor is uneven. Sitting still, the robot and its accelerometer are tilted. Gravity’s 1g is no longer in the pure Z direction. It is imparting a force on the other axes. If your code doesn’t calibrate and ignore this, it looks like the robot is constantly accelerating in the uphill direction. If you do calibrate it for that tilt, then drive the robot a foot and stop it again, the floor is likely tilted a new direction. Once again, sitting still, you will accelerate uphill.
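To put a rough number on that tilt problem, here is a sketch with illustrative values (not measured data):

```python
import math

# Apparent acceleration from a small uncompensated tilt, and the position
# error it produces if blindly double-integrated (illustrative numbers only).
g = 9.81                                       # m/s^2
tilt_deg = 1.0                                 # a barely visible tilt
a_err = g * math.sin(math.radians(tilt_deg))   # accel that "appears" on a floor axis

t = 10.0                                       # seconds sitting still
pos_error = 0.5 * a_err * t * t                # D = (1/2)a*t^2 with a constant error
print(f"apparent accel: {a_err:.3f} m/s^2")
print(f"position error after {t:.0f} s: {pos_error:.1f} m")
```

A one-degree tilt looks like roughly 0.17 m/s^2 of acceleration, which double-integrates to several meters of phantom motion in just ten seconds of the robot sitting still.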

It is an interesting problem, and accelerometers are certainly useful on the robot, but integrating them to identify speed or location without isolating them from the force of gravity is quite hard.

Do you have any examples of how to read that built-in accelerometer in LV? We’re not using any Apple products, but it may be worth checking to see if we have one…

As for the tilt, if we had a gyro on there and re-calculated the “down” direction whenever we weren’t level, would that help fix the tilt problem? I mean, this year the floor was totally level, but I guess it would go nuts if we got tipped or something.

Also, I talked to my Physics teacher and he said that using this equation for each axis would work:
d = (1/2)at^2 + V[sub]o[/sub]t
But we would have to get V[sub]o[/sub] from the equation:
V[sub]f[/sub] = V[sub]o[/sub] + at
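Chained over successive sample intervals, those two equations look something like this in Python (the acceleration samples are made up, and it assumes the acceleration is constant within each interval):

```python
# Per-interval distance using d = v0*t + (1/2)a*t^2, carrying velocity forward
# with v_f = v0 + a*t (assumes acceleration is constant within each interval).
def interval_step(v0, a, t):
    d = v0 * t + 0.5 * a * t * t
    v_f = v0 + a * t
    return d, v_f

# Hypothetical acceleration samples (m/s^2), one per 100 ms interval:
accels = [0.5, 0.5, 0.0, -0.5, -0.5]
v, total = 0.0, 0.0
for a in accels:
    d, v = interval_step(v, a, 0.1)
    total += d
print(round(total, 4), round(v, 4))   # accelerate, coast, brake back to rest
```

Note that this is mathematically the same idea as the trapezoidal integration posted earlier, just written with the kinematics equations per interval.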

Also, we talked about the case where the robot is turning, putting acceleration on the axis tangent to the curve and he used these few equations:

The use of a[sub]c[/sub] = V[sub]t[/sub]^2/r and the other equations only applies if the robot is moving in a circle. I do not know the skill of your drivers, but most drivers I have seen do not drive in circles. The V[sub]t[/sub] refers to the velocity of the object tangential to the acceleration.

The w[sub]f[/sub] = w[sub]0[/sub] + xt is similar to v[sub]f[/sub] = v[sub]0[/sub] + at. It is the rotational velocity of a body under constant rotational acceleration. x is the rotational acceleration in radians/s^2.

In circular motion, velocity, acceleration, and position can be related to their rotational analogues by dividing by the radius.

Another interesting idea (that may be completely wrong) is to use a gyro with the forward position to create a set of vectors that might be used to find position in a polar system.

The code for trapezoidally integrating an acceleration to get distance was given in this thread in an earlier post.

If your acceleration is in a plane (the plane of the floor), use the same concept to get your position in the plane:

Given t, x, y, v[sub]x[/sub], v[sub]y[/sub], a[sub]x[/sub], and a[sub]y[/sub] at some point in time, and a[sub]xnew[/sub] and a[sub]ynew[/sub] at some later point in time t[sub]new[/sub]*,
compute v[sub]xnew[/sub], v[sub]ynew[/sub], x[sub]new[/sub], and y[sub]new[/sub] as follows:

dt = t[sub]new[/sub] - t;

v[sub]xnew[/sub] = v[sub]x[/sub] + dt*(a[sub]xnew[/sub]+a[sub]x[/sub])/2;
x[sub]new[/sub] = x + dt*(v[sub]xnew[/sub]+v[sub]x[/sub])/2;

v[sub]ynew[/sub] = v[sub]y[/sub] + dt*(a[sub]ynew[/sub]+a[sub]y[/sub])/2;
y[sub]new[/sub] = y + dt*(v[sub]ynew[/sub]+v[sub]y[/sub])/2;

… where x,y is the location of the accelerometer in the fixed plane of the floor. Note that you will have to convert your accelerometer signal from the vehicle reference frame to the fixed x,y reference frame of the floor, using the gyro to do the coordinate rotation.

As stated earlier, the errors will accumulate quickly and the computed position will diverge from the true position.

*Just to be absolutely clear for those who may be new to this: t[sub]new[/sub] is not one giant step from t. It is a very small integration time step (say 20ms) later than t. The repetition of this calculation over time is known as numerical integration.
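Putting the 2D update above together with the gyro-based coordinate rotation, a sketch might look like this (the heading source, sample rate, and data are all hypothetical):

```python
import math

# 2D trapezoidal integration with a gyro-based rotation from the robot's
# body frame into the fixed field frame (heading source and data invented).
def to_field(ax_b, ay_b, heading):
    """Rotate a body-frame acceleration into the field frame by the gyro heading."""
    c, s = math.cos(heading), math.sin(heading)
    return c * ax_b - s * ay_b, s * ax_b + c * ay_b

def step_2d(state, ax_new, ay_new, dt):
    """One trapezoidal update of (x, y, vx, vy, ax, ay)."""
    x, y, vx, vy, ax, ay = state
    vx_n = vx + dt * (ax_new + ax) / 2
    vy_n = vy + dt * (ay_new + ay) / 2
    x_n = x + dt * (vx_n + vx) / 2
    y_n = y + dt * (vy_n + vy) / 2
    return (x_n, y_n, vx_n, vy_n, ax_new, ay_new)

# Example: robot accelerates 1 m/s^2 straight "ahead" while heading 90 degrees,
# so all motion should land on the field's +y axis.
ax0, ay0 = to_field(1.0, 0.0, math.pi / 2)
state = (0.0, 0.0, 0.0, 0.0, ax0, ay0)
for _ in range(50):                         # 50 steps of 20 ms = 1 s
    ax_f, ay_f = to_field(1.0, 0.0, math.pi / 2)
    state = step_2d(state, ax_f, ay_f, 0.02)
print(round(state[0], 4), round(state[1], 4))   # ≈ 0.0 and 0.5
```

On a real robot the heading passed to the rotation would come from the gyro each cycle rather than being a constant.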


Haha, he gave the a[sub]c[/sub] = V[sub]t[/sub]^2/r equation to use in the case that we were turning, which would throw off our real location; I guarantee you I don’t drive in circles! If we updated the rotational acceleration every 10ms and took the average acceleration for that period, do you think this would be a short enough interval to be able to plot location semi-accurately?

I apologize for not acknowledging you, my friend; I appreciate your input!

Using trapezoidal integration, would that eliminate the errors? Or is there another way to do it without the problems you describe? I’ve read that with robotic probes that go into caves and such, they use this kind of plotting system, an accelerometer and a gyro…

I suspect 10ms will be a short enough time, but that depends on a few hardware-specific circumstances (gyro drift and accelerometer responsiveness come to mind). Your team’s driving style may also play a role in customizing the algorithm. I am interested to see how this will turn out. Also, have you considered encoders on your drivetrain? I’d hazard a guess that 2 encoders and a gyro can produce a position estimate close enough for your needs.

I was thinking the same thing. If the accelerometer proves to be unreliable (which is what it’s looking like), encoders plus the wheel diameter can theoretically give an accurate measure of how far the robot has moved, and the gyro can give orientation.
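A bare-bones version of that encoder-plus-gyro dead reckoning might look like this (the function names and numbers are hypothetical, and it ignores wheel slip entirely):

```python
import math

# Dead reckoning from drive encoders plus a gyro (names and data invented).
# Each update averages the left/right encoder deltas to get the distance
# traveled, then uses the gyro heading to split it into field x/y.
def odom_step(x, y, d_left, d_right, heading_rad):
    d = (d_left + d_right) / 2.0            # distance along the robot's heading
    return x + d * math.cos(heading_rad), y + d * math.sin(heading_rad)

# Drive 1 m straight, then 1 m at 90 degrees, in 10 cm encoder increments.
x, y = 0.0, 0.0
for _ in range(10):
    x, y = odom_step(x, y, 0.1, 0.1, 0.0)
for _ in range(10):
    x, y = odom_step(x, y, 0.1, 0.1, math.pi / 2)
print(round(x, 3), round(y, 3))   # → 1.0 1.0
```

The encoder deltas come from wheel geometry (ticks per revolution times wheel circumference), which is where the wheel diameter enters the calculation.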

I was looking for a decent app on the iPhone that would work to experiment with. The closest I can come to is one called Vibration. I was using it to measure how the cell phone buzzed and compare that to an external accelerometer reading.

Anyway, the app will show the three axes and it calibrates to subtract out gravity at the initial orientation. If you leave the phone sitting still and run a five second recording, you should get relatively flat lines and that’s expected. The integrated area should be zero.

If you run the app and move the phone to the left and right, you’ll see similar cancellation, but it probably won’t quite zero. Next, during a sample recording, walk from your chair to the front door. Each step looks like a heartbeat on each axis. And yeah, they sort of cancel out, but the part that actually predicts how far I walked is just a tiny bump at the beginning of that heartbeat signal.

Then run a sample and simply tilt the device a bit. You’ll see that a five or ten degree tilt offsets the line quite a bit. And worse, it stays there for the entire sample. The integration of the tilt is huge.

Anyway, if you can find the app, or something similar, it is helpful in understanding why IMUs are hard. After all, if it was easy, the phone or Garmin would do this instead of or in addition to GPS.

Not with the accelerometer and gyro that come in the KoP. They’re not accurate enough. The problem is the double integration (to get from accel to position). The small errors in the accel and gyro signal get integrated. The errors accumulate. After a short period of time, your computed position drifts away from your true position. The gravity problem Greg mentioned also contributes to errors in the accel signal in the plane of interest (the floor).

Haha, he gave the a[sub]c[/sub] = V[sub]t[/sub]^2/r equation to use in the case that we were turning, which would throw off our real location; I guarantee you I don’t drive in circles!

The method I described applies to 2D motion in the plane of the floor, so it applies to turning (be it circular or not) as well as linear motion. The a = v^2/r is not required.

If we updated the rotational acceleration every 10ms and took the average acceleration for that period, do you think would this be a short enough interval to be able to plot location semi-accurately?

You can answer this question by simulating a simple example in Excel or LabVIEW or Maxima or Octave or SciLab or any CAS tool or programming language of your choice. Assume you have a vehicle traveling in a perfect circle of radius R at a constant speed S. Then you know what the true ax and ay components of the acceleration are at any point in time. Do the numerical integration using those perfectly correct numbers. You should get almost perfect circular motion. Now introduce a small error into those ax and ay numbers, to reflect errors expected in the KoP gyro and accelerometer, and do the integration. You’ll probably get something like a spiral instead of a circle.
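Here is one way to run that suggested experiment in Python rather than a spreadsheet (the radius, speed, step size, and bias are all made-up numbers):

```python
import math

# Trapezoidally integrate the true acceleration of uniform circular motion,
# then repeat with a small constant bias on ax to mimic sensor error.
def integrate(R, S, dt, steps, bias=0.0):
    """Return final distance from the circle's center after integration."""
    w = S / R                               # angular rate (rad/s)
    x, y = R, 0.0                           # start on the circle
    vx, vy = 0.0, S                         # tangential velocity at t = 0
    ax, ay = -S * S / R, 0.0                # centripetal acceleration at t = 0
    for i in range(1, steps + 1):
        t = i * dt
        ax_n = -(S * S / R) * math.cos(w * t) + bias   # bias models sensor error
        ay_n = -(S * S / R) * math.sin(w * t)
        vx_n = vx + dt * (ax_n + ax) / 2
        vy_n = vy + dt * (ay_n + ay) / 2
        x += dt * (vx_n + vx) / 2
        y += dt * (vy_n + vy) / 2
        vx, vy, ax, ay = vx_n, vy_n, ax_n, ay_n
    return math.hypot(x, y)

steps = int(4 * math.pi / 0.02)             # one full revolution at R=2, S=1
print(round(integrate(2.0, 1.0, 0.02, steps), 3))             # stays near 2.0
print(round(integrate(2.0, 1.0, 0.02, steps, bias=0.05), 3))  # spirals well away
```

With perfect data the computed path stays on the 2 m circle; a constant 0.05 m/s^2 bias drags the computed position meters off the circle within one revolution, which is the drift problem in a nutshell.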

I’ve read that with robotic probes that go into caves and such, they use this kind of plotting system, an accelerometer and a gyro

I don’t know about the probes you are referring to, but if they use only an accelerometer and gyro to compute position they’re probably much more expensive (and accurate) than those in the KoP.

Or is there anther way to do it without the problems you describe?

Placed properly, 3 unpowered omni follower wheels, each with an encoder, could theoretically be used to compute both position and rotational orientation – without the need for a gyro or accelerometer. That would have a different set of problems.

If you’re using a properly-programmed true swerve drive*, you can compute position and orientation of the vehicle from the encoders on the wheels. But this introduces a different set of errors due to the dynamic response of the steering and wheel speeds in response to rapid changes in command. And all it takes to throw the computation off is one good bump that changes the orientation of the vehicle.

If you’re using a skid-steer drivetrain, the relationship between the powered-wheel encoder readings and the actual vehicle movement during turns gets muddied considerably in ways that may not be easily predictable. Probably not a good solution.

*ie independent steering and drive for each wheel, with properly programmed steering angle and wheel speed for each wheel.

While not accelerometer related, there have been a number of posts over the years describing issues with the “drift” associated with gyros over time and the error this causes when calculating field position. One method I’ve experimented with that seems to work well to compensate/eliminate most of this error utilizes two gyros. A high rate gyro for most turns (250 to 500 deg/sec), and a low rate gyro (30 deg/sec) for higher accuracy in slow curves and determining when the robot is stationary (for zero compensation). With the higher resolution of the low rate gyro, it is much easier to determine when you can automatically adjust the zero point. This method does break down when the robot is in continuous motion, but typically there are periods of time within a match (and certainly before the match starts), where the gyro can update its zero point. During bench testing, I was able to achieve a heading drift of under 2 degrees per hour when stationary. The heading calculation algorithm would automatically switch between the gyros at a 20 deg/sec rate (67% of full scale of the low rate gyro).
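A sketch of that dual-gyro selection logic (the 20 deg/sec threshold comes from the post above; the function names and sample data are hypothetical):

```python
# Dual-gyro heading: prefer the low-rate gyro inside its range, switch to the
# high-rate gyro for fast turns. Threshold per the post: 20 deg/sec, 67% of
# the 30 deg/sec low-rate gyro's full scale. Names and data are invented.
LOW_RANGE_SWITCH = 20.0   # deg/sec

def select_rate(high_rate, low_rate):
    """Pick whichever gyro reading to trust for this sample."""
    if abs(low_rate) < LOW_RANGE_SWITCH:
        return low_rate                 # higher resolution on slow curves
    return high_rate                    # only the high-rate gyro sees fast turns

def integrate_heading(samples, dt):
    """Accumulate heading (degrees) from (high, low) rate pairs at a fixed dt."""
    heading = 0.0
    for high, low in samples:
        heading += select_rate(high, low) * dt
    return heading

# Hypothetical slow 5 deg/sec curve held for 2 s, sampled every 20 ms:
samples = [(5.2, 5.0)] * 100
print(round(integrate_heading(samples, 0.02), 2))   # → 10.0
```

The zero-point recalibration described in the post would be a separate step that subtracts the low-rate gyro's average reading whenever the robot is known to be stationary.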

What about tank drive? Would it totally throw everything off?


I’m not too familiar with gyros. Do you have any sample code I could take a look at?