This summer, Vex is releasing accelerometer and gyroscope sensors, and I’ve been trying to figure out how they work and what they’d be used for. I’ve tried reading posts from multiple sources and sort of have an impression of how they work, but then I’ll read something that seems to contradict what I had mentally constructed from my previous reading.
Can anyone explain to a not-very-techie-type person (think “stay-at-home mom drafted as technical mentor/coach”) how these 2 sensors work, and give some instances of when you would find them useful?
Think of the accelerometer as a big flat plate with a plumb bob (string with a weight on the end) hanging down from it. As you accelerate the plate in a horizontal direction, the angle of the string relative to the plate will change. As you tip the plate, the angle of the string relative to the plate will also change. The value returned by the accelerometer is, essentially, the angle of the string. Thus if you hold the plate level, you are measuring acceleration. If you hold the plate at constant velocity, you are measuring a change in the angle (tipping) of the plate. (Which, since gravity counts as acceleration, is the same thing, even though the plate isn’t actually speeding up or slowing down, but I digress…)
In reality, it kind of works upside down: my understanding is that the MEMS (micro-electro-mechanical systems) devices we use actually create a bubble inside the chip that floats, and it is the position of the bubble that gets returned. Same thing, but upside down.
A gyro uses… well… something similar, I think, to measure twist. I’ll have to look up what actually goes on inside a MEMS gyro, or let someone else describe that here, however what the gyro is sensing and returning is how fast it is being turned.
Note that both sensors detect the rate of change in a parameter. This is a great opportunity to talk about derivatives and integrals with students. If you want to know what direction your robot is pointing, you use a gyro. By repeatedly measuring the rate of change in direction (turning one direction is positive, the other negative), you can create a graph. The cumulative area under the curve of the graph is… approximately… how far you have turned.
Likewise, to determine your speed, hold the accelerometer flat and rapidly measure your acceleration. Integrate that value and you have your velocity. Integrate that and you have your position.
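The measure-and-integrate loop described above can be sketched in a few lines of Python. The loop period and the acceleration values are made up for illustration; a real robot would read the sensor each pass instead of using a canned list:

```python
# Sketch of "repeatedly measure, then accumulate area under the curve".
def integrate(samples, dt, initial=0.0):
    """Simple Euler integration: add sample * dt each step."""
    total = initial
    history = []
    for s in samples:
        total += s * dt
        history.append(total)
    return history

dt = 0.02                          # assumed 20 ms control loop
accel = [2.0] * 50                 # pretend: constant 2 ft/s^2 for 1 second
velocity = integrate(accel, dt)    # ft/s at each step (~2 ft/s at the end)
position = integrate(velocity, dt) # ft at each step (integrated twice)
```

Note that integrating twice like this is exactly the accelerometer-to-position trick discussed later in the thread, and it accumulates error quickly in practice.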
There are several factors that make this a little more challenging to do than it sounds, and it is possible that you can use pre-defined software libraries that make this easier to do than it sounds (someone else takes care of the measuring and integrating for you), but that is a simple description of what the two sensors do.
We used a gyro on our robot this year to make sure it would track in a straight line. If the robot turned (without instruction from the driver) the gyro would send a message saying “I’m turning in this direction at this speed”. The robot would then automatically speed up the wheels on one side and slow down (or reverse) the wheels on the other side to compensate and put the robot back on track. It worked great! Had we got the code working right, we could have used those measurements, integrated with respect to time, to determine which direction we were facing and help with navigating the corners of the course in hybrid mode.
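A minimal sketch of that drive-straight correction, assuming a proportional gain `kp`, a gyro reporting degrees per second, and motor powers in a -1..1 range; none of this is the team's actual code, and sign conventions depend on your gyro:

```python
# Hypothetical gyro-based straight-driving correction.
def drive_straight_correction(gyro_rate_dps, base_power, kp=0.02):
    """Speed up one side and slow the other in proportion to the
    turn rate the gyro reports (a simple proportional controller)."""
    correction = kp * gyro_rate_dps
    left = base_power + correction
    right = base_power - correction
    clamp = lambda p: max(-1.0, min(1.0, p))  # assumed motor power range
    return clamp(left), clamp(right)

# Robot drifting at 10 deg/s while commanded straight at half power:
left, right = drive_straight_correction(10.0, 0.5)
```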
Essentially, you have a mass (the middle bit) attached by springs (the bits that wind back and forth) to the frame (the outer bit). When you accelerate the frame, the mass lags behind it. The distance it lags is proportional to the acceleration.
This gives us an acceleration-to-relative-position transducer. By measuring the relative position of the mass compared to the frame, we know the acceleration. This is usually done capacitively. Capacitance varies inversely with the distance between the plates, so now we have an acceleration-to-capacitance transducer.
To make the final leap, a circuit must convert capacitance to voltage. This is done in many ways. The easiest to conceptualize is an RC oscillator that varies its frequency with C. Measuring frequency is easily accomplished with digital logic.
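As a toy illustration of the capacitance-to-frequency idea (the part values are invented, and real accelerometer front-ends are more sophisticated than a bare RC oscillator):

```python
import math

# An RC oscillator's frequency falls as capacitance rises
# (f ~ 1 / (2 * pi * R * C) for this simple idealization).
def osc_frequency(resistance_ohms, capacitance_farads):
    return 1.0 / (2 * math.pi * resistance_ohms * capacitance_farads)

R = 100e3                             # assumed 100 kOhm
f_rest = osc_frequency(R, 1.0e-12)    # 1 pF with the mass at rest
f_accel = osc_frequency(R, 1.1e-12)   # capacitance up 10% under load
# f_accel is lower than f_rest, and the ratio tracks the capacitance change.
```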
In reality, companies will use more interesting/complicated methods to make the final capacitance-to-voltage leap. However, the basic mass-spring-to-capacitance arrangement is reasonably standard. The innovation there is how many axes one can fit on the same silicon.
And that creates a MEMS accelerometer. By changing the geometry of the springs and fingers you can create a gyro, but I have a harder time clearly describing that geometry.
Every FIRST robot that I have worked on has used quadrature encoders to measure both distance traveled and velocity. So I’m curious, how does using an accelerometer for velocity and distance traveled compare to an encoder? Is it more or less accurate?
Now that I think about it, I’ve never used an accelerometer on any robots that I’ve built, and I’ve been building quite a few robots over the last 5 years. What other things do FIRST teams use accelerometers for?
Since this is for noobs, can anyone give details on which encoder, gyro, and accelerometer to buy? How do you set one up, and any tips on making them all work well? Also calibrating them. Thanks; this will help one of the new teams near us. We have only used a gyro so far.
So just to make sure I have this straight: it sounds like an accelerometer would tell me when a robot has sped up, slowed down, or tilted. During autonomous, it would be useful for telling if I had unexpectedly hit an obstacle, causing the bot to slow down or tilt.
I might use a gyro in autonomous when trying to see if the bot had unexpectedly swerved, for example, from unbalanced motors, or from being hit from the side. I could use it to correct course if the robot is tending to veer off a straight line or desired course. Is that right?
Can you think of practical examples of using these sensors during operator control? The only sensors we have ever used during OC are limit sensors – to stop an arm from rising when it reached the perfect height, and to stop a joystick-happy driver from grinding the gears.
Right. An accelerometer gives you a signal proportional to acceleration (or a negative value for deceleration.) It is giving you a number that says, “I am speeding up/slowing down at this rate.” Or to use a car speed analogy, it reports a value like “I am increasing my speed at 5 mph per second.” If you were still, it would report zero mph per second, or also if you had a really really good cruise control and were constantly at the same speed, it would report zero mph per second. That’s good, right?
But personally, I don’t know how useful acceleration data actually is. The cop isn’t going to pull me over for accelerating too fast (except reckless driving maybe?), he’s going to pull me over for having a speed that’s too high. But all the information I have from my sensor is acceleration. Well, if I know what speed I start at (let’s say 10 mph) and that I then increased my speed at 5 mph per second for three seconds, I can find my new speed to be 25 mph. So even though I am only directly measuring my acceleration, I can calculate my velocity from that, as long as I know my initial velocity.
Similarly, one can go from velocity to position. If my car went at 25 meters per second for 4 seconds, and 10 meters per second for 3 seconds, then I’ve traveled 130 meters.
This type of trick, essentially performing an integral of acceleration to get velocity, and then an integral of that to get position, is commonly used in order to extract needed information from sensors that don’t necessarily provide the quantities you need.
This can also be done with a gyro, where the original signal coming from the gyro is related to the angular velocity, or how fast it is turning. So it will essentially say “I am turning at 45 degrees per second.” Now if we refer to the above, we can infer that we can say well, if I’ve done that for 3 seconds, that means I’ve turned 135 degrees! So you can use a gyro to keep track of your heading angle. You could then compare it to what the heading angle SHOULD be, and then adjust based on that. So you’re absolutely right about that potential use of the gyro.
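Here is the “45 degrees per second for 3 seconds” example done the way a robot loop actually would, accumulating rate × dt each pass (the 20 ms loop period is an assumption):

```python
# Tracking heading by integrating the gyro's rate signal.
dt = 0.02                 # assumed 20 ms loop period
heading = 0.0
for _ in range(150):      # 150 loops = 3 seconds
    rate = 45.0           # deg/s, as if read from the gyro each pass
    heading += rate * dt
# heading is now about 135 degrees, matching the mental math above
```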
A lot of teams turn off their navigation-related sensors during tele-operated mode, or don’t use them at all. But there has been at least one team that uses their gyro to help them drive perfectly straight during tele-operated mode, and to do very precise turns (1024, at least that’s what I got from talking to them in the pit). So there is certainly scope and application for using navigation in tele-operated mode.
Now you do have to be careful: what I’ve explained above is the gist of how to use the sensors, conceptually. But there are a whole host of other issues to deal with; when/if you start using the sensors, you’ll run into them really quickly:
1.) Sensors that you use are real devices, not ideal. This means that they’ll have measurement noise and drift and all sorts of funny characteristics like temperature dependence. It’s not scary; it just takes a couple more lines of code and a little more thought to compensate. Or, you may get enough accuracy for what you want to do without having to compensate. Actually, I’d try that first.
2.) Doing the integration process above takes a little understanding of how to do it algorithmically, and should be done on a fixed interval to make the math easier to handle (and more stable, mathematically, but that’s a different story). The integration process also introduces errors, but doing it once isn’t so bad (say, for a gyro to get your heading angle). Doing it twice to go from accelerometer to position may be too inaccurate, though. I suspect this is a large reason why most people in FIRST use encoders combined with gyros to figure out their position.
3.) Note that there are scaling factors and stuff to apply to translate the sensor’s method of communicating to the math you’re using. It’s not so bad though, just a little multiplication.
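A quick numeric demo of points 1 and 2 above, with an invented bias value, showing why integrating twice is so much worse than integrating once:

```python
# A tiny constant sensor offset, integrated twice, grows like t^2.
dt = 0.02
bias = 0.01              # assumed 0.01 ft/s^2 of constant offset error
vel_err = 0.0
pos_err = 0.0
pos_err_1s = 0.0
for step in range(1, 501):   # 500 loops = 10 seconds
    vel_err += bias * dt     # velocity error grows linearly with time
    pos_err += vel_err * dt  # position error grows quadratically
    if step == 50:           # snapshot the position error after 1 second
        pos_err_1s = pos_err
# After 10 s the position error is roughly 100x the 1-second error,
# even though the elapsed time only grew by 10x.
```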
Most often, it seems, people use encoders and potentiometers for on-robot information (arm position and stuff) for both teleop and autonomous, and encoders on the wheels combined with a gyro for position estimation.
Hope this helps! This is kind of a “the gist of it” explanation, so for more details just ask!
There is also more than one way to make a MEMS gyro. This link http://www.sensorsmag.com/articles/0203/14/ is pretty good at explaining gyros in the second half of the article, and although it might not be introductory-level reading throughout, it’s a pretty good little write up on accelerometers and gyros.
Using a gyro and an accelerometer, you can tell where your robot is on the field at all times; this is especially useful during autonomous.
You can measure the acceleration up to a cruising speed, continue at that constant speed, then measure how long it takes to slow to a stop, timing the whole process. With this information, you can determine how far the robot has gone. Then if you turn 90 degrees counter-clockwise, the gyro will confirm that it really was 90 degrees.
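One way to turn that timing into distance is to treat the speed profile as a trapezoid (ramp up, cruise, ramp down) and take the area under it. All the numbers below are made up for illustration:

```python
# Distance from a trapezoidal speed profile.
def trapezoid_distance(accel, t_ramp_up, t_cruise, t_ramp_down):
    v_cruise = accel * t_ramp_up            # speed reached after the ramp
    d_up = 0.5 * accel * t_ramp_up ** 2     # distance while accelerating
    d_cruise = v_cruise * t_cruise          # distance at constant speed
    d_down = 0.5 * v_cruise * t_ramp_down   # distance while braking to a stop
    return d_up + d_cruise + d_down

# Assumed: 6 ft/s^2 for 2 s, cruise for 3 s, brake to a stop over 1 s.
distance = trapezoid_distance(6.0, 2.0, 3.0, 1.0)   # 54 ft total
```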
A gyro can be useful during teleop, as well. Let’s say there’s an objective that requires precise and accurate driving, like 2007. Now, you have your robot all lined up ready to score, a team comes by, hits your robot and rotates it enough to keep you from scoring. With a gyro, you can sense rotation that wasn’t caused by your own robot and have it correct itself based on the readings from the gyro. This will quickly and precisely line up your robot again and will avoid any human error that would arise in this situation without a gyro.
Some teams have used a gyro to create field-based control with their crab drives. I don’t know much about this, so I won’t go into it.
Completely dependent on the specific part in question. Digital accelerometers are in the range of 8 to 12 bits (a couple of thousandths of a g per count), but you can buy better or worse. Really what kills you is part-to-part variability: you can easily get 20% sensitivity variability and/or offset variability, so be sure to calibrate if you need to. Also, some of them are really temperature dependent.
Sorry if these seem like stupid questions, but: does g stand for grams, or is it the acceleration of Earth’s gravity? Also, am I correct to assume the lower the magnitude of g, the better?
I think the confusion revolves around what you mean by better. Do you mean easier to measure? Or which is greater?
100 g is a much greater acceleration than 1 g. If you are talking about a choice between a sensor with a range of 1 g and a sensor with a range of 100 g, then it matters what you are measuring and what kind of accuracy you need.
For a given resolution (in bits), a sensor with a larger range will have larger steps (less accurate for small changes in acceleration). A sensor with a smaller range may be saturated if the acceleration is larger than the range (for example, 5 g of acceleration applied to a 1 g sensor).
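Putting numbers on that tradeoff for a hypothetical 10-bit sensor (real parts differ, per the part-variability caveat above):

```python
# Step size (smallest detectable change) for an ideal N-bit sensor
# spanning +/- full_range_g.
def step_size_g(full_range_g, bits):
    return (2 * full_range_g) / (2 ** bits)

small = step_size_g(1, 10)     # +/-1 g, 10-bit: ~0.002 g per count
large = step_size_g(100, 10)   # +/-100 g, 10-bit: ~0.2 g per count
# Same bit count, 100x the range, so 100x coarser steps.
```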
It depends on the application (i.e. what are you using it for?).
If you’re using the accelerometer for an anti-lock brake system, you would want to use a low-g range, like a 1.5g accelerometer, since the largest acceleration you should see under braking is about 1g, (the extra 0.5g is because you’ll want a little room for vibrations and noise so you don’t get distortion in your filters, but that’s a more advanced topic).
If you’re using the accelerometer for a side-impact airbag crash sensor, you’ll want to use an accelerometer in the 250g range since you mount it right next to the impact and you can see accelerations well above 100 g’s.
The idea is that you pick your sensor by determining (from physics or tests) what is the largest acceleration you will see. You should then select a sensor that can measure with just a little more range than this maximum.
So if I wanted to use an accelerometer to tell how fast a robot is moving (constantly adding/subtracting the acceleration from the speed), I would want a 2 g accelerometer, since the field is 54 feet in length, which divided by 32.174 (Earth’s gravity) = 1.678, which rounds up to 2.
would that be correct?
How accurate is a 2g accelerometer?
Time for some dimensional analysis: 54 ft ÷ 32.174 ft/s² = 1.678 s². Since g is not given in s² (seconds squared), you probably shouldn’t rely on that calculation.
Constant acceleration of 2 g means that for every second the robot moves, its velocity increases by 2 × 9.81 = 19.62 m/s, or about 64.4 ft/s. I can’t think of any FIRST robot that can sustain that sort of acceleration for any meaningful length of time. (So it turns out that a 2 g sensor might be sufficient for measuring your robot’s acceleration due to its own drivetrain, but for different reasons than you suggested.)
How fast is your robot, and approximately how long does it take before it reaches its top speed? That will help you find your average acceleration. You’ll want a sensor that can handle that range for sure, but like Chris said, if you’re depending on this sensor to measure anything but the idealized performance of your drivetrain—for instance, stopping due to hitting things on the field—you’ll need to expand the range by some unknown amount.
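As a back-of-the-envelope example of that sizing question, with assumed numbers (a robot that reaches 12 ft/s in 1.5 seconds):

```python
# Average drivetrain acceleration from top speed and time-to-top-speed.
top_speed_fps = 12.0       # assumed top speed, ft/s
time_to_top_s = 1.5        # assumed time to reach it, s
avg_accel_fps2 = top_speed_fps / time_to_top_s   # 8 ft/s^2
avg_accel_g = avg_accel_fps2 / 32.174            # ~0.25 g
# So even a 1.5 g sensor leaves headroom for the drivetrain alone;
# collisions are what push you past that.
```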
The accuracy of the device itself will be published on its datasheet, provided by the manufacturer (or sometimes the distributor of the part). It’s usually a function of the device’s temperature, so many incorporate a temperature sensor.
As a side note, if you want to know how far your robot has travelled, you’re much better off using encoders. With that being said, I have 10 years of engineering experience in control systems using inertial sensors (inertial sensors = accelerometers and gyros), so I enjoy talking about this stuff. So, here goes the answer to your question…
Is your robot slip limited? In other words, if you put your robot on the carpet facing a wall and apply full power, do the wheels slip or do the motors stall?
If the robot is slip limited, the highest acceleration (in g’s) that the robot can experience under its own power equals the coefficient of friction between the wheels and the carpet (if you don’t believe that, go through the math/physics). Therefore, if your coefficient of friction is 1.5, you can accelerate up to 1.5 g. Once again, you’d want to pad this to account for vibration and noise, so you might want to select a 3 g accelerometer. Why pad for vibration and noise? See the “VERY IMPORTANT” section below.
If the robot is torque limited (wheels don’t ever slip), then you need to calculate the force applied to the carpet by all of the wheels. You can do this by either going through all of the gear-train calculations, or just put a scale against the wall as you drive into it. Then use Newton’s handy-dandy 2nd law (Force = mass*acceleration) to solve for the maximum acceleration.
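A worked version of the scale-against-the-wall method, with invented numbers. In US units, weight in pounds divided by g gives mass in slugs, so the ratio of push force to robot weight comes out directly in g’s:

```python
# Maximum acceleration for a torque-limited robot, from a scale reading.
scale_reading_lbf = 60.0    # assumed: force shown on the scale at full power
robot_weight_lbf = 120.0    # assumed robot weight
max_accel_g = scale_reading_lbf / robot_weight_lbf   # 0.5 g
max_accel_fps2 = max_accel_g * 32.174                # ~16 ft/s^2
```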
VERY IMPORTANT: If you EVER saturate the accelerometer, your velocity and distance calculations will be forever wrong after that point. For example, let’s say you are using a 3 g accelerometer and your robot bumps into something and experiences 4 g’s for 0.1 seconds. Your accelerometer can only measure 3 g’s, so for 0.1 sec you think you’re accelerating at 3 g’s when in fact you’re actually accelerating at 4 g’s. Your velocity calculation will then be off by 3.2 ft/s, and your position calculation error will grow to infinity (at a rate of 3.2 feet per second; see the connection?). Not a good situation. There is a lot of engineering work that goes into solving these types of problems on antilock brake systems and traction control systems; a lot more time than you have in the 6-week build period, which is why I would suggest encoders.
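The error in that example can be checked with a couple of lines of arithmetic:

```python
# Permanent error from clipping a 4 g bump on a 3 g sensor for 0.1 s.
g_fps2 = 32.174
clipped_g = 4.0 - 3.0                        # the 1 g the sensor never saw
vel_error_fps = clipped_g * g_fps2 * 0.1     # ~3.2 ft/s, forever after
pos_error_after_10s = vel_error_fps * 10.0   # ~32 ft ten seconds later, and growing
```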