Thread: IMUs
  #15   18-01-2016, 12:21
juchong is offline
Electrical Engineer
AKA: Juan Chong
FRC #2655 (Flying Platypi)
Team Role: Engineer
 
Join Date: Aug 2008
Rookie Year: 2008
Location: Greensboro, NC
Posts: 105
juchong is a jewel in the rough
Re: IMUs

Quote:
Originally Posted by otherguy View Post
ADXL345 - this has been in the KOP for a number of years. It's your standard rate gyro. In my experience you're looking about a degree of drift every few seconds. The drift makes using this sensor difficult.
This sensor is actually an accelerometer, not a gyro!

Quote:
Originally Posted by otherguy View Post
ADXRS453 Gyro ($76 eval board from Digi-Key). We used this last year with great success. I believe a number of other teams have used this sensor as well. Pretty sure it's what's on the Spartan MXP board that came out this year. We have observed about 1-2 degrees of drift over the duration of the match. Our code to interface to it. This is a 1-axis gyro, so it's only going to provide heading. And like other rate gyros, it requires calibration to take place when the robot is sitting still. We've written code to detect when the calibration is inaccurate and force a re-cal. If you go this route, pay attention to the method supporting calibration inside of the Robot class.
Everyone will be receiving an ADXRS450 in their FIRST Choice shipment this year! This sensor won't give you position information, but it should be helpful for crossing defenses in autonomous. Code was also added to the official WPI libraries, so getting started should be easy!
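For teams curious what a rate-gyro driver like that boils down to: estimate a bias while the robot sits still, then integrate the bias-corrected rate into a heading. A minimal Python sketch of the idea (class and method names are illustrative, not WPILib's actual API):

```python
# Toy model of a rate gyro: calibrate a bias at rest, then integrate
# (rate - bias) over time to produce a heading. Names are illustrative.

class RateGyro:
    def __init__(self):
        self.bias = 0.0      # deg/s read while motionless
        self.heading = 0.0   # accumulated heading, degrees

    def calibrate(self, samples):
        """Average rate readings taken while the robot is motionless."""
        self.bias = sum(samples) / len(samples)

    def update(self, rate_deg_per_s, dt_s):
        """Integrate the bias-corrected rate into the heading."""
        self.heading += (rate_deg_per_s - self.bias) * dt_s
        return self.heading

gyro = RateGyro()
gyro.calibrate([0.5, 0.4, 0.6])   # ~0.5 deg/s of bias at rest
for _ in range(100):              # 1 s of a 90 deg/s turn, sampled at 100 Hz
    gyro.update(90.5, 0.01)
print(round(gyro.heading, 3))     # → 90.0
```

Skipping the calibration step is exactly what produces the drift described above: the uncorrected bias integrates into heading error second after second.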

Quote:
Originally Posted by otherguy View Post
Joe, thanks for pointing that out. I hadn't looked at the specs on this board yet. Looks like the difference between the 450 and 453 is 25 deg/hr vs 16 deg/hr drift rating respectively.
The big difference between the 450 and the 453 is an additional calibration step at temperature.

Quote:
Originally Posted by Sparks333 View Post
Hello!

Robotics engineer specializing in inertial nav here. Pretty much all inexpensive chip-scale MEMS IMUs are going to perform fairly similarly in terms of noise and stability over time, which is to say, fairly terribly when trying to do positioning. There are many barriers that make this difficult, but the way to think about it is that you're integrating acceleration twice to find position, and the sensor is rarely, if ever, calibrated such that sitting motionless reads exactly zero acceleration - something called a sensor bias. In practice, what this means is that if you just sit there integrating, you'll slowly drift off into outer space, and with these sensors the drift will be meters per minute (and it grows much faster than linearly with time). Some sensors and software packages are capable of detecting zero-motion conditions (literally, when the acceleration looks so small that the robot assumes there is no motion) and take that opportunity to get a better estimate of their biases (known in the lingo as a zero-velocity update, or ZUPT), but with how often a FIRST robot is stationary I doubt it's going to get a solid estimate during the game.

The answer to many of life's difficulties, finding position with an IMU included, is to have an external aid of some sort - usually an encoder - and then fuse the IMU data and encoder data (encoders can be used as velocities or as incremental position estimates; we've had better luck in incremental position mode, simply because differentiation is usually noisy). That allows you to take different measurements, even ones that seem to disagree slightly, and put them together in a way that trusts particular sensors in different scenarios. The typical way to do that is a Kalman filter, but that may be beyond the scope of all but the most dedicated FIRST teams.

Instead, what is usually done when you must have displacement but don't have the time and/or resources to do a full inertial nav system is to use an IMU in AHRS mode and then use encoders to get incremental position. Use the AHRS to find the direction of your motion and the encoders to get the magnitude of the motion, and hey presto, a not-terrible position estimate. Note that it will still degrade pretty rapidly, and spinning your wheels without moving (like getting into a shoving match) will destroy the position estimate, but it circumvents a lot of the typical issues of an inertial-only estimate - error grows linearly with time instead of much faster than linearly, and is guaranteed to stop drifting when motionless.

Using external aids like the vision system and field markers is the hands-down best way to ensure good field position estimates, but is often difficult and/or computationally expensive. FIRST does a great job of making some of the more advanced sensors easy to use, but it still takes an awful lot of finagling to get everything working right. In short, if you're going to do something clever like that, leave plenty of time for tweaking.

Sparks
I agree! Consumer-grade sensors don't usually include calibration to remove misalignments, offset, etc. from the sensor outputs. That's why I encourage teams to use the ADIS16448! The sensor doesn't have a built-in AHRS mode, but I've put together an AHRS library which calculates Euler angles and allows your robot to use them for navigation in LabVIEW!
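The zero-velocity update Sparks describes can be sketched in a few lines: watch a window of recent accelerometer readings, and when the window looks quiescent, adopt its mean as the new bias estimate. A toy Python illustration, with made-up window size and threshold rather than tuned values:

```python
# Toy ZUPT-style bias estimator: if the spread of the last N accelerometer
# readings is tiny, assume the robot is motionless and treat the window
# mean as the current sensor bias. Thresholds here are illustrative only.

from collections import deque

class ZuptBiasEstimator:
    def __init__(self, window=50, still_threshold=0.05):
        self.readings = deque(maxlen=window)
        self.still_threshold = still_threshold  # m/s^2 spread that counts as "still"
        self.bias = 0.0

    def add(self, accel):
        """Feed one accelerometer sample; return the bias-corrected value."""
        self.readings.append(accel)
        if len(self.readings) == self.readings.maxlen:
            spread = max(self.readings) - min(self.readings)
            if spread < self.still_threshold:
                # Quiescent window: re-estimate the bias from its mean.
                self.bias = sum(self.readings) / len(self.readings)
        return accel - self.bias

est = ZuptBiasEstimator(window=10)
for _ in range(10):
    est.add(0.02)          # robot sitting still, sensor stuck at 0.02 m/s^2
print(round(est.bias, 3))  # → 0.02
```

As Sparks notes, the catch in a FIRST match is that the robot rarely sits still long enough for the quiescent window to fill.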
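The AHRS-plus-encoder scheme from the quote above is also simple to sketch: take the heading from the gyro/AHRS, take the distance increment from the encoders, and accumulate a 2D position. A minimal Python illustration (class name and units are assumptions, not any team's actual code):

```python
# Heading + encoder dead reckoning: each encoder increment is projected
# along the current AHRS heading and accumulated into (x, y).

import math

class DeadReckoner:
    def __init__(self):
        self.x = 0.0  # meters
        self.y = 0.0  # meters

    def update(self, heading_deg, delta_distance_m):
        """Add one encoder distance increment along the current heading."""
        theta = math.radians(heading_deg)
        self.x += delta_distance_m * math.cos(theta)
        self.y += delta_distance_m * math.sin(theta)
        return self.x, self.y

dr = DeadReckoner()
for _ in range(100):
    dr.update(0.0, 0.01)   # drive 1 m along the 0-degree heading
for _ in range(100):
    dr.update(90.0, 0.01)  # turn, then drive 1 m along the 90-degree heading
print(round(dr.x, 3), round(dr.y, 3))  # → 1.0 1.0
```

This also makes the shoving-match failure mode concrete: if the wheels spin in place, the encoders report distance that was never traveled, and every phantom increment lands directly in the position estimate.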
__________________
Teams I've worked with:
My Website: http://www.juanjchong.com/
What I do: Analog Devices iSensor Product Engineer

Last edited by juchong : 18-01-2016 at 12:23.