Our team was wondering if it would be possible/practical to buy a bunch of cheap accelerometers and average their values to get less noisy data. Currently we are looking into buying 5-10 of either the LIS3DH accelerometers or the ADXL343 accelerometers. Any help/feedback is appreciated! (Also any data if you have it!)
I looked into this a while back. Assuming the noise in the data from the individual accelerometers is white and uncorrelated, the standard deviation of the noise in the averaged data decreases with the square root of the number of accelerometers. I’ll try to find some of the relevant research in the morning.
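For the record, the math behind that: if each sensor’s noise is uncorrelated with standard deviation (\sigma), the average of (N) sensors has standard deviation (\sigma_{avg} = \sigma / \sqrt{N}), so 5 sensors cut the noise to about 45% ((1/\sqrt{5} \approx 0.45)) and 10 sensors give roughly a 3.2x reduction.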
What are you using the output of the accelerometers for exactly?
If you just want less noisy acceleration values, I would recommend looking at software filtering first. The frc-docs accelerometer software section does a good job of explaining this and how to use the filtering classes that are already in WPILib.
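For example, a median filter plus a low-pass filter over the RIO’s built-in accelerometer only takes a few lines. This is just a sketch using classes from recent WPILib (package paths vary a bit between WPILib versions):

```java
import edu.wpi.first.math.filter.LinearFilter;
import edu.wpi.first.math.filter.MedianFilter;
import edu.wpi.first.wpilibj.BuiltInAccelerometer;

public class FilteredAccel {
  private final BuiltInAccelerometer accel = new BuiltInAccelerometer();

  // Median filter rejects occasional spikes (e.g., from impacts).
  private final MedianFilter spikeFilter = new MedianFilter(5);

  // Single-pole IIR low-pass: ~0.1 s time constant, assuming a 20 ms loop.
  private final LinearFilter lowPass = LinearFilter.singlePoleIIR(0.1, 0.02);

  /** Call once per loop iteration; returns smoothed X acceleration in g. */
  public double getFilteredX() {
    return lowPass.calculate(spikeFilter.calculate(accel.getX()));
  }
}
```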
We already tried this, and while it helped it wasn’t accurate enough.
Can you explain what you’re trying to do? In FRC, accelerometers are generally useful for either detecting collisions or sensing the “static” acceleration of gravity (e.g., “is my robot tilted?”). If you’re trying to double-integrate acceleration for navigation, you’re pretty much out of luck. As Kauai Labs explains, current MEMS accelerometers just aren’t good enough for dead-reckoning navigation. More sophisticated systems can maintain a good inertial fix for quite a long time, but they’re probably not applicable to FRC.
I was trying to get acceleration and velocity data; velocity only needs a single integration, so it’s less noisy than position, but still pretty bad. Would this be feasible? I was going to use this along with encoder odometry, so I’m not too worried about getting super accurate velocity estimates, but this would be the only sensor that directly measures acceleration.
The problem with pure integration is that errors accumulate. You could sit perfectly still for the duration of a match, but because you’re still integrating the accelerometer noise, your math might say that you have a 200 ft/s velocity by the end of the match. The solution to this, or at least a partial solution, is what is generically called “sensor fusion”. The idea is to semi-intelligently blend all of the various sensor data at your disposal into a coherent robot “state”*.
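If you want to convince yourself, here’s a toy simulation of a robot sitting perfectly still (plain Java; the noise and bias numbers are made up but in a plausible ballpark):

```java
import java.util.Random;

public class DriftDemo {
  public static void main(String[] args) {
    Random rng = new Random();
    double dt = 0.02;           // 50 Hz loop
    double noiseStdDev = 0.05;  // m/s^2 white noise (illustrative)
    double bias = 0.02;         // m/s^2 constant sensor offset (illustrative)
    double velocity = 0.0;

    // The robot sits perfectly still for a 150-second match.
    for (double t = 0.0; t < 150.0; t += dt) {
      double measuredAccel = bias + rng.nextGaussian() * noiseStdDev; // true accel is 0
      velocity += measuredAccel * dt; // naive integration
    }

    // The bias integrates linearly (~3 m/s here) and the noise random-walks on top of it.
    System.out.printf("Velocity after sitting still for 150 s: %.2f m/s%n", velocity);
  }
}
```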
You want some kind of algorithm that can say, “my accelerometer is telling me that I’m accelerating forwards slightly, but my wheel encoders aren’t moving; it’s probably accelerometer noise.” At the same time, you want it to be able to say, “my accelerometer just registered a really big acceleration to the left, but the wheel encoders didn’t move; I probably got rammed by another robot, and I shouldn’t trust the encoders for a while.”
This sort of thing is often implemented as a Kalman filter. I recommend @calcmogul’s book for a mathematical treatment, but check out the ROS robot_localization node documentation for a good flexible implementation. What I like about that implementation is that it’s easy to specify how specific sensors should be used. For example, you can tell it to allow accelerometers to contribute only to the acceleration part of the state, allow wheel encoders to contribute to velocity and acceleration, and occasionally update the position with vision processing results.
*State is basically just a representation of what your robot is doing at a given instant in time. Aircraft, for example, have to deal with all six degrees of freedom (latitude, longitude, altitude, pitch, roll, yaw) and their various time derivatives. In FRC, since our robots are (usually) on the floor, that can be simplified to (x, y, theta) and their time derivatives.
That’s actually what I was planning on doing (totally forgot to mention that, sorry). Currently I’m planning on just using the navX gyro for angle, so the state vector would be X and Y position, velocity, and acceleration. I’m not sure about that last one, though, since the only sensor that directly measures acceleration would be the accelerometer. Would there be any issues with just taking the second derivative of encoder measurements to get acceleration measurements?
This is a really interesting/involved area. Google “sensor fusion imu low cost” as one starting point – there’s a ton of stuff out there in this space. My sense of things is that this is a better approach than trying to average multiple accelerometers, and some decent options are pretty cheap.
If you want to try something a little more custom, here’s a link to one possible starting point. Not super cheap, but perhaps worth the price.
Searching for other threads here turns up Comparison of IMUs for FRC, which looks helpful.
Also see this heading on that page: Can Sensor Fusion Fix a Poor Quality Sensor?
Just the topic of noise is pretty involved, as it turns out.
Why do you want velocity and acceleration? Are you planning to use this data in control loops for path following type applications?
If so, I think you would have better luck using encoders on your wheels rather than accelerometers as your sensors. Encoders would not have the issues with integration error or noise that others have brought up.
If you are worried about wheel slip / scrub introducing error, you could add a dedicated two axis follower wheel using two orthogonal omni-wheels with encoders on each shaft that could measure the X and Y motion of the robot independent of the drive axles.
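If it helps, the odometry math for that follower-wheel setup is short. A rough Java sketch (the class and its inputs are hypothetical; you’d feed in your real encoder distances and gyro heading):

```java
/**
 * Tracks field-relative position from two orthogonal follower ("dead") wheels
 * plus a gyro heading.
 */
public class DeadWheelOdometry {
  private double x = 0.0;
  private double y = 0.0;
  private double lastForward = 0.0;
  private double lastStrafe = 0.0;

  /** Call periodically with cumulative wheel distances (meters) and heading (radians). */
  public void update(double forwardDistance, double strafeDistance, double headingRad) {
    double dForward = forwardDistance - lastForward; // robot-frame delta since last call
    double dStrafe = strafeDistance - lastStrafe;
    lastForward = forwardDistance;
    lastStrafe = strafeDistance;

    // Rotate the robot-frame displacement into the field frame and accumulate.
    x += dForward * Math.cos(headingRad) - dStrafe * Math.sin(headingRad);
    y += dForward * Math.sin(headingRad) + dStrafe * Math.cos(headingRad);
  }
}
```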
You can definitely answer this theoretically, but it might be more illustrative to answer it by buying a bunch of accelerometers and putting them into a simple Kalman filter. You can get packs of MPU6050s on Amazon for very cheap, easily readable by an Arduino over I2C. Some questions you should think about during this:
- Is the noise Gaussian (normally distributed)? If not, is it a reasonable approximation or is there some transform you can do to make it so?
- How can you quantify the noise as a function of the accelerometer?
- Is the noise on each axis independent from the other axes? How do you determine the covariance?
These are all essential questions for creating a system (e.g., a Kalman filter) that removes the noise.
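To attack the last two questions empirically, you can log a few thousand samples with the sensor sitting still and compute the sample covariance of the axes. A sketch (assumes you’ve already collected the samples into arrays):

```java
/**
 * Computes the 2x2 sample covariance of resting-state accelerometer readings.
 * A strongly diagonal result suggests the axes' noise is roughly independent.
 */
public static double[][] sampleCovariance(double[] ax, double[] ay) {
  int n = ax.length;
  double meanX = 0.0;
  double meanY = 0.0;
  for (int i = 0; i < n; i++) {
    meanX += ax[i];
    meanY += ay[i];
  }
  meanX /= n;
  meanY /= n;

  double cxx = 0.0;
  double cyy = 0.0;
  double cxy = 0.0;
  for (int i = 0; i < n; i++) {
    cxx += (ax[i] - meanX) * (ax[i] - meanX);
    cyy += (ay[i] - meanY) * (ay[i] - meanY);
    cxy += (ax[i] - meanX) * (ay[i] - meanY);
  }

  // Unbiased estimator divides by n - 1.
  return new double[][] {
    {cxx / (n - 1), cxy / (n - 1)},
    {cxy / (n - 1), cyy / (n - 1)}
  };
}
```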
With no BOM I can finally use my ring laser gyros
Two words. Kalman Filter.
Curious for a thought experiment… could there be any use in some “cube” of PCBs with ~100 sensors, sampled synchronously and then averaged by some central processor?
Or, though ASIC design seems a bit “out of scope” for FRC… why not take said 100 sensors and put them on a single die?
I’m not quite sure what to google to figure out if someone has done this or not yet. I guess the real question becomes, how many sensors do you need before the results are same-as or better-than encoders/kalman & friends?
That was actually kind of similar to my original idea: get maybe a square/stack of 9, feed all of that data into a Kalman filter on an Arduino at 5 kHz (depending on the sensor), then maybe average all those values over 1 ms and send them to the roboRIO like that. Would that help?
Noise effects can be minimized in hardware (multiple sensors, and/or filtering if the signal is analogue) and/or software (filtering).
Drift is more difficult to deal with, though using sensor fusion to “recalibrate” periodically can help. My understanding is that MEMS accelerometers suffer from drift over time.
I’ve floated the idea of using three gyros on a “voting” system before, but we’ve never tried it. For each sample, take the average of the two closest values and then re-zero all three to the new value. I tried an excel simulation of this using the RAND() function to simulate noise and it wasn’t incredibly effective so I dropped the idea.
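For what it’s worth, the per-sample voting step would look something like this (just a sketch of the scheme described above; the caller would then re-zero all three gyros to the returned value):

```java
/** Returns the average of the two closest of three gyro readings. */
public static double voteOfThree(double a, double b, double c) {
  double ab = Math.abs(a - b);
  double ac = Math.abs(a - c);
  double bc = Math.abs(b - c);
  if (ab <= ac && ab <= bc) {
    return (a + b) / 2.0;
  } else if (ac <= bc) {
    return (a + c) / 2.0;
  } else {
    return (b + c) / 2.0;
  }
}
```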
Edit: in retrospect, this wasn’t a terribly helpful post.
I think it’s very helpful, just for a different problem.
For FRC, the new WPILib EKF and UKF implementations would probably be more appropriate than the ones in ROS.
For this problem, users can copy and modify the wrappers we wrote that fuse vision and encoder measurements to get started (although I wouldn’t recommend it; fusing vision and encoders is likely going to work out much better than fusing accelerometer data.)
Everyone likes to suggest a Kalman filter without getting into the hairy details. Hopefully I can give a somewhat coherent suggestion of these details. I’m going to assume that the reader understands the basic parts involved in making a model for a Kalman filter. If you’re a little lost, you should take some time to understand the role of the state, measurement, and input vectors, and the way that the process and measurement model functions f and y map these vectors to each other in a UKF or EKF (Tyler’s book is a good resource for this.)
You’ll likely want to base your KF model off WPILib’s DifferentialDriveStateEstimator
(which actually uses a UKF.) You can keep the state vector (\mathbf{x} = [x, y, \theta, v_l, v_r, d_l, d_r, V_{\text{err},l}, V_{\text{err},r}, \theta_{\text{err}}]) and the input vector (\mathbf{u} = [V_l, V_r]) the same. Finally, you can replace (assuming you don’t want to fuse vision measurements) the global measurement vector (\mathbf{y} = [x, y, \theta]) with a new measurement vector for each accelerometer you have and run the correct step once per accelerometer; each of these measurement vectors should be (\mathbf{y} = [a_x, a_y]) (you could also add a (\theta) here if you’re using IMUs that can also estimate robot yaw.)

Now that you’ve done this, all you need is a function that maps your states and inputs to these measurements… I was going to write this out here but I got a little tired of it (the dynamics math is rather long.) You can mostly copy what 971 has into a function (y(\mathbf{x}, \mathbf{u})). The basic idea of their math is that they’re mapping the voltages applied to the wheels to whole-robot accelerations. Note that their state vector doesn’t include x, y, or \theta (which is why they can use a KF instead of a nonlinear observer), so you’ll have to add some extra zeroes accordingly.
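As a very rough sketch of the shape of that mapping (simplified physics, not 971’s actual dynamics): for a differential drive with chassis speed (v = (v_l + v_r)/2) and yaw rate (\omega = (v_r - v_l)/(2 r_b)), where (r_b) is the distance from the robot’s center to each wheel, an accelerometer mounted at the center of rotation should read roughly (a_x = \dot{v}) longitudinally and (a_y = v \omega) centripetally, with (\dot{v}) coming from the drivetrain’s motor model evaluated at the applied voltages.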
EDIT: After reading 971’s dynamics math a little more I’m not exactly sure why they divide by g… I think it’s because their IMU (the ADIS16470) reports accelerations in multiples of g.
Averaging data from multiple accelerometers should yield the theoretical square-root noise reduction.
However, all of this assumes (a) that the data from each accelerometer is acquired at the same instant in time and (b) that each accelerometer IC is mounted in precisely the same orientation (any skew here will introduce errors).
So to be successful, be prepared to spend sufficient engineering time to carefully manage the timing of data from each of these sensors to handle (a) and to perform very careful sensor alignment when laying out your circuit board [or alternatively, prepare yourself for some painstaking calibration work to correct for axial misalignment between sensors].
If you take care of all that, keep in mind that to actually use this information, if your robot turns at all during motion you will need to rotate the acceleration vectors from the “sensor reference frame” to the “robot reference frame”. Since the heading used here may itself have error in it, that error will propagate as the acceleration vectors are rotated.
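The rotation itself is a standard 2D frame transform (sketch below); note that any heading error feeds into both output components:

```java
/** Rotates a sensor/robot-frame acceleration vector into the field frame. */
public static double[] toFieldFrame(double axRobot, double ayRobot, double headingRad) {
  double cos = Math.cos(headingRad);
  double sin = Math.sin(headingRad);
  return new double[] {
    axRobot * cos - ayRobot * sin,
    axRobot * sin + ayRobot * cos
  };
}
```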
There are other tricks that are used to reduce error in the eventual calculation of velocity and displacement. These include trapezoidal integration when calculating velocity. This is a great reference, and it’s the approach followed in the navX2-Micro and the soon-to-be-released navX2-MXP: https://pdocs.kauailabs.com/navx-mxp/wp-content/uploads/2015/04/ImplementingPositioningAlgorithmsUsingAccelerometers.pdf
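Trapezoidal integration is a one-line change from the naive rectangle rule; a minimal sketch:

```java
/** Integrates acceleration into velocity using the trapezoidal rule. */
public class VelocityIntegrator {
  private double velocity = 0.0;
  private double lastAccel = 0.0;

  /** Call at a fixed rate with the latest acceleration sample. */
  public double update(double accel, double dtSeconds) {
    // Average the previous and current samples instead of using only the current one.
    velocity += 0.5 * (lastAccel + accel) * dtSeconds;
    lastAccel = accel;
    return velocity;
  }
}
```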
There is also another technique referred to as zero-velocity updates. The basic idea here is that if you can detect when the robot is not moving (for instance, when the acceleration values on the X/Y axes get very small), the integration of accelerometer data can be temporarily stopped, which helps remove errors.
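A bare-bones zero-velocity detector might look like this (the threshold and sample count are made-up values you’d tune on a real robot):

```java
/** Flags when the robot appears stationary so integrated velocity can be reset to zero. */
public class ZeroVelocityDetector {
  private static final double ACCEL_THRESHOLD = 0.02; // g, illustrative
  private static final int REQUIRED_SAMPLES = 25;     // consecutive quiet samples

  private int quietSamples = 0;

  /** Returns true if velocity should be clamped to zero this cycle. */
  public boolean isStationary(double ax, double ay) {
    if (Math.hypot(ax, ay) < ACCEL_THRESHOLD) {
      quietSamples++;
    } else {
      quietSamples = 0;
    }
    return quietSamples >= REQUIRED_SAMPLES;
  }
}
```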
Finally, be sure to get good accelerometers. The ST Micro ISM330DHCX IMU used in the new navX2-MXP has an accelerometer noise spec of 60 µg/√Hz. So if you used 5 of them, the theoretical noise would be about 27 µg/√Hz (60/√5), if you pay attention to issues (a) and (b) mentioned above. A nice attribute of an IC like this is that it has a FIFO for data, and each sample can be timestamped with microsecond-scale precision. You’ll find that helpful if you’re going to be sampling data from multiple accelerometers simultaneously.
Enjoy your efforts, I expect they will be very intellectually rewarding.