How to get reliable linear motion data from NavX

My team is experimenting with the NavX, mainly using it to correct our course of motion. However, most of the data we get back is quite unreliable. When we move our robot 5 feet, the reported displacement rarely peaks above 0.1 meters. I know the NavX is not well known for good XYZ motion accuracy, but does anyone know of some methods that could refine the data, maybe to within about an inch? If not, we could probably use other data from the device to correct ourselves.

1 Like

I wish it worked. (emphasis on EXPERIMENTAL)

Unfortunately, this technology, as far as I know, doesn’t exist yet. If you want to know your relative location, I’d highly recommend using encoders/odometry. That’s the way to go right now.

However, the NavX is a pretty nice gyro, and can be used for all sorts of path planning/trajectory applications. Maybe try looking into that?

1 Like

The displacement value is calculated by double integrating noisy accelerometer data; there is no way to get within an inch of precision this way. You can use the NavX in combination with a pair of encoders and use odometry instead.
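To see why double integration falls apart, here is a minimal sketch (the class name, the 0.01 m/s² bias, and the 50 Hz loop rate are illustrative assumptions, not NavX specifications) showing how even a tiny constant accelerometer bias explodes into huge position error over one match:

```java
/** Sketch: why double-integrating accelerometer data drifts badly. */
public class DriftDemo {
    /**
     * Double-integrates a constant accelerometer bias on a robot that is
     * actually stationary, returning the (wrong) estimated displacement.
     */
    public static double simulateDrift(double biasMetersPerSec2, double dt, int steps) {
        double v = 0; // integrated velocity, m/s
        double x = 0; // integrated position, m
        for (int i = 0; i < steps; i++) {
            v += biasMetersPerSec2 * dt; // first integration
            x += v * dt;                 // second integration
        }
        return x;
    }

    public static void main(String[] args) {
        // 0.01 m/s^2 bias, 50 Hz loop, 150 s (one full match).
        double drift = simulateDrift(0.01, 0.02, 7500);
        System.out.printf("estimated displacement while stationary: %.1f m%n", drift);
    }
}
```

With those assumed numbers the "stationary" robot appears to have moved over 100 meters, which is why encoder-based odometry wins here.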

https://docs.wpilib.org/en/latest/docs/software/kinematics-and-odometry/differential-drive-odometry.html
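For a flavor of what that odometry does under the hood, here is a stripped-down sketch of the update step (a simplified version of the math, not the WPILib API; the class and field names are made up for illustration) that fuses cumulative encoder distances with the NavX heading:

```java
/** Minimal dead-reckoning sketch: encoders for distance, gyro for heading. */
public class OdometrySketch {
    public double x = 0, y = 0;               // field-relative pose, meters
    private double prevLeft = 0, prevRight = 0; // last encoder readings, meters

    /**
     * Call every loop with cumulative left/right encoder distances (meters)
     * and the gyro heading (radians, counterclockwise-positive).
     */
    public void update(double leftMeters, double rightMeters, double headingRad) {
        double dLeft = leftMeters - prevLeft;
        double dRight = rightMeters - prevRight;
        double dCenter = (dLeft + dRight) / 2.0; // chassis forward travel
        x += dCenter * Math.cos(headingRad);
        y += dCenter * Math.sin(headingRad);
        prevLeft = leftMeters;
        prevRight = rightMeters;
    }
}
```

Driving 5 feet (1.524 m) straight at heading zero lands the estimate at x = 1.524 m, which is exactly the reliability the accelerometer approach can't deliver. WPILib's `DifferentialDriveOdometry` does this more carefully (it uses a pose exponential for curved segments), so prefer it over rolling your own.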

7 Likes

Jinx! You owe me a soda or something

1 Like

It’s not that the technology doesn’t exist; it’s that the NavX’s sensors aren’t physically accurate enough to produce the required data. You can pass the output through all the filters you want, but you’re never going to get a good enough picture of where the robot is without additional sensors.

You can also get better IMUs that can estimate position more accurately, but those are well beyond the FRC price range.

3 Likes

As others have mentioned, the NavX, or more specifically the underlying inertial sensors used on the NavX, isn’t capable of the performance necessary for calculating displacement on its own. You may be able to achieve the performance you’re looking for by fusing several sensor types, but it’s unrealistic to expect a consumer-grade sensor to deliver such precise results by itself. Even the ADIS16470 IMU offered to teams isn’t enough to maintain heading for an entire match, though it gets teams much closer. That IMU’s bigger brothers, the ADIS16495/16497 family, would provide the performance to track the robot for a whole match without sensor fusion, but that sensor family is too expensive to be used in FRC. Teams have achieved excellent results by fusing the ADIS16470 and encoders to generate a very accurate position estimate.

Snarky response:
Accelerate very quickly or not at all so that the acceleration measurement errors don’t impact the double integration as much.

2 Likes