navX 2016 Software Reset Displacement

We are looking at using the navX displacement values for automode.

We are communicating with it via SPI, and receiving yaw, pitch, roll, etc. We can reset the yaw. However, I cannot find a function to reset the displacement. Is there one?

Do you have any other tips for using displacement? I may have other questions moving forward.

In C++ there is a ResetDisplacement function, so I’m surprised it’s not in LabVIEW. I guess you’ll have to define your own offset variables and subtract those from the values.
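For illustration, here’s a minimal sketch of that offset-variable approach. It’s in C++ for readability (the same logic can be wired up in a LabVIEW diagram), and it assumes the AHRS class and GetDisplacementX/GetDisplacementY accessors from the navX-MXP C++ library:

```cpp
// Sketch of the offset-variable workaround (C++ shown for readability; the
// same logic can be wired up in a LabVIEW diagram). Assumes the AHRS class
// and GetDisplacementX/Y accessors from the navX-MXP C++ library.
#include "AHRS.h"

class DisplacementTracker {
public:
    explicit DisplacementTracker(AHRS* ahrs) : ahrs_(ahrs) {}

    // "Reset" by capturing the current readings as offsets.
    void Reset() {
        offsetX_ = ahrs_->GetDisplacementX();
        offsetY_ = ahrs_->GetDisplacementY();
    }

    // Displacement relative to the last Reset() call.
    float GetX() const { return ahrs_->GetDisplacementX() - offsetX_; }
    float GetY() const { return ahrs_->GetDisplacementY() - offsetY_; }

private:
    AHRS* ahrs_;
    float offsetX_ = 0.0f;
    float offsetY_ = 0.0f;
};
```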

The only other thing I would say is not to rely on it too much. Since it is derived from the accelerometer it may be inaccurate, especially if your robot shakes around (e.g., going over rough terrain, the rock wall, etc.), and it will get more and more inaccurate over time (like yaw, hence the reset functions for both in C++).
Definitely don’t use it as the only metric for your position. Use rangefinders or image processing to confirm it.

On average, the displacement accuracy is ~1 meter per 15 seconds of operation. FRC teams usually want no more than 1 cm of error in autonomous. Therefore the displacement data is likely not usable for tracking position in autonomous.

You should be aware that any modern IMU will have similar performance characteristics. We are several years away from technology that is inexpensive enough for FRC and can track position to 1 cm of accuracy every 15 seconds.

More details can be found in the navX-MXP FAQ: Frequently-asked Questions | navX-MXP

That said, the navX-MXP yaw angle is very accurate, and that’s what teams typically use.

In addition to euhlmann’s recommendations, you might consider using wheel encoder readings, and perhaps a follower wheel with an encoder for rotation speed plus an angle encoder for relative orientation. Some teams have also fused the wheel encoder data with the yaw angle from the navX-MXP, as sketched below.
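To make that concrete, here’s a minimal dead-reckoning sketch (C++) fusing a cumulative encoder distance with the navX-MXP yaw angle. GetYaw() is from the navX-MXP C++ library; the encoder input and the sign conventions are assumptions to adapt to your drivetrain:

```cpp
// Minimal dead-reckoning sketch fusing wheel-encoder distance with the
// navX-MXP yaw angle. You supply the cumulative drive distance in meters
// each loop; adapt the signs to your field coordinate convention.
#include <cmath>
#include "AHRS.h"

class PoseEstimator {
public:
    explicit PoseEstimator(AHRS* ahrs) : ahrs_(ahrs) {}

    // Call once per robot loop with the cumulative drive distance in meters.
    void Update(double encoderDistanceMeters) {
        double delta = encoderDistanceMeters - lastDistance_;
        lastDistance_ = encoderDistanceMeters;

        // navX-MXP yaw is reported in degrees (clockwise-positive as viewed
        // from above); convert to radians before projecting.
        double headingRad = ahrs_->GetYaw() * M_PI / 180.0;

        // Project the incremental distance onto the field axes.
        x_ += delta * std::cos(headingRad);
        y_ += delta * std::sin(headingRad);
    }

    double GetX() const { return x_; }
    double GetY() const { return y_; }

private:
    AHRS* ahrs_;
    double lastDistance_ = 0.0;
    double x_ = 0.0;
    double y_ = 0.0;
};
```

Because the yaw angle stays accurate, the dominant error in this approach is encoder slip over rough terrain, which is why distance-measuring sensors are also suggested.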

My bad, I missed adding those reset commands, oops :o. Check the Latest Release, I’ve just added those functions. Thanks for catching that.

Awesome! :smiley: I have updated our code.

We plan to use vision to finish at the end of our automode. We just want the navX to get us close.

We did some more testing last night and I can’t seem to get any reliable displacement readings. The performance seems to be far worse than 1 meter/second per 15 seconds. Actually, I’m having a hard time figuring out which direction on a graph it thinks is forward. I did try running an initial gyro calibration, but it didn’t seem to help. Any suggestions? I can upload the code if you like. The gyro seems to be working fine.

I also noticed that using the zero velocity function seemed to reset everything until I rebooted.

Thanks.

If you have verified that the navX-MXP is firmly mounted to the robot (i.e., conceptually it should be part of the mass of the robot frame) as described on the roboRIO installation page, then you’ve likely achieved the performance limit.

> 1 meter/second per 15 seconds

Note that the units of displacement are meters, so the figure should be in meters rather than meters/second.

Since you mentioned zeroing the velocity, you may be attempting to integrate velocity to derive a displacement estimate. If you are indeed integrating the navX-MXP velocity estimates, your results will on average be worse than if you use the displacement estimates calculated by the navX-MXP itself.

Here’s a link to the paper describing in detail the algorithms used on the navX-MXP: http://www.nxp.com/files/sensors/doc/app_note/AN3397.pdf

As you can see in the paper, this is a very complex algorithm, and is prone to a number of noise factors and other error-inducing conditions.
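To give a feel for the core of what the paper describes, here’s a stripped-down, single-axis sketch of the trapezoidal double-integration step. Gravity removal, filtering, and the zero-movement detection that AN3397 emphasizes are all omitted, which is exactly why naive integration drifts:

```cpp
// Stripped-down single-axis double integration (trapezoidal rule), in the
// spirit of AN3397. A real implementation adds filtering, calibration, and
// zero-movement detection; without them, any accelerometer bias is
// integrated twice and displacement error grows quadratically with time.
struct Integrator {
    double velocity = 0.0;      // m/s
    double displacement = 0.0;  // m
    double lastAccel = 0.0;     // m/s^2 (gravity already removed)

    void Update(double accel, double dt) {
        // Trapezoidal rule: average consecutive samples before integrating.
        double newVelocity = velocity + 0.5 * (lastAccel + accel) * dt;
        displacement += 0.5 * (velocity + newVelocity) * dt;
        velocity = newVelocity;
        lastAccel = accel;
    }
};
```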

I highly recommend that students interested in motion processing become familiar with these algorithms, which is why they are present in the navX-MXP. As the technology advances in the coming years, it’s expected that these same algorithms will continue to be used.

I’d be happy to review your code and look for issues - but please be aware that my purpose in doing so would be educational. Since I understand your focus at the moment is positioning in autonomous, I want to reiterate this recommendation:

The bottom line is that for autonomous our recommendation is to use some combination of the navX-MXP’s very accurate yaw angle with wheel encoders and/or sensors that measure distance to known features on the field (vision processing, ultrasonic, lidar). And as others have mentioned, because the rough terrain encountered when crossing the defenses can disturb encoders, fusing distance-measuring sensors with the navX-MXP yaw is likely the best approach.

One of the many challenging parts of the displacement integration algorithm (referenced in my last post) is that in addition to the double integration, sensor data filtering, trapezoidal integration, removal of gravity from the accelerometer readings, etc., the algorithm also has to deal with potential rotation of the robot while integration is occurring.

Therefore, the X/Y displacement values are in the “world reference” frame. This means that the Y axis points in whatever direction maps to 0 degrees on the navX-MXP yaw angle. So it’s important to ensure that the navX-MXP has completed startup calibration, and that the yaw has been zeroed recently (typically, immediately before integration begins). From that point on, the displacement X/Y axis data is aligned with that zero reference point.
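As a sketch of that startup sequence, using the IsCalibrating(), ZeroYaw(), and ResetDisplacement() calls from the navX-MXP C++ library (the exact wait-for-calibration handling here is an assumption):

```cpp
// Sketch: establish the world reference frame at the start of autonomous.
#include "AHRS.h"

void AutonomousInit(AHRS* ahrs) {
    // Wait for onboard startup calibration to complete before trusting data.
    while (ahrs->IsCalibrating()) {
        // (in real code, sleep briefly or bail out after a timeout)
    }
    ahrs->ZeroYaw();            // the current heading becomes 0 degrees
    ahrs->ResetDisplacement();  // the current position becomes (0, 0)
    // From here on, +Y displacement points along the newly zeroed heading.
}
```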

I’m pretty sure this is our problem. The roboRIO and navX are mounted on rubber shock absorbers and attached with Velcro. :ahh: :ahh:

I am using the displacement values calculated by the navX. What I meant was that after running “Z900_navX_Set_Zero_Velocity.vi”, the subsequently returned values were zero until the system was rebooted.