Need help using NavX Micro with Linux on Jetson TK1

Hi Everyone,

Our team is looking to mount the NavX Micro to our onboard vision processor (the Jetson TK1), which is a Linux ARM-based machine. So far, the only drivers I have found for the NavX are for Windows. Does anyone have experience using this device with Linux, or any idea how to go about implementing this? Thanks in advance.

I may be wrong (I have no experience with it), but I think the NavX uses a USB-UART interface. You may be lucky and find the Jetson already has drivers installed.

You can run `ls /dev/ttyUSB*` to see if it shows up. You may also try `ls /dev/tty*`. In my experience, these devices show up as `ttyUSB#`, usually `ttyUSB0`.

The navX-Micro has I2C and USB interfaces; not sure which interface you’d select. USB is faster, and according to this article on the web, Linux has a built-in device driver that should work with it:

“This sample code creates a USB connected virtual COM port, using the USB CDC class (Communications Device Class.) On Linux, this class does not require a driver: it is supported directly by the kernel. Simply put, it makes the STM32F4 function similar to a USB-serial adapter.”

Next, are you wanting to interface to this from C++ or Java on the TK1?

I believe your vision code is all C++.

Based upon that, my thinking is you’d use USB and C++.

A place to start would be the RoboRIO C++ class sources that work w/the WPI Library. [There are Java sources that are very similar, if you choose that language instead.]

You will need to modify the “IO” module that corresponds to your chosen interface (SerialIO.cpp for the serial port, or RegisterIOI2C.cpp for I2C). And you’ll need to modify the AHRS class to remove the WPI Library code that inherits from PIDSource and LiveWindowSendable.

It’ll be a bit of work, but I think it’s doable. James Parks on your team is familiar w/the serial protocol. And you can contact me at [email protected] if you have any questions.

I’m curious about your vision processing architecture that makes it better to have the IMU attached to the vision processor, rather than the roboRIO. In FRC, it’s most common to use the gyro to close the loop, with setpoint updates from vision processing. In that case, it’s much better to have the gyro connected to the roboRIO, as you need low latency when closing the loop, but can afford higher latency on setpoint updates.

Who says you can’t have both? :slight_smile: