Integrating ROS node into roboRIO for SLAM

Hello, I am working on integrating ROS into our robot for SLAM and fully autonomous navigation, and I wanted to ask if anyone has tried integrating ROS into their roboRIO alongside the WPI software?

The advanced stuff we will take care of ourselves; the only important part is getting a ROS node running on the roboRIO. Further information on the intended setup and usage can be found below. We would be following this:

The basic hardware setup is like this:

  • Jetson TX2 running ROS Kinetic (the computing powerhouse)
  • Kinect to use as an RGB-D sensor
  • RpLidar for 360 degree laser scanning
  • roboRIO reports gyro and wheel odometry for further accuracy enhancement
  • Stereo camera utilizing the TX2’s GPU cores for stereo vision and target tracking
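
The gyro-plus-wheel-odometry bullet above is the part the roboRIO contributes. A minimal dead-reckoning sketch of how that data combines into a pose, assuming a differential drive (all names here are hypothetical, just for illustration):

```python
import math

def update_pose(x, y, heading_rad, left_m, right_m, gyro_heading_rad):
    """Dead-reckon one encoder step for a differential drivetrain.

    left_m / right_m are the distances each side traveled since the
    last update. The gyro heading replaces the encoder-derived heading,
    which is the usual reason to fuse the two sensors.
    """
    distance = (left_m + right_m) / 2.0   # average forward travel
    heading = gyro_heading_rad            # trust the gyro for yaw
    x += distance * math.cos(heading)
    y += distance * math.sin(heading)
    return x, y, heading

# Driving straight along +X with the gyro reading 0 rad:
print(update_pose(0.0, 0.0, 0.0, 0.1, 0.1, 0.0))  # (0.1, 0.0, 0.0)
```

In a real setup this math would feed a `nav_msgs/Odometry` publisher; the sketch only shows the arithmetic the roboRIO-side data enables.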

For software, ROS handles high level functionality like SLAM and path planning:

  • The roboRIO publishes the desired field position via a ROS node.
  • The roboRIO sends gyro and wheel odometry data via a ROS node.
  • Other ROS nodes take in sensor data from the Kinect and lidar.
  • RTAB-Map handles SLAM and path planning and determines the required robot velocities (X, Y, and R).
  • A node on the roboRIO receives the velocities and passes them on to the normal WPI robot software.
  • The stereo camera is for dedicated target tracking.
  • Possible deep learning on the TX2 using TensorFlow?
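
The hand-off in the velocity bullet above (RTAB-Map's X, Y, R command arriving at the roboRIO) eventually reduces to drivetrain inverse kinematics. A sketch for a mecanum drive, using the ROS convention of x forward, y left, CCW-positive rotation (function and parameter names are made up for illustration, not from WPILib):

```python
def twist_to_mecanum(vx, vy, omega, track_width_m, wheel_base_m):
    """Map a planar velocity command (X, Y, R) to four mecanum wheel
    speeds in m/s. Standard inverse kinematics; x forward, y left,
    omega CCW-positive."""
    k = (track_width_m + wheel_base_m) / 2.0
    front_left  = vx - vy - omega * k
    front_right = vx + vy + omega * k
    rear_left   = vx + vy - omega * k
    rear_right  = vx - vy + omega * k
    return front_left, front_right, rear_left, rear_right

# Pure forward motion: all four wheels run at the same speed.
print(twist_to_mecanum(1.0, 0.0, 0.0, 0.6, 0.6))  # (1.0, 1.0, 1.0, 1.0)
# Pure CCW spin: left side reverses, right side drives forward.
print(twist_to_mecanum(0.0, 0.0, 1.0, 0.6, 0.6))  # (-0.6, 0.6, -0.6, 0.6)
```

The node on the roboRIO would run something like this on each received command before handing the wheel speeds to the normal WPI drivetrain code.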

Ping @marshall


There is a lot more work for your project than you likely realize.

If you do some Google searching for ROS and FRC then you’ll find this:

You’ll also find this:

We’re the only FRC team that has fully integrated ROS to date, and there are good reasons for that. The biggest one by far is that ROS is hard right now: most folks have a hard time learning programming skills alone, and ROS requires those plus knowledge of Linux and of ROS itself. Beyond that, the roboRIO isn’t great for running it. There are also a lot of dependencies and eccentricities that we’ve spent a couple of years learning the hard way.

With all that being said, we want to encourage more teams to try it out, so we keep publishing white papers and offering assistance where we can. Hopefully the next white paper we are working on will lead more folks to take a look at what we’ve done.

We had a handful of students give a talk in Houston at CMP about our work and it was recorded so hopefully that will get posted before too long.


We actually experimented with this in Java.
There’s a ROS client library called “ROSJava” that allows you to run ROS nodes in Java and communicate with a ROS master. Unfortunately, it is no longer officially supported, but it still works with at least ROS Kinetic.
We managed to get the roboRIO to subscribe to and publish messages on ROS topics over the robot network. I’ll try to post a link to our code once I get a chance.


At least as of 2019, that statement is incorrect. If I recall correctly, team 2230 from Israel used ROS on their robot this year and received the DCMP IoC award for their usage.


Using ROS on part of the robot and fully integrating it are two very different things.


Thanks for your reply @marshall. I have a good amount of experience with ROS and Linux, so using ROS itself won’t be an issue here. My goal is to have a minimalistic ROS node on the roboRIO just to get some very basic data across: stuff like a desired robot pose, simple commands, robot velocity commands, odometry, etc. My other alternative is to use NetworkTables on the TX2 inside one of the ROS nodes there, which would avoid having to install ROS on the roboRIO. Would you recommend that instead?

That doesn’t sound minimalistic to me. It also sounds like more than one node with several publishers and subscribers.

Have you looked at what is required to get your minimalistic node running on the RoboRIO? I’d start there.

You can start by having a look at our ROSCon presentation and our white papers - the most recent one is linked to above but all of them are at and we’ll have a new one before too long.

If you’ve still got questions after that then shoot an email to Can’t promise we can help with everything but we answer all of the emails we get to it.

I can’t comment about trying to use NetworkTables but I suspect that won’t give you the ROS experience you likely want.

I meant minimalistic on the roboRIO side of things. That will be 95% normal robot code, with the exception of the bit that communicates with the TX2. The TX2 will run Ubuntu 16.04 with ROS Kinetic; THAT part will be jam-packed with all the fun stuff. I’ll work on a block diagram or something later to show you what I mean.

Edit: I’ll take a look at what I need to get a ROS node on the RIO. If it’s too complicated, I might look at the NetworkTables solution. Ideally, I want to make it so that the TX2 can work with any robot; all that would be needed are a few parameters like sensor TFs, robot dimensions, drivetrain limitations, etc.
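
A parameter file is the usual way to make a ROS stack robot-agnostic like that. A hypothetical sketch of what the TX2-side launch files might load per robot (every key name here is invented, not from any existing package):

```yaml
# Hypothetical per-robot parameters loaded by the TX2-side launch files.
robot:
  length_m: 0.82            # bumper-to-bumper footprint for planning
  width_m: 0.69
drivetrain:
  type: mecanum             # or tank, swerve
  max_vel_mps: 3.5          # limits the planners must respect
  max_accel_mps2: 2.0
sensors:
  # Static transforms from base_link: x y z roll pitch yaw
  lidar_tf:  [0.10, 0.00, 0.45, 0.0, 0.0, 0.0]
  kinect_tf: [0.25, 0.00, 0.55, 0.0, 0.0, 0.0]
```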

General Angels from Israel have fully integrated ROS with WPILib. I don’t know how they did it, but I’ve seen it working; they use it to test their robot code and their vision code.
I think this year they are trying to integrate it with Gazebo, but I’m not sure.

They are really nice; I think you should ask them for more information than what I know.
Thank you and have a nice summer.

I’m curious if you’ve had any luck getting RTAB-Map working using real field data. From what we’ve seen, the laser data isn’t all that great with all the polycarbonate and diamond plate - have you had any luck with it?

Same with path planning and driving - any luck with move_base settings that handle driving at typical FRC robot speeds in a dynamic environment, given the TX2’s processing power? If so, we’d love to see which local and global planners you’ve settled on, along with the config files.


We’re still pretty early on in development. RTAB-Map has a mapping mode and a localization mode; if those particular features are troublesome, the problem areas might just be ignored, post-processed, edited, or deleted after mapping. There’s still a lot of experimentation to do, so it’s going to be a while before we have anything that works.

Hey BigChungus, I’m doing exactly the same thing as you! What a coincidence! I’m using the RPLidar A1 and a Jetson TX1. I haven’t got my stereo camera set up yet; I may actually stick to using a Limelight for basic tape vision.

Yup, we did - ran it on a separate laptop that is physically on the robot.

The white thing is a laptop…
Code if you want it: that dank code


Update: I have got something working in Gazebo simulation, and I posted the code on GitHub. It still needs a ton of work, but it looks promising. If you have a computer with Ubuntu 16.04 and ROS Kinetic installed, you should be able to run everything as long as you have the right ROS packages installed.

I will report back when I have made further progress.

Here’s the github link:
And the youtube video of the simulation:
