ZebROS 1.1 2019 WhitePaper

In 2018, we wrote a comprehensive whitepaper explaining our groundbreaking work to introduce ROS to FRC. This year, we learned from last year's mistakes and challenges to write better code: code that was effectively organized for automation and took advantage of more of what ROS has to offer. We also took the huge step of transferring some CAN reads and writes to our NVIDIA Jetson TX2, which required setting up a second hardware interface. This whitepaper covers the biggest improvements that we made this year.

We also spoke at ROSCon, an international ROS conference in Madrid, and gave a presentation on ROS for FRC at the FIRST Championship Houston event. More details on ROSCon can be found here, and the Houston video will be published eventually.

As for our current projects and future work, we’ve started a partnership with AWS RoboMaker to test their software for FRC use with ROS. We’ve been developing more accurate odometry, more resilient path planning, and smarter code automation. We’re also excited to work on some analysis of the data we record every match, hopefully making it easier to identify and debug problems in hardware and software.

Thank you to the programming mentors who supported us during our antics this year and were invaluable to our improvements. Thank you to FIRST, whose rule change made it possible to run hardware code off of the Jetson for the first time. And thank you to the NCSSM Foundation, CTRE, NVIDIA, Kauai Labs, WPI, VMWare, GitHub, and many others, who provided us with sensors, supplies, and support this year!

You can find our released code for the 2019 build season here.

The paper is available in the Labs section of our website here: https://team900.org/blog/ZebROS1.1/.


As always, I am very proud of all of you and the work you do. Thank you!


Wow, this is so great!

This is awesome. I love the use of the CANable. Any issues with it?

My next question is purely out of curiosity as there are clearly some very gifted students working on this project - how would you control an FRC robot if you could make the next control system? Would you use a RoboRio at all, or would everything be from the TX2?

We saw two notable issues with the CANable. One was a device boot/start issue that we can't fully explain, where the device would enumerate on the USB bus but the driver wouldn't start. This was easy to spot with the CANable because the LEDs didn't do their thing, and it was easy to fix by unplugging and replugging it or just restarting everything. However, this meant we needed to keep the CANable external, and that is what caused the second problem: the USB connector on the CANable is fragile and can break off.

We’re fairly certain the initial issue was related to the kernel we were on but can’t conclusively say that.

As for the other part of your question - stay tuned.


Follow up - did your vision processing ever bottleneck and delay the motor control? And was the ZED doing odometry throughout?

No bottlenecks that we are aware of. We have toyed with using the ZED for visual odometry, but that was not in use this past season.

Nice thing about ROS is that everything is broken up into separate processes, so the vision stuff ran as fast as it could in its own process while the rest of the code kept going in parallel. Luckily a lot of the ZED processing is in the GPU, but even with the CPU component it never really got in the way of the main control loop. One of the benefits of being forced to break up the code into modular components. A 6-core 2GHz CPU doesn’t hurt either.
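ROS runs each node as its own OS process, which is what keeps the vision pipeline from ever stalling the control loop. As a loose analogy (not our actual code - threads stand in for nodes here, and a queue stands in for a ROS topic), the pattern looks roughly like this:

```python
import queue
import threading
import time

def vision_worker(q):
    # Stand-in for a slow vision pipeline: posts a "target" every 50 ms.
    for i in range(3):
        time.sleep(0.05)
        q.put(i)

def run_control_loop(n_iters=30):
    # Fast control loop: grabs the latest vision result if one is
    # ready, but never blocks waiting on the vision worker.
    q = queue.Queue()
    t = threading.Thread(target=vision_worker, args=(q,))
    t.start()
    latest, ticks = None, 0
    for _ in range(n_iters):
        while not q.empty():
            latest = q.get_nowait()  # take the newest target, drop stale ones
        ticks += 1                   # motor-control work would go here
        time.sleep(0.005)            # ~200 Hz loop rate for this toy example
    t.join()
    return ticks, latest
```

The key property: the control loop completes every iteration on schedule regardless of how slowly vision results arrive, which is what the separate-process design buys you.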

Oh, and VO would likely have been handicapped by us turning down the exposure and gain to properly target the retroreflective tape. Hard to get features to track when the entire image is blank except for the retro-tape in bright green.


Great paper, it’s clearly more than just an incremental update on the papers from the last 2 seasons. It gives a clear vision of a possible future of the FRC control system where the mandatory safety code is relegated to a device enormously cheaper and simpler than the Rio, and just plugs into the team’s choice of processor or network of processors.

I’m very proud of you guys. Being continually amazed by your work is something that helps keep me in this dumb program. Keep it up.


Really awesome job (as always with y’all)!

I really like the emphasis on clean namespace design and good git practices. Y'all have clearly developed a really robust software engineering process. Very interested to see how your work with AWS comes along as well - making it easier for programmers to simulate and test their code outside of a meeting would greatly improve programming quality for many teams.

Thanks. Some of these things were unexpected, or at least not originally planned.

Since one of the benefits of ROS is that it forces modularity, and with it relatively well-defined interfaces, it was easier to get more students working on various parts of the robot in parallel. And this created the problem where lots of hands were touching the code at the same time. So we looked to industry-based (industry-lite?) practices on how to handle that, and here we are.

As always, there’s room to grow - namely I want to get students more involved in the code review and approval side of pull requests. But it is a good start, and like everything else in software, version 2 stands to be better than the first release. That goes for software processes as well as the software itself.

I'm in the process of trying to get my team to use a Jetson as a coprocessor - do you have any particular pros and cons of it?

Amazing paper btw

The Jetson is like any sensor or control component that isn’t built specifically for FRC - it takes effort to make it usable in FRC. If you are willing to put in the time, energy, and effort to make the Jetson work for you then the results can be very rewarding. If you want it to be a Limelight-like experience (and there is absolutely nothing wrong with the Limelight) then you might not get what you are after.

That being said, we want the Jetson to be used by as many teams as possible and we’ve written a lot about our experiences using it going back to the TK1 days so check out the http://team900.org/labs website where you can find all of those. We’re always happy to answer questions too via our support@team900.org email.

Good stuff, I'm thrilled to see the progress you guys have made. About CAN bus specifically, how did you go about standardizing your packets? Did you create your own structure to get the RIO and Jetson talking through CAN, or did you adhere to WPILib's whole structure?

Ah, I think there might be some confusion here … or I’m misreading the question.

The Jetson and Rio are connected via Ethernet. We use the standard ROS protocols for communicating between them; that happens automatically through the ROS libraries. The Rio and Jetson don't communicate directly with each other over CAN.

The CAN communication is sent to normal CAN-controlled FRC devices - the PDP, PCM, and motor controllers. Since we're talking to standard FRC equipment, we're forced to use the normal FRC CAN protocol for each. The fortunate thing for us here was that, aside from it being legal, we had access to already-written code to make that happen. For CTRE motor controllers (Talons in our case), CTRE themselves provided comms libraries built for the Jetson. For the other devices, we were able to use the WPILib code to format the packets, and the CTRE code to actually send and receive.
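For anyone curious what "the normal FRC CAN protocol" looks like at the addressing level: FRC devices share the bus using a 29-bit arbitration ID packed from a device type, manufacturer code, API ID, and device number. A minimal sketch of that packing, with field widths taken from our reading of the published FRC CAN addressing scheme (the example type/manufacturer codes below are illustrative, so double-check them against the spec):

```python
def frc_can_arb_id(device_type, manufacturer, api, device_number):
    """Pack a 29-bit FRC CAN arbitration ID.

    Layout, high bits to low:
      device type (5) | manufacturer (8) | API (10) | device number (6)
    """
    assert 0 <= device_type < 1 << 5
    assert 0 <= manufacturer < 1 << 8
    assert 0 <= api < 1 << 10
    assert 0 <= device_number < 1 << 6
    return (device_type << 24) | (manufacturer << 16) | (api << 6) | device_number

# e.g. a motor controller (type 2) from CTRE (manufacturer 4),
# API 0, device number 1:
print(hex(frc_can_arb_id(2, 4, 0, 1)))  # 0x2040001
```

Because every FRC device on the bus follows this layout, code on the Jetson can address a Talon or PDP exactly the way the Rio would - which is why reusing the WPILib/CTRE packet code "just works".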

Hopefully that answers the question - if not, let us know.

This is the key. This also means we have access to capabilities that ROS provides, like a small flotilla of command line tools and the ability to spin up code on just about any device. This is why we feel that a fully integrated ROS implementation is necessary. Without it, you can’t take full advantage of what ROS offers and you’re just using it as an overly complicated message protocol when the ecosystem it provides is much more than that.

My favorite anecdote from this past season involved me asking a student to drive the robot out of a blocked pathway in the lab because we were moving some carts around. I thought, mistakenly, that the student was the one with the DS and could run the robot out of the way quickly. They ended up calling out to two other students, one or both of whom had multiple laptops near them. They coordinated spinning up the correct nodes on at least two laptops, the Jetson, and the RoboRIO, along with enabling the DS and then they drove the robot the two feet out of the way.

Someone is going to think that sounds terrifying (not to mention overly complicated for moving a robot two feet!), and maybe it does to them, but to us it's a clear demonstration of the highly distributed nature of ROS and how we've distributed our code development to more participants.


Have you investigated the pros/cons of using a Jetson Nano instead of a TX2?

It sounds like you are using most of the TX2's resources as is, so the TX1-based Nano might not work without changes.
However, for us the form factor and price were enough for us to get one and start looking at how it could affect our future development pipeline.

We have a few Nanos in house for testing. We couldn't use them last season - the version of ROS we ran required Ubuntu 16.04, and the Nano's OS is 18.04. We're upgrading this year to the latest ROS long-term release, which runs on 18.04 and will give us a chance to try them out this fall.

But yeah, the size and cost make them really interesting.

Ah, makes sense. I might have to snag myself a CANable now :thinking: