ZebROS 1.1 2019 Whitepaper

In 2018, we wrote a comprehensive whitepaper explaining our groundbreaking work to introduce ROS to FRC. This year, we learned from last year’s mistakes and challenges to write better code: code that was effectively organized for automation and took advantage of more of what ROS has to offer. We also made the huge step of transferring some CAN reads and writes to our NVIDIA Jetson TX2, which required setting up a second hardware interface. This whitepaper covers the biggest improvements we made this year.

We also spoke at ROSCon, an international ROS conference in Madrid, and gave a presentation on ROS for FRC at the FIRST Championship Houston event. More details on ROSCon can be found here, and the Houston video will be published eventually.

As for our current projects and future work, we’ve started a partnership with AWS RoboMaker to test their software for FRC use with ROS. We’ve been developing more accurate odometry, more resilient path planning, and smarter code automation. We’re also excited to work on some analysis of the data we record every match, hopefully making it easier to identify and debug problems in hardware and software.

Thank you to the programming mentors who supported us during our antics this year and were invaluable to our improvements. Thank you to FIRST, whose rule change made it possible to run hardware code off of the Jetson for the first time. And thank you to the NCSSM Foundation, CTRE, NVIDIA, Kauai Labs, WPI, VMWare, GitHub, and many others, who provided us with sensors, supplies, and support this year!

You can find our released code for the 2019 build season here.

The paper is available in the Labs section of our website here: https://team900.org/blog/ZebROS1.1/.


As always, I am very proud of all of you and the work you do. Thank you!


Wow, this is so great!

This is awesome. I love the use of the CANable. Any issues with it?

My next question is purely out of curiosity, as there are clearly some very gifted students working on this project - how would you control an FRC robot if you could make the next control system? Would you use a roboRIO at all, or would everything be from the TX2?

We saw two notable issues with the CANable. One was a device boot/start issue that we can’t fully explain: the device would enumerate on the USB bus, but the driver wouldn’t start. This was easy to spot on the CANable because the LEDs didn’t do their thing, and it was easy to fix by unplugging and replugging it or just restarting everything. However, it meant we needed to keep the CANable external, which led to the second problem: the USB connector on the CANable is fragile and can break off.

We’re fairly certain the initial issue was related to the kernel we were on, but we can’t say so conclusively.
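For anyone wanting to script that check instead of watching the LEDs, here is a minimal sketch (not from the paper; the `can0` interface name and the helper are our own assumptions) of how one might detect from software that a SocketCAN interface never came up:

```python
import os


def can_interface_up(ifname="can0"):
    """Return True if a SocketCAN interface exists and is up.

    Reads /sys/class/net/<ifname>/operstate, which the Linux kernel
    exposes for every registered network interface. If the adapter
    enumerated on USB but the driver never started, the interface
    simply won't exist here.
    """
    state_path = f"/sys/class/net/{ifname}/operstate"
    if not os.path.exists(state_path):
        return False  # driver never registered the interface
    with open(state_path) as f:
        # CAN links sometimes report "unknown" even when up,
        # so accept both states
        return f.read().strip() in ("up", "unknown")
```

A check like this could run at boot on the Jetson and flag when a replug or restart is needed, rather than relying on someone noticing the LEDs.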

As for the other part of your question - stay tuned.


Follow-up: did your vision processing ever bottleneck and delay the motor control? And was the ZED doing odometry throughout?

No bottlenecks that we are aware of. We have toyed with using the ZED for visual odometry, but that was not in use this past season.

The nice thing about ROS is that everything is broken up into separate processes, so the vision code ran as fast as it could in its own process while the rest of the code kept going in parallel. Luckily, a lot of the ZED processing happens on the GPU, but even the CPU component never really got in the way of the main control loop. That’s one of the benefits of being forced to break the code into modular components. A 6-core 2 GHz CPU doesn’t hurt either.

Oh, and VO would likely have been handicapped by us turning down the exposure and gain to properly target the retroreflective tape. Hard to get features to track when the entire image is blank except for the retro-tape in bright green.
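The process-isolation point above doesn’t depend on ROS specifically; here is a plain-Python analogy (hypothetical timings and names, using `multiprocessing` in place of ROS nodes and a queue in place of a topic) showing how a slow "vision" process never stalls a fast "control" loop:

```python
import multiprocessing as mp
import time


def vision(q):
    # Stand-in for a slow vision pipeline: one result every 0.2 s.
    for i in range(3):
        time.sleep(0.2)
        q.put(i)


def control(q, res):
    # Stand-in for a fast control loop ticking every 10 ms. It uses
    # the latest vision result when one is available, but never
    # blocks waiting for the vision process.
    latest, ticks = None, 0
    end = time.monotonic() + 0.7
    while time.monotonic() < end:
        while not q.empty():
            latest = q.get()  # drain to the newest result
        ticks += 1
        time.sleep(0.01)
    res.put((ticks, latest))


def demo():
    q, res = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=vision, args=(q,)),
             mp.Process(target=control, args=(q, res))]
    for p in procs:
        p.start()
    ticks, latest = res.get()  # collect before joining
    for p in procs:
        p.join()
    return ticks, latest


if __name__ == "__main__":
    ticks, latest = demo()
    print(f"control loop ran {ticks} ticks; last vision result: {latest}")
```

Swap the queue for a ROS topic and the two functions for nodes, and you get roughly the decoupling described above: the control loop keeps ticking at its own rate regardless of how long vision takes.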


Great paper - it’s clearly more than just an incremental update on the papers from the last two seasons. It gives a clear vision of a possible future FRC control system in which the mandatory safety code is relegated to a device enormously cheaper and simpler than the Rio, which then just plugs into the team’s choice of processor or network of processors.

I’m very proud of you guys. Being continually amazed by your work is something that helps keep me in this dumb program. Keep it up.


Really awesome job (as always with y’all)!

I really like the emphasis on clean namespace design and good git practices. Y’all have clearly developed a really robust software engineering process. Very interested to see how your work with AWS pans out as well: making it easier for programmers to simulate and test their code outside of a meeting would greatly improve programming quality for many teams.

Thanks. Some of these things were unexpected, or at least not originally planned.

Since one of the benefits of ROS is that it forces modularity, and with it relatively well-defined interfaces, it was easier to get more students working on various parts of the robot in parallel. That created a new problem: lots of hands touching the code at the same time. So we looked to industry-based (industry-lite?) practices for handling that, and here we are.

As always, there’s room to grow; namely, I want to get students more involved in the code review and approval side of pull requests. But it’s a good start, and like everything else in software, version 2 stands to be better than the first release. That goes for software processes as well as the software itself.

I’m in the process of trying to get my team to use a Jetson as a coprocessor. Do you have any particular pros and cons of it?

Amazing paper btw