We continue to use the Robot Operating System (ROS) as the foundation for all of our robot code, for the seventh year in a row. During the 2023 season, we began to harness ROS in a way we hadn’t before to gain a competitive advantage through streamlined code and ROS-based autonomous actions. We also had an extremely reliable robot; code ran successfully in 98% of our matches. Our work with ROS won us three Innovation in Control awards, giving us district points needed to qualify for the World Championship. In this paper, we cover the highlights of our work and what we’ve learned over the past few years, as well as the details of our 2023 robot’s software.
As always, we would be happy to answer any questions people might have; feel free to ask here or email us at [email protected].
As always, I am extraordinarily proud of the work that the authors and all of the contributors put in to get this paper out. It's a huge effort, and it is genuinely humbling for me when we get this stuff done and released.
We are looking at it, but right now it would take a herculean effort to make the switch, and our current methods for running ROS on the RIO have hit a wall with ROS2 due to changes in the build method and limitations of the kernel on the RIO.
Our other option would be adopting a bridged architecture, similar to how many other teams are trying to bridge ROS control to the RIO, and we aren't willing to do that.
We are indeed, but no exemption is needed for this. Nothing we are doing violates the safety mechanisms onboard the RIO and built into the driver station software provided by NI. In fact, a lot of what we do at that layer uses work from CTRE and wpilib to ensure it works the same as it does on other robots, just with ROS controlling it. That's doing it a bit of a disservice, but ROS is effectively just middleware.
We did too… it also helped our driver to perform better!
My experience is limited to ROS Kinetic on a Pi4, but typically I see times of 2 seconds or more to start each node using stock roslaunch. A reboot mid-match seems deadly, and booting up before a match seems like it would take longer than some FTAs want to allow.
This is all very cool. Students - you are doing some extremely impressive things. I’m not really sure how to make sure you understand that.
One notable exception is the USB controller on one of our NVIDIA Jetsons. We had an issue that occurred a few times where the USB interface would spontaneously go down, taking our CTRE CANivore with it and resulting in a loss of motor control.
It wouldn’t be Linux without USB issues
We plan to explore the best way to use Foxglove Studio during the offseason/preseason.
I am interested in hearing about your experience with Foxglove Studio compared to rviz/rqt. I've been very impressed with it thus far as a newbie to the ROS ecosystem (at least compared to rviz).
PlotJuggler is another, much more capable (than rqt_plot) plotting tool for ROS. rqt_plot will make you want to pull your hair out; the discovery of PlotJuggler was life-changing. I've only recently used Foxglove Studio because it runs on Mac while PlotJuggler doesn't.
Foxglove has many more features because it can handle other data formats (images, diagnostics, etc.), whereas PlotJuggler only supports plots. Foxglove is growing on me, but I find I'm more efficient at reviewing bag files in PlotJuggler. YMMV.
Don’t have the robot in front of me right now, but running our simulation environment shows 55 nodes. That doesn’t include the transient stuff at startup, but does include a number of resource monitoring nodes. The robot might be a bit higher (some of the sensor nodes die with no sensors attached) but probably not far off that number.
Have you done any work to speed up roslaunch?
No, we haven't seen a huge issue with it. There's a bit of a delay initially launching the first remote node on the Rio, but the Jetson is fast enough that launching code locally doesn't take that long. We do have some startup delay as the code initializes all motors to a known state, but we're planning to take advantage of the CTRE v6 API to batch-configure rather than calling individual config functions.
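For anyone curious what that change looks like, here's a rough sketch of the v6 batch-config pattern (one apply() of a whole configuration object). Our robot code is C++, the CAN ID and gains below are made up, and the snake_case names are assumed from the phoenix6 Python bindings, so treat this as illustrative rather than our actual code:

```python
# Sketch of Phoenix 6 batch configuration using the phoenix6 Python
# bindings. CAN ID and gains are hypothetical, not our real values.
from phoenix6.hardware import TalonFX
from phoenix6.configs import TalonFXConfiguration

motor = TalonFX(1)  # hypothetical CAN ID

cfg = TalonFXConfiguration()
cfg.slot0.k_p = 0.5   # hypothetical gains
cfg.slot0.k_d = 0.01
cfg.current_limits.stator_current_limit = 40
cfg.current_limits.stator_current_limit_enable = True

# A single apply() pushes the entire configuration in one call, instead
# of one blocking config call per setting as in the v5 API.
motor.configurator.apply(cfg)
```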
We do a lot of work to avoid reboots since, aside from what you mention, having e.g. the Jetson go down while the Rio stays up (or vice versa) would likely just totally break the robot code. So we spend a good bit of time getting wiring correct, putting electronics where they can't be hit, and so on, to try and sidestep the problem.
And startup time is something we’d like to improve, but it seems like more often than not we’re waiting for the field to connect. Our drive team is pretty efficient setting the robot up and powering it on, which might help a bit there.
Have you spent any time investigating advanced motion planning with libraries such as Pinocchio?
I've made it clear that if the mechanical team proposes building anything that remotely looks like a person, I'm quitting. Semi-serious, though: I'm not sure how well the types of robots it is best for overlap with the types of robots that win FRC competitions.
We did borrow the CUDA AprilTag implementation from the ISAAC code; unfortunately it only works on 36h11 tags, so we'll have to wait until this year to see how well it works on the field.
Haven't spent much time with either of the sim environments, though. My worry is that the work needed to get them running won't pay off over the really simple simulation we currently have (and that the time it would take from the CAD people to get us useful models early and often during build season is better spent letting them build us a real robot quicker).
Just to address bootup times on the field: we normally had 2-3 minutes from power-on to robot code, so definitely on the slower side. We make sure to power on right when we get on the field to make sure we don't slow a match down.
We have had some instances where we needed to restart or turned the robot on late, where our slow startup was a bit of a pain, but not enough for an FTA to talk to us about it.
Oddly, my best guess is that this isn't explicitly a USB issue but a power issue caused by harsh shock/vibration conditions, though we haven't been able to prove it. Basically, the power dropped momentarily from the USB root and then it didn't come back up cleanly.
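If anyone wants to at least get the disconnect into their logs with a timestamp, something like the following pyudev watcher would do it. This is a sketch under the assumption that pyudev is available, not something we actually run on the robot:

```python
# Sketch: log USB remove events so a device dropping off the bus (e.g.
# the port hosting the CANivore losing power) shows up immediately.
# Assumes the pyudev package; not what we actually run on the robot.
import pyudev

context = pyudev.Context()
monitor = pyudev.Monitor.from_netlink(context)
monitor.filter_by(subsystem="usb")

# monitor.poll() blocks until the next event; this loop runs forever.
for device in iter(monitor.poll, None):
    if device.action == "remove":
        print("USB device removed: %s" % device.device_path)
```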
I've dug into ISAAC a bit. It's usable, but the interfaces outbound to our code (really, to ROS in general) are not simple yet. ORBIT looks like it may be trying to solve some of that, so we'll have to keep an eye on it. I do want us to get back to using Unreal for physics simulation because it's really good at it, but we've got more work to do first.
I spent the better part of yesterday digging into roslaunch for my own purposes and discovered one spot that is very slow, on a Pi4 at least. I figured it might be applicable on the Rio and Jetson as well.
When it spawns each node, it closes all file descriptors. Based on my research, it can't determine which file descriptors are in use, so it loops through and closes all of them. In my case SC_OPEN_MAX was set to 1,048,576, so for each node it started, it had to iterate through and try to close 1,048,576 file descriptors.
Passing in False for close_fds took the time for a node to launch from 2-3 seconds down to fractions of a second. Given my 40 nodes this was a significant savings.
I’m still testing and trying to figure out what the implications of this are. I think it mostly comes down to security and leaking file descriptors. Not sure how important that is… If anyone knows I’m all ears. I’m also going to simply try reducing SC_OPEN_MAX significantly and leave close_fds set to True.
It is intentionally being done for Linux but not for Windows (lines 332 - 337), so I'm wary of changing it without fully understanding the implications, especially in the absence of comments explaining why it is done. My gut says they are doing it for a good reason. If it is purely for security, then that obviously doesn't matter on an FRC robot and the performance option is preferred.
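For reference, here's a minimal standalone sketch (not roslaunch itself, just an isolated reproduction of the effect) that times a trivial subprocess spawn with and without close_fds. On Python 2, which Kinetic-era roslaunch runs on, close_fds=True makes Popen close every descriptor up to SC_OPEN_MAX before the exec; Python 3 largely avoids this by walking /proc/self/fd instead:

```python
# Time a trivial subprocess spawn with and without close_fds. With a
# large SC_OPEN_MAX, the close_fds=True case is dramatically slower on
# Python 2 because every possible fd is closed one at a time.
import os
import subprocess
import time

print("SC_OPEN_MAX = %d" % os.sysconf("SC_OPEN_MAX"))

for close_fds in (True, False):
    start = time.time()
    proc = subprocess.Popen(["/bin/true"], close_fds=close_fds)
    proc.wait()
    print("close_fds=%s: %.3f s" % (close_fds, time.time() - start))
```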
Wooohoo!
Another year, another code release from the future by the Zebracorns!
It's truly inspiring how much effort and work you guys put into this in pursuit of your vision (pun unintended); your work is appreciated!
Have you guys ever implemented a ROS-to-NetworkTables bridge in the past? We're considering using ROS for CV next year, but we would rather stick to the provided WPILib/NI images/setup procedure for now.
We don't have a node for interfacing with NetworkTables, but our friends over on 195 are also doing amazing things with ROS and have something: GitHub - frcteam195/network_tables_node
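Not ours, but for anyone who wants to try the idea before pulling in 195's node: a one-way bridge is only a few lines with rospy and pynetworktables. A minimal sketch, assuming ROS1 and the pynetworktables package; the topic name, table key, and server address below are purely illustrative:

```python
#!/usr/bin/env python
# Minimal one-way ROS -> NetworkTables bridge sketch. Assumes ROS1
# (rospy) and pynetworktables; topic/key/server below are illustrative.
import rospy
from std_msgs.msg import Float64
from networktables import NetworkTables

def main():
    rospy.init_node("nt_bridge")
    # Connect as an NT client to the roboRIO (10.TE.AM.2 addressing).
    NetworkTables.initialize(server="10.9.0.2")
    table = NetworkTables.getTable("ros")

    def on_msg(msg):
        # Mirror each ROS message into a NetworkTables entry.
        table.putNumber("target_angle", msg.data)

    rospy.Subscriber("/target_angle", Float64, on_msg)
    rospy.spin()

if __name__ == "__main__":
    main()
```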