Potential Git and CI integration

Many teams in the FIRST Robotics Competition use Git to manage their code, and some teams even test their code on Continuous Integration services such as Travis CI or Jenkins. With the growing popularity of Git in FRC and the good coding practices that come with it, I was wondering if there is any interest in a system that would work with the RoboRIO to manage code versions and make deployment a lot smoother.

The idea is simple: whenever code is committed, the CI service builds the artifacts and tests them. If all tests pass, the artifacts are uploaded to an external server where, upon update or boot, the RoboRIO will fetch the artifact, load it, and restart the program. Alternatively, the required development tools could be loaded onto the RoboRIO itself (javac/gradle or the FRC Toolchain), with the RoboRIO acting as a Git remote to push to. This could potentially streamline the build process and make management much easier to achieve.
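The fetch-on-boot step might look something like this sketch (all names here are hypothetical, not part of any existing tool): the RoboRIO compares a checksum published by the CI server against its cached artifact, and only downloads when they differ.

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    """Checksum used to decide whether the cached artifact is stale."""
    return hashlib.sha256(data).hexdigest()

def needs_update(cached_artifact: bytes, published_digest: str) -> bool:
    """True when the CI server's published digest differs from the cache."""
    return artifact_digest(cached_artifact) != published_digest
```

On boot, the RoboRIO would fetch only the small digest file, call `needs_update`, and download the full artifact (and restart the robot program) only when it returns True.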

For RobotPy, I believe no build system would be required as the sources could be loaded really easily, whereas for Java teams, the FRCUserProgram (or module if running Toast) would be built and copied to a temporary cache to be reloaded on code reboot. C++ teams would require the toolchain to be ported to the RIO.

Before any development is done, I’d like to ask whether there is any interest in the project and, if so, what features or concerns you have. A repo has been set up over at OpenRIO and is free for anyone to contribute to; however, feel free to post your thoughts here.

I look forward to the development of this project,
~ Jaci R


For RobotPy, no build system is required. We experimented with this during the offseason; one of the tests was whether we could put it under Travis CI easily (and thus automatically get testing status before merging). The code is available at https://github.com/FRC125/NUPY and the tests are at https://travis-ci.org/FRC125/NUPY

The big change to the .travis.yml file was telling it to use the right version of Python and install dependencies. The config file is at https://github.com/FRC125/NUPY/blob/master/.travis.yml
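For reference, a minimal Travis config for a Python robot project generally looks along these lines (this is an illustrative sketch, not the contents of the file linked above; the version number and commands are placeholders):

```yaml
language: python
python:
  - "3.4"                               # the interpreter version the robot code targets
install:
  - pip install -r requirements.txt     # pull in RobotPy/pyfrc and friends
script:
  - python -m pytest                    # run the off-robot test suite
```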

For an interpreted language such as Python, it’s super important to test your code before you upload it to the robot, as even a simple syntax error or a reference to a variable that doesn’t exist can leave your robot dead in the water. That’s why for years RobotPy has had first-class support for running your robot code without the robot – which makes it super simple to integrate with an online CI system like Travis CI.
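To make the failure mode concrete, here is a small illustrative snippet (invented for this post, not RobotPy code): the misspelled variable raises nothing at import time, so without a test it would only blow up mid-match.

```python
def autonomous_step(distance):
    target = distance * 2
    # Typo: 'targett' is undefined, but Python won't complain until
    # this line actually executes -- possibly on the field.
    return targett  # noqa: F821

def smoke_test():
    """The cheapest possible off-robot test: just run the code once."""
    try:
        autonomous_step(1.0)
        return "ok"
    except NameError:
        return "caught misspelled variable"
```

A CI service running even a trivial smoke test like this on every commit catches that whole class of bug before it ever reaches the RoboRIO.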

It would be nice if off-robot testing were a first-class concept for the other robot languages, and I think that improving that situation would have a bigger impact on teams. After all, not everyone has an extra RoboRIO lying around to run a CI system on… and really, you shouldn’t be running your robot on internet-connected networks!

Maybe next year after the season we can build up a proposal on how to get this into the official WPILib. I’ve seen firsthand how much a good simulator base helps test code and makes sure it always runs properly. It would be nice to potentially have this be a native library replacing the HAL, like we currently do in the unofficial languages, but I don’t really think FIRST would go for that solution.


I know 254 put together a FakeWPILib they were using last year, I’m currently working on getting that up and running locally. Hopefully after that it’ll be simple enough to get it running under CI. I’ll let you know if I make any progress. Obviously this still leaves C++ and LV teams.

I think LV has its own testing stuff you can use… but obviously, there are a lot of challenges running LV + CI on a commercial CI provider.

The Toast project I’ve been working on is designed to do just that: when run in a simulation environment, it replaces WPILib’s networking and HAL and presents the data in a GUI for the user to read instead, with hooks for modules to do what they like with the data if they want custom testing support.

In my signature you can see that the Toast project is fully supported under CI, and metrics regarding documentation and code quality are generated as well. Modules also have full CI testing support thanks to GradleRIO.

For C++, 971 puts the hardware interface code in a separate process and uses custom message passing to communicate the hardware state to the control loops, etc. We can then write the hardware interface code once, test it on the hardware, and then move on. We are able to then write unit tests which simulate the hardware and generate messages with simulated sensors and physics. This allows us to do all our controls development before the hardware has been finished.
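As an illustration of that split (a hedged sketch, not 971's actual code), the control loop below talks to the world only through message queues, so swapping the real hardware process for a simulated one is invisible to it:

```python
import threading
from queue import Queue

STEPS = 40  # fixed number of control cycles, for a deterministic test

def control_loop(sensor_q, command_q, setpoint):
    """Reads sensor messages, emits motor commands; never touches hardware."""
    kp = 0.5
    for _ in range(STEPS):
        position = sensor_q.get()
        command_q.put(kp * (setpoint - position))

class SimulatedHardware:
    """Stand-in for the separate hardware-interface process."""
    def __init__(self):
        self.position = 0.0

    def run(self, sensor_q, command_q):
        for _ in range(STEPS):
            sensor_q.put(self.position)       # publish a simulated sensor reading
            self.position += command_q.get()  # toy physics: command moves the mass

def simulate(setpoint=10.0):
    sensor_q, command_q = Queue(), Queue()
    hw = SimulatedHardware()
    loop = threading.Thread(target=control_loop,
                            args=(sensor_q, command_q, setpoint))
    loop.start()
    hw.run(sensor_q, command_q)
    loop.join()
    return hw.position
```

A unit test can then assert that `simulate()` converges on the setpoint; the identical `control_loop` would run unmodified against a real hardware process.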

That sounds overly complicated for the majority of teams. Cool… but it seems most teams are just slightly above the “can reliably hit the compile button” threshold.

But any chance this is published anywhere?


The good stuff is in frc971/control_loops/. It currently only builds on debian wheezy, amd64. One of these days, we’ll finish open-sourcing the 2015 code…

It is probably beyond what the majority of teams can do, but is useful for showing what is possible and helping inspire students to do better. I’ll never claim that our code is ‘simple’ or ‘easy’. :stuck_out_tongue:

If it wasn’t overly complicated for most teams, 971 wouldn’t do it :wink:

For anyone who’s interested in trying to compile this on Debian Jessie or a recent Ubuntu release, the first thing you’ll run into is that the libcloog version it’s looking for isn’t available in the repositories.

Probably. We made a conscious decision to support only one OS version to keep it easier to support. We are revamping our build system in the next ~month, and hope to release this year’s code building with the new build system under Jessie. This should also reduce our dependency on the host OS and make the build more repeatable. (If you like to geek out about build systems, check out bazel.io.)

Digging up this thread because I think it’s phenomenal.

I am the current FRCSim developer. As FRCSim develops, this should hopefully become much easier to do. Gazebo can be run headless on a server, and since all the communication between WPILib and Gazebo is done over Gazebo Transport, a message-passing system built on protobuf, it should be way easier to write tests.

How would you guys approach testing with FRCSim? Would you write JUnit tests that call your robot code? Or would you run your robot code as-is, and write a separate “supervisor” program that messages the robot code and listens for the appropriate effect in Gazebo? Both seem viable to me.

Testing code in the robotics realm is a two-way street. Personally, I think just running the code yourself in a simulator is good enough for FRC (but that’s a matter of personal opinion).

For a robotics program, this seems like the wrong way to do things. JUnit has its place, but I don’t think FRC is it. Unless your code (and your framework) are specifically designed for isolated testing, unit testing is not really suited to a robotics / embedded software environment. The main reason I say this is because of things like motion profiling, PID, and any kind of feedback: while it is possible to test them, the tests often don’t reflect actual operation, or do a very poor job of it. That being said, that’s probably just my opinion weighing in.

This seems more viable, but its usefulness would depend a lot on what Gazebo sends back. If you’re sending back things like motor speeds, or whether a port is on or off, that’s often quite useless from a testing perspective. Something more useful would be the displacement of the robot: values like that can be compared against what they should be, and could practically test things like PID loops and motion profiles.

I’m inclined to agree on this one.

I’ll push back on this one. I see no reason why you can’t tune a good robot model such that the same PID parameters work in simulation. It’s a bit of work, but people have done it with the Atlas robot, Ardupilot, and Baxter, which are all far more complicated and dynamic robots.

This I think is Gazebo’s strong point. You can get literally anything. You want to test that your auto drives forward 5 meters? Great, just get the world pose before and after. You want to test that your LaunchCatapult command works? Cool, put a ball in and check where it lands, or even just get the pose of the joint. You can also piggyback on any sensors your robot has, and check things like driveUntilWall by comparing the absolute distance between the robot and a wall against what your rangefinder says and where your robot stops.
That said, I still think even testing “is my motor spinning” is a huge improvement over what most teams do (no testing at all).
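The drive-forward check above might be sketched like this (everything here is a hypothetical stand-in; real code would query Gazebo over its transport layer):

```python
class FakeWorld:
    """Hypothetical stand-in for Gazebo world-pose queries."""
    def __init__(self):
        self.robot_x = 0.0

    def get_world_pose(self):
        return self.robot_x

    def run_autonomous(self):
        # Pretend the auto routine drove forward, with a little simulated error.
        self.robot_x += 5.02

def test_drive_forward(tolerance=0.1):
    """The pattern: record pose, run the command, compare displacement to intent."""
    world = FakeWorld()
    start = world.get_world_pose()
    world.run_autonomous()
    displacement = world.get_world_pose() - start
    return abs(displacement - 5.0) < tolerance
```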

I think there is real value in testing with a simulator. I don’t think it’s the right tool for typical unit tests, and I probably wouldn’t use it to tune PID constants either; the real value in the simulator is in being able to see all the pieces work together and interact with each other and the environment. Off the top of my head, I might put together an interface that looked something like this:

#include <map>
#include <string>

struct Object_status{
	// Position, orientation, speed, etc. go here.
	std::map<std::string,double> etc; // extra, in case the object has parts that move relative to the others
};

struct Object_setup{
	Object_status status;
	std::string path_to_model;
};

struct Robot_setup{
	Object_setup object;
	std::string path_to_executable;
};

using Name=std::string;

struct Initial_setup{
	std::map<Name,Object_setup> objects;
	std::map<Name,Robot_setup> robots;
};

using Status=std::map<Name,Object_status>;

typedef bool (*Done_callback)(Status);

Status run_sim(Initial_setup,Done_callback);

Absolutely. This is the huge value of an easy-to-use simulator: it allows our programming team to reliably create code before the robot is done and after it’s in the bag.

In Python, it’s far more important to do off-robot testing because it’s a dynamic language – there’s a large risk of crashing due to misspelled variables, etc. Because of this, we’ve had excellent test/simulation support for several years now. In the course of development of the pyfrc simulator/test harness, we have a set of ‘default’ tests that just do the following things:

  • Run autonomous mode (with support for running multiple modes)
  • Run teleop mode
  • Run a ‘full match’
  • Run a ‘full match’ with a monkey bashing on the controls randomly

And this meets the needs of most of our robot code – e.g., making sure it doesn’t crash due to a misspelled variable name.
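The ‘monkey’ test in that list is conceptually tiny; something like this sketch (illustrative only, not pyfrc’s implementation) is enough to shake out unhandled input-driven crashes:

```python
import random

class Robot:
    """Toy robot class, invented for illustration: the only point is that
    teleop() must never raise, whatever inputs it receives."""
    def __init__(self):
        self.speed = 0.0

    def teleop(self, joystick_y, trigger):
        # Clamp so out-of-range monkey input cannot propagate bad values.
        self.speed = max(-1.0, min(1.0, joystick_y))
        if trigger:
            self.speed = 0.0

def monkey_test(iterations=1000, seed=42):
    """Bash random control inputs into the robot; any exception fails the test."""
    rng = random.Random(seed)
    robot = Robot()
    for _ in range(iterations):
        robot.teleop(rng.uniform(-2.0, 2.0), rng.random() < 0.1)
    return True
```

Seeding the RNG keeps the bashing reproducible, so a failure found in CI can be replayed locally.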

However, we’ve found there is some value in junit-style tests – but usually we only go to the trouble of writing those when we’ve got a complex state machine and want to make sure it deals with the edge cases correctly.
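For example, a hypothetical shooter state machine (invented here for illustration) has exactly the kind of edge case worth pinning down with a unit test: a fire request before the wheel is at speed must be ignored rather than launching early.

```python
class Shooter:
    """Toy shooter state machine: IDLE -> SPOOLING -> READY -> IDLE."""
    IDLE, SPOOLING, READY = "idle", "spooling", "ready"

    def __init__(self):
        self.state = self.IDLE
        self.shots = 0

    def spin_up(self):
        if self.state == self.IDLE:
            self.state = self.SPOOLING

    def at_speed(self):
        if self.state == self.SPOOLING:
            self.state = self.READY

    def fire(self):
        # The edge case under test: firing is only honored when READY.
        if self.state == self.READY:
            self.shots += 1
            self.state = self.IDLE
```

A junit-style test would call `fire()` from each state and assert that only the READY path actually counts a shot.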

JUnit != unit testing. JUnit is a testing framework. It does a great job of giving you tools to check that expected results are correct, organize test cases, and report failures. You are welcome to use it only for unit tests, but it can be used for any type of testing where there is a need for its features. WPILib uses JUnit running on a roboRIO to make sure PID loops and sensors are all working.

Testing all depends on what you put in it. If you are going to try to mock out a robot subsystem to do unit testing, you are going to have a bad day. Figuring out by hand what a system should return is hard. If you write a simulated subsystem with simulated sensors to support more of an integration test, you can get some very powerful results. You can model almost all FRC subsystems as a motor attached to a mass.
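A minimal version of that ‘motor attached to a mass’ idea might look like the following sketch (all constants invented for illustration): the control code under test reads a simulated encoder and drives a simulated motor, giving an integration-style test with physical feedback rather than hand-computed mock values.

```python
class MotorMassSim:
    """Toy 'motor attached to a mass' model: force = kt * volts minus
    viscous drag, F = m*a. All constants are invented for illustration."""
    def __init__(self, mass=5.0, kt=20.0, drag=2.0, dt=0.02):
        self.mass, self.kt, self.drag, self.dt = mass, kt, drag, dt
        self.position = 0.0
        self.velocity = 0.0

    def step(self, volts):
        force = self.kt * volts - self.drag * self.velocity
        self.velocity += (force / self.mass) * self.dt
        self.position += self.velocity * self.dt

    def encoder(self):
        """The simulated sensor the subsystem code reads."""
        return self.position

def run_elevator_to(setpoint, steps=500):
    """Integration-style test: a PD controller drives the simulated encoder
    to the setpoint, with battery-voltage clamping like the real robot."""
    sim = MotorMassSim()
    kp, kd = 3.0, 2.0
    for _ in range(steps):
        error = setpoint - sim.encoder()
        volts = max(-12.0, min(12.0, kp * error - kd * sim.velocity))
        sim.step(volts)
    return sim.encoder()
```

The test then simply asserts that the final encoder reading lands on the setpoint, which exercises the controller, the sensor path, and the physics together.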

We do pretty exhaustive automated testing of all our control loops and motion profiles, all inside google-test in C++. Last time I counted, we had 20+ tests for our superstructure, and many more for things like motion profiles. If you can’t control a model of your robot pretty well, you have no hope of controlling the real thing. Unit tests are also very helpful for understanding whether smaller units of code are performing as expected. This enabled us to have all the software we wrote before the hardware was done this year up and running on the robot within 24 hours, with no modifications after calibrating pots and measuring limit locations. Our arm and flywheel controllers all worked pretty well and were stable, just from the quality of our simulations. That’s pretty powerful and well worth the investment.

Motion profiles are actually much easier to test than controllers. You can compute ideal trajectories and check that your generator successfully generates and follows them. You can also write tests making sure the profiles have the bounded acceleration and velocity characteristics you expect.
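Those bound checks are straightforward to automate. Here is a sketch (a naive online trapezoidal generator, invented for illustration, not 971’s code) together with the properties a test would assert:

```python
def trapezoid_profile(distance, max_vel, max_acc, dt=0.01):
    """Generate velocity samples for a simple trapezoidal motion profile.
    Naive online form: decelerate when the remaining distance is less than
    the stopping distance v^2 / (2*a); otherwise accelerate up to max_vel."""
    pos, vel, samples = 0.0, 0.0, []
    for _ in range(1_000_000):  # hard cap so a bug can't hang the test
        if pos >= distance:
            break
        stopping_dist = vel * vel / (2 * max_acc)
        if distance - pos <= stopping_dist:
            vel = max(vel - max_acc * dt, 0.0)
        else:
            vel = min(vel + max_acc * dt, max_vel)
        pos += vel * dt
        samples.append(vel)
    return samples
```

The test asserts the three bounded characteristics directly: no sample exceeds `max_vel`, no step-to-step change exceeds `max_acc * dt`, and the integrated distance matches the request.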

I believe that FRC should try to teach some of the best practices in industry to our students. Testing is a key part of developing and maintaining production software. I write both unit tests and integration tests both in industry (safety-critical vehicle automation) and in robotics, and it drastically helps code quality in both settings.