2015 Code Release for 1540 & Our Experiences with Code Review

Team 1540’s source code, which ran on Quasar and our prototype bot Helios, is available on GitHub!

We ran our robot software department on a code-review workflow: every change was developed on a separate branch, submitted as a pull request, and code-reviewed before being merged. You can see the log of this on the project page - 107 pull requests!

This ended up working really well for us: it let us improve the quality of each other’s code, gave everyone rapid feedback, and helped team members become much better programmers over the course of the season.

Once the main season was over, the process proved slightly less helpful for changes made at events - we had to go back through everything that changed after each event - but it did at least keep most transient competition code out of the repository.

We had a robot software team of four, including myself. Using code review might work differently for a department of a different size.

You can take a look at an example code-review chain, which covers the initial version of the mecanum drive code for our robot. (Make sure to hit Show Outdated Diff wherever it shows up to see the old comments made during the process.) You can see how many times we went back and forth between suggesting changes and implementing them. If we hadn’t used a code-review setup, the improvements - if they had been made at all - would have been made by someone besides the author of the code, so the author wouldn’t have gotten a chance to learn.
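
(For context on what that code computes: the core of a mecanum drive is the standard wheel-mixing math. Here’s a minimal plain-Java sketch of that math - this is not our actual CCRE-based code, and the names are made up for illustration.)

    // Standard mecanum mixing: strafe (x), forward (y), and rotation inputs
    // combine into four wheel speeds, then get normalized into [-1, 1].
    public static double[] mixMecanum(double x, double y, double rotation) {
        double frontLeft = y + x + rotation;
        double frontRight = y - x - rotation;
        double backLeft = y - x + rotation;
        double backRight = y + x - rotation;
        double max = Math.max(1.0, Math.max(
                Math.max(Math.abs(frontLeft), Math.abs(frontRight)),
                Math.max(Math.abs(backLeft), Math.abs(backRight))));
        return new double[] {
                frontLeft / max, frontRight / max, backLeft / max, backRight / max };
    }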

Of course, sometimes the code reviewer is wrong too, in which case they also learn something.

Any questions on our code or on our development process?

(Also, for reference, our code uses the Common Chicken Runtime Engine, our dataflow-based robot code framework, which should explain some of the nonstandard-looking code.)
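
(If you haven’t seen dataflow-style code before: instead of polling values in a main loop, you wire inputs to outputs as channels, and values flow through the wiring as they change. The snippet below is a tiny illustration of that style in plain Java - it is not the actual CCRE API, and every name in it is invented.)

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.DoubleConsumer;

    public class DataflowSketch {
        // A minimal dataflow channel: consumers subscribe, producers push.
        static class FloatChannel {
            private final List<DoubleConsumer> listeners = new ArrayList<>();
            void send(DoubleConsumer listener) { listeners.add(listener); }
            void update(double value) { listeners.forEach(l -> l.accept(value)); }
        }

        public static void main(String[] args) {
            FloatChannel joystickAxis = new FloatChannel();
            // Wire the axis through a deadzone straight into a pretend motor;
            // there is no polling loop in user code, just this wiring.
            joystickAxis.send(v -> System.out.println(
                    "motor: " + (Math.abs(v) < 0.1 ? 0.0 : v)));
            joystickAxis.update(0.05); // prints "motor: 0.0"
            joystickAxis.update(0.50); // prints "motor: 0.5"
        }
    }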

I know it’s kind of late, but did your auto work well? Was it easy to program a new auto mode? What major challenges did you have programming your auto?

Our autonomous systems worked well: modes were easy to write. They took a while to tune, of course, but we’ve found that to be true regardless of how we’ve structured our autonomous code. We were able to make some parts of autonomous easier to tune by publishing values over Cluck, so that helped.
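
(To give a rough idea of the tuning setup: we published tunable values so they could be adjusted live while the robot ran, instead of recompiling for every tweak. Here’s a sketch of that pattern in plain Java - the Tuning class and its methods are hypothetical stand-ins, not the real Cluck API.)

    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical tuning registry: a network layer (Cluck, in our case)
    // would expose these entries for live editing from the driver station.
    public final class Tuning {
        private static final ConcurrentHashMap<String, Double> values =
                new ConcurrentHashMap<>();

        public static void publish(String name, double defaultValue) {
            values.putIfAbsent(name, defaultValue);
        }

        // Auto code rereads the value every run, so edits made over the
        // network take effect without redeploying code.
        public static double get(String name) {
            return values.getOrDefault(name, 0.0);
        }
    }

    // Usage: Tuning.publish("auto-drive-speed", 0.6);
    //        double speed = Tuning.get("auto-drive-speed");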

Take a look at this example auto mode that we used a lot: it picks up a can and turns to the driver station wall so that it’s ready for a noodle to be loaded into the can. It should be relatively self-explanatory, except for a couple of project-specific things: ‘waitUntilNot(Clamp.waitingForAutoCalibration);’ makes sure that the mode doesn’t trip over the part of the code that zeroes the encoders at the right position, and ‘startSetClampHeight’ is the same as ‘setClampHeight’ except that it doesn’t wait for the action to complete.
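
(Here’s a skeleton of what a mode like that looks like, to make the sequential style concrete. This is only a sketch: the base class, the drive helpers, and the height constants are stand-ins I made up, not our real code.)

    // Sketch of a sequential auto mode: grab a can, then turn to face the
    // driver station wall so a noodle can be loaded into it.
    public class GrabCanAuto extends AutonomousBase { // stand-in base class
        private static final float CAN_GRAB_HEIGHT = 0.0f;  // placeholder values
        private static final float CAN_CARRY_HEIGHT = 0.3f;

        @Override
        protected void runAutonomous() throws InterruptedException {
            // Wait for the encoder auto-calibration to finish zeroing the
            // clamp at its known position before commanding it anywhere.
            waitUntilNot(Clamp.waitingForAutoCalibration);

            startSetClampHeight(CAN_GRAB_HEIGHT); // start moving, don't block
            driveForward(12);                     // creep onto the can meanwhile

            setClampHeight(CAN_CARRY_HEIGHT);     // blocks until the can is lifted
            turnToAngle(180);                     // face the driver station wall
        }
    }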

For most of our modes, we didn’t have many challenges. We realized at some point that whatever stability PID gave us in auto was far outweighed by the value of more consistent movements, but that was easy to change. Beyond that, we had trouble getting enough accuracy while driving fast, which meant that our three-tote auto mode never got quite under the time limit. (Ignoring the time limit, it was actually relatively easy to write! The issues with driving fast were partly caused by our mecanum wheels and partly by fabrication variances.)
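
(Concretely, the PID change amounted to swapping a closed-loop move for a simple open-loop one, something like the sketch below. The hardware hooks here are hypothetical stand-ins for our framework code.)

    // Hypothetical hardware hooks; real code would reach the drive motors
    // and encoders through the framework.
    abstract class OpenLoopDrive {
        abstract void driveMotors(double power);
        abstract double readEncoderInches();

        // Fixed power until the encoders report enough distance: less "stable"
        // than chasing a PID setpoint, but the same motion every run, which is
        // much easier to tune an autonomous routine around.
        void driveDistance(double power, double inches) throws InterruptedException {
            double start = readEncoderInches();
            driveMotors(power);
            while (Math.abs(readEncoderInches() - start) < inches) {
                Thread.sleep(10); // poll at roughly 100 Hz
            }
            driveMotors(0);
        }
    }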

EDIT: one other thing that helped was multi-layered autonomous modes. Some of our modes would pick up totes. To do this, they would start the pseudo-autonomous autoloading mode (which was also used by the drivers), which would in turn start the pseudo-autonomous mode that automatically controls the elevator stacking sequence (also available to the drivers separately).
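
(In sketch form, the layering is just routines starting other routines, with each layer also bindable to a driver control - all names below are invented for illustration.)

    // Layered pseudo-autonomous routines: full auto modes and driver buttons
    // trigger the same building blocks, so the layers get tested in teleop too.
    public class LayeredAutoSketch {
        // Bottom layer: run the elevator stacking sequence.
        static void startStackingSequence() {
            System.out.println("stacking sequence started");
        }

        // Middle layer: autoload a tote, reusing the stacking sequence.
        static void startAutoloader() {
            System.out.println("autoloader started");
            startStackingSequence();
        }

        // Top layer: an autonomous mode that picks up totes just starts the
        // autoloader, exactly like a driver pressing the autoload button would.
        public static void main(String[] args) {
            startAutoloader();
        }
    }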