Hi Connor,
There are two workflows worth talking about: our standard development workflow during the season, and the between-match emergency workflow at competition.
For the standard workflow, we follow a pretty standard process. We do basic in-person design reviews before people start writing code. These are pretty lightweight and mostly involve having the person talk through what they are going to do so we're sure we are on the same page. For our overall superstructure design this year, this involved whiteboard sketches and a design proposal, which really helped get the whole team moving in the same direction.
Once that happens, we do test-driven development. We have Gerrit and Jenkins set up to do code reviews and run our continuous integration tests, which include full simulations of every subsystem. We maintain support for all the robots that exist in the shop by continuing to run each robot's tests and keeping them up to date in our monorepo.
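To make the simulation testing a bit more concrete, here is a minimal sketch in Python of the kind of test that runs in CI. The names, plant model, and gains (SimulatedElevator, ElevatorController, etc.) are all made up for illustration and our real code is structured differently, but the pattern is the same: stand up a simulated plant, run the controller against it in a loop, and assert that it converges without blowing through any limits.

    import unittest


    class SimulatedElevator:
        """Stand-in physics for the real elevator: velocity integrates a
        voltage-driven acceleration, position integrates velocity."""

        def __init__(self, dt=0.005):
            self.dt = dt
            self.position = 0.0  # meters
            self.velocity = 0.0  # meters per second

        def update(self, voltage):
            # Hypothetical plant model; the real one would come from system ID.
            acceleration = 4.0 * voltage - 0.5 * self.velocity
            self.velocity += acceleration * self.dt
            self.position += self.velocity * self.dt


    class ElevatorController:
        """Toy PD controller standing in for the real subsystem logic."""

        def __init__(self, kp=30.0, kd=4.0, max_voltage=12.0):
            self.kp = kp
            self.kd = kd
            self.max_voltage = max_voltage

        def update(self, goal, position, velocity):
            voltage = self.kp * (goal - position) - self.kd * velocity
            return max(-self.max_voltage, min(self.max_voltage, voltage))


    class ElevatorTest(unittest.TestCase):
        def test_reaches_goal_without_leaving_travel(self):
            plant = SimulatedElevator()
            controller = ElevatorController()
            goal = 1.2  # meters

            # Run 3 simulated seconds at 200 Hz.
            for _ in range(600):
                voltage = controller.update(goal, plant.position, plant.velocity)
                plant.update(voltage)
                # The carriage must never leave its physical travel.
                self.assertGreaterEqual(plant.position, -0.05)
                self.assertLessEqual(plant.position, 1.5)

            # The subsystem should have settled on the goal.
            self.assertAlmostEqual(plant.position, goal, delta=0.01)
            self.assertAlmostEqual(plant.velocity, 0.0, delta=0.05)


    if __name__ == "__main__":
        unittest.main()

Because there is no hardware in the loop, a test like this runs in milliseconds, which is what makes it cheap to keep every robot's tests green on every change.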
Once that passes, we start testing on a real robot. From there, it is all about trying to make sure we've seen all the cases. We don't have great tests for the joystick interface or autonomous, so those have to get developed and tested on the real bot. It isn't too hard to build up a mental code-coverage model and make sure you've seen all the code paths run. For autonomous, this means 10 perfect runs in a row, minimum.
We haven't set up any official project management tools. My personal preference for projects this small is to keep it all in people's heads, and maybe keep a small spreadsheet. The project lead checks in with everyone daily or every other day, keeps track of the critical path, and re-allocates resources as needed. Bug tracking works the same way: the critical bugs get fixed as they get found, and we can only really remember the top 10 bugs, so anything else wasn't important enough. Some stuff slips through the cracks, but it largely works and doesn't consume a bunch of time. This only works if you have good people who work well together and a simple project.
At competition, we have to short-circuit a lot of this because everything needs to move fast.
We don't have much time between matches to make changes, so things tend to start with a small risk assessment. We are pretty much asking what could go wrong if we make the change, and what will happen if we don't. That tells us how thorough to be, or whether we should hold off until we have the time to test it properly. If it is a simple calibration change, we'll just do it, confirm from the logs that the cal change took (see the sketch below), and go. If it is a logic change, we have to think it through more. We'll likely do an informal code review on the spot or pair program it, and then either run the unit tests locally (rarely) or do a functional test to confirm. We won't push code and run it in a match without a functional test at a minimum.
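For the "confirm with logs" step, the check can be as simple as grepping for the new value, but the idea is easier to show as a small script. Everything here is hypothetical for illustration (the CONSTANTS log line, the file name, and the constant names); the point is just to diff what the robot says it actually loaded against what you meant to push.

    import json
    import sys

    # Hypothetical log format: the robot code prints one line on startup with
    # the constants it actually loaded, e.g.
    #   CONSTANTS {"shooter_rpm": 4750.0, "hood_angle": 0.62}
    # The names and values below are made up; fill in whatever you just changed.
    EXPECTED = {
        "shooter_rpm": 4750.0,
        "hood_angle": 0.62,
    }


    def cal_change_took(log_path):
        """Return True if the most recent CONSTANTS line matches EXPECTED."""
        loaded = None
        with open(log_path) as log:
            for line in log:
                if line.startswith("CONSTANTS "):
                    loaded = json.loads(line[len("CONSTANTS "):])
        if loaded is None:
            print("No CONSTANTS line found -- did the code actually restart?")
            return False
        ok = True
        for name, expected in EXPECTED.items():
            actual = loaded.get(name)
            if actual is None or abs(actual - expected) > 1e-6:
                print(f"{name}: expected {expected}, robot reports {actual}")
                ok = False
        return ok


    if __name__ == "__main__":
        # Usage: python3 check_cal.py /path/to/latest_robot.log
        sys.exit(0 if cal_change_took(sys.argv[1]) else 1)

A check like that takes seconds in the queue, which is the whole point: it catches the "forgot to restart the code" class of mistake without eating into the time you need for the riskier changes.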