Quote:
Originally Posted by kylestach1678
My bad. You're right - downgrading to 0.2.2b fixed it. I suppose that's the downside to using a tool like bazel where breaking changes occur every few releases.
And that's why, for work, we upstreamed https://bazel-review.googlesource.com/#/c/2620/. Using that for 971 is on my list of things to do. That way, the version of Bazel used by a specific git commit is versioned with the rest of the code, which means you get to pick when you want to upgrade. Bazel is all about versioning all your dependencies (compilers, code, libraries, etc.) such that everyone gets a bitwise-identical result. Google does a very similar thing internally.
For others following along, Bazel builds a namespace sandbox for each build step and provides only the dependencies that were declared for that step. This makes it very hard (though not impossible) to write build rules that aren't deterministic and repeatable. I've built code on one laptop, deployed it with rsync, rebuilt the same commit on another laptop, and rsync reported there was no work to be done.
Quote:
Originally Posted by kylestach1678
Now that I've gotten to take a better look at the code, I'm amazed (as usual) at the sheer amount of testing code present. 1335 lines of code in superstructure_lib_test.cc! How are you able to get people to actually write automated test code? We've been trying to make unit testing a more important part of our development process, but this past year we weren't able to make that happen as much as we would have liked for two main reasons: a lack of programmer experience with tests and a general attitude that they are a chore to write. How do you go about training new programmers to write tests?
Thanks for open-sourcing all of this so quickly - I probably spend more time looking over 971's code every year than I should, but there's just so much to learn from it.
1335 lines means we've gotten better at being concise.

The real measure is the number of test cases, of which I think there are 29 in that file. That's more than we've had before. We write what I would call integration tests more than unit tests, though I've never really been happy with any of the definitions I've come across.
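To give a flavor of what I mean by integration tests, here's a minimal sketch in the gtest style, with made-up class names and constants (not our actual superstructure code): the test runs the whole control loop against a simulated plant and checks that the mechanism ends up where it was told to go.

```cpp
#include <algorithm>

#include "gtest/gtest.h"

// Hypothetical single-joint mechanism with a crude simulated plant.
// The real tests wire the production controller to a physics
// simulation and step both at the 5 ms loop rate.
class SimulatedArm {
 public:
  void set_goal(double goal) { goal_ = goal; }

  // One 5 ms control cycle: a proportional controller driving a
  // first-order plant stands in for the real dynamics.
  void Iterate() {
    double voltage = kP * (goal_ - position_);
    voltage = std::max(-12.0, std::min(12.0, voltage));
    position_ += voltage * kPlantGain * 0.005;
  }

  double position() const { return position_; }

 private:
  static constexpr double kP = 50.0;
  static constexpr double kPlantGain = 0.5;
  double goal_ = 0.0;
  double position_ = 0.0;
};

// Integration-style test: run the full loop for a simulated second
// and verify the mechanism actually converged to the goal.
TEST(ArmTest, ReachesGoal) {
  SimulatedArm arm;
  arm.set_goal(1.0);
  for (int i = 0; i < 200; ++i) {  // 200 cycles * 5 ms = 1 second.
    arm.Iterate();
  }
  EXPECT_NEAR(1.0, arm.position(), 0.01);
}
```

The point is that the test exercises the controller, the state machine, and the simulated sensors together, rather than one function at a time.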
We are really big on testing as a team, and I think that helps. I view the season as a success if I can get the students to believe that testing is, and should be, a required part of development. When new students come in, the older students and mentors focus on testing. Heck, that's one of the ways we get new students to start contributing: "Please write foo and add a test for it" isn't an unheard-of request, nor is "I think there may be a bug here, can you write a test to check?". Everyone writes tests (I probably write more than most), which keeps it from being a new-kid thing.

We have mentors with a lot of industry experience who help make sure the code is designed to be testable. We also look for tests as part of the code review process; I'm much more willing to merge a commit if it has tests. (There is a small subset of people who have permission to do the final submission for code that goes on the robot. This helps keep the code quality up and the robots functioning.)
A lot of that stems from us building complicated robots that require complicated software. It is hard, if not impossible, to write complicated software without good testing, and many of the bugs we've caught in testing would have destroyed our robot when we turned it on. We also start coding in week 3 and finish assembling and wiring with only a couple of days left in the build season, so good testing lets us start writing code before the hardware is available and exercise all the tricky corner cases that either rarely come up in real life or are really hard to judge for correctness on the real robot.

I won't let the robot be enabled until we have tests verifying that all the traditional problem spots in the code work (zeroing, saturation, etc.). The end result is pretty close to 100% line coverage, though we don't bother to measure it; testing is simply a required part of our robot code. Our intake this year took 0.35 seconds to move 90 degrees with 1/2 horsepower. If something goes wrong in that code, you can't disable it fast enough to save the robot. That's pretty scary, and it really gets you to think twice about code quality.
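As a concrete (hypothetical) example of the saturation checks I'm describing, a test like this pins down the soft-stop clamping before the robot is ever enabled; the names and limits are made up for illustration:

```cpp
#include <algorithm>

#include "gtest/gtest.h"

// Made-up soft stops for a 90 degree intake, in radians.
constexpr double kLowerLimit = 0.0;
constexpr double kUpperLimit = 1.57;

// Clamp a requested goal to the mechanical range.
double ClampGoal(double goal) {
  return std::max(kLowerLimit, std::min(kUpperLimit, goal));
}

// Saturation: a goal past either soft stop must be clamped so the
// profile never commands the mechanism into the frame.
TEST(SuperstructureTest, RespectsSoftStops) {
  EXPECT_EQ(kUpperLimit, ClampGoal(10.0));
  EXPECT_EQ(kLowerLimit, ClampGoal(-10.0));
  EXPECT_EQ(0.5, ClampGoal(0.5));
}
```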
Honestly, one of my favorite days of the build season is the day we run the code for the first time. There are normally a couple of tuning issues, but nothing really major, and the code tends to "just work". It is a very deliberate process: verify that all the sensors work, verify that the motors all spin the right direction, calibrate all the hard stops, soft stops, and zero points, and then slowly bring the subsystems up with increasing power limits. I like to make sure people come for the robot bringup, since it really shows off the power of good testing.
My experience is that resistance to writing tests comes from it being both hard to do and not strictly required for small projects. If you want to write good tests, you need to design your code so that you can do dependency injection where needed and expose the required state. That takes explicit design work up front. The problem is that for small projects, testing slows you down: you can run the 3 tests manually faster than you can write the test code. As the project size and complexity grow, testing starts to pay off. FRC robots are right at the edge of requiring tests.
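Here's a minimal sketch of the kind of dependency injection I mean, with made-up names: the subsystem reads its sensor through an interface instead of touching hardware directly, so a test can substitute a fake.

```cpp
#include <cmath>

#include "gtest/gtest.h"

// Sensor interface: production code implements it over the real
// encoder; tests implement it with a fake.
class PositionSensor {
 public:
  virtual ~PositionSensor() = default;
  virtual double position() = 0;
};

// The subsystem takes the sensor as a constructor argument, which is
// the injection point.
class Elevator {
 public:
  explicit Elevator(PositionSensor *sensor) : sensor_(sensor) {}
  bool AtGoal(double goal) const {
    return std::abs(sensor_->position() - goal) < 0.01;
  }

 private:
  PositionSensor *sensor_;
};

// A fake sensor the test can drive to any value it likes.
class FakeSensor : public PositionSensor {
 public:
  double position() override { return value; }
  double value = 0.0;
};

TEST(ElevatorTest, AtGoalUsesInjectedSensor) {
  FakeSensor sensor;
  Elevator elevator(&sensor);
  sensor.value = 1.0;
  EXPECT_TRUE(elevator.AtGoal(1.0));
  EXPECT_FALSE(elevator.AtGoal(2.0));
}
```

The constructor argument is the whole trick: production code passes in the real encoder, and the test passes in the fake, with no hardware in the loop.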
Integrating good testing into your team is as much a knowledge problem as a cultural problem. There's a classic story about a major web company that was unable to ship a release of their product for 6 months because they weren't sure it would work if they released it; they couldn't risk it. They fixed that by hiring people to turn the culture around and train everyone on how to test and why it matters. You need your mentors and leaders to buy in to automated testing and drive it, leading by example. Take your code from this year, figure out how to modify it to be testable, and then go write some tests for it.