What are some of the ways you guys test your FRC robot code? Do you use a simulation or wait until the robot is finished being built? Are there any good tools out there to test your FRC robot code so we don’t have to test our robot two days before bag and tag? We program in Java.
There are a few tips that come to mind.
First, don’t worry too much about this for the upcoming season, since there is no Stop Build Day. You’ll have until your first event to work with your competition robot, test, practice, make code changes, etc. So, fortunately, it’s no longer the high-pressure situation it was in the past.
Second, there are ways to test code as you go. Be willing to set up test boards for the various processes on the robot that need to be programmed. That means having a spare electronics board set up (and using the real one too, if it’s not installed yet) to run code and see what happens. This has the added benefit of keeping the electronics team busy (as they often are not, in the middle of the build period).
Third, we have found that it’s generally better to try code on actual hardware as much as possible. It’s one of the arguments for building a practice bot, even without bag-and-tag in effect. It’s certainly the reason that we make (and recycle) test boards and mechanism prototypes. It lets us build code alongside the mechanisms of the robot, both making the code more refined to task and giving the programmers more time, something they can always use.
Fourth, don’t be afraid to keep and reference past code (be sure to publish it, though). Especially with Java (we use it too), there’s never a good excuse for reinventing the wheel. About 7 or 8 times out of 10, you’ll find code written for more-or-less what you need and can adapt it easily. With many devices, it’s the manufacturer’s own code that will get you far (think Talon SRX, for instance). Never be afraid to go looking for an existing solution before trying to hash it out yourself from scratch.
Hope all this helps.
There are several things that teams can do to test robot code before gaining access to the competition robot, which is often only a few days before competition:
Develop a library over several years. We began to develop a library with reusable code for all of our robots. Through the 2018 offseason, we added code for common autonomous tasks such as trajectory tracking, etc.
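As a sketch of the kind of small, reusable utility that accumulates in such a library (the class name and the profile math below are illustrative, not taken from any team’s actual code), here is a trapezoidal velocity profile sampler of the sort trajectory-following code builds on:

```java
// Illustrative reusable library utility: sample a trapezoidal velocity
// profile (ramp up, cruise, ramp down). Assumes the move is long enough
// to actually reach cruise velocity.
public final class TrapezoidProfile {
    private final double maxVel;     // cruise velocity (m/s)
    private final double maxAccel;   // acceleration magnitude (m/s^2)
    private final double accelTime;  // time spent accelerating (s)
    private final double cruiseTime; // time spent at cruise velocity (s)

    public TrapezoidProfile(double maxVel, double maxAccel, double distance) {
        this.maxVel = maxVel;
        this.maxAccel = maxAccel;
        this.accelTime = maxVel / maxAccel;
        double accelDist = 0.5 * maxAccel * accelTime * accelTime;
        this.cruiseTime = (distance - 2.0 * accelDist) / maxVel;
    }

    // Profile velocity at time t: ramp up, cruise, ramp down, then stop.
    public double velocityAt(double t) {
        if (t < 0) return 0.0;
        if (t < accelTime) return maxAccel * t;
        if (t < accelTime + cruiseTime) return maxVel;
        double decel = maxVel - maxAccel * (t - accelTime - cruiseTime);
        return Math.max(0.0, decel);
    }
}
```

Because a class like this has no hardware dependencies, it can be reused year after year and verified entirely off-robot.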
Test your code on a previous year’s bot or test chassis. When the 2019 game was released, we used our library features to prototype some trajectories on a test chassis that was lying around. Seeing that pure trajectory tracking might not be accurate enough, we started to develop a system to follow trajectories with correction based on vision data. All of this code was tested on our 2018 competition robot – we strapped a camera and ring light onto last year’s elevator.
If you are developing control algorithms for any subsystem, try to write out the implementation in a separate non-robot project. This will allow you to debug and test the algorithm offline before you deploy it onto the robot. You can see how the system will respond to a particular set of inputs and will often find bugs in your implementation before causing any physical damage. Remember, physical damage means that the build team will take up more of your time!
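For instance (the gain and the toy flywheel model below are invented for illustration, not a real robot’s constants), a proportional controller can be exercised against a simulated plant in a plain desktop Java program, with no robot hardware or WPILib involved:

```java
// Offline test of a control algorithm: a simple P controller driving a
// made-up first-order flywheel model. Runs on any desktop JVM.
public class OfflinePidDemo {
    // Simulate `steps` iterations of a 20 ms control loop and return the
    // final flywheel velocity (RPM).
    static double simulate(double kP, double setpointRpm, int steps) {
        double velocity = 0.0;
        double dt = 0.02; // 20 ms loop period, like the roboRIO
        for (int i = 0; i < steps; i++) {
            double output = kP * (setpointRpm - velocity);        // controller
            velocity += (output * 500.0 - velocity * 0.1) * dt;   // toy plant
        }
        return velocity;
    }

    public static void main(String[] args) {
        // 5 seconds of simulated time: does the controller settle? overshoot?
        double v = simulate(0.02, 3000.0, 250);
        System.out.printf("velocity after 5 s: %.1f RPM%n", v);
    }
}
```

Plotting or printing the response at each step makes oscillation, steady-state error, and outright bugs visible long before the code ever commands a real motor.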
Hopefully some of these tips will help you develop and test software sooner in the season!
We use a test bench and usually connect just one or two motors at a time to test different functions. You can also use the debug feature in VS Code, which is very useful for checking whether code is functional.
This offseason we started to use SnobotSim and unit testing to verify code before we get it on a bot. Before that, we only did what some other people have mentioned: used test boards. Test boards (or even an old robot, if you’ve got the electronics) are absolutely worth the investment for your programming team, but if you haven’t got the resources to make that happen, doing all your testing purely in software is better than waiting until the last second with the competition bot before the bag (no bag in 2020, I know).
This is the biggest key to unit testing in my opinion, but it takes a bit of practice to make it feel natural rather than like going out of your way. The more code you write and optimize, the easier and more natural abstraction becomes; practice makes perfect.
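A minimal sketch of that kind of abstraction (the DriveIO and ArcadeDrive names here are made up for the example, not WPILib classes): put the hardware behind an interface, so the logic can run against a fake in a plain JVM unit test.

```java
// Hardware boundary: the only thing the drive logic knows about motors.
interface DriveIO {
    void setMotors(double left, double right);
}

// Pure logic: arcade-drive mixing, fully testable without a robot.
class ArcadeDrive {
    private final DriveIO io;

    ArcadeDrive(DriveIO io) {
        this.io = io;
    }

    // Mix throttle and turn into left/right outputs, clamped to [-1, 1].
    void drive(double throttle, double turn) {
        io.setMotors(clamp(throttle + turn), clamp(throttle - turn));
    }

    private static double clamp(double v) {
        return Math.max(-1.0, Math.min(1.0, v));
    }
}

// In a unit test, a fake DriveIO records outputs instead of moving motors.
class FakeDriveIO implements DriveIO {
    double left, right;

    public void setMotors(double l, double r) {
        left = l;
        right = r;
    }
}
```

On the robot, the real DriveIO implementation wraps the actual motor controllers; in a JUnit test, the fake just records what the logic commanded so you can assert on it.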
Don’t underestimate the usefulness of dev-board testing. Make a temporary setup zip-tied to a piece of plywood and run motors, check sensors, etc.
This especially applies to things like sensors you haven’t used before but are considering for the upcoming season. Write the code that interfaces with that sensor in the way you want it and keep it for the season.
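One way to keep such sensor code reusable (the class name and the calibration constants here are hypothetical) is to take the raw reading as a supplier, so the same class works on a test board, on the robot, or in a desktop test:

```java
import java.util.function.DoubleSupplier;

// Hypothetical wrapper for an analog distance sensor, written and tuned on
// a test board before the season. The calibration constants are placeholders.
public class DistanceSensor {
    private final DoubleSupplier volts;

    // Taking a voltage supplier keeps this testable off-robot: pass the real
    // analog input's getVoltage method on the robot, a lambda in a test.
    public DistanceSensor(DoubleSupplier volts) {
        this.volts = volts;
    }

    public double getDistanceMeters() {
        // Placeholder linear calibration: 0.1 V offset, 1.0 V per meter.
        return (volts.getAsDouble() - 0.1) / 1.0;
    }
}
```

Once the season starts, only the supplier wiring changes; the interface code you wrote in the offseason carries over untouched.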
For simply checking whether your code will run (NullPointerExceptions and uninitialized classes), we cross-compile to x86 and run our code on our computers. Last year, I had a CI pipeline that automatically did this for every branch on GitHub and every pull request. This year, we will also be using the “simulate robot code on desktop” feature of GradleRIO.
On top of that (this doesn’t affect the code), we have a CI pipeline to automatically build and publish documentation, and a CI pipeline to check for style issues (like someone not following our style guide, or excessive whitespace).