I like to think that my team has a couple of “cool” features built into our code (XML Config, Goal Driven Autonomous, etc.). What are some of the coolest programming features you’ve seen integrated into FRC code for competitions?
Per-loop logging of data values
A RIO-hosted website displaying status data
Physical switches on a robot for selecting auto mode
File-based configuration data
RIO-MAC-address based configuration data swapping
CAN-controlled Arduino driving LED patterns
LEDs used to indicate robot state to driver (so they don’t have to take their eyes off the field to look at a display screen)
Dumping stack traces to disk on crashes for later analysis.
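The MAC-address config swap above can be sketched roughly like this. The MAC addresses and file names here are made-up examples, and `localMac()` just grabs the first interface that reports a hardware address:

```java
import java.net.NetworkInterface;
import java.util.Collections;
import java.util.Map;

// Hypothetical sketch: pick a constants file based on the RIO's MAC address,
// so the same code deploys to practice and competition bots unchanged.
public class RobotIdentity {
    // These MACs are invented examples, not real robot values.
    private static final Map<String, String> CONFIG_BY_MAC = Map.of(
        "00:80:2F:11:22:33", "competition.cfg",
        "00:80:2F:44:55:66", "practice.cfg");

    /** Returns the config file for a MAC string, defaulting to practice. */
    public static String configFor(String mac) {
        return CONFIG_BY_MAC.getOrDefault(mac.toUpperCase(), "practice.cfg");
    }

    /** Reads the first available interface's MAC, formatted as AA:BB:CC:... */
    public static String localMac() throws Exception {
        for (NetworkInterface nif
                : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            byte[] hw = nif.getHardwareAddress();
            if (hw == null) continue;
            StringBuilder sb = new StringBuilder();
            for (byte b : hw) sb.append(String.format("%02X:", b));
            return sb.substring(0, sb.length() - 1);
        }
        return "";
    }
}
```

Defaulting to the practice config on an unknown MAC is the safer failure mode, since untested constants on a competition bot are the expensive mistake.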
Note we have stolen most of these ideas for our own usage at one time or another.
But of course, all of these pale in comparison to field-relative positioning through non-linear state estimation.
+1, we started doing this my junior year (Stronghold, at my request) and have never looked back. The ability for the driver to just flip a switch and never have to worry about a comms issue is amazing. On top of that, we use the MXP so we don't waste any DIO ports on it!
How do you go about this? I’ve always had trouble with not being able to see how crashes happen…
Onto my own suggestions:
- Customizable input curves. It can be extremely helpful to let the drivers get a feel for the robot, IMO.
Java: Courtesy of our cheesy and poofy friends, see the guts of it in crashtracker.java.
Note also that the same throwable is rethrown after being logged to disk to ensure operation is transparent to the rest of the WPI system, and you do in fact reset the code with the “ROBOTS DON’T QUIT” message.
@Travis_Covington may wish to chime in, lest some stranger poorly explain his team’s code.
Python/C++: There should be an equivalent construct with try/catch
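The core of the pattern is small enough to sketch. This is a simplified illustration, not 254's actual crashtracker.java, and the log path is an assumption:

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.Instant;

// Sketch of the crash-tracker pattern: append the throwable's stack trace
// to a file on disk so it survives the code restart.
public class CrashTracker {
    private static final Path LOG = Path.of("crash_log.txt");

    /** Logs the throwable to disk; returns the entry that was written. */
    public static String logThrowableCrash(Throwable t) {
        StringWriter sw = new StringWriter();
        t.printStackTrace(new PrintWriter(sw));
        String entry = Instant.now() + "\n" + sw + "\n";
        try {
            Files.writeString(LOG, entry,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (Exception e) {
            // Never let the logger itself crash the robot code.
            e.printStackTrace();
        }
        return entry;
    }
}
```

In robot code you'd wrap the top-level run loop in `try { ... } catch (Throwable t) { CrashTracker.logThrowableCrash(t); throw t; }` so the throwable is logged and then rethrown, keeping operation transparent to the rest of the WPI system as described above.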
I second most of what @gerthworm listed above (the rest we just haven’t tried yet) and will add:
- An abort-all button for when things go crazy!
- An override button for when sensors fail and prevent drivers from completing an action
- Robot identify flag so code can use different constants for practice vs. competition bots (we’ve used both the resistor-on-DIO approach and the USB thumbdrive approach)
- Failed-sensor detection and bypassing (e.g., if your right drive encoder fails, ignore it and measure distance with only the left, and vice versa).
- Automated health checks (e.g., sending power to a motor but not reading any current draw? Something must be wrong, so raise an alert!)
- Automated self-tests (checklists are great–but even better when the ROBOT runs through them itself!)
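As a taste of the health-check idea, here's a minimal sketch. The thresholds are invented for illustration; real code would debounce over several loops and read current from the PDP or the motor controller:

```java
// Hypothetical motor health check: if we command meaningful output but the
// motor draws almost no current, something is likely unplugged or broken.
public class MotorHealth {
    // Illustrative thresholds, not tuned values.
    static final double MIN_COMMAND = 0.2;      // below this we can't judge
    static final double MIN_CURRENT_AMPS = 0.5; // a live motor draws more

    /** True when commanded output and measured current disagree. */
    public static boolean isSuspect(double commandedOutput, double measuredAmps) {
        return Math.abs(commandedOutput) > MIN_COMMAND
            && measuredAmps < MIN_CURRENT_AMPS;
    }
}
```

A check like this is also the natural building block for the automated self-tests mentioned above: command each mechanism briefly and assert that its sensors respond.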
Another +1 here!
In 2016, my old team (5428) was able to deploy code to the robot while it was enabled and being driven, and have the changes applied while the robot was in motion. The zero downtime it provided came in very handy during the development cycle of our software.
For an example of a well implemented LED signaling system, look at 254’s robot in 2017.
This isn’t really a feature, but I imagine that following the principles of TDD, especially as demonstrated in a video by 971, is quite useful.
LabVIEW doesn’t “crash” in the same way Java / C++ do. The equivalent here would be to do proper error wiring and on error write that to disk.
I hope this was just configuration values or something as this seems dangerous.
- Ramping on motors / driver controls if needed. It's a driver preference and ideally not needed, but I have seen many drivers benefit from it greatly, especially when it comes to tippy bots
- A button that slows the turning speed while held, for fine placements vs. 180-degree turns
- Current limiting so the robot can’t hurt itself
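The ramping idea reduces to a small slew-rate limiter. This is a minimal sketch (newer WPILib versions ship a SlewRateLimiter class along these lines; the rate constant here is just an illustrative number to tune per robot):

```java
// Minimal slew-rate limiter for driver inputs: caps how much the commanded
// value may change per control loop, which keeps tippy robots from snapping
// over on sudden stick movements.
public class SlewLimiter {
    private final double maxDeltaPerLoop;
    private double last = 0.0;

    public SlewLimiter(double maxDeltaPerLoop) {
        this.maxDeltaPerLoop = maxDeltaPerLoop;
    }

    /** Moves toward target, limited to maxDeltaPerLoop per call. */
    public double calculate(double target) {
        double delta = Math.max(-maxDeltaPerLoop,
                       Math.min(maxDeltaPerLoop, target - last));
        last += delta;
        return last;
    }
}
```

With a limit of 0.1 per 20 ms loop, a full stick slam still takes 200 ms to reach full output, which is usually imperceptible to the driver but enough to save a tall robot.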
Speaking of drive controls, 548 passes driver controls through an expo-deadzone function, which first applies the deadzone properly (so that 11% input with a 10% deadzone results in 1.1% output, not 11%) and then raises that value to a selectable power. The code is somewhat based on this article. The expo helps a lot with fine alignment.
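A sketch of that expo-deadzone shape (the names are mine, not 548's; the rescaling matches the 11%-in, 1.1%-out example above):

```java
// Expo-deadzone curve: rescale the input so the deadzone edge maps to zero
// output, then raise the result to a power for finer control near center.
public class DriveCurves {
    public static double expoDeadzone(double input, double deadzone,
                                      double exponent) {
        double mag = Math.abs(input);
        if (mag < deadzone) return 0.0;
        // Remap deadzone..1 onto 0..1, so 0.11 in with a 0.10 deadzone
        // becomes (0.11 - 0.10) / 0.90 = ~0.011, i.e. 1.1% out.
        double scaled = (mag - deadzone) / (1.0 - deadzone);
        return Math.copySign(Math.pow(scaled, exponent), input);
    }
}
```

An exponent of 1.0 gives a plain rescaled deadzone; values around 2 to 3 flatten the curve near center for fine alignment while keeping full output at full stick. The exponent is a driver preference.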
We wouldn’t use it mid-match, but it wasn’t just config values; it was entire functions and such.
254’s Scale height detection software from last year. They had a camera connected to their driver station that was pointed at the Scale and constantly sent the height of it to the robot, so their elevator always went to the right height.
It was written in C/C++. It worked by compiling into two executables: one acting as a hardware layer, the other purely functional with no direct access to the hardware layer. The hardware layer would load the functional layer dynamically and check the file date/time to determine when to reload the other executable. Since Gradle, compile times are at an all-time low, so there is no reason to do this anymore.
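The reload trigger reduces to a tiny helper, sketched here in Java for illustration even though the original was C/C++. In robot code you'd feed it the functional executable's `File.lastModified()` each loop; the actual dynamic loading (dlopen and friends) is out of scope here:

```java
// Sketch of the reload trigger described above: track the watched file's
// modification timestamp and flag exactly once when a newer build lands.
public class ReloadWatcher {
    private long lastSeen;

    public ReloadWatcher(long initialTimestamp) {
        this.lastSeen = initialTimestamp;
    }

    /** Returns true exactly once per new build of the watched file. */
    public boolean newBuildAvailable(long currentTimestamp) {
        if (currentTimestamp > lastSeen) {
            lastSeen = currentTimestamp;
            return true;
        }
        return false;
    }
}
```

Keeping the timestamp logic pure like this makes it easy to unit-test without touching the filesystem.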
Since we had two cameras on our robot (one on the hatch side and one on the cargo side), depending on what the operator does, it switches the camera so you can see what’s happening on the “active” side of the robot. This “active” side of the robot was also used when the driver changes to first person control. So grab a hatch, camera flips to the hatch side. Intake cargo, camera flips to the cargo side.
Our auto mode selector was probably one of the more complicated auto selectors used. The chooser worked reliably, but we ended up driving off the big screen instead of running auto. Maybe next year it will be more useful. Anyway, the auto chooser had six different choosers, each affecting the auto mode in a major way, and I even had to create a custom DynamicSendableChooser class so I could change the options when another chooser was changed.
Also, something useful to the driver was a warning with 7 seconds left, delivered via controller rumble. Yes, you have your coach for that, but being able to feel the countdown was very useful, so we weren’t on the platform for a long time at the end. Next year we’ll probably have more weight and time to spare to add an LED signalling system for the driver and human players.
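The rumble countdown boils down to something like this sketch. Keeping the decision pure makes it testable; in robot code you'd feed it `DriverStation.getMatchTime()` each loop and pass the result to `XboxController.setRumble(...)`. The 7-second threshold is from the post above:

```java
// Pure helper deciding rumble strength from remaining match time.
public class EndgameAlert {
    /** Full rumble during the last 7 seconds, otherwise off. */
    public static double rumbleFor(double secondsRemaining) {
        // DriverStation.getMatchTime() reports a negative value when no
        // match is running, which correctly maps to no rumble here.
        return (secondsRemaining > 0 && secondsRemaining <= 7.0) ? 1.0 : 0.0;
    }
}
```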
We added lights to the operator console and flash them, along with the robot lights, starting at -15s. The drive team thanked us for the impossible-to-miss climb reminder!
We had code that constantly displayed debug information by changing the pattern on our lighting strips. That way, when I got back to the pits after a match, I already knew almost everything that happened (battery levels, connection issues to and from the FMS and DriverStation, camera statuses, mode changes, and a ton of other stuff)
How many autonomous routines do you have? My team ends up with 30+ every year so how do you manage to organize the switches?
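One general way to fit 30+ routines on a few physical switches (a sketch of a common approach, not necessarily what any team above does) is to read a bank of toggle switches as a binary number, so N DIO ports select among 2^N routines:

```java
// Treat a bank of toggle switches as bits of an index: five switches on
// five DIO ports give 32 selectable auto routines.
public class AutoSelector {
    /** switches[0] is the least-significant bit. */
    public static int modeIndex(boolean[] switches) {
        int index = 0;
        for (int i = 0; i < switches.length; i++) {
            if (switches[i]) index |= (1 << i);
        }
        return index;
    }
}
```

The usual cost of this scheme is legibility in the pit, so teams often tape a lookup table of index-to-routine next to the switches.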
Something I found extremely helpful this year as a driver was being able to switch the robot's direction. Because of our main defense strategy, we had to use the back of our robot to avoid G20 violations (we have an open frame). We had a camera in the front and back, and whenever I pulled a trigger on the joystick, the camera and the direction of the robot would switch (direction meaning which side counted as the front).