POBots 2024 Software
Man, it has been a while since I’ve written anything about our code from last year.
This will be a 2 am braindump write-up of most of our 2024 software.
Our Goals Going Into 2024
2023 was some of our best programming in a very long time. It was only our 3rd robot programmed in Java, and until then we had only scratched the surface of what can be done with FRC programming. Charged Up was the first season where we consistently scored multiple pieces in autonomous and had very little emergency hot-fix debugging in the pit between matches.
Of course, we were nowhere near the highest level we could reach.
Swerve
We built a swerve chassis, so obviously we had to program it… I know, crazy.
Fast Mechanisms
Watching Einstein in 2023 and the Behind the Bumpers videos afterward, I realized something about the software of all the top robots: they had almost no automation, they just did everything very fast. To run a lot of cycles, everything that involves movement on the robot has to happen as fast as possible. Only once everything was extremely fast could we look into automation (which is what we did, more on that later).
Autonomous
After ending our 2023 season with only one functioning auto (which broke toward the end of Hopper), our biggest priority was to be one of the teams that could start an autonomous routine from several spots and execute everything very fast.
We attempted to use PathPlanner for our autonomous routines in 2023; however, due to the center of rotation on our 2023 robot (a tank chassis), we were never able to get it working consistently.
Logs
So fun fact, if you mention logs to @austinp, he’ll get very excited, it’s like when you hold a tennis ball in front of a dog.
For 2023 we sorta had logs: we used the Shuffleboard record feature to capture all NetworkTables data during a match. This was surprisingly useful and we got a good amount out of it, but for reasons I don't want to get into, it was a horrible way of logging. For 2024, we needed proper DataLogs that could be opened in AdvantageScope.
Pre-Season 2024
Aside from training the new programmers, pre-season programming was almost entirely dedicated to swerve. I also started running some tests of trapezoidal motion profiling on the elevator of our 2023 robot; it didn't quite work, which turned out to be due to a math error.
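For anyone unfamiliar, here's a minimal sketch of the idea we were testing, using WPILib's TrapezoidProfile. The constraint values, gains, and elevator wiring here are placeholders for illustration, not our real numbers.

import edu.wpi.first.math.controller.PIDController;
import edu.wpi.first.math.trajectory.TrapezoidProfile;

// Sketch only: constraints and gains are made up for illustration
class ElevatorProfileSketch {
  private final TrapezoidProfile profile =
      new TrapezoidProfile(new TrapezoidProfile.Constraints(2.0, 4.0)); // m/s, m/s^2
  private final PIDController feedback = new PIDController(5.0, 0.0, 0.0);
  private TrapezoidProfile.State setpoint = new TrapezoidProfile.State();

  // Called every 20 ms: step the profile forward and run feedback on the
  // profiled position instead of jumping straight to the final goal
  double calculate(double measuredHeightMeters, double goalHeightMeters) {
    setpoint =
        profile.calculate(0.020, setpoint, new TrapezoidProfile.State(goalHeightMeters, 0.0));
    return feedback.calculate(measuredHeightMeters, setpoint.position);
  }
}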
One other feature under experimentation was AprilTag vision tracking fused with odometry. We ran odometry during 2023 mainly to test whether we could make use of it, but since we weren't doing any sort of path following, it never really got used.
Offseason 2023 (Half Hollow Hills Invitational)
The Half Hollow Hills Invitational (HHH) was the test of the 2 biggest additions we wanted to make: AprilTag tracking and data logging. Both were successful to an extent. Data logging worked incredibly well, and it was really cool to be able to visualize everything after a match.
Vision tracking didn't work so well. While a tag was in view, the measurements were pretty accurate, but the moment the camera lost it, the robot's pose would suddenly jump a few hundred meters off the field. I'm impressed that we were able to make a robot "drive" faster than a Formula 1 car with DRS.
After further research and some help from @anivanchen of Team 694 (StuyPulse), we determined the most probable cause of the jumping was setting our standard deviations too high. I also later learned that we should have been prioritizing accurate readings over high framerate; our resolution was too low.
Day 1: 2024 Build Season
The first day of build season was something I wanted to change. Normally it's just the boring stuff: updating everything, setting up the git repository, porting over old code. Instead, this whole process was streamlined. All the old-code porting and git repository setup was done over the weekend before we had access to the robot, and I quickly handled the firmware updates during the school day so we could get right into testing new features.
On the first day of build season we got dynamic path generation with PathPlanner working. That was also the only time we ever ran it, since we never ended up needing it.
The Writeup
If you scrolled all the way down this single-post megathread brain dump, here's probably what you were looking for. These are the highlights of our 2024 software.
Auto Code Formatting
The simplest change that made the most massive difference: Spotless formatting.
We quite literally just copied the gradle config from here: Using a Code Formatter — FIRST Robotics Competition documentation
Then we added project.compileJava.dependsOn(spotlessApply) so the formatter runs on every build.
Our code looks beautiful now.
Automated Pre-Match Checks
The pit was an area where programming could speed things up. During our pre-match checks there was a lot of repetition: "Drive forward", "Drive left", "Move the elevator up", etc. Instead of doing all of this manually, we wrote a class that allows for fake joystick inputs: 2024RobotCode/src/main/java/frc/lib/controllers/VirtualXboxController.java at main · POBots-353/2024RobotCode · GitHub
I wouldn't recommend faking joystick inputs: it's a very dangerous API to be messing with, and it also slows down your code a decent amount. I had to add a bunch of optimizations to reduce loop overruns.
This was paired with a customized subsystem class that builds sequential commands to run what are essentially unit tests, but on the robot: 2024RobotCode/src/main/java/frc/lib/subsystem/VirtualSubsystem.java at main · POBots-353/2024RobotCode · GitHub
The Alerts API was used to send the status of each system test to the dashboard.
This idea was partially inspired by Team 3015.
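To make the shape of these tests concrete, here's a minimal sketch of a drive check built as a sequential command. The Drive interface, its method names, and the thresholds are hypothetical stand-ins for our real VirtualSubsystem/VirtualXboxController API linked above.

import edu.wpi.first.wpilibj2.command.Command;
import edu.wpi.first.wpilibj2.command.Commands;

// Hypothetical subsystem surface, standing in for our real classes
interface Drive {
  void setFakeForwardInput(double value); // injects a fake joystick input

  double getAverageVelocity();

  void stop();

  void reportResult(String check, boolean passed); // e.g. raises an Alert
}

class DriveSystemCheck {
  static Command create(Drive drive) {
    return Commands.sequence(
            Commands.runOnce(() -> drive.setFakeForwardInput(0.5)),
            Commands.waitSeconds(1.0),
            Commands.runOnce(
                () -> drive.reportResult("Drive Forward", drive.getAverageVelocity() > 0.1)),
            Commands.runOnce(drive::stop))
        .withName("Drive System Check");
  }
}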
Autonomous
For autonomous, we used Choreo to plan our paths, which allowed us to score more notes than we ever could have imagined. We then used Choreo's event markers to run commands while driving the paths.
There’s not much else to it, the actual routines we were running came more down to strategy, which isn’t entirely in the scope of this post.
AprilTag Vision Tracking
After the relatively poor results at the HHH Invitational, we were careful about how we processed vision measurements. One lesson we learned was that one camera was just not enough; we needed two. The first was a Limelight 3, and the second was an Arducam OV9281 powered by a Khadas VIM4. Both coprocessors were running PhotonVision, although up until champs we were running Limelight OS on the LL3.
There were two steps to processing vision inputs: validation and calculation.
Validation
- Is the robot in autonomous mode? If so, reject.
- Is the average distance to the visible tags greater than a certain threshold? If so, reject.
- Does the vision measurement put the robot far below the ground or high in the air? If so, reject.
- Does the vision measurement put the robot off the field? If so, reject.
The more complicated rejection was the comparison of rotations. While updating our odometry, we kept a record of gyro measurements in a TimeInterpolatableBuffer, which essentially lets us look up angle measurements from the past. We took the timestamp of the vision measurement and sampled the gyro angle at that time; if the rotation of the vision reading was more than a certain angle off from the gyro angle, we rejected it. We were a bit more lenient when there were multiple tags in view, since those readings were a lot more accurate.
Here’s the logic for this step: 2024RobotCode/src/main/java/frc/robot/subsystems/Swerve.java at main · POBots-353/2024RobotCode · GitHub
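As a rough sketch of that check (with made-up thresholds; see the linked file for our real values):

import java.util.Optional;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.interpolation.TimeInterpolatableBuffer;

class VisionRotationCheck {
  // Keep ~1.5 seconds of gyro history so past angles can be looked up
  private final TimeInterpolatableBuffer<Rotation2d> gyroBuffer =
      TimeInterpolatableBuffer.createBuffer(1.5);

  void recordGyro(double timestampSeconds, Rotation2d angle) {
    gyroBuffer.addSample(timestampSeconds, angle);
  }

  boolean isValid(Pose2d visionPose, double timestampSeconds, int tagCount) {
    Optional<Rotation2d> gyroAtCapture = gyroBuffer.getSample(timestampSeconds);
    if (gyroAtCapture.isEmpty()) {
      return false; // no gyro history for this timestamp
    }
    double errorDegrees =
        Math.abs(visionPose.getRotation().minus(gyroAtCapture.get()).getDegrees());
    // More lenient with multiple tags in view, since those readings are
    // a lot more accurate
    double toleranceDegrees = (tagCount > 1) ? 30.0 : 10.0;
    return errorDegrees < toleranceDegrees;
  }
}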
Calculation
The next step after validating a vision reading was applying it to the drive Kalman filter. Our standard deviations were determined by how many tags were in view and our distance to them.
If more than one tag was in view, the standard deviation was calculated as 2.5 inches multiplied by the average tag distance squared, divided by 30.
If only one tag was in view, we used a polynomial regression to predict the standard deviation based on our distance to the tag. The underlying measurements came from recording vision data for about 30 seconds, then reading the standard deviations of x, y, and theta off AdvantageScope's statistics tab.
Here’s the logic for this step: 2024RobotCode/src/main/java/frc/robot/subsystems/Swerve.java at main · POBots-353/2024RobotCode · GitHub
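Here's roughly what that selection looked like. The multi-tag formula matches the description above, while the single-tag polynomial coefficients are placeholders for our real regression fit.

import edu.wpi.first.math.util.Units;

class VisionStdDevs {
  // Returns the x/y standard deviation (meters) for a vision measurement
  static double xyStdDevMeters(int tagCount, double avgDistanceMeters) {
    if (tagCount > 1) {
      // 2.5 inches * averageDistance^2 / 30
      return Units.inchesToMeters(2.5) * Math.pow(avgDistanceMeters, 2) / 30.0;
    }
    // Single tag: polynomial regression fit from logged measurements
    // (coefficients here are illustrative, not our real fit)
    return 0.01 + 0.02 * avgDistanceMeters + 0.05 * avgDistanceMeters * avgDistanceMeters;
  }
}

These values would then be handed to the pose estimator with something like VecBuilder.fill(xy, xy, thetaStdDev) when calling addVisionMeasurement().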
Our pose estimation worked great. The only thing that could have helped it more was additional cameras.
Logs
Log log log log, I want to log a room robot. A lot of logging libraries and techniques were used to allow effective debugging in the pit.
Firstly, we enabled DataLogManager to log both NetworkTables data and DriverStation joystick data, which was a must. Next, we added Monologue, which let us log structs easily without touching the raw NetworkTables API.
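Enabling the DataLogManager half of that is only a couple of lines in robotInit():

import edu.wpi.first.wpilibj.DataLogManager;
import edu.wpi.first.wpilibj.DriverStation;
import edu.wpi.first.wpilibj.TimedRobot;

public class Robot extends TimedRobot {
  @Override
  public void robotInit() {
    DataLogManager.start(); // writes all NetworkTables traffic to a .wpilog file
    DriverStation.startDataLog(DataLogManager.getLog()); // adds joystick/DS data
  }
}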
One other important aspect was the use of subtables in NetworkTables: instead of having a bunch of entries scattered all over the place like in past years, everything was organized by subsystem.
Here's an example of the updateTelemetry() method in our swerve module class:
private void updateTelemetry() {
  String telemetryKey = "Swerve/" + moduleName + "/";

  SmartDashboard.putNumber(telemetryKey + "Position", getPosition());
  SmartDashboard.putNumber(telemetryKey + "Velocity", getVelocity());
  SmartDashboard.putNumber(telemetryKey + "Angle", getAngle().getDegrees());
  SmartDashboard.putNumber(telemetryKey + "Absolute Angle", getAbsoluteAngle().getDegrees());
  SmartDashboard.putNumber(telemetryKey + "Desired Velocity", desiredState.speedMetersPerSecond);
  SmartDashboard.putNumber(telemetryKey + "Desired Angle", desiredState.angle.getDegrees());
  SmartDashboard.putNumber(
      telemetryKey + "Velocity Error", desiredState.speedMetersPerSecond - getVelocity());
  SmartDashboard.putNumber(
      telemetryKey + "Angle Error", desiredState.angle.minus(getAngle()).getDegrees());
  SmartDashboard.putNumber(telemetryKey + "Drive Temperature", driveMotor.getMotorTemperature());
  SmartDashboard.putNumber(telemetryKey + "Turn Temperature", turnMotor.getMotorTemperature());
  SmartDashboard.putNumber(telemetryKey + "Drive Applied Output", driveMotor.getAppliedOutput());
  SmartDashboard.putNumber(telemetryKey + "Turn Applied Output", turnMotor.getAppliedOutput());
  SmartDashboard.putNumber(telemetryKey + "Drive Output Current", driveMotor.getOutputCurrent());
  SmartDashboard.putNumber(telemetryKey + "Turn Output Current", turnMotor.getOutputCurrent());
  SmartDashboard.putBoolean(telemetryKey + "Open Loop", isOpenLoop);
  SmartDashboard.putBoolean(telemetryKey + "Allow Turn in Place", allowTurnInPlace);
  SmartDashboard.putBoolean(telemetryKey + "Characterizing", characterizing);
  SmartDashboard.putNumber(telemetryKey + "Characterization Volts", characterizationVolts);
}
The use of subtables allowed for easy navigation on the dashboard, while debugging, and in the logs.
There were several instances where logs were used after a match to troubleshoot issues. The most useful was when we randomly missed a very close shot that we should have made; after looking at the logs, we determined that we needed another data point measured at that distance. After that, we never missed a shot again (that's a lie, but you get the point).
Debugging
This is less a specific thing we did and more how we approached issues. In the past, our debugging was basically IT support: "power cycle the robot a few times" and "remove everything we've done and see if it works". While those can be effective in some scenarios, they aren't always. Instead, we took more of a "trust your instincts", "work from the source outwards" approach to figuring out issues.
Simulation
We added simulation. Over February break I got bored and simulated our robot, which is how we were able to test shooting on the move. We loaded our robot model into AdvantageScope's 3D field and used it to run practice cycles without the robot.
Auto Shooting
As I mentioned earlier, before automating everything we had to make the mechanisms as fast as possible. We accomplished this with a TrapezoidProfile on the arm and a well-tuned feedback system. Once this was solidified, it was time to have some fun.
First, we placed the robot right against the subwoofer and measured the shot angle. Then we moved back about 0.25 meters and measured again, went back another 0.25 meters and measured again, and repeated this a few more times until we had a good set of data points to work with. Once we had the data, we used WPILib's linear interpolation utilities to approximate the shot angle based on our distance from the speaker.
Here’s the logic for this: 2024RobotCode/src/main/java/frc/robot/commands/arm/AutoShoot.java at main · POBots-353/2024RobotCode · GitHub
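A minimal sketch of that lookup, assuming WPILib's InterpolatingDoubleTreeMap (the data points below are placeholders, not our measured values):

import edu.wpi.first.math.interpolation.InterpolatingDoubleTreeMap;

class ShotAngleTable {
  private final InterpolatingDoubleTreeMap table = new InterpolatingDoubleTreeMap();

  ShotAngleTable() {
    // distance to speaker (meters) -> measured arm angle (degrees)
    table.put(1.35, 55.0);
    table.put(1.60, 49.5);
    table.put(1.85, 45.5);
    table.put(2.10, 42.0);
  }

  double angleFor(double distanceMeters) {
    return table.get(distanceMeters); // linearly interpolates between points
  }
}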
Auto Shooting… while moving!
Our shooting-on-the-move code might be my favorite code I've ever written on an embedded system. The first step was measuring the shot times at several distances from the speaker. After that, it was some basic kinematics to predict the arm angle and shooter speed needed to make the shot.
Here’s the code for it: 2024RobotCode/src/main/java/frc/robot/commands/ShootWhileMoving.java at main · POBots-353/2024RobotCode · GitHub
A lot of this was taken from Team 1706’s 2022 code: Shoot While Move Code (1706)
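The core of that 1706-style math looks something like this sketch: offset the speaker by how far the robot will travel during the note's flight, iterating because the shot time depends on the distance to the (virtual) goal. shotTime() here is a placeholder for our interpolated shot-time measurements.

import edu.wpi.first.math.geometry.Translation2d;

class MovingShotMath {
  static Translation2d virtualGoal(
      Translation2d speaker, Translation2d robot, Translation2d fieldVelocity) {
    Translation2d goal = speaker;
    for (int i = 0; i < 5; i++) { // a few iterations is plenty to converge
      double time = shotTime(robot.getDistance(goal));
      goal = speaker.minus(fieldVelocity.times(time));
    }
    return goal;
  }

  static double shotTime(double distanceMeters) {
    return 0.2 + 0.05 * distanceMeters; // placeholder fit, not our real data
  }
}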
One other idea we tested that didn't end up working was a "lookahead" shot on the move. Essentially, instead of continuously tracking an angle that is constantly changing, we look at where the robot will be a certain period into the future and move to that angle. In theory this lets shots line up faster and allows for higher speeds. Unfortunately, due to some mechanical limitations with our arm, it didn't work, but it was a really interesting concept to try out.
Here’s the code for the lookahead: 2024RobotCode/src/main/java/frc/robot/commands/LookAheadSOTM.java at main · POBots-353/2024RobotCode · GitHub
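In sketch form, the difference is just predicting the robot's position a fixed time ahead before computing the shot angle (again with hypothetical names):

import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;

class LookaheadAim {
  static Rotation2d headingToSpeaker(
      Translation2d robot,
      Translation2d fieldVelocity,
      Translation2d speaker,
      double lookaheadSeconds) {
    // Aim for the angle the robot will need lookaheadSeconds from now, so
    // the target settles early instead of chasing the current pose
    Translation2d futureRobot = robot.plus(fieldVelocity.times(lookaheadSeconds));
    return speaker.minus(futureRobot).getAngle();
  }
}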
Other Small Additions
There are a lot of other small things that don't deserve an entire section, but I'll list them here since they were a big part of our software:
- Multi-threaded Odometry (inspired by Team 6328)
- Automatic Fault Logging Library (written by Team 1155)
- Network Alerts (written by Team 6328), which will be added to WPILib in 2025
What I Would Change
2024 was easily the best programming in our team's history. At the beginning of the year we were expecting to pick up maybe 3 notes in autonomous, not 5 with over 3 seconds to spare.
The only major regret is the way we tuned autonomous paths: we were mostly tuning the paths to fit our robot, with a lot of small waypoint shifting and max-speed overrides at different spots. While this certainly worked, it was not ideal. We had 3 untuned PID controllers running while path following, a poorly characterized system, and an untuned wheel radius.
If we were to do this season over again, we would have spent more time tuning our robot to follow any path on the first try.
Source Code
Since switching to Java, the POBots have kept their code open source, and that's something we plan to continue in future seasons.
Our full 2024 source code can be found here: GitHub - POBots-353/2024RobotCode: POBots code for the 2024 Crescendo season
My Life as an Alum
I figured since I was introduced as an alum and former head of programming (all of which is true), it would be appropriate to give an update on my whereabouts.
Firstly, I still keep in close touch with the FRC community. I'm not currently involved with any teams and don't plan to be for at least a year, since I need a break from FRC. I've been keeping up with Elastic and am also working on another FRC-related project which I'm very excited about.
I'm happy that I've been able to use my FRC experience in other fields of robotics. I joined the Mars Rover Team at my university, where every year we build a Mars rover (in a lot more than 6 weeks) and compete in the University Rover Challenge. Since there's no standardized set of hardware, nearly everything is custom-made. It's really interesting to see the lower end of the hardware stack and what's happening behind the scenes of your motor controllers and sensors.