Team 581 2024 Offseason Software Projects
This offseason we’ve been hard at work on software projects to improve the reliability of our robot. Here are three projects that we’re proud of and learned a lot from.
Note Map
This was written by me, one of our software students.
Overview
Note Map is a domain-specific language (DSL) that we made to create fully dynamic autos from a series of steps. The goal of Note Map was to explore creating dynamic autos quickly without the need to tune handmade paths.
Motivations
Our motivation for this project started right after the season ended. We drew inspiration from Team 1690, who talked about their note tracking methods in their virtual software presentations. We wanted to implement a similar approach, but with more automation. Eliminating the need to tune paths would enable us to create custom autos quickly, with minimal drawbacks.
Note Detection
One of the core features of our note tracking is the ability to track multiple notes at once. Using a Limelight 3 with a Google Coral, we do the following (a simplified sketch of this pipeline follows the list):
- Get the corners of all the notes we see
- Convert the corners into x and y angles of each note
- Calculate field relative pose from those angles
- Remove notes that are outside of the field or vertical against the wall
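Here’s a rough sketch of how the last two steps fit together. The constants, names, and sign conventions below are placeholders for illustration, not our actual implementation (that lives in the NoteMapManager linked under Resources):

import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Translation2d;

/** Illustrative sketch of turning detection angles into a field relative note position. */
public class NoteLocalizationSketch {
  // Placeholder camera mounting constants
  private static final double CAMERA_HEIGHT_METERS = 0.4;
  private static final double CAMERA_PITCH_RADIANS = Math.toRadians(-20.0);

  // Approximate 2024 field dimensions, used to reject impossible detections
  private static final double FIELD_LENGTH_METERS = 16.54;
  private static final double FIELD_WIDTH_METERS = 8.21;

  /** Convert a detection's tx/ty angles (degrees) into a field relative note position. */
  public static Translation2d noteFieldPosition(double txDegrees, double tyDegrees, Pose2d robotPose) {
    // Project the detection onto the floor using the camera height and pitch
    // (signs depend on how the camera is mounted, this just shows the shape of the math)
    double verticalAngle = CAMERA_PITCH_RADIANS + Math.toRadians(tyDegrees);
    double forwardMeters = CAMERA_HEIGHT_METERS / Math.tan(-verticalAngle);
    double sidewaysMeters = forwardMeters * Math.tan(Math.toRadians(txDegrees));

    // Robot relative -> field relative using the robot's estimated pose
    Translation2d robotRelative = new Translation2d(forwardMeters, -sidewaysMeters);
    return robotPose.getTranslation().plus(robotRelative.rotateBy(robotPose.getRotation()));
  }

  /** Reject notes that can't actually be flat on the carpet (outside the field or against a wall). */
  public static boolean isValidNote(Translation2d note) {
    double margin = 0.2; // placeholder tolerance
    return note.getX() > margin
        && note.getX() < FIELD_LENGTH_METERS - margin
        && note.getY() > margin
        && note.getY() < FIELD_WIDTH_METERS - margin;
  }
}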
Note Map is able to add notes to memory or update the position of known notes. Each note has a customizable expiry with a default of 10 seconds to get rid of stale data.
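A minimal sketch of that note memory, assuming a simple timestamp based expiry (the names and thresholds here are illustrative, not our exact code):

import edu.wpi.first.math.geometry.Translation2d;
import edu.wpi.first.wpilibj.Timer;
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch of remembering notes with an expiry. */
public class NoteMemorySketch {
  private record RememberedNote(Translation2d position, double expiresAt) {}

  private static final double DEFAULT_EXPIRY_SECONDS = 10.0;
  private static final double SAME_NOTE_THRESHOLD_METERS = 0.5; // placeholder

  private final List<RememberedNote> notes = new ArrayList<>();

  /** Add a newly seen note, or refresh the position of a note we already knew about. */
  public void addOrUpdate(Translation2d seenPosition) {
    double expiresAt = Timer.getFPGATimestamp() + DEFAULT_EXPIRY_SECONDS;
    // If this detection is close to a remembered note, treat it as the same note
    notes.removeIf(note -> note.position().getDistance(seenPosition) < SAME_NOTE_THRESHOLD_METERS);
    notes.add(new RememberedNote(seenPosition, expiresAt));
  }

  /** Drop stale notes so we never chase data that's too old to trust. */
  public void removeExpired() {
    double now = Timer.getFPGATimestamp();
    notes.removeIf(note -> note.expiresAt() < now);
  }
}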
Because Note Map remembers notes it’s seen, we needed a way to detect when a note is stolen. If the robot thinks it should be on a note but doesn’t detect one in the robot, timeouts let it continue to the next step after a tuned number of seconds. While this was robust, it took a significant amount of time to redirect the robot after a note was stolen. To address this, we continuously checked whether remembered notes were within the Limelight’s field of view. If we expected to see a note but didn’t, it was automatically removed. In theory, this lets us switch to the next note on the midline faster.
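Conceptually, that check looked something like the sketch below. The FOV, range, and match tolerance values are placeholders, and it assumes the camera faces the same direction as the robot’s heading:

import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Translation2d;
import java.util.List;

/** Illustrative sketch of removing remembered notes we expect to see but don't. */
public class ExpectedNoteCheckSketch {
  // Rough half of a Limelight 3 horizontal field of view, used as a placeholder
  private static final double HALF_FOV_DEGREES = 31.0;
  private static final double MAX_DETECTION_RANGE_METERS = 4.0; // placeholder
  private static final double MATCH_TOLERANCE_METERS = 0.5; // placeholder

  /** Returns true if a remembered note should currently be visible to the camera. */
  public static boolean shouldBeVisible(Translation2d note, Pose2d robotPose) {
    Translation2d robotToNote = note.minus(robotPose.getTranslation());
    // Bearing of the note relative to the robot's heading (camera assumed forward facing)
    double bearingDegrees = robotToNote.getAngle().minus(robotPose.getRotation()).getDegrees();
    return Math.abs(bearingDegrees) < HALF_FOV_DEGREES
        && robotToNote.getNorm() < MAX_DETECTION_RANGE_METERS;
  }

  /** Returns true if any current detection is close enough to count as this remembered note. */
  public static boolean matchesAnyDetection(Translation2d note, List<Translation2d> detections) {
    return detections.stream().anyMatch(seen -> seen.getDistance(note) < MATCH_TOLERANCE_METERS);
  }
}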
Unfortunately, it was difficult to make this reliable without sacrificing drive speed, which was a problem when making autos that race to the midline. The risk of a false positive was too high and we were out of tuning time, so this solution ended up being cut from Note Map.
Pathfinding
Since our goal was to make fully dynamic autos that required no pre-made paths, we realized we needed a way to avoid collisions with the stage while driving. Our initial solution was to use PathPlanner’s pathfinding functionality, which creates paths on the fly to avoid obstacles.
While PathPlanner’s pathfinding functionality was very robust, integrating it into our state-machine based code was a challenge since it’s tightly coupled with WPILib commands. We ended up creating our own pathfinding solution: drive toward the destination, but divert to a safe point first if a collision with the stage is detected. Once the robot detects there is no longer a risk of collision, it drives directly to the destination.
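In spirit, the logic is as simple as the sketch below. The stage keep-out circle and safe points here are placeholder values for illustration; the real implementation is in the repository linked under Resources:

import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Translation2d;
import java.util.Comparator;
import java.util.List;

/** Illustrative sketch of our collision-avoiding "drive to pose" logic. */
public class AvoidStageSketch {
  // Placeholder: center and radius of a keep-out circle around the stage
  private static final Translation2d STAGE_CENTER = new Translation2d(4.9, 4.1);
  private static final double STAGE_RADIUS_METERS = 1.6;

  // Placeholder safe points the robot can divert to when the direct line is blocked
  private static final List<Translation2d> SAFE_POINTS =
      List.of(new Translation2d(5.8, 6.6), new Translation2d(5.8, 1.6));

  /** Pick the point we should actually drive toward right now. */
  public static Translation2d chooseDriveTarget(Pose2d robotPose, Translation2d destination) {
    if (segmentHitsStage(robotPose.getTranslation(), destination)) {
      // Divert to the closest safe point first, then re-evaluate on the next loop
      return SAFE_POINTS.stream()
          .min(Comparator.comparingDouble(p -> p.getDistance(robotPose.getTranslation())))
          .orElse(destination);
    }
    return destination;
  }

  /** Rough check: does the straight line from start to end pass through the stage circle? */
  private static boolean segmentHitsStage(Translation2d start, Translation2d end) {
    Translation2d d = end.minus(start);
    double lengthSquared = d.getX() * d.getX() + d.getY() * d.getY();
    if (lengthSquared == 0) {
      return start.getDistance(STAGE_CENTER) < STAGE_RADIUS_METERS;
    }
    // Closest point on the segment to the stage center
    Translation2d toCenter = STAGE_CENTER.minus(start);
    double t = (toCenter.getX() * d.getX() + toCenter.getY() * d.getY()) / lengthSquared;
    t = Math.max(0, Math.min(1, t));
    Translation2d closest = start.plus(d.times(t));
    return closest.getDistance(STAGE_CENTER) < STAGE_RADIUS_METERS;
  }
}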
DSL For Defining Steps
Note Map executes a list of steps, each consisting of an action and one or more note IDs. Actions are either scoring a note in the speaker or dropping a note in front of the robot. We reference the preset auto notes with numerical IDs 1-8. Notes that we drop can later be referenced starting at ID 10, incrementing with every note that gets dropped.
Example 1:
steps.add(NoteMapStep.score(4));
steps.add(NoteMapStep.score(5, 6));
Explanation:
The first step grabs the amp side note on the midline (4) and then scores it in the speaker. The next step grabs note 5 and scores it, but specifies that if note 5 isn’t there, the robot should try getting note 6 instead.
Example 2:
steps.add(NoteMapStep.drop(4));
steps.add(NoteMapStep.drop(5));
steps.add(NoteMapStep.score(10));
steps.add(NoteMapStep.score(11));
Explanation:
The first step grabs the amp side note on the midline (4) and then drops it in front of the robot. Then, the next step grabs the next midline note (5) and then drops it as well. The third step grabs the note we dropped in the first step (now 10) and scores it in the speaker. Finally, the last step grabs the note we dropped in the second step (11) and scores it as well.
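Under the hood, each step boils down to an action plus a list of note IDs to try in priority order, roughly like this sketch (our real NoteMapStep has more to it, and these names are illustrative):

import java.util.Arrays;
import java.util.List;

/** Illustrative sketch of what a Note Map step boils down to. */
public class NoteMapStepSketch {
  enum Action {
    SCORE, // grab the note, then shoot it into the speaker
    DROP // grab the note, then drop it in front of the robot
  }

  /** An action plus the note IDs to try, in priority order. */
  record Step(Action action, List<Integer> noteIds) {}

  static Step score(Integer... noteIds) {
    return new Step(Action.SCORE, Arrays.asList(noteIds));
  }

  static Step drop(Integer... noteIds) {
    return new Step(Action.DROP, Arrays.asList(noteIds));
  }
}

Executing an auto is then just walking that list: attempt each ID in order, and move on when the action completes or times out.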
Examples From Madtown Throwdown
Source Side Red
Amp Side Race Blue
Conclusion
Over the course of seven months, we went through a lot of iteration, and even between Chezy Champs and Madtown Throwdown, we completely rewrote Note Map to make it more reliable and easier to debug. We learned that if we’re going to undertake a project like this, we should go in expecting it to be time consuming and to require significant development. However, it was exciting when we first saw the project work at Madtown after spending so much time working through different solutions. We learned a lot about pathfinding, game piece detection, and implementing complex automation, which we’ll be able to transfer to future seasons.
Resources
https://github.com/team581/2024-offseason-comp/blob/main/src/main/java/frc/robot/note_map_manager/NoteMapManager.java
Interpolated Vision
This was written by Owen, one of our software students.
This offseason, one of our goals was to improve the accuracy of our localization. We solved this with something we call Interpolated Vision, which works by transforming the MegaTag 2 pose from the Limelight using a mapping from recorded vision poses to measured on-field points. During field calibration, we place the robot at several known positions on the field (the “measured pose”) and record the pose output from the Limelight (the “vision pose”). These data points are stored in code and used to create a mapping from raw Limelight pose to measured field pose.
Field diagram to help us with our calibration
During matches, we compare each pose from the Limelight to the stored mappings and calculate a weight for each “vision pose”. The weights are based on how close each “vision pose” is to the Limelight’s output pose. Using the weights as scalars, we apply the calibrated “vision pose” to “measured pose” mappings, which results in a more accurate output pose. We used Interpolated Vision at all of our offseason events with Titan and Snoopy, and we were very satisfied with the improved reliability of our localization.
Interpolated Vision used in Q66 at Chezy Champs, green is raw pose from Limelight, red is interpolated pose
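The actual math is in InterpolationUtil.java linked below; as a rough sketch of the idea, here is one way to do it with inverse distance weighting (the calibration numbers and weighting scheme are illustrative, not necessarily exactly what we shipped):

import edu.wpi.first.math.geometry.Translation2d;
import java.util.List;

/** Illustrative sketch of interpolating a raw vision pose toward calibrated field measurements. */
public class InterpolatedVisionSketch {
  /** One calibration sample: where the Limelight said we were vs. where we actually were. */
  record CalibrationPoint(Translation2d visionPose, Translation2d measuredPose) {}

  // Placeholder calibration data collected by placing the robot at known field positions
  static final List<CalibrationPoint> CALIBRATION =
      List.of(
          new CalibrationPoint(new Translation2d(1.32, 5.54), new Translation2d(1.37, 5.55)),
          new CalibrationPoint(new Translation2d(2.95, 4.08), new Translation2d(3.00, 4.10)),
          new CalibrationPoint(new Translation2d(4.82, 6.71), new Translation2d(4.90, 6.68)));

  /** Shift the raw vision pose by a weighted blend of the calibrated offsets. */
  static Translation2d interpolate(Translation2d rawVisionPose) {
    double totalWeight = 0.0;
    Translation2d totalOffset = new Translation2d();

    for (CalibrationPoint point : CALIBRATION) {
      double distance = point.visionPose().getDistance(rawVisionPose);
      // Closer calibration points get a larger say in the correction (inverse square as an example)
      double weight = 1.0 / Math.max(distance * distance, 1e-6);
      Translation2d offset = point.measuredPose().minus(point.visionPose());
      totalOffset = totalOffset.plus(offset.times(weight));
      totalWeight += weight;
    }

    return rawVisionPose.plus(totalOffset.times(1.0 / totalWeight));
  }
}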
Resources
https://github.com/team581/2024-offseason-comp/blob/main/src/main/java/frc/robot/vision/interpolation/InterpolationUtil.java
Physics Based Shooting
This was written by Hector, one of our software students.
One goal for this robot was to make shooting more accurate and precise without spending too much time manually tuning shots. The feature we worked on for this was model based shooting, implemented in Python. The script estimates shot angles at different distances using kinematic math that accounts for gravity, aerodynamic drag, and shooter efficiency, generating every possible shot from a set of input distances. It searches through over 600 trajectories for the angle that would hit closest to the target, then generates an angle lookup table in Java for the robot to consume. During a vision shot, the robot lerps between the table’s distance-to-angle entries.
This started as mathematically calculating note trajectories with two main variables: launch angle and velocity. Although it seemed promising, our approach was refined throughout development. We kept part of the initial approach, such as simulating and calculating trajectories with physics math, but the logic changed: instead of solving an equation for the angle, we individually evaluated each trajectory and its distance from the target point.
In the end, this was not reliable enough to use in matches from all distances. It was mostly precise at mid-range, but at longer ranges it usually could not find a correct trajectory. This could have been improved by constraining the shots more, since launch velocity generation was ultimately taken out of the code due to our limited time. We partly used the generated outputs from the script, and manually confirmed and tuned that the shots would work. Given more time, we would have solved for launch velocity as well and improved our confidence enough for the robot to fully rely on this feature.
Diagram of the physical trajectory of each shot at various distances (in meters) from the speaker. The yellow dot is the target, and the red lines show the walls of the speaker.
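As a rough sketch of the search idea, here is a simplified version written in Java to match the other examples (the real script is the Python file linked below, and every constant here is a placeholder rather than one of our tuned values): simulate many launch angles with gravity and a simple drag term, then keep whichever trajectory passes closest to the target.

/** Illustrative sketch of searching launch angles for the one that lands closest to the target. */
public class ShotSearchSketch {
  // Placeholder constants, not our tuned values
  private static final double GRAVITY = 9.81; // m/s^2
  private static final double DRAG_COEFFICIENT = 0.4; // lumped drag term, per second
  private static final double LAUNCH_SPEED = 15.0; // m/s, after shooter efficiency losses
  private static final double TARGET_HEIGHT = 2.05; // m, roughly the speaker opening
  private static final double TIME_STEP = 0.005; // s

  /** Returns the best launch angle (degrees) for a given horizontal distance to the speaker. */
  public static double bestAngleDegrees(double distanceMeters) {
    double bestAngle = 45.0;
    double bestError = Double.MAX_VALUE;

    // Brute force: evaluate many candidate angles and keep the closest miss
    for (double angle = 10.0; angle <= 80.0; angle += 0.1) {
      double error = missDistance(distanceMeters, Math.toRadians(angle));
      if (error < bestError) {
        bestError = error;
        bestAngle = angle;
      }
    }
    return bestAngle;
  }

  /** Simulate one trajectory and return how far it is from the target height when it reaches the target distance (or falls short). */
  private static double missDistance(double targetX, double angleRadians) {
    double x = 0.0;
    double y = 0.0;
    double vx = LAUNCH_SPEED * Math.cos(angleRadians);
    double vy = LAUNCH_SPEED * Math.sin(angleRadians);

    while (x < targetX && y > -0.5) {
      // Simple Euler integration with gravity and a velocity proportional drag term
      vx -= DRAG_COEFFICIENT * vx * TIME_STEP;
      vy -= (GRAVITY + DRAG_COEFFICIENT * vy) * TIME_STEP;
      x += vx * TIME_STEP;
      y += vy * TIME_STEP;
    }
    return Math.abs(y - TARGET_HEIGHT);
  }
}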
Resources
https://github.com/team581/2024-offseason-comp/blob/main/src/main/python/modeling/used_classes.py
Feel free to ask any questions and we will answer them!