Team 581 Blazing Bulldogs 2024 Offseason Robots & Projects

Throughout the offseason, the team has been hard at work building new robots and working on training & new projects. Team members will be making posts to detail and share what we learned!

The first project we are proud to share is our 2024 New Member Robot, Slynky! This post was primarily written by new members with guidance from returning member Emi.

2024 NEW MEMBOT

The goal of the new member robot project is to give us new members the chance to create our own robot and experience competition roles first hand. We had the opportunity to compete with this robot at the MadTown Throwdown off-season competition to showcase the work we put in. The new members are proud to release the CAD and code for the 2024 New Member Robot, Slynky.

CAD link

Code link

This robot was based on the 9496 Lynk 2024 robot, which we paid homage to with the name S(lynk)y. This year, the primary focus for new members was to develop a solid foundation in:

  • Assembly work
  • Electrical work

9496’s simplicity gave the new members an approachable, well-rounded introduction into these practices while providing a very competitive base.

Across five weeks, new members worked to understand CAD assemblies and create wire routes across the robot. Students on the assembly team developed skills such as operating machines, understanding what makes a working roller, and knowing the different types and uses of fasteners. New members on the electrical team learned to crimp connectors and route wire while learning how the control system works as a whole.

New members also created a robot checklist for the pit crew to use at the event, as well as a one-pager showcasing the robot for the competition.

The software for this robot was written by a returning member as there were no new software members. We had less than a week of testing and tuning time with this robot before the event and used the time to optimize teleop behavior. The robot used a Limelight 3G to get its pose on the field, which we used for auto shooting into the speaker and feeding from any location.

9987 feeding in the bottom left

We drafted a 1-piece auto and a 4-piece auto, but did not have a chance to run them before the event. We iterated on the autos frequently on the practice field - by the end of Madtown, we were able to score 3 out of 4 pieces from the front.

^ Auto path for the 4-piece auto, in PathPlanner.

40 Likes

Sick robot, and an amazing run to watch at Madtown Throwdown.
I know I was personally shocked to be watching MTTD and seeing our robot running around

15 Likes

Team 581 2024 Offseason Software Projects

This offseason we’ve been hard at work on software projects to improve the reliability of our robot. Here are three projects that we’re proud of and learned a lot from.

Note Map

This was written by me, one of our software students.

Overview

Note Map is a domain-specific language (DSL) that we made to create fully dynamic autos from a series of steps. The goal of Note Map was to explore creating dynamic autos quickly, without the need to tune handmade paths.

Motivations

Our motivations for this project started right after the season ended. We drew inspiration from Team 1690, who talked about their note tracking methods in their virtual software presentations. We wanted to implement a similar approach, but with more automation. Eliminating the need to tune paths would enable us to create custom autos quickly, with minimal drawbacks.

Note Detection

One of the core features of our note tracking is being able to track multiple notes at once. Using a Limelight 3 with a Google Coral, we do the following (a sketch of the math follows this list):

  • Get the corners of all the notes we see
  • Convert the corners into x and y angles of each note
  • Calculate field relative pose from those angles
  • Remove notes that are outside of the field or vertical against the wall
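
To make the conversion concrete, here is a simplified sketch of turning a detection's x/y angles into a field-relative note position. It assumes a fixed camera height and pitch, ignores the camera's mounting offset and lens distortion, and every name and constant is illustrative rather than our actual code.

import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;

public class NoteDetectionMath {
  private static final double CAMERA_HEIGHT_METERS = 0.4; // example mounting height
  private static final double CAMERA_PITCH_RADIANS = Math.toRadians(-20.0); // pitched down

  /** Projects a detection's x/y angles onto the floor to get a field-relative note position. */
  public static Translation2d noteFieldPosition(
      Pose2d robotPose, double txRadians, double tyRadians) {
    // Notes sit on the floor, so the total vertical angle gives us the distance
    double totalPitch = CAMERA_PITCH_RADIANS + tyRadians;
    double distanceMeters = CAMERA_HEIGHT_METERS / Math.tan(-totalPitch);

    // Offset from the robot in robot-relative coordinates (tx is positive to the right)
    Translation2d robotRelative = new Translation2d(distanceMeters, new Rotation2d(-txRadians));

    // Rotate into field coordinates and add the robot's position
    return robotPose.getTranslation().plus(robotRelative.rotateBy(robotPose.getRotation()));
  }
}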

Note Map is able to add notes to memory or update the position of known notes. Each note has a customizable expiry with a default of 10 seconds to get rid of stale data.
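
As an illustration, a minimal version of that note memory could look like the sketch below. The matching threshold is a made-up value, and our real implementation lives in NoteMapManager (linked in Resources).

import edu.wpi.first.math.geometry.Translation2d;
import edu.wpi.first.wpilibj.Timer;
import java.util.ArrayList;
import java.util.List;

public class NoteMemory {
  private record RememberedNote(Translation2d position, double expiresAt) {}

  private static final double DEFAULT_EXPIRY_SECONDS = 10.0;
  private static final double SAME_NOTE_THRESHOLD_METERS = 0.5; // illustrative value

  private final List<RememberedNote> notes = new ArrayList<>();

  /** Adds a new note, or refreshes the position & expiry of a nearby known note. */
  public void addOrUpdate(Translation2d seen) {
    double expiresAt = Timer.getFPGATimestamp() + DEFAULT_EXPIRY_SECONDS;
    notes.removeIf(n -> n.position().getDistance(seen) < SAME_NOTE_THRESHOLD_METERS);
    notes.add(new RememberedNote(seen, expiresAt));
  }

  /** Drops any notes whose expiry has passed, getting rid of stale data. */
  public void removeStale() {
    double now = Timer.getFPGATimestamp();
    notes.removeIf(n -> n.expiresAt() < now);
  }
}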

Because Note Map remembers notes it’s seen, we needed a way to detect when a note is stolen. If the robot thought it should be on a note but didn’t detect one inside the robot, a timeout let it continue to the next step after a tuned number of seconds. While this was robust, it took a significant amount of time to redirect the robot after a note was stolen. To address this, we continuously checked if remembered notes were within the field of view of the Limelight. If we expected to see a note but didn’t, it would be automatically removed. In theory, this lets us switch to the next note on the midline faster.
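
A sketch of that field-of-view check might look like this; the half-FOV and range limit are illustrative numbers, not our tuned values.

import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;

public class NoteVisibilityCheck {
  private static final double HALF_HORIZONTAL_FOV_DEGREES = 31.0; // illustrative
  private static final double MAX_DETECTION_RANGE_METERS = 4.0;   // illustrative

  /** Returns true if the camera should currently be able to see this note. */
  public static boolean isInView(Pose2d cameraPose, Translation2d note) {
    Translation2d toNote = note.minus(cameraPose.getTranslation());
    if (toNote.getNorm() > MAX_DETECTION_RANGE_METERS) {
      return false;
    }
    // Bearing of the note relative to where the camera is pointing
    Rotation2d bearing = toNote.getAngle().minus(cameraPose.getRotation());
    return Math.abs(bearing.getDegrees()) < HALF_HORIZONTAL_FOV_DEGREES;
  }
}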

Unfortunately, it was difficult to make this reliable without sacrificing drive speed, which was a problem when making autos that race to the midline. The risk of a false positive was too high and we were out of tuning time, so this solution ended up being cut from Note Map.

Pathfinding

Since our goal was to make fully dynamic autos that required no pre-made paths, we realized we needed a way to avoid collisions with the stage while driving. Our initial solution was to use PathPlanner’s pathfinding functionality, which creates paths on the fly to avoid obstacles.

While PathPlanner’s pathfinding functionality was very robust, integrating it into our state-machine-based code was a challenge since it’s so tightly coupled with WPILib commands. We ended up creating our own pathfinding solution: drive to our destination, but divert to a safe point first if a collision with the stage was detected. Once the robot detected there was no danger of a collision, it would drive directly to the destination.
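
A minimal sketch of that divert-to-safe-point logic follows; the safe points and the collision check are placeholders rather than our real field geometry.

import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Translation2d;
import java.util.Comparator;
import java.util.List;

public class AvoidStagePathfinding {
  // Hand-picked points that are safely clear of the stage (illustrative coordinates)
  private static final List<Translation2d> SAFE_POINTS =
      List.of(new Translation2d(5.8, 1.5), new Translation2d(5.8, 6.7));

  /** Returns the point to drive toward: the goal if the straight line is clear,
   *  otherwise the nearest safe point. */
  public static Translation2d nextTarget(Pose2d robot, Translation2d goal) {
    if (!lineIntersectsStage(robot.getTranslation(), goal)) {
      return goal;
    }
    return SAFE_POINTS.stream()
        .min(Comparator.comparingDouble(p -> p.getDistance(robot.getTranslation())))
        .orElse(goal);
  }

  private static boolean lineIntersectsStage(Translation2d from, Translation2d to) {
    // Placeholder: real code would test the segment against the stage's bounding shape
    return false;
  }
}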

DSL For Defining Steps

Note Map executes a list of steps consisting of actions and note IDs. Actions consist of scoring a note in the speaker or dropping a note in front of the robot. We reference the preset auto notes with numerical IDs 1-8. Notes that we drop can later be referenced starting with ID 10, incrementing with every note that gets dropped.
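
To make the examples below concrete, here is a hypothetical shape for a step; the real type lives in NoteMapManager (linked in Resources) and may differ.

import java.util.List;

public record NoteMapStep(Action action, List<Integer> noteIds) {
  public enum Action {
    SCORE, // grab the note, then shoot it into the speaker
    DROP   // grab the note, then drop it in front of the robot
  }

  /** Score the first note in the list that is actually present. */
  public static NoteMapStep score(Integer... ids) {
    return new NoteMapStep(Action.SCORE, List.of(ids));
  }

  /** Drop the note; dropped notes get new IDs starting at 10. */
  public static NoteMapStep drop(Integer... ids) {
    return new NoteMapStep(Action.DROP, List.of(ids));
  }
}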

Example 1:

steps.add(NoteMapStep.score(4));
steps.add(NoteMapStep.score(5, 6));

Explanation:

The first step grabs the amp side note on the midline (4) and then scores it into the speaker. The next step grabs and scores note 5, but specifies that if note 5 isn’t there, to try note 6 instead.

Example 2:

steps.add(NoteMapStep.drop(4));
steps.add(NoteMapStep.drop(5));
steps.add(NoteMapStep.score(10));
steps.add(NoteMapStep.score(11));

Explanation:

The first step grabs the amp side note on the midline (4) and then drops it in front of the robot. Then, the next step grabs the next midline note (5) and then drops it as well. The third step grabs the note we dropped in the first step (now 10) and scores it in the speaker. Finally, the last step grabs the note we dropped in the second step (11) and scores it as well.

Examples From Madtown Throwdown

Source Side Red

Red Source 3 Video

Amp Side Race Blue

Blue Amp Race Video

Conclusion

Over the course of seven months, we went through a lot of iteration; even between Chezy Champs and Madtown Throwdown, we completely rewrote Note Map to make it more reliable and easier to debug. We learned that if we’re going to undertake a project like this, we should approach it with the expectation that it will be time consuming and require significant development. However, it was exciting when we first saw our project work at Madtown, since we had spent so much time working through different solutions. We learned a lot about pathfinding, game piece detection, and implementing complex automation, which we’ll be able to transfer to future seasons.

Resources

https://github.com/team581/2024-offseason-comp/blob/main/src/main/java/frc/robot/note_map_manager/NoteMapManager.java

Interpolated Vision

This was written by Owen, one of our software students.

This offseason, one of our goals was to improve the accuracy of our localization. We solved this with something we call Interpolated Vision, which works by transforming the MegaTag 2 pose from the Limelight using a mapping from vision poses to measured on-field points. During field calibration, we place the robot in several known positions on the field (the “measured pose”) and record the pose output from the Limelight (the “vision pose”). These data points are stored in code and used to create a mapping from raw Limelight pose to measured field pose.

Field diagram to help us with our calibration

During matches, we compare each pose from the Limelight to the stored mappings and calculate a weight for each “vision pose”. The weights are based on how close each “vision pose” is to the Limelight’s output pose. Using the weights as scalars, we apply the calibrated “vision pose” to “measured pose” mappings, which results in a more accurate output pose. We used Interpolated Vision at all of our offseason events with Titan and Snoopy, and we were very satisfied with the improved reliability of our localization.
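
As an illustration, here is a minimal sketch of one way to do that weighting. The inverse-square falloff and the names here are assumptions for the example; see InterpolationUtil in Resources for what we actually ship.

import edu.wpi.first.math.geometry.Translation2d;
import java.util.List;

public class InterpolatedVisionSketch {
  /** One calibration sample: what the Limelight reported vs. where the robot actually was. */
  public record DataPoint(Translation2d visionPose, Translation2d measuredPose) {}

  /** Applies a weighted average of the calibration offsets to a raw vision pose. */
  public static Translation2d interpolate(List<DataPoint> points, Translation2d raw) {
    if (points.isEmpty()) {
      return raw;
    }
    double totalWeight = 0.0;
    Translation2d weightedOffset = new Translation2d();
    for (DataPoint point : points) {
      // Closer calibration samples get much more influence (inverse-square assumed)
      double distance = point.visionPose().getDistance(raw);
      double weight = 1.0 / Math.max(distance * distance, 1e-6);
      Translation2d offset = point.measuredPose().minus(point.visionPose());
      weightedOffset = weightedOffset.plus(offset.times(weight));
      totalWeight += weight;
    }
    return raw.plus(weightedOffset.div(totalWeight));
  }
}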

Interpolated Vision Demo

Interpolated Vision used in Q66 at Chezy Champs; green is the raw pose from the Limelight, red is the interpolated pose

Resources

https://github.com/team581/2024-offseason-comp/blob/main/src/main/java/frc/robot/vision/interpolation/InterpolationUtil.java

Physics Based Shooting

This was written by Hector, one of our software students.

One goal for this robot was to make shooting more accurate and precise without spending too much time manually tuning shots. The feature we worked on for this was model-based shooting in Python. This feature estimated shot angles at different distances by using kinematic math, including gravity, aerodynamic drag, and shooter efficiency, to generate every possible shot from a set of input distances. It searched through over 600 trajectories for the angle that would hit closest to the target. The script then generated an angle lookup table in Java for the robot to consume, and the robot used the table values by lerping from distance to angle while doing a vision shot.

This started as mathematically calculating note trajectories with two main variables: launch angle and velocity. Although it seemed promising, our approach was refined throughout development. We kept part of our initial approach, such as simulating and calculating trajectories with physics math, but we changed the logical implementation: we individually evaluated each trajectory and its distance from the target point, as opposed to having an equation where the angle was solved for.

In the end, this was not reliable enough to use in matches from all distances. It was mostly precise at mid-range, but at longer ranges it usually could not find a correct trajectory. This could have been improved by constraining the shots more, as launch velocity generation was ultimately taken out of the code due to our limited time. We partly used the generated outputs from the script, and manually confirmed and tuned that the shots would work. Given more time, we would have solved for launch velocity and improved our confidence enough for the robot to fully rely on this feature.

Diagram of the physical trajectory of each shot at certain distances (in meters) from the speaker. The yellow dot is the target, and the red lines show the walls of the speaker.
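
Our script was written in Python, but to keep the code in this thread in one language, here is a Java-flavored sketch of the core search idea: simulate many launch angles and keep the one that lands closest to the target. The drag model, constants, and names are illustrative assumptions, not our actual model.

public class ShotAngleSearch {
  private static final double GRAVITY = 9.81; // m/s^2
  private static final double DT = 0.002;     // simulation timestep, seconds

  /** Simulates one shot and returns the note's height when it reaches the target distance. */
  static double heightAtDistance(
      double angleRad, double speed, double targetDistance, double dragCoeff) {
    double x = 0.0;
    double y = 0.0;
    double vx = speed * Math.cos(angleRad);
    double vy = speed * Math.sin(angleRad);
    for (double t = 0.0; x < targetDistance && y >= 0.0 && t < 5.0; t += DT) {
      double v = Math.hypot(vx, vy);
      // Quadratic drag opposes the velocity vector; gravity pulls down
      vx += -dragCoeff * v * vx * DT;
      vy += (-GRAVITY - dragCoeff * v * vy) * DT;
      x += vx * DT;
      y += vy * DT;
    }
    return y;
  }

  /** Searches launch angles for the one whose height at the target is closest to the goal. */
  static double bestAngleDegrees(
      double speed, double targetDistance, double targetHeight, double dragCoeff) {
    double bestDeg = 0.0;
    double bestError = Double.MAX_VALUE;
    for (double deg = 10.0; deg <= 70.0; deg += 0.1) { // ~600 candidate trajectories
      double height = heightAtDistance(Math.toRadians(deg), speed, targetDistance, dragCoeff);
      double error = Math.abs(height - targetHeight);
      if (error < bestError) {
        bestError = error;
        bestDeg = deg;
      }
    }
    return bestDeg;
  }
}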

Resources

https://github.com/team581/2024-offseason-comp/blob/main/src/main/python/modeling/used_classes.py

Feel free to ask any questions and we will answer them!

33 Likes

2024 Offseason Bot

This post was primarily written by design students Brian and Emi and software students Ryan and Fernanda.

Crescendo marks the third year that the returning members of our team have created an off-season robot. This project exists to hone our skills and gain experience trying new things. We are proud to release the CAD and code for our 2024 off-season robot: Snoopy.

Onshape 2024 Offseason Bot

GitHub Repo 2024 Offseason Bot

Marketing logos/Robot name

A tradition on our team is to name our robots after dogs. Team 2056 OP Robotics and their 2024 robot were a source of inspiration for this project, which we paid homage to in the name of our robot, Sno(op)y. We also made a logo for this robot which features the 2056 gear and maple leaf combined with a blazing bulldog.

Design Decisions:

We wanted to design and build archetypes that we did not focus on in-season, to enable returning students to train and familiarize themselves with a variety of different robots. This year we based the robot on Team 2056’s and studied other teams like 6329, 1591, and 294, all of whom fit within the same archetype. This allowed us to concentrate on a mechanically simple and well-rounded robot that easily adapted to our existing chassis & intake, which let us focus on designing new subassemblies. While we used 2056’s already-proven pivot geometry, we made their robot our own by designing it to suit our design style and reuse parts we had in stock: Snoopy’s shooter uses a single-sided belt instead of a double-sided belt, two large plates instead of multiple smaller ones, our comp bot flywheels instead of stealth wheels, and flex wheels instead of sushi rollers for the feeder. We also climbed using two hooks on our shooter arm, similar to Team 294, instead of integrating a pneumatic system into the robot.

Software:

General Robot

All of our subsystems have different states that exhibit different behaviors, which we tie together with a state machine. In our robot manager, we assign the robot state, and based on that we set different behaviors for each of our subsystems. For example, if the robot state is WAIT_FOR_AMPING, the robot manager sets the arm to go up, the swerve to snap to the amp, and the shooter to warm up, all at once, with each subsystem executing the underlying behavior. This pattern made writing, debugging, and implementing features and changes easier. An example of this was at MadTown Throwdown, when it took us only the time between qualification matches to fully implement a new state that pushed the arm up against the hard stop at a low voltage to prevent the note from flying out of the robot during the intake-to-queuer handoff.
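
Here is a rough sketch of that pattern; the states and subsystems are simplified stand-ins, and the real robot manager drives actual subsystem objects rather than plain fields.

public class RobotManagerSketch {
  enum RobotState { IDLE, WAIT_FOR_AMPING }
  enum ArmState { STOWED, AMP }
  enum ShooterState { OFF, WARMUP }

  private RobotState state = RobotState.IDLE;
  private ArmState armState = ArmState.STOWED;
  private ShooterState shooterState = ShooterState.OFF;
  private double swerveSnapDegrees = 0.0;

  public void setState(RobotState newState) {
    state = newState;
  }

  /** Called periodically: every subsystem's behavior follows from the one robot state. */
  public void periodic() {
    switch (state) {
      case WAIT_FOR_AMPING -> {
        armState = ArmState.AMP;            // arm goes up
        swerveSnapDegrees = 90.0;           // swerve snaps toward the amp (illustrative angle)
        shooterState = ShooterState.WARMUP; // shooter spins up early
      }
      case IDLE -> {
        armState = ArmState.STOWED;
        shooterState = ShooterState.OFF;
      }
    }
  }
}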


9988 playing cleanup in the blue amp area

Autos & Vision

We used Choreo for path creation and PathPlanner to follow autos. We collaborated with our strategy team to construct two autos we thought would be competitive in our limited time.

4-piece amp side auto that resembled 2056’s in-season routine:

4-piece source side auto that dropped the preload and circled back to intake and shoot it at the end:

We used two Limelight 3Gs for localization and generated interpolated poses to shoot more accurately in our amp area. The two main vision projects for this robot were Interpolated Vision and Intake Assist. We talk in more detail about Interpolated Vision in our software write-up post. Our other project, Intake Assist, was used during autos and teleop to assist with intaking nearby notes. It detected notes using the Limelight 3 with a Google Coral and adjusted our robot's translation vector within ±35° of the initial vector.
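
Here is a minimal sketch of that clamping idea, assuming the angle to the note is already available from the detector; the names are illustrative.

import edu.wpi.first.math.MathUtil;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;

public class IntakeAssist {
  private static final double MAX_ADJUSTMENT_DEGREES = 35.0;

  /** Rotates the driver's translation vector toward the note, limited to +/-35 degrees. */
  public static Translation2d adjust(Translation2d requested, Rotation2d angleToNote) {
    // How far the note is from the direction the driver is already pushing
    Rotation2d error = angleToNote.minus(requested.getAngle());
    double clampedDegrees =
        MathUtil.clamp(error.getDegrees(), -MAX_ADJUSTMENT_DEGREES, MAX_ADJUSTMENT_DEGREES);
    return requested.rotateBy(Rotation2d.fromDegrees(clampedDegrees));
  }
}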


9988 Running 4 piece amp side with intake assist

Pictures of us making the robot


Feel free to ask any questions and we will answer them!

15 Likes

This is a really interesting problem because the field definitely never matches the Limelight’s idealized tag locations, and the datums the field is built from (centerpoint & chalk lines & “how the alliance station fits the wall when it’s stood up”) are not the same datum that the Limelight/WPILib field pose measurements are based on.

I’d be really interested in a visualization of how the interpolated vision correction “smears” the LL outputs towards their real physical locations. Maybe taking a field image and then shifting the pixels? It’s hard for me to conceptualize where the reference 0 “should” be for that transformation.

Snoopy cooked, it was an honor to play with y’all at Madtown!

5 Likes

Here’s a side-by-side of our auto in Q30 of Madtown! The top shows our log from that match (orange = Note Map, red = robot), and the bottom is the video of that auto.

Q30 auto demo

8 Likes

This is cool.

2 Likes