FRC 2713 - Red Hawk Robotics - 2023 Build Thread

Welcome to team 2713’s 2023 build thread.

Team 2713, Red Hawk Robotics, is based out of Melrose High School in Melrose, MA. We have approximately 35-40 students and 15 mentors. For machine access, we have a manual lathe, two drill presses, a band saw, a chop saw, a manual mill, several Ender 3 Pros (mostly stock config), and an X-Carve CNC (polycarb/wood only). We are lucky enough to have an incredible sheet metal sponsor, Churchill Corporation, that provides a huge amount of machining resources for our team.

During the offseason, we rebranded to Red Hawk Robotics to follow our school’s rebrand. We are still migrating some of our socials over, as a rebrand takes a lot of effort and things are not quite finalized.

We’ve had a very busy offseason:

  • Competed at 5 offseason events, playing a total of 43 additional matches
  • Completely reorganized the entire shop
  • Recycled multiple truckloads of old tools and materials
  • Dramatically increased fundraising efforts - our budget has so far doubled with room still to grow
  • Gave the pit a much needed facelift
  • Built a large custom pit cart for small parts organization
  • Completely overhauled our codebase structure in order to better support simulation, logging, and swerve.

We’ll have a couple posts about the pit cart and the software overhaul in the coming weeks.

We are cautiously optimistic about the upcoming season, where our goal is to qualify & attend the world championship in Houston, something the team has not done before. As such, our robot design and strategy will be focused around this.

We’ll be competing weeks 0, 1, and 4, at Week Zero, NE Southeast Mass, and NE Greater Boston.


Offseason Software Work

During the 2022 season, we achieved a huge amount of growth in our software department:

  • Reliably scoring auto routines
  • 4-5 ball auto routines (which sometimes even worked)
  • Integrating CV for the first time
  • Much more frequent usage of closed loop controls
  • Usage of WPILib Trajectory utilities

We were very proud of our growth, but we still ran into pain points and knew we had a lot of room to grow. We specifically had some frustrating moments regarding logging - there were a few matches in our second district event where our auto would seemingly randomly veer wildly off to one side. Yet during teleop, it seemed to drive totally fine. We figured it was the gyro - we had used a gyro we found in the shop, and hadn’t spent much time verifying it was in working order. We swapped it out - and still had the same problem. It wouldn’t happen every match, but it happened in a few, and it really hurt our confidence in our autos going into playoffs - I’m sure it didn’t inspire confidence from alliance captains, either. We found the issue during drive practice before DCMP - one of our Anderson crimps on a drive motor wire was a bit loose. We replaced it with inline Wago connectors and the problem went away.

All that is to say - logging would have found the problem rather quickly, probably after just one occurrence of it happening.

We are very lucky to be geographically close to, and on great terms with, team 6328. They have consistently inspired us and helped us take our software one step further, and we’d like to take a second to thank them. We were extremely impressed by the capabilities of their logging and software stack, so we copied it. Oh, and we also did swerve.

Swerve Code

We are proud to release the code for our offseason swerve: GitHub - FRC2713/Robot2022-v2

This software has been tested on MK4i modules, at L3 ratio with NEOs for steering and driving, using Thrifty Encoders connected directly to the roboRIO.


  • I/O Layers have been fully integrated into working code for the first time! As a result, it should be very simple for us in the future to switch between different types of hardware and simulation in code.

    • I/O layers were developed by 6328 - they have tutorial information on how they work here!
  • We’ve also finally begun simulating our robot using Glass and AdvantageScope (thanks again, 6328!). It took some effort to integrate simulation with the I/O, but it works! Now we can dedicate more time to building, since we can hope to catch more software errors by testing using the sim as well as 6328’s integrated logging tools.

  • We began using an enum-based approach to identifying SwerveModules to allow us to change the PID Controller of just one swerve module on the fly; if, say, the mechanical resistance of one module increases, gains for just that module could be increased while leaving the rest untouched.

  • We started automatically logging RevLibErrors into AdvantageScope, including a partial stack trace, so anyone analyzing the logs can see where and when these errors occurred. This is just one of the many things logging can help with.

  • We started using PathPlanner for our autonomous routines. We attempted to use it last year, but found that the robot wouldn’t obey our speed limits, which was, of course, a safety issue. We instead directly generated trajectories using the poses of significant field objects. This was a pain, particularly because it made changing paths very difficult (since we had to translate the points relative to their rotation - which was different depending on the object - rather than the field). Thanks to some recent changes to their API, however, the robot is consistently working with our constraints again, so we’ll be using it this year.

  • An enum-based approach to managing paths from PathPlanner was implemented which simplifies the task of switching between paths and getting their trajectories.

  • We created a motion handling class this year! All of its methods take in no parameters, and return a set of SwerveModuleStates for the robot to follow on that command loop. The swerve subsystem runs methods from the motion handler every loop based on an enum, which can then be controlled in other classes, for example using controller outputs. The current list of motion modes are as follows:

    • Trajectory - used whenever following a trajectory. It uses a TrajectoryController class we made, which can be set to particular trajectories, and which then samples the trajectory’s target pose every loop and returns the necessary ChassisSpeeds. The motion handler then converts the ChassisSpeeds into SwerveModuleStates, and returns them.
    • Full Drive - this is your standard swerve driving. Takes in the driver’s inputs, converts them into ChassisSpeeds, converts those into SwerveModuleStates, and returns them.
    • Heading Controller - Standard swerve driving, but with a twist; instead of controlling the speed of rotation directly, the driver controls the setpoint of a PID controller, which then calculates the speed of rotation.
    • Lockdown - this one’s pretty simple - just hardcodes an array of SwerveModuleStates which will stop the robot and put the wheels in a configuration which will make them difficult to push. Probably useful for defense.
    • It’s very easy to add motionModes using this model - just add an option to the enum, add a corresponding method which returns SwerveModuleStates, and change the switch case in the chassis subsystem to implement it.
  • We’ve also ventured into documentation for the first time - hopefully some of what we’ve written is a little clearer now, but we’re still open to any questions (and bugs) anyone comes across.
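
The motion-mode structure described above can be sketched as follows. This is a simplified stand-in, not our actual code: the real FULL_DRIVE and TRAJECTORY modes run full swerve kinematics on ChassisSpeeds, while this sketch only does pure translation, and ModuleState stands in for WPILib's SwerveModuleState. The enum-plus-switch shape is the point.

```java
public class MotionSketch {
    // stand-in for edu.wpi.first.math.kinematics.SwerveModuleState
    public record ModuleState(double speedMps, double angleDeg) {}

    public enum MotionMode { TRAJECTORY, FULL_DRIVE, HEADING_CONTROLLER, LOCKDOWN }

    public static MotionMode mode = MotionMode.FULL_DRIVE;

    // Simplified full drive: pure translation ("crab drive") - every module
    // gets the same speed and angle. The real method converts driver inputs
    // into ChassisSpeeds and runs them through the swerve kinematics.
    static ModuleState[] fullDrive(double vxMps, double vyMps) {
        double speed = Math.hypot(vxMps, vyMps);
        double angle = Math.toDegrees(Math.atan2(vyMps, vxMps));
        ModuleState s = new ModuleState(speed, angle);
        return new ModuleState[] { s, s, s, s };
    }

    // Lockdown: zero speed, wheels in an X so the robot is hard to push.
    static ModuleState[] lockdown() {
        return new ModuleState[] {
            new ModuleState(0, 45), new ModuleState(0, -45),
            new ModuleState(0, -45), new ModuleState(0, 45)
        };
    }

    // Called every command loop by the swerve subsystem. Adding a motion
    // mode means adding an enum entry, a method, and a case here.
    public static ModuleState[] update(double vxMps, double vyMps) {
        return switch (mode) {
            case FULL_DRIVE -> fullDrive(vxMps, vyMps);
            case LOCKDOWN -> lockdown();
            // TRAJECTORY and HEADING_CONTROLLER omitted in this sketch
            default -> lockdown();
        };
    }
}
```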

We’re not done yet, though - assuming we decide to continue using swerve during the main season, here are a few other things we want to look at:

  • Pathing on the fly, for example by pressing a single button to bring the robot to some game-relevant location
  • Updating odometry using CV, and just generally increasing its accuracy
  • Working out the kinks with event markers in PathPlanner, and potentially using these in our auto commands
  • Putting I/O layers on other subsystems
  • Expanding usage of the MotionHandler with game-relevant controls (maybe implementing the on-the-fly pathing mentioned above)
  • Adding Robot.goSlow() for increased precision

I’ve been pretty fixated on differentials for the past few weeks, so when the game was revealed, we tried very hard to make a differential elevator work for the game and the team. Unfortunately, it didn’t work out for us, but maybe it’ll work out for you:


Code: GitHub - FRC2713/Robot2023

CAD: Onshape

We’ll have more posts coming tomorrow (cart) and Sunday (week 1).


General Requirements Regarding Pit + Shop Improvements:

After finishing our competition season, we realized that, in order to actualize our robot-centric goals, we must also make pit and shop improvements. With this in mind, we outlined a few requirements:

  • Moveable Pit: Not only from truck-to-venue-to-truck, but also easily moved during DCMP/CMP playoffs.
    • Additional benefit if NE FIRST keeps CMP pit + robot “shipping service.”
  • Integrated Pit + Shop: Team members becoming accustomed to working out of the pit during the season will improve efficiency during competition.
    • Removes the need to pack before and unpack after each competition.
    • Lower risk of “forgetting to pack” any items; can prove detrimental with non-COTS items.
  • Better Organized Small Parts + Tool Storage: To optimize pit space, we decided on using two carts: a COTS tool cart (linked here), and a custom small parts storage cart (widely influenced by this video), which will be the focus of this post.
    • Prioritizing single-sided storage containers over double-sided storage containers.
    • Custom cart gives freedom of choice with regard to which storage containers we use (w/ the additional understanding that the game, and with it the parts we use, will continue to change every year).
    • Storage that’s user friendly; NOT:
      • Bins stacked directly on top of each other.
      • Boxes inside of boxes, inside of boxes (or, in our case, FIRST-provided totes).
      • Reliance on event-provided tables.
      • Tabletop workspaces becoming a method of storage; i.e. they are not available as workspaces.

Custom Cart:

The discussion soon split between two options: a separate Custom Cart and Battery Cart, or a Custom Cart with batteries integrated. Although we have seen teams integrate battery storage into COTS carts, we felt this could be a safety hazard, e.g. high school students pushing a 750 lb cart up a ramp.

We chose this Harbor Freight (HF) bin for storing small parts. It contains variably sized inner containers, and is cheaper and more widely available than other alternatives. Since the HF bins aren’t deep enough to hold larger parts (such as COTS swerve module parts…), we also considered several sizes of Sterilite bin for the Custom Cart (links: 6 qt, 15 qt, 27 qt) (thanks to 2363 for the recommendations). We chose not to use the 27 qt Sterilite bins because we felt they were not as space efficient as the 6 qt and 15 qt bins.

We then designed 6 layout variations for the cart in CAD (see v.4 for final version).

Layouts with Integrated Battery & Charger Storage

HF Bins + 6 qt Sterilite Bins

HF Bins + 15 qt Sterilite Bins

HF Bins + 6 qt / 15 qt Sterilite Bins

Layouts without Integrated Battery Storage & Chargers

HF Bins + 6 qt Sterilite Bins

HF Bins + 15 qt Sterilite Bins

HF Bins + 6 qt / 15 qt Sterilite Bins

Final Design with Removable & Standardized Battery Bays:

HF Bins + 6 qt / 15 qt Sterilite Bins:

HF, 6 qt, + 15 qt sterilite bins: (FINAL DESIGN); 2x1 Aluminum L Bracket Shelves

Front view, “pit/shop mode”

Other Perspectives

Isometric view, “transport mode”

Back view, “pit/shop mode”

To maximize storage efficiency and the longevity of the cart, we chose the version with batteries, HF bins, and 6 qt and 15 qt Sterilite bins. We used this design as a base, keeping small parts storage the same and changing battery storage. We made several changes to the cart’s design – notably, standardizing not only the Battery Bays (i.e. all removable, all the same size, etc.) but also the shelves for the 6 qt Sterilite bins (see above). We also dropped the right-most Battery Bay design, which simplified the cart and only required substituting a 15 qt Sterilite bin for a 6 qt bin. The design’s specifications are as follows:

We prioritized the Battery Bays being removable. This ensures easier transportation to non-competition events (i.e., bringing batteries to drive practice, a demo, etc.) as well as simplifying the overall structure – the CAD model and the actual building process – of the cart. There are two versions of the cart with removable Battery Bays; we chose the second version (see above) because of its simplicity. We used dado joints to help further stabilize the cart, especially since it was horizontally divided by a single piece of plywood. Instead of wooden shelves, we chose to use 2x1 (x18”) aluminum L brackets provided by our sheet-metal sponsor, Churchill Corp. This change will simplify the design and allow us to fit more storage containers. We allocated space behind the Battery Bays for a power strip, and used this mounted outlet to easily connect the power strip to external power at competition and in our lab, and avoid dealing with a cable going through external panels.

Building the Cart:

After finalizing the cart’s design in CAD, we began the process of manufacturing it. We are extremely grateful to Boulter Plywood, a local plywood supplier, for providing us with high-quality Baltic Birch plywood at a discounted price. Additionally, we’re grateful to our school’s tech-ed teacher for lending both his expertise and his classroom’s woodworking shop to this project. We used a table saw to cut the individual pieces of plywood, then started routing the smaller dado joints using this Harbor Freight router. All dados are ¼” deep and ¾” wide. We made and re-used the same router jig (pictures below) for all smaller pieces, adjusting the dimensions accordingly. Larger dados — or those on larger pieces (i.e., the top of the cart) — were cut using a mentor’s table saw with a dado blade. On the sides of each Battery Bay, we drilled two circular holes, then used a jigsaw to connect them into handles.


Router jig for dado joints

Transporting plywood from the tech-ed shop to our shop

We then began constructing the Battery Bays and the cart, assembling the cart without wood glue as a test fit. The Battery Bays are all completely “screwed-and-glued,” but we want to test the functionality of the cart before disassembling and gluing it. We installed all of the shelves in place, using different small jigs for spacing. We plan to continue working on the cart during build season — hopefully it will be ready for use by our first District Event, SE Mass in Week 1. We will update the thread with pictures and progress.



Recap Week 1 of 2023 Build Season


An Update

We’ve been slacking, and we apologize. We’re a rapidly changing team that is currently maxing out our bandwidth on the robot.



We settled on swerve rather quickly, on day 2 of build. Looking at the field, the only argument against swerve we could find was the cable protector, which we don’t believe will be detrimental. Given how cramped the line-up process is in the Community, on top of how tight the human player station area is, swerve was an easy call, using the MK4i modules we already had on hand.

We evaluated the two module configs for the MK4i - standard and high-clearance. Due to the way we’re mounting the elevator above the chassis, we aren’t able to lift the modules up and out of the chassis, as the elevator prevents it. Because of this, we opted for the high-clearance config, which allows us to drop the modules out the bottom of the chassis without interference.

We wanted to go as small as possible without jeopardizing our center of gravity - 24" or 25" felt a bit too small due to a likely higher CG this year than last, and 26" square felt “just right.” This is also the maximum size you can be if you want to fit 3 robots on the charge station, but with swerve you can simply strafe your bumpers off the edge for the couple extra inches you’ll need to make sure robots aren’t accidentally pushing against each other while trying to balance.

Pictured are a simple top guard (mostly to prevent grease from flying all over), a Spectrum Corner™, and some crush blocks to help us not crush the tube while we’re fastening the modules into place. These are all PLA+ parts. We’re running the L3 on NEOs.

The higher config does raise the belly pan, and thus the CG, though - we are already concerned about CG due to everything above the drivetrain, so we want to keep the CG as low as possible. Due to this and our sheet metal sponsor’s availability, we created the belly pot.

The belly pot rivets to the bottom of our drive rails, and lowers everything on it (battery, PDH, sparks, etc) by an additional 1.75 inches. This is currently a single piece of aluminum, but it’s not difficult to transform this into a 5-piece set of 4x aluminum brackets and 1x stainless steel belly pan if we need an even lower CG (and have the weight for it). Always have a backup plan.

We’ve laid out the electronics in such a way as to minimize cable runs as best we can. We are using the VRM to power both our radio and our network switch, rather than using the REV Radio Power Module plus a Pololu regulator. The VRM is just easier for our use case. Hopefully the radio has more ports in the future. Pictured sparks are for our swerve modules plus our elevator.


We’ve decided on a continuous-rigged elevator (3/16” Dyneema) powered by 2 NEOs at ~5:1. The elevator is mounted at a 55 degree angle. The mounting for this is a rather difficult packaging problem, which is why we are using the Spectrum Corners to allow us to rivet plates to the outside of the drive rails.

We’re using some additional 2x1 to brace the upper parts of the elevator. As of now we plan on routing the wires connecting to the carriage through these 2x1’s. It’s possible we’ll mount a vision camera up here as well, but that’s still to-be-finalized, and may wind up near the belly pan. The rigging here goes both inside and outside the tubes, and is very similar to 1323 2019. We also opted for using 1x1 tubing as opposed to 2x1 for the top and bottom of our carriage. This will allow for our tensioning system (which will be mounted on the bottom of our carriage) to better align with our driving pulleys and prevent the Dyneema from going at too steep of an angle and potentially slipping.


Mounted to the elevator carriage is our four-bar system, which is responsible for allowing the end effector to get where it needs to be. We’ve intentionally left the four-bar-to-intake-mounting a bit abstract as we are still experimenting with intake geometry.

The sparks here are for powering both the four-bar and the end-effector. The four-bar is powered (currently) by a single NEO on a 250:1 reduction - a 100:1 planetary reduction, which powers a 16:40 RT25 belt reduction. This belt is mounted to both the four bar arms and a MAX Spline shaft, in order to power both sides on one motor. We’ve left open the possibility of adding a second motor to power the other side of the four bar, which is something we did last year and wound up saving us due to some tolerance & controls issues.

We are prioritizing modular design with the carriage-extension. Since the 4-bar/end-effector is the only part of our robot consistently extending outside of our frame-perimeter, we believe they are likely to get damaged during matches. Modularity allows us to isolate issues, replacing individual parts rather than the entire system. Last year, we ran into this issue a lot (especially with our climber) — short turn-arounds spent removing, fixing, and replacing a whole part of our robot. We hope modular design will help to prevent this.


We currently have two intakes that we are actively CADing and will manufacture. We don’t have enough signal to definitively say one is better than the other, and we have the bandwidth to design & build both, so we are going to design & build both.


On the left side, you’ll see a NEO 550 on a 45:1 reduction, with an output shaft that has both a pulley and a gear.

The pulley on the MAX Planetary output runs directly to the upper red roller. The gear runs to a separate axle, which is coaxial with another pulley, which connects to the lower red roller. This is in order to have both rollers powered by a single motor, but with inverted directions. The belt runs are all 1:1.

On the right side, you’ll see another NEO 550 on another 45:1 reduction that powers the output shaft with the 4" flex wheels on it. This is a 1:1 belt run.

The red plates on the outside are belt guards to prevent the belts from ever walking off or getting hit by another robot (which plagued us last year).


The concept of Marvin is that those mid-level flex wheels are spring loaded inwards, and each half of the intake is powered by its own motor. It’s on the table to move this to a single motor powering it, but for now we’re using two. These 4" flex wheels are powered by a NEO 550 at a 2:1 reduction. We are still actively working on motor mounting and related geometry here.


We’ve taken a huge amount of inspiration from 6328 over the last year, and have fully converted to the AdvantageKit hype train. The majority of our work has been on learning how to most effectively utilize simulation. We are simulating each of our mechanisms right now:

Since our elevator is at an angle and does not behave exactly the same as a normal vertical elevator, we had to subclass ElevatorSim in order to adjust the gravitational constant, resulting in AngledElevatorSim.
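
The correction behind AngledElevatorSim is just trigonometry: for a carriage tilted θ above horizontal, only the component g·sin(θ) of gravity acts along the direction of travel. A minimal standalone sketch (this is not the WPILib subclass itself, just the math it applies):

```java
public class AngledGravitySketch {
    static final double G = 9.81; // m/s^2

    // For an elevator tilted angleDeg above horizontal, only the component
    // of gravity along the carriage's travel direction loads the motor.
    // A vertical elevator (90 deg) recovers the full 9.81 m/s^2; our
    // 55-degree mount sees roughly 82% of it.
    public static double effectiveGravity(double angleDeg) {
        return G * Math.sin(Math.toRadians(angleDeg));
    }
}
```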

Our swerve modules are simulated by modeling each module as two flywheels - one for azimuth, one for driving. It works, but the MOIs are mostly just guessed and will probably be wrong on the real robot and will need re-tuning.
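
As a standalone illustration of the two-flywheel idea (this is not our actual code, which goes through WPILib's sim classes; the motor constants and MOIs below are made up, just as our real MOIs were guessed):

```java
public class ModuleSimSketch {
    // Simplified DC motor driving a flywheel: torque = kt * (V - kv*omega) / R,
    // integrated with Euler steps every loop.
    public static class Flywheel {
        final double moi, kt, kv, resistance;
        public double omega = 0; // rad/s

        public Flywheel(double moiKgM2, double ktNmPerAmp,
                        double kvVsPerRad, double resistanceOhms) {
            this.moi = moiKgM2; this.kt = ktNmPerAmp;
            this.kv = kvVsPerRad; this.resistance = resistanceOhms;
        }

        public void step(double volts, double dtSeconds) {
            double torque = kt * (volts - kv * omega) / resistance;
            omega += torque / moi * dtSeconds;
        }
    }

    // One module = two independent flywheels: azimuth (steering) and drive.
    // These MOI guesses "will probably be wrong on the real robot," too.
    public static Flywheel azimuth = new Flywheel(0.004, 0.02, 0.02, 0.1);
    public static Flywheel drive   = new Flywheel(0.025, 0.02, 0.02, 0.1);
}
```

With these constants the free speed is V/kv; stepping long enough at a fixed voltage should converge there, which is a cheap sanity check on any guessed MOI.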

We’re not taking too many risks this year, and are mostly catching up to the latest on recommended approaches for things like pose estimation with April Tags.


One thing we are taking risks with is our autonomous path generation methodology. Since this year’s field is not rotationally symmetric, we need to do some more explicit transforms in order to account for red vs blue alliance paths. Our requirements were:

  1. Only create/maintain paths on one side of the field (e.g. blue)
  2. Use a GUI-based path creation tool (PathPlanner)
  3. Minimize the amount of “red vs blue” compensation that needs to occur during April Tag usage.

The third point is a quality-of-life requirement and is not based on any performance or math related foundations. Since we are still learning a lot about coordinate spaces in robotics, we feel it would be easiest if we had an absolute origin - in this case being the bottom-right corner of the blue alliance driver station wall. We felt it’d be easiest to understand pose-based log data if we had the same origin in every match. This may not be the case for every team, and that’s okay.

PathPlanner provides a solution for requirement #1 and #2, but the built-in transformation of trajectories from blue to red also adjusts the origin, which we do not want (but we’re sure will be a totally valid solution for most teams).

We want to manually reflect the trajectories (and states) ourselves at robotInit, in order to transform them from blue to red. However, PathPlanner’s transformation algorithm uses private fields that we can’t access, so we can’t completely transform it on our own. There are two solutions:

  1. PR PathPlanner to make private fields not private.
  2. Use reflection.

We felt adventurous, so we chose #2. We’d like to thank @came20 for helping us with this.

Effectively, we are taking each state in the trajectory and just mirroring it across the mid line to a new pose. This also requires some rotation changes and some curvature changes - some of which are private fields that we need to tell the JVM to make accessible. We then join the states to a trajectory using a constructor that is also private.
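
The mirroring itself is simple even though the field access isn't. Below is a standalone sketch of both halves; the State class is a stand-in with a deliberately private field (which, being a nested class, would technically be reachable anyway - the reflection calls shown are the same ones used against PathPlanner's actual private fields), and 16.54 m is the 2023 field length.

```java
import java.lang.reflect.Field;

public class MirrorSketch {
    static final double FIELD_LENGTH_M = 16.54; // 2023 field length

    // Stand-in trajectory state. The real targets are PathPlanner's private
    // state fields, plus its private Trajectory constructor, accessed the
    // same way via getDeclaredConstructor(...).setAccessible(true).
    public static class State {
        public double x, y, headingRad;
        private double curvature;

        public State(double x, double y, double headingRad, double curvature) {
            this.x = x; this.y = y; this.headingRad = headingRad;
            this.curvature = curvature;
        }
    }

    // Mirror a blue-alliance state across the field midline to red:
    // x reflects, y is unchanged, the heading flips across the y-axis,
    // and curvature changes sign because the turn direction reverses.
    public static State mirror(State s) {
        try {
            State out = new State(FIELD_LENGTH_M - s.x, s.y,
                                  Math.PI - s.headingRad, 0);
            Field curv = State.class.getDeclaredField("curvature");
            curv.setAccessible(true); // tell the JVM to allow access
            curv.setDouble(out, -curv.getDouble(s));
            return out;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```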

This will only crash if the API signatures change, which won’t happen during an event, as we won’t be updating PathPlanner during events. This is not recommended to most teams. That said, it fits our requirements for this year’s codebase.


Mechanism2d is great and recommended. However, we were worried about code complexity with having each subsystem update its own Mechanism2d, which may or may not be dependent on other subsystems’ Mechanism2d (mostly just at initialization time). Since we are opting for public static subsystems rather than dependency injection this year, we wrote a quick “subsystem” that reads from all the other subsystems and updates the whole Mechanism2d drawing. This class is a work in progress and will change drastically over the course of the season.

(PID tuning to be perfected later)

Overall, we are cautiously optimistic at this time. We are happy to see great teams like 3847 coming to similar conclusions as us and would like to thank them for sharing so much.


Here’s the Week 2 Recap Video!


Here’s Week 3!


Week 5 Recap Video


Week 6 Recap!


Welcome back

Yeah… it’s been a while. We’ve fallen behind for a few reasons.

  1. We’re building the most capable, highest-potential robot in 2713’s history.
  2. I have had multiple rather sad events occur in my personal life in a very short time frame and it has taken a lot out of me. Put your mental health before FIRST, folks.
  3. Our radio is on fire

Where even are we?

Okay, so, we’re built. Sort of. Not really. Kind of.

Our school is on February break this week and we’re locked out of the building, so we’ve been camping out at 125’s space (thank you @Brandon_Holley and co.) for the week and practicing there.

Some problems

Problem 1 - Communication Complications

As mentioned earlier, we’re having serious comms issues - severe (>100%?) packet loss. You can read more about it here.

We swapped out our Rio to a different one and it seems better. It isn’t perfect, but… it’s better.

Problem 2 - Capstan Craze

For our elevator, we originally designed around having 2 threaded 3D-printed capstans, very similar to 118 2019 or 2471 2018. Our routing uses the following sketch to go up/down; we’re routing the pulleys through the tubes in a similar way to 1323 2019.

And our code here:

In our first tests, we quickly realized that our elevator was going back down to zero, but the encoder was off by an inch or two (and sometimes more!). It always wound up back at the bottom though, so we figured, hey, after we get back down to the bottom, we can just reset the encoder after some time, and that should mitigate it in the short term - so we did that.

We wound up finding that the incorrect encoder ticks were due to the rope - in our case, 2.5mm Dyneema - slipping on the capstans. We had printed our capstans out of PLA+, but had some nylon SLS ones printed by @JamesCH95 (thank you!) that we hoped would have higher surface friction and reduce slipping. They did, but unfortunately not enough. We also had 125 graciously print some up in Onyx, but unfortunately the printer, well…

This brings us to ~Wednesday. About this time, we began doing heavier practice cycles. We noticed that the elevator tended to drift upwards and accumulate error over time - that is, we would bring it up to a setpoint, bring it down, and mechanically it would be off zero by ~½ to ¾ of an inch, while the encoder ticks read near zero. We would bring it up, then down again, and mechanically it was off by an additional ½ to ¾ of an inch, but the encoder ticks still read near zero.

We ruled out rope slippage by bringing it down, drawing a line in Sharpie across the rope, and bringing it up and down - the sharpie lined up perfectly. We spent most of the day tensioning, re-tensioning, investigating logs… everything. We eventually accidentally over-tensioned and snapped our Dyneema, and had to re-route it, which took up another chunk of time. During this, we wound up replacing our threaded capstans with non-threaded drums, and filed threaded capstans under the “someone else can do it, but it ain’t us this year” folder.

Eventually, I realized I made an oopsie. Remember we were zeroing the encoder 2 seconds after we went to the bottom? Yeah, that was

  1. Still in the code, when I thought it wasn’t

  2. Not actually what it was even doing

What it was doing was zeroing as soon as our “set to height and wait” command ended, which ends when we are within tolerance of our setpoint - in this case, 1 inch. (It does continue to move, but the command ends.) At that point, it would zero the encoders - the PID would then think it was at setpoint, and stop moving. This took about ¼ to ½ of an inch to all occur, which explains the error accumulation!

We took that out and the elevator was pretty much instantly fixed. Ugh.
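
A toy model of why that zeroing call accumulates error (made-up numbers, no real hardware - the 20" travel, 1" tolerance standing in as `stopHeight`, and per-cycle values are illustrative):

```java
public class ZeroBugSketch {
    // Simplified model of the bug: the "go to zero and wait" command ends
    // once the encoder is within tolerance of zero. Zeroing the encoder at
    // that moment makes the PID think it is at setpoint and stop, freezing
    // the carriage above true zero - so the offset between true height and
    // encoder reading grows every cycle.
    public static double simulate(int cycles, double stopHeight, boolean zeroOnFinish) {
        double trueHeight = 0; // inches above the hard stop
        double encoder = 0;    // what the software believes
        for (int i = 0; i < cycles; i++) {
            // go up: true height and encoder move together
            trueHeight += 20; encoder += 20;
            // come down until the command ends, still stopHeight above
            // wherever the encoder thinks zero is
            trueHeight -= (20 - stopHeight); encoder = stopHeight;
            if (zeroOnFinish) {
                encoder = 0; // BUG: PID now reads setpoint and stops moving
            } else {
                // without the premature zero, PID drives encoder to 0
                trueHeight -= stopHeight; encoder = 0;
            }
        }
        return trueHeight; // mechanical offset from true zero after N cycles
    }
}
```

With `zeroOnFinish` enabled, the offset grows by `stopHeight` every up-down cycle, which matches the ½"–¾" per cycle we saw; with it removed, the offset stays at zero.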

Problem 3 - Initialization Issues

Every few code initializations, one of our modules - a different one each time - will be horribly offset and drag on the carpet at the wrong angle. We asked around and found a few other teams facing this issue. Note that we are not using the 364 code - ours is entirely “custom” (a lot of wrapping around WPILib) and uses NEOs, Thrifty Encoders, and the Pigeon 2.0. The teams we’ve spoken to believe it’s a CAN utilization issue - we’re thinking the message telling the Spark to manually set the encoder position is somehow getting dropped. We’ve heard the Rio has a fairly small CAN buffer, and that messages can be lost if too many config messages hit the bus in a short window and overflow it.

We tried reducing CAN utilization, and were able to drop from 70% to 55%, but unfortunately still faced the issue - admittedly a lot less. We then caved and simply set the initial position of the azimuth 10 times on boot, and that hasn’t failed yet.
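
A sketch of the workaround, with a made-up `FlakySpark` standing in for a Spark MAX on a lossy bus. On the real robot we just repeat the write 10 times blindly; the readback check here exists only so the sketch can verify itself.

```java
import java.util.Random;

public class SeedRetrySketch {
    // Toy model of a CAN write that is occasionally dropped: the "set
    // encoder position" message fails silently some fraction of the time.
    public static class FlakySpark {
        private final Random rng;
        private double position = Double.NaN; // unset until a write lands

        public FlakySpark(long seed) { rng = new Random(seed); }

        public void setPosition(double pos) {
            if (rng.nextDouble() > 0.3) position = pos; // 30% drop rate
        }

        public double getPosition() { return position; }
    }

    // Repeat the seed write until a readback confirms it landed (or we run
    // out of attempts). Even at a 30% drop rate, 10 attempts fail with
    // probability 0.3^10, i.e. essentially never.
    public static boolean seedAzimuth(FlakySpark spark, double absEncoderPos, int attempts) {
        for (int i = 0; i < attempts; i++) {
            spark.setPosition(absEncoderPos);
            if (spark.getPosition() == absEncoderPos) return true;
        }
        return false;
    }
}
```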

Problem 4 - Tipping Troubles

We weighed in unofficially at ~89 lbs. We have a lot of room to play with. We reached out to a local metals vendor and got a cheap chunk of steel that we managed to fit quite snugly under our elevator and above our drive rails. We tapped some holes and bolted up through the drive rail into the steel block.

Drive Practice

We’ve been slowly ramping up our drive practice. Note that our drive train in these videos was running at ~80% top speed (we close the loop on wheel velocity, so our acceleration/voltage is normal, just lower top speed), our elevator speed has since been doubled, and our four bar speed has since been doubled. For much of the practice sessions, we were intentionally bringing the elevator up early to help the drive team learn how the robot handles while extended and be less afraid of early extensions.


We are still working out our QRScout config, but you can find a newly branded basic 2023 config here:


Speaking of branding - we’ve finally got a new logo to show off. We really love it.



Keep your chins up, we’re all rooting for you guys!


This robot is gonna tear it up this season. Really awesome to hear how you’ve battled through adversity so far and excited to see this thing compete eventually at Greater Boston!


Our Best Performance Ever

We competed at our week one event last weekend and was it a roller coaster. Load-in day and day 1 were plagued with small issues that caused big effects.

  • Poor strain relief on our connectors caused our collector to not work in multiple matches
  • A software issue had the robot driving in random directions in teleop, making control difficult
  • We got a game piece stuck in our robot for an entire match

Luckily, we stayed late on Day 1, rewired things, tuned software, and we were ready to go on Day 2.

Day 2 was much better. We had no electrical or software issues and we were able to see what our robot was really capable of. Here’s a graph showing the difference between day 1 and day 2. Total game piece points, not including link bonuses:

We were able to ride our newfound performance all the way to the upper bracket finals, before we lost to the eventual finalists, and then to the eventual winners in the lower bracket. We’re still incredibly proud.


All in all, it was a fantastic weekend, and the end more than made up for the rough start.

Time to Get Better

Immediately after we woke up from our post-competition comas, we started work on a retrospective.

For each category, please add as many bullets as you wish:

  1. What Went Well - Let’s celebrate what went well, and record it so we know to keep doing it.
  2. What Could Be Improved - This is where we get better for the next event. Recording what could be improved, and then working on those improvements is key to our success.
  3. Maintenance - These are things that we should do to prevent failures in the future (robot failures, process failures, etc)

The results of that retrospective are here: Copy of SE Mass Retrospective - Google Docs

We compete again in Week 4, and we have 3 main things we’re planning to improve on the robot:

  1. Consistency / Reliability - Software tuning, strain relief, and lots of practice
  2. Center of Mass - Replace our AL belly pan with a steel one, make the end effector as light as possible
  3. Ease of Use - Automate scoring, level selection, and game piece modes.

I love this robot so much. It’s obvious your ceiling is very high, can’t wait to see you guys play again.


Hi. It’s been a while. Our robot slaps now.


Hi all,

Our season went great. Some quick feats:

  • Week 1
    • District EI (first ever)
    • Ranked 8; 5 seed captain
    • Overcame a horrible day 1 (MY BAD)
      • We spent all of day 1 with terrible driving controls because our field-relative controls were using wheel odometry heading instead of gyro heading, so they would just become horribly offset not too long into the match
    • Made it to lower bracket finals
  • Week 4
    • Ranked 2; 2 seed captain
    • Overcame several sudden Spark failures in elims
      • spark.setInverted( ... ) failed when it hadn’t before, among other things
    • District Dean’s List (first ever)
    • Judges’ Award
  • DCMP
    • Rank 16; first pick of alliance 6
    • Overcame yet another massive ordeal
      • Battery got unstrapped in our first qual match and got thrown all over the belly pan
      • Damaged swerve analog encoder cables and bent a bunch of pins on the rio
    • Managed to be really consistent once we got driving again (which took a little bit)
    • Began to hit our 2-piece bump side auto near end of quals
    • District Champs WFFA (first ever)
    • Qualified for champs (first ever)
  • Daly
    • Rank 11; one in-pick from being a captain; 4th robot for 2 seed
    • Most consistent robot performance in the team’s history
    • Hit 6-8 teleop game pieces all 10 matches
    • Began hitting 2.5 game piece clean-side auto
    • Kept hitting 2 piece bump side auto
    • Got picked!
    • Got to see so many cool robots

This was a revolutionary season in 2713 history and one of the coolest I’ve been a part of. We’d like to thank so many people who have made this all possible. Thank you to the vendors that support this program and allow us to build cool stuff. Thank you to the following teams that either directly helped us or indirectly inspired us via prior robots:

95 401 2791
111 581 2910
118 900 3005
125 987 3467
133 1323 3476
157 1339 3538
190 1591 3847
223 1678 4481
254 2220 4909
319 2363 6328
340 2481 7407

You have all made a huge difference to the students. Three special shoutouts:

  • to 125 (@Brandon_Holley) for adopting us over February break
  • to 6328 (@came20 & @jonahb55) for basically everything in software; and for helping prep for Dean’s List (@krbonner)
  • to 7407, for also helping prep for Dean’s List (@Dee).

We’d also like to extend a huge thank you to each and every one of our alliance partners this year – 6153, 5962, 230, 8708, 4909, 131, 2767, 1731, and 7769. You were all universally fantastic to work with and inspiring to our team. We have nothing but positive things to say about each of our experiences in playoffs this year.

You can catch us at BattleCry, Summer Heat, and probably several other NE offseasons. We’re very excited for the future.


LiveWorx ‘23 — What We’ve Been Up To

This past Tuesday, three Red Hawk Robotics students presented at our sponsor PTC’s annual user conference, LiveWorx. After applying during the fall, we found out we had been accepted to speak practically the day of Kickoff. As the first high school students to ever present, we worked heavily with the PTC for Education team (big thanks to @KenzieB, @Drew_B, Alyssa, and many others) during the build season to refine our presentation and overall message. We were among the 7,500+ attendees from the STEM industry, and ours was one of the over 200 presentations.

For anyone unfamiliar, Onshape is a cloud-native CAD software centered around the idea of (“big A”) Agile product-development (vital to FRC) and integrated PDM (via Arena). It was acquired by PTC in 2019. Both 2713 and Melrose High School have been using Onshape since COVID, and it has greatly contributed to our team’s growth over the past two years. We treated our presentation like a case study — we explained what FRC is, how and why we use Onshape (focusing specifically on in-app collaboration, the Agility provided by version-history and branching, and community-created resources), and lastly the impact of our team’s work on both our students and the STEM community. We also watched presentations/demos on a variety of topics — sustainability in different stages of product development, a 5-axis CNC mill, Agile design through Onshape and Arena, “Digital Twins” and how Assassin’s Creed was the primary design reference during Notre Dame’s reconstruction, an automated boba machine (Bobacino, if anyone is wondering), and many more.

You can watch our presentation here:

The first 15 minutes are us presenting, followed by a 10 minute Q&A session. After the allotted time, we also gave those interested a more in-depth explanation of our robot and showed it in action. You can also access a copy of our presentation here.

We all had a wonderful time at LiveWorx ‘23 (especially the mini-golf), and are extremely grateful to everyone at PTC for giving us this incredible opportunity and their support throughout every step of the journey.

Our three presenters (left to right) Gavin, Kate, and Addy


Hi all, long time no see. We have had a very busy offseason.

Summer Offseasons

We competed at two summer offseasons this year, BattleCry and Summer Heat. At BC, we were blessed to be able to play alongside teams 333, 2791, and 126. We have two alumni of 2791 mentoring 2713, and a former 2791 mentor on 333; 333 was also the first team that 2791 won a regional with back when we were all together in 2017. While we only went 1-2 in the 16 alliance bracket, we were quite happy with our alliance’s performance and it was a blast being able to play alongside so many close friends again.

At Summer Heat, we got a favorable qual schedule (rare) and capitalized on it, going 4-1 in quals and seeding 4th. We managed to partner with 6328 and 8023 and … we took home the team’s second ever event win; the first ever event win as a captain or first pick. This was a truly special moment for us, and we are so glad we got to share it with 6328, who has helped us so much along the way. 8023 was an incredible alliance partner, full of standup students and mentors. Thank you to all the event organizers who make Summer Heat a blast year over year, and thank you to all our competitors who congratulated us on the victory - 3467, 6329, 319, and more.

Fall offseasons, maintenance, and upgrades

We’ve competed at 3 additional offseasons, with 1 left to play this weekend.

There is something weird about our robot that causes it to absolutely plow through tread. We’ve redone offsets a bunch, we are optimizing modules to not ever rotate more than 90 degrees, and we are generally dumbfounded at why we speedrun our tread. During the season, we would get approximately 1 event + 2-4 hours of drive practice before our tread would no longer grip our practice carpet (which is not in good condition anyhow) and we would simply spin our wheels in place. We got a suggestion to try to prevent modules from fighting each other and generating scrub, which can be done with either of these classes from 254 or 604 (we haven’t gotten to this yet, WIP). We decided to run our first two fall offseasons on Colsons (what’s old is new again) - the option of not having to replace tread is enticing and we’d like to get some data before running Colsons in season next year.
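
For reference, the "never rotate more than 90 degrees" optimization we mention is a standard trick: if the azimuth would have to turn more than 90 degrees to reach the setpoint, flip the drive direction and turn the short way instead. A self-contained sketch (angles in degrees; this is an illustration of the idea, not the 254/604 setpoint classes linked above):

```python
import math

def optimize_module_state(target_angle, target_speed, current_angle):
    """Return an equivalent (angle, speed) that never asks the
    azimuth to rotate more than 90 degrees from current_angle."""
    # shortest signed angular error, normalized to (-180, 180]
    delta = (target_angle - current_angle + 180.0) % 360.0 - 180.0
    if abs(delta) > 90.0:
        # turn the short way and reverse the wheel instead
        delta -= math.copysign(180.0, delta)
        target_speed = -target_speed
    return current_angle + delta, target_speed
```

For example, asking a module at 0 degrees to go to 170 degrees at full speed becomes "go to -10 degrees, drive backwards" — the wheel ends up pushing in the same direction with far less azimuth travel.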

Mayhem in Merrimack

This event started out a bit rough but got better over time. After switching to Colsons, we did not have time in the shop to loctite everything back together; we figured we would do that in the morning at the event. Unfortunately, we cross-threaded one of our forks while doing so, and were naturally scheduled for QM1. We ran QM1 on 3 modules, but it honestly didn’t seem to make much of a difference. Unfortunately, we also broke our four bar this match, and were unable to score for a good portion of the match.


The problem area occurs on this piece of polycarb that is bolted to the pulley; the force of hitting the human player loading area puts a load on the polycarb arm, which has too high a ratio to be gently backdriven, and it generally snaps right above the bolt circle. This happens maybe once every two events. We opted to cut identical parts out of thin aluminum and create an alu-polycarb-alu sandwich of thinner plates (that add up to the same thickness) for the next time this happens.

In another match, we experienced an issue where our top roller would not spin when commanded. We figured we may have smoked the NEO 550 powering it, even though we have a 20A current limit on it.

We opted to remove the UltraPlanetary from the system and test the intake, to verify that the motor was indeed smoked. We found that once we removed the UP from the system, the motor spun just fine. We put it back on and the intake worked. We are lost here. We assume that something might have happened in an impact where the pulley-pulley center became too large and the tension was too much for the 550 to overcome, but we don’t know for certain. We haven’t had an issue like this all year other than this one instance. Due to the short match turnarounds, we opted not to replace it (since I’m not sure if we even have a spare UP), and it hasn’t been a problem since.

The rest of the event went largely uneventfully. We got to play with some awesome partners in 8708, 1729, and 5962.

New England Robotics Derby

Unlike Mayhem in Merrimack, we started off well and slowly descended into madness. In one of our later qual matches, our Pigeon 2.0 failed to return any data at all for the entire match. We took it back to the pit, powered the robot on, and it worked totally fine. We did some pull testing on some wires and found nothing. The fuses looked fine. We don’t know. We added it to the pre-match checks, but it didn’t happen again that event.

In our first playoff match, we snapped the 72T belt connecting our MAXPlanetary to our four bar arms. We did not have a spare; nor did anyone else at the event. In order to get the robot working again, our only option was to replace the pulleys with sprockets, and put a 25 chain in.

Unfortunately, this was much harder than it sounds. A tube nut in our dead axle MAXSpline setup stripped; this meant it was borderline impossible to remove the bolt holding the MAXSpline in place, which is required to get the pulley off. To make things worse, both sides stripped.

We’d like to thank 6328 and 3467 for providing spare parts for us to help with this emergency swap. Unfortunately, we missed a playoff match for this. We barely made it for our alliance’s third playoff match, where the chain did indeed work for the first bit of the match. Unfortunately, due to time constraints, we were not able to get the chain perfectly tensioned - it was a bit loose. Shortly into the match, we drove over the charge station as we normally do, and the impact caused the chain to jump up and off the sprocket, rendering our four bar useless again. We’d like to thank our partners 3467, 4041, and 8724 for their patience through all this :melting_face:

The Colsons at both events went completely uneventfully. :+1: from me.

After the event, back at the shop, we investigated the Pigeon power issue. We were able to replicate it, but we weren’t sure how. We began to swap out the Pigeon for a new one, but the new one experienced the same issue (it would not turn on). We swapped out the fuse, and it seemed to work on the first power-on. We powered off again, powered on, and the Pigeon didn’t turn on. Weird.

We noted that the issue was always resolved when we completely broke the path between the battery and the pigeon (either via disconnecting wires or disconnecting fuses). We noticed that the issue usually happened after quick power-off, power-on cycles.

We swapped the Pigeon from the switchable channel on the REV PDH to a different port, and the issue stopped happening. We do not use the switchable channel at all in code. I’m out of ideas other than avoiding the switchable channel for now.

River Rage, or how I learned to stop worrying and love the Colson

We were lucky enough to get some of the 88 TPU90A printed wheels to try out for River Rage and Battle of the Bay.

These were version S44 and were printed on a Fuse1+. We were all very excited for this, but unfortunately it didn’t pan out the way we expected.

Our first match, we were accidentally limited to 6 ft/sec drive speed, because we were re-tuning modules in a very confined space before the event, and we forgot to re-up the speed. This was simply my fault. The wheels held up great though!

After our third match, things got worse: we began to notice some tearing in the tread layers, like so:

Unfortunately due to quick turnarounds, we didn’t have time to replace it then, and we just ran on it. After our fourth qual, one of the treads began to completely separate.


We were scheduled for a replay immediately after this, and the field crew was generous enough to give us ~10 mins to replace the wheel fieldside, and thanks to some added muscle from 319, we replaced it with another TPU wheel.

After this match, going into playoffs, we noticed all wheels were starting to tear from the sides, like the earlier picture. We had brought 8x S44 wheels, 4x V40 wheels, 5x Colson wheels, and 6x treaded billet wheels. We had an uneasy feeling we weren’t going to last through playoffs. We went out for our first playoff match, and during the match, the tread completely fell off one of the wheels:

At this point we had replaced 3 wheels over 6 matches, and we did not have enough to sustain playoffs. We swapped all 4 wheels for Colsons in the span of 6 minutes and made it to our next match with plenty of time to spare. This was one of the fastest pit crew ops I’ve ever seen, our pit crew did an amazing job here.

We ran the following 2 matches on Colsons with no issues whatsoever.

We and many other teams were generally confused at how we were going through these at a rate unlike anyone else. Other teams have run these S44 wheels without issue for multiple events. What makes ours different?

Our current hypothesis is that our azimuth is spinning too slowly. This causes wheels to experience occasional side-loaded drag. The friction between the wheel and the carpet is too great for the webbing of the wheel to handle, and the webbing shears. This manifests early on as the rips from the side shown earlier, and eventually causes the tread to completely separate from the wheel once all the webbing has sheared. Our azimuth rotates 90 degrees in 0.25 seconds; it’s powered by a NEO with a 20A current limit. This does not seem out of line with other teams, but I am interested in hearing more data if any teams are significantly faster. (We can make it faster by increasing kP and adding kD, but we got lazy and 0.25 s seemed fast enough; maybe we are misinformed.) I’m guessing that the reason these wheels fail is the same reason we plow through normal tread so quickly, but I’m not convinced we know the problem just yet.
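
As a rough illustration of the kP/kD tradeoff mentioned above, here is a toy step-response sketch. The plant (a pure unit inertia) and the gains are invented for illustration and are nothing like our real module constants; it only shows the qualitative effect of raising gains on a 90 degree azimuth step:

```python
def simulate_azimuth_step(kp, kd, target_deg=90.0, dt=0.001, t_final=1.0):
    """Toy PD step response: accel is proportional to the controller
    output on a unit inertia. Returns the time (s) to first come
    within 2 degrees of the target, or None if it never does."""
    theta, omega, t = 0.0, 0.0, 0.0
    while t < t_final:
        u = kp * (target_deg - theta) - kd * omega  # PD control effort
        omega += u * dt      # accel ~ u (unit inertia)
        theta += omega * dt  # simple Euler integration
        t += dt
        if abs(target_deg - theta) < 2.0:
            return t
    return None
```

With these toy numbers, quadrupling kP while doubling kD (which keeps the damping ratio the same) roughly halves the time to reach the target — which is the kind of gain-vs-speed tradeoff we skipped tuning at 0.25 s.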

You can find a full album of wheel damage here:

2024 and beyond

We would agree with you that we haven’t posted much here. Frankly, Open Alliance is a lot of time. I generally put about 2-3 hours of effort into each blog post, and I think there is lots of room for improvement still. Adding 2-3 hours of public documentation is a lot more work than it sounds, and it adds up. I can say with certainty that I lost a nonzero amount of sleep during build season this year in order to get OA blogs up - and this is just with once a week posts.

It is important to recognize the capabilities of your team. We stress the importance of “the bucket” to our students quite frequently. Some teams have bigger buckets than us, some teams have smaller buckets than us. Some buckets are differently shaped. That’s all okay. However, we feel that maintaining what we consider to be an informative, helpful, and high quality OA blog does not currently fit into our bucket.

It is for these reasons that we are unlikely to return with an OA blog in 2024. We are certainly not against sharing anything, and if you’d like to know what we’re working on, you’re more than welcome to ask. Things may change over the next couple of months, but right now, we’d rather look to fill our bucket more with internal growth - on the robot, on processes, and on training - rather than public-facing documentation.

I am thrilled to see so many teams signing up for OA; just remember that your sanity, mental health, and team goals should come first.