FRC 6328 Mechanical Advantage 2022 Build Thread

Good question. Here’s what we’ve experienced from a mechanical and software perspective:

Mechanical (@JackTervay)

This year we went with a belt-driven hood actuation instead of a rack and pinion mostly due to space constraints. The shooter & tower were originally designed for a 2 position pneumatic hood, so we didn’t have the foresight to leave much room behind the hood for any activities, such as a rack and pinion. Additionally, the belt-driven actuation didn’t require a ton of parts to be redesigned which helped us save some time on manufacturing & assembly and gave us more time to tune it and rigorously test it in practice.

We were initially using a 3D printed pulley here because we needed an abnormal tooth count to account for improper belt tension (a manufacturing issue). That plan changed when we found that the pulley teeth were being crushed — this was probably a combination of this pulley making the belt too tight and also the high amount of torque at that stage of the reduction. To circumvent this, we ended up using an aluminum 18T pulley and adding a makeshift tensioner.

If we were to design a motorized hood in the future, we would more than likely try to go with a rack and pinion. From what I’ve seen and heard, it appears that the rack and pinion hood designs are generally pretty robust and avoid some of the issues that our hood design had this year. It seems to be what most teams are doing with motorized hoods nowadays, so there are a ton of great designs out there that we would shamelessly copy (1678 2022, 2910 2020/2022, 254 2020/2022, 4414 2022 to name a few).


It’s worth noting that the rack and pinion hood designs become a bit different if your shooter is atop a turret, especially for a game like 2022 where your lowest hood angle is probably pretty low. The rack doesn’t really have much room to exist since it can’t interfere with the turret feeding path, which greatly reduces the amount of rack that is available to move. As far as I can tell, this is why 254 and 4414 have their rack as the stationary part, located at the “edge” of the turret, while the pinion is the moving part (different from the “typical” design where the rack is attached to the moving hood, like 1678 in 2022). You can see this in 4414’s and 254’s Behind the Bumpers interviews.

4414 Behind the Bumpers
254 Behind the Bumpers

I know @saikiranra might get upset with me with all this turret talk, but I figured I’d mention it just in case :wink:

Software (@jonahb55)

From a software perspective, our main concern has been maintaining an accurate measurement of the hood’s position based on the NEO’s encoder. Backlash in the system turned out to be less of a concern than we expected since the weight of the hood was sufficient to keep it in tension. However, we had some issues at Greater Boston where the belt would slip by one tooth on the small pulley. This caused an error of ~10° in the hood measurements.

We initially noticed this because of some significant errors in the robot’s vision pose measurements, which rely heavily on the hood angle. Starting midway through many matches, all of the measurements would be several meters too far from the target. After further investigation, we discovered that the individual corner translations didn’t form a very accurate circle for the odometry pipeline (left image below). Shifting the hood angle by 10° in replay fixed the issue completely (right image below). By pressing down on the hood, we discovered that it could slip relatively easily — the difference in angle was almost exactly 10°.

As if that evidence wasn’t sufficient, here’s an example of what the calculated poses looked like before and after the change. The solid robot is the original pose and the translucent robot is the corrected pose (along with a screenshot from the match video).

Logging sidenote

This is yet another example to add to our list of “problems that were solved many times faster because of logging.” The code for the circle fitting visualization above was written after Greater Boston, and we were only able to run these tests because we could replay the matches in simulation with extra features added (including shifting the hood angle). Plus, careful version control at events (see the repo link below) allowed me to create the visuals for this post by replaying the exact version of code running during this qualification match at a previous event, despite numerous code changes since then.

See also: EventDeployExtension

At Greater Boston, we homed the hood once at the start of the match by applying voltage downward until it reached the hard stop (which we detected based on when the speed dropped below 1°/s). For Houston, we fixed the slip issue by automatically re-homing the hood a few seconds after each shot. Here’s an example from one of our matches — the robot is to the left of the field, and you can see that the hood briefly runs to the minimum angle several seconds after we take the shot. This strategy proved to be an effective workaround, but the issue of maintaining accurate angle measurements is definitely an important consideration for any hood design.
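
To make the homing detection concrete, here’s an illustrative Python sketch of the logic. The 1°/s threshold is from above; the voltage, loop period, and cycle count are made-up placeholders (the real implementation is Java robot code):

```python
# Hypothetical values -- only the 1 deg/s threshold comes from the post.
HOMING_VOLTS = -1.5          # small voltage driving the hood toward the hard stop
SPEED_THRESHOLD_DEG_S = 1.0  # below this, we assume the hood has stopped moving
MIN_HOMING_CYCLES = 10       # require several consecutive slow cycles (~20 ms each)

def run_homing(speed_samples_deg_s):
    """Return the loop cycle at which homing completes, or None if it never does.

    A real implementation would also ignore the first few cycles while the
    hood accelerates away from rest, then zero the encoder once homed.
    """
    slow_cycles = 0
    for i, speed in enumerate(speed_samples_deg_s):
        if abs(speed) < SPEED_THRESHOLD_DEG_S:
            slow_cycles += 1
        else:
            slow_cycles = 0
        if slow_cycles >= MIN_HOMING_CYCLES:
            return i
    return None
```

Requiring several consecutive slow cycles avoids declaring “home” on a single noisy velocity sample.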

Hope this is helpful!



After a successful season, we haven’t taken our foot off the gas and are bringing our new second robot to our second offseason event of the year, Summer Heat. We are proud to present Duck Bot:

  • Six (6) NEO motors
  • Two (2) rubber ducks (varying sizes)
  • Seven (7) unique voice lines
  • Two (2) liters of onboard copium
  • Wood-based construction techniques

We spent a lot of time/effort testing types of rubber ducks this year and believe that this specific rubber duck provides us a competitive advantage. As such, we don’t want to share this specific detail about the robot until after the off-season concludes.


Can’t wait


Why don’t your specs reveal how many total pool noodles are used? What are you hiding? The community has a right to know!


We are happy to share that outside of the bumpers, TWO pool noodles were deployed on the robot.

Also, our large ducks migrated in from here and here in case anyone is looking for a new duck.

Important Lessons Learned

  • Hot glue (generously applied) is preferred over VHB for keeping your duck from flying away during a match
  • Spinning the robot too fast in a celebratory auto-spin makes the duck dizzy
  • Duck noises (which were added to this robot after the video footage, sadly) need MAJOR amplification to be heard on the field. If we do this again we need More Power™ than our current small onboard amplifier and speaker can muster.

Potential Future Improvement

  • Consider a Field Oriented Duck where no matter how the robot turns, the duck automatically rotates to stare intimidatingly at the opposing alliance.

All joking aside, we had a great time at Summer Heat - thanks to Teams 58, 172, and 5687 for hosting, Team 6329 & 2648 for a great alliance, and a special shout out to Team 319 for picking Duck Bot to play on their alliance with 133 & 501. And congratulations to 131, 3467, 6153, & 8046 for their well-deserved event win!


Software Update #10: Automated Attendance With AdvantageTrack

Three years ago, we developed a custom system for tracking student and mentor attendance at our shop. It combined a manual and automatic sign-in system, where members who registered a phone or laptop would be tracked automatically via Wi-Fi sniffing. This approach proved very effective, and it means that most of our members never need to think about recording their attendance at meetings. This also makes the data more accurate than a purely manual system, because fewer people need to remember to sign in and out. Unfortunately, the existing system had several major issues:

  • It consisted of several disorganized and bloated Python scripts, making the whole system difficult to set up and maintain.
  • The Wi-Fi sniffing required specialized hardware, and we ended up using three Wi-Fi interfaces which behaved inconsistently. This approach also required a complex network configuration which was often unreliable over long periods.
  • The data was stored locally and required a complete record of each device detection. The size of this database quickly became unwieldy, and it was difficult to access the data/configuration remotely.

With these issues in mind, we’ve rewritten the system from scratch to make it easier to set up, maintain, and use.


AdvantageTrack includes all of the features of our old system, including automatic device tracking, a manual sign-in interface, and visualization tools. In addition to having a cleaner and better-structured code base, it makes two key changes — an online database/management system and the use of network monitoring instead of Wi-Fi sniffing.

1) Online Database / Management

The configuration, data, and visualization tools all live in Google Drive and Google Sheets rather than the local database and server. This means that the data is accessible from anywhere, and the system can be managed remotely, including adding new members, registering devices, and updating the list of backgrounds. Below is one of the main visualization sheets, or you can look at the full spreadsheet here.

Administrative functions were a major contributor to the server bloat in our last system. With all of that managed by the Google Sheet, the local server can be massively simplified. It runs on a computer in our shop to handle network monitoring and manual sign-ins, but the only configuration it requires is the credentials to connect to Google. The computer also displays a rotating set of backgrounds from a folder in Google Drive. Here’s what our local setup looks like:

2) Network Monitoring

We’ve replaced the Wi-Fi sniffing of our old system with network monitoring through periodic flood pings. This has the benefit of requiring nothing more than a network connection to function, meaning AdvantageTrack is fully cross-platform. Since all of the devices we’re monitoring are connected to our network anyway, there’s almost no benefit to full Wi-Fi sniffing in this situation (also because of randomized MACs, see below). Most devices are easily tracked, including phones that remain asleep throughout most of a meeting. Based on our testing, it’s relatively effective to detect these devices when they connect to the network for background tasks. Plus, any phone or laptop that’s awake will stay online and can be tracked very reliably.

The key restriction of this automatic monitoring is that it requires a dedicated network where member devices won’t connect outside of meeting hours (and where ICMP echo requests are allowed). This makes it impractical on many school networks, but ideal for teams like us with an isolated build space and Wi-Fi network. The configuration also allows us to adjust the ping rate and thresholds for automatic sign-outs, in order to minimize network traffic. The system will also reduce the ping rate for devices that it has detected recently, so individual devices will typically see a single ping request every 5-10 minutes.
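
The detection-based backoff can be as simple as choosing between two ping intervals. A hypothetical sketch (the function name and values are invented — the real thresholds live in the configuration):

```python
def ping_interval(seconds_since_seen, fast=30.0, slow=450.0, recent_threshold=120.0):
    """Ping recently-seen devices less often to cut network traffic.

    Illustrative values only: a device detected within the last two minutes
    is assumed still present and pinged every ~7.5 minutes, while devices
    that haven't been seen recently are probed more aggressively.
    """
    return slow if seconds_since_seen < recent_threshold else fast
```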

One challenge we faced is that both iOS and Android use randomized MAC addresses by default. This reduces the effectiveness of Wi-Fi sniffing over network monitoring since miscellaneous Wi-Fi traffic is very difficult to identify. Members can register new devices through the local server (see below), and it will guide them through disabling randomized MAC addresses for our network. That setting only affects a single network, meaning there is no impact to device security on other networks. This step is required because both operating systems will reset the MAC address used on a network under some conditions.

The registration works by asking members to scan a QR code that connects them to the local web server. It uses ARP to find the device’s MAC address based on its IP address, and confirms that the device isn’t using a randomized address (this is surprisingly easy to check).
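
The “surprisingly easy” check is presumably the locally-administered bit: randomized MAC addresses on both iOS and Android set bit 0x02 of the first octet, so the registration server can detect them with a one-line test. A Python sketch of that idea:

```python
def is_randomized_mac(mac: str) -> bool:
    """Randomized (locally administered) MACs set bit 0x02 of the first octet."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)
```

Hardware (universally administered) addresses leave that bit clear, so a registered device with a randomized address is easy to flag.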


We hope that this system might prove useful to other teams struggling to reliably track attendance at meetings. As mentioned, the local server is fully cross-platform and doesn’t require any specialized hardware. We use an Intel NUC running Linux, but it would work equally well on a Raspberry Pi, Windows machine, Mac Mini, or similar. Access to Google uses the free version of Google Cloud, which you can set up using any existing Google account.

Here’s the link to the AdvantageTrack repo, which includes instructions for setting up the system:

P.S. We’re currently in the midst of starting our technical and business training programs for the offseason, so we’re hoping to share some more details of our approach before too long.


Looney Tunes


There seems to be an uncanny resemblance here. :thinking:

Over the past ~10 days, teams 3504 (Girls of Steel), 3260 (SHARP), and 4467 (Titanium Tigers) have been running a summer camp to introduce new students to FRC. We are honored that their chosen project was to build a clone of our robot Bot-Bot Awakens! A few of us happened to be in Pittsburgh and were lucky enough to visit and see it in-person. All of the students have been doing some super impressive work to put this together on a tight timeline, and it will be competing at WVROX this weekend.

Here’s a video of the robot in action. It worked on the first try, of course :slightly_smiling_face:

And a few more photos:

Thank you to 3504, 3260, and 4467 for inviting us! Everyone at 6328 will be cheering you on this weekend.


Posting this here in case it catches the eyes of folks who will be in our area (30 mins NW of Boston) for work or school this year. Our CAD/Design team is looking for some additional hands to help out this season, including fall training. Thanks!


Got this working this past week and love it! Thanks for sharing!

If anyone has an easy guide to having this run at startup on a Pi, I’d appreciate it. It looks like there are a number of ways to do that sort of thing, and I’m sure I can sit down and figure it out sooner or later, but I figured I’d see if I could save myself the time by asking.


Glad to hear that it’s working well for you! I don’t have any specific recommendations for Raspbian, but here’s what we do for auto start if it’s at all useful. Our setup uses Xfce, which has a convenient graphical interface for setting startup applications/scripts through the settings pane. We also configure LightDM to automatically log in as the correct user. The Xfce login script looks something like this (I don’t have the actual script on hand ATM). We run the main script in a terminal window just for ease of debugging.

exo-open --launch TerminalEmulator "source AdvantageTrack/venv/bin/activate; python AdvantageTrack/"
firefox -url "" &

We’ve been trying to run AdvantageTrack for the past month or so. After some initial success, we seem to be hitting some snags:

  1. The application seems to not be signing some folks in or out anymore, and is very slow (to be fair, we have about 120 people on our Config-People page).
  2. We sometimes get a “RATE_LIMIT_EXCEEDED” warning, which you’d think is related to why I can’t sign people in and out, but it is not.

I don’t think it’s from my general config settings, but maybe it is:

We’re currently running off of a raspberry pi, and I have the timeout so long because the network doesn’t have a good connection for kids in the machine shop. I’m thinking I’m going to basically remove the auto-sign in capability (increase the ping delay, and delete data from the data - devices tab) and also try to run it off of a laptop instead of a raspberry pi to see if that helps, but if there are any other suggestions I’d like to hear them.

I tried to request a higher quota limit, but had issues doing that and I don’t think that should be necessary anyway.


I’ll throw out a few things if they’re useful for reference; we run AdvantageTrack off of an Intel NUC with some variant of a Core i3, and it’s connected over Wi-Fi. We have 125 people registered (67 active) and 110 devices. Based on that, I don’t see any immediate issues with your setup. Just a couple of questions:

You say that some folks can’t sign in or out. Does that apply to everyone or just some people? If it’s everyone, that suggests there’s a problem with the Google Sheets connection. For example, corrupted data in the sheet could have that effect. I’d start by comparing your copy to the template sheet in case anything was changed accidentally (headers, extra blank rows, etc). You might also try linking a blank copy of the template, adding a few people, and seeing if you have the same problems.

What makes you say that the “RATE_LIMIT_EXCEEDED” error is unrelated to the other problems? What have you observed the triggers to be? If the system tries to sign someone out automatically and fails, it will repeat it periodically on every ping cycle. I know we’ve hit the rate limit before when 10+ people get stuck in a similar loop. Is there any evidence of that in the console log?

You’re correct that you shouldn’t need to use a higher rate limit. If it’s useful, these lines define the times (in seconds past the minute) that various periodic reads from Google occur. You could try decreasing the frequency, though if it’s stuck in a loop as I described then this may not change much.


The main reason I say it’s not related is that when we get the “RATE_LIMIT_EXCEEDED” error, it’s clear that the Google Sheets connection is down. There are times when we don’t get that error for extended periods and still have trouble logging people in and out. I guess in reality that doesn’t mean it’s not related.

Is the best way to manually log people out and “disable” the auto sign-in just to delete some of the data - devices rows and data - records rows?

EDIT: Something else that is likely related is that we don’t have the Wi-Fi enabled all the time. Knowing now that it can get in a loop trying to sign people out and exceed the rate limit that way, I bet the Wi-Fi going out is a potential root cause. For reference, our meetings are from 5–8 on Wednesdays and 1–6 on Saturdays, but the Wi-Fi is on from 4 PM Wednesday to 5 AM Thursday, and then Saturday morning through Sunday night. I would still think that it signs everyone out during that time, but I could see those things being related.


That should work, especially if you just want to disable it for some people. You can also change this line in the script to turn off the network monitoring entirely:


If you can’t beat em, join em

During this offseason we have been busy with a bunch of training programs, as well as developing our new swerve test bench, affectionately named “Crab Bot.” We opted to use SDS MK4i L2 Modules with in-corner mounting and Neo Motors. The CAD for this test bench can be found here. It will be battle tested at The New England Robotics Derby this weekend. We plan to continue with these modules going into next season, game permitting.

We also have a reveal video for this robot which can be found below:

Many thanks to the following groups:

SDS - For making some awesome and robust modules
FRC 3467 - For the climber in a box
FRC 4909 - For allowing us to bring it to an event before next season
All the swerve teams that beat us on Einstein

If you have any questions about the robot, please ask here, or shoot me a PM. There’s a software post being cooked up.

Looking forward to moving sideways for the foreseeable future.


Software Update #11: Learning to Crabwalk

No crab is complete without its code. Or something like that. We’ve been working on swerve code since early August as part of our offseason training for software, which means that a good chunk of this code was created by some of our newest students. Thank you to everyone on 6328 who dedicated their time to this project!

The Basics

All of the code is in our SwerveDevelopment repository on GitHub (the drive subsystem is here). The code structure closely resembles our 2022 competition code, including full integration with AdvantageKit. This means that the hardware interface for each subsystem is separated from the control logic, allowing us to accurately replay matches using the WPILib simulator or seamlessly switch to a physics simulation (more on that below).

The swerve support uses WPILib’s built-in kinematics for most functions with feedforward/feedback controllers running on the RIO. We chose to bypass the onboard controllers on the SparkMaxes for a variety of reasons — this simplifies the code running the controllers (no need to artificially wrap turn setpoints), it makes it easier to run with simulated modules, and we can use our new derived velocity controller for the drive motors (see the section below for details on that). We haven’t observed any performance issues with running the controllers on the RIO.

Odometry is one area where we deviate slightly from WPILib’s built-in kinematics. The supported approach for 2022 has been to update odometry based on the velocities of the modules (running the kinematics in the reverse direction of when driving the robot). Instead, we’re using the position delta of each wheel, which reduces the amount of drift over extended periods. Our odometry code is here — we just use the kinematics object but feed it a position delta instead of a velocity, then modify our pose using a Twist2d. I believe WPILib is planning to make position delta odometry the default for next year.
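
For the curious, the Twist2d update amounts to the SE(2) pose exponential — following a constant-curvature arc rather than a straight chord each cycle. Here’s a small Python sketch of the math (our actual code is Java and just calls WPILib’s Pose2d.exp):

```python
import math

def twist_exp(pose, dx, dy, dtheta):
    """Apply a body-frame twist (dx, dy, dtheta) to a field-frame pose
    (x, y, theta), the same constant-curvature update WPILib's Pose2d.exp
    performs."""
    x, y, theta = pose
    if abs(dtheta) < 1e-9:
        # small-angle series for sin(t)/t and (1 - cos(t))/t
        s = 1.0 - dtheta * dtheta / 6.0
        c = dtheta / 2.0
    else:
        s = math.sin(dtheta) / dtheta
        c = (1.0 - math.cos(dtheta)) / dtheta
    # displacement along the arc, expressed in the starting body frame
    lx = dx * s - dy * c
    ly = dx * c + dy * s
    # rotate into the field frame and translate
    nx = x + lx * math.cos(theta) - ly * math.sin(theta)
    ny = y + lx * math.sin(theta) + ly * math.cos(theta)
    return (nx, ny, theta + dtheta)
```

The drift reduction comes from feeding this measured position deltas each loop rather than instantaneous velocities multiplied by the loop period.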

Enough babbling! Here’s a video of the code in action, comparing footage of the robot to the odometry and module states. The red arrows show the measured states and the blue arrows show the setpoints.

Generating a feedforward model and tuning the controllers is especially important for swerve, and we have a couple of tools we’ve developed to help with that:

  • We rely heavily on our TunableNumber class for values like FF and PID gains. These values act like constants when running on the field, but can be manipulated using NetworkTables when the robot is running in tuning mode.
  • Last year, we wrote a FeedForwardCharacterization command that generates values for kS and kV using the same technique as SysId’s quasistatic tests. Integrating this function into our code allows us to recharacterize quickly without needing to configure SysId and deploy new code. It also means…
  • We can run the characterization routine on the drive motors by just controlling the turn motors to zero degrees. No need to use blocks or rely on brake mode for the turning motors. Our drive subsystem has a characterization mode to support this use case.
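
Under the hood, a quasistatic kS/kV fit is just an ordinary least-squares line through (velocity, applied volts) samples collected during a slow voltage ramp — the intercept is kS and the slope is kV. A Python sketch of the math (our actual FeedForwardCharacterization command is Java):

```python
def fit_ks_kv(samples):
    """Fit volts = kS + kV * velocity by ordinary least squares.

    samples: list of (velocity, applied_volts) pairs taken while slowly
    ramping voltage in one direction (so sign handling can be ignored).
    """
    n = len(samples)
    sum_v = sum(v for v, _ in samples)
    sum_u = sum(u for _, u in samples)
    sum_vv = sum(v * v for v, _ in samples)
    sum_vu = sum(v * u for v, u in samples)
    kV = (n * sum_vu - sum_v * sum_u) / (n * sum_vv - sum_v * sum_v)
    kS = (sum_u - kV * sum_v) / n
    return kS, kV
```

Because the ramp is slow, acceleration is negligible and the kA term can be dropped from the fit.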

Trajectory Following

Our preference has always been to define trajectories in code using transformations (rather than relying on a GUI tool). Last year, we created a FieldConstants class that defined lots of useful reference points on the field:

All of our trajectory waypoints were defined relative to these reference points (example). Defining these waypoints in code also allowed us to extensively reuse waypoints rather than redefine full trajectories for increasingly complex autos. You can see that in action with our five cargo auto, which uses waypoints from our two, three, and four cargo autos (code here).

We wanted to continue defining trajectories in code for swerve, but WPILib’s tools don’t provide a complete solution for controlling the holonomic rotation along a path. We created our own set of trajectory classes for swerve to address this issue, which can be found here. Let’s look at an example trajectory definition:

CustomTrajectoryGenerator.generate(config, List.of(
    new Waypoint(new Translation2d(0.0, 0.0), null, Rotation2d.fromDegrees(90.0)),
    new Waypoint(new Translation2d(2.0, 3.0), Rotation2d.fromDegrees(90.0), Rotation2d.fromDegrees(-90.0)),
    new Waypoint(new Translation2d(2.0, 6.0), null, null)));

Each waypoint has three components defined in the constructor — a translation (required), a drive/velocity rotation (optional), and a holonomic rotation (optional). Since both rotations are optional at every waypoint, the generator will automatically combine quintic and cubic splines for the drive path, then use an S curve for the holonomic rotation. We also have shortcuts for generating waypoints based on poses. For example, moving from one pose to another on swerve (following a straight line) looks like this:

CustomTrajectoryGenerator.generate(config, List.of(
    // pose-based shortcut; see the Waypoint class for the exact helper names
    Waypoint.fromHolonomicPose(startPose),
    Waypoint.fromHolonomicPose(endPose)));

Our CustomHolonomicDriveController can follow these generated trajectories, including correctly handling the holonomic rotation feedforward. Here are some key examples from our swerve code:

  • Setting up the config and generating the trajectory (here)
  • Running the drive controller (here)
  • An auto routine using our trajectory following command (here)

You can see the trajectory follower in action at the end of this video:


Last year we relied very heavily on simulation to test drive code, auto routines, driver assist features, and more. We knew we wanted to support this on swerve, but creating a full swerve simulation becomes complicated very quickly. Instead, we opted for the simpler approach of independently simulating each drive and turn motor using the FlywheelSim class. With the hardware interaction separated from the drive subsystem, enabling the simulator is as simple as swapping our ModuleIOSparkMAX implementation for ModuleIOSim.

Another problem we needed to solve was simulating the gyro sensor. The angular velocity of the robot can be calculated using the measured module states (the outputs of the module sims) and applied to the pose, but this approach is suboptimal on the real robot. In that case, we want to rely on the gyro because it’s more accurate over long periods. We set up the odometry system to automatically switch between these two approaches depending on whether a gyro is connected (code here). The sim simply doesn’t provide a gyro hardware implementation, meaning the “disconnected” case is used. A nice side effect is that we have a fallback if the gyro is ever disconnected on the real robot.
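
Recovering the chassis angular velocity from the module states is just forward kinematics. Here’s an illustrative Python sketch (our code uses WPILib’s SwerveDriveKinematics; this simplified version assumes the modules are placed symmetrically around the robot center):

```python
def chassis_from_modules(positions, velocities):
    """positions: (x, y) of each module relative to the robot center (meters).
    velocities: measured (vx, vy) wheel velocity vector of each module.
    Returns the chassis (vx, vy, omega)."""
    n = len(positions)
    # with a symmetric layout, the rotational components average out
    vx = sum(v[0] for v in velocities) / n
    vy = sum(v[1] for v in velocities) / n
    omega = 0.0
    for (mx, my), (mvx, mvy) in zip(positions, velocities):
        # each module's residual velocity is omega cross r; solve and average
        omega += ((mvy - vy) * mx - (mvx - vx) * my) / (mx * mx + my * my)
    return vx, vy, omega / n
```

Applying that omega to the simulated pose each loop stands in for the missing gyro.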

This system works very well to check that the drive and turn controllers are behaving correctly. It can even approximate some more “complex” behaviors like realistic acceleration (making this work mostly involved adjusting the flywheel sim parameters until the results looked reasonable). Ultimately, we feel that this approach achieves everything we require of the simulator despite not being physically accurate.

Here’s a video of a five cargo auto running in the simulator:

Derived Velocity

While we have generally found great success with the SparkMax/NEO ecosystem, the fixed velocity filtering on the internal encoders has been a distinct low point. For those unaware, the SparkMax filters the velocity measurements from the internal encoders, resulting in a ~100ms delay in readings. This makes it more difficult to tune aggressive feedback controllers. While we’ve been able to work around this with well-calibrated feedforward models, eliminating the latency would make it much easier to tune the velocity controllers for mechanisms like flywheels and drivetrains. We created a class that replaces the built-in PID controller on the SparkMax and eliminates this measurement latency:

You can find the code here: SparkMaxDerivedVelocityController

This works by running the controller on the RIO in a notifier thread while calculating the velocity based on the SparkMax’s position measurements (which are not filtered). Running that calculation requires accurately timestamping the measurements from the SparkMax (data which is not provided by REVLib), so we manually read the CAN frame for position from the SparkMax. Using a notifier allows us to run the controller faster than the loop rate of the main robot code, though for most applications a 20ms period appears to be sufficient. You can also manually adjust the level of filtering on the data, since completely unfiltered velocity data can be extremely noisy.
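
The core idea can be sketched in a few lines — estimate velocity from timestamped position samples, with a window size that trades noise for latency (a Python illustration only; the real class is Java and reads the SparkMax position CAN frame directly):

```python
from collections import deque

class DerivedVelocity:
    """Estimate velocity from timestamped position samples.

    A wider window smooths noise at the cost of added latency -- the same
    trade-off as the SparkMax's fixed filter, but adjustable.
    """
    def __init__(self, window_size=3):
        self.samples = deque(maxlen=window_size)

    def add_sample(self, timestamp, position):
        self.samples.append((timestamp, position))

    def velocity(self):
        if len(self.samples) < 2:
            return 0.0
        (t0, p0), (t1, p1) = self.samples[0], self.samples[-1]
        return (p1 - p0) / (t1 - t0)
```

A window of 2 gives unfiltered (noisy but immediate) velocity; widening it adds smoothing with far less delay than the fixed ~100ms filter.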

Here’s an example of velocity data from our swerve drive with the SparkMax’s measurements in yellow and our derived ones in blue:

We also tested this system on the flywheel for our 2022 robot. We shot two balls in succession after raising the kP value until just before it oscillated at steady state. Here’s what the internal (filtered) controller on the SparkMax did:

After the first ball, it overshoots the setpoint of 3000 RPM by several hundred RPM (blue is our derived value). The second ball is then fired with the flywheel running too fast, which could affect its trajectory. The controller overshot the setpoint because the data showing that the flywheel had reached the setpoint was delayed, so it continued to apply voltage after it should have stopped. In contrast, here’s the same test with the controller running on the RIO using our derived value:

The flywheel speed recovers perfectly after the first shot and is stable at 3000 RPM, because the controller is able to respond more quickly. You can find the implementation of this controller for our flywheel here, and (as shown before) we’re also applying it to the drive velocity controllers on our swerve bot.

Feel free to reach out with any questions, and keep an eye out for more software updates soon.




Nice work! A couple thoughts for people who want to follow suit on some of these innovations:

This was actually just merged into WPILib earlier today.

PathPlanner seems to have in-code generation in the new beta. I haven’t tried it yet, but it seems like you have a more fleshed-out API.

Did you run into any thread-safety or other pitfalls with this system? Would they be eradicated if the code was run in the regular robotPeriodic loop?


Seems like the relevant bits are the generatePath method and the PathPoint class. It’s definitely a nice option, though it seems to lack some features like optional rotations (very nice when we don’t want to optimize everything by hand) and advanced constraints (our API uses WPILib’s trajectory config/constraints, which are quite flexible).

The class is written to be thread-safe using synchronized blocks, and we haven’t encountered any issues there. The main thing to keep an eye on is CAN utilization if it’s configured to use a very short period (the period for status frame 2 is updated to match the controller period, though it only contains the position value). The default rate for that status frame is 20ms anyway, and we haven’t observed any issues running it on four motors for swerve.


How did you get your FlywheelSim constants, particularly for the turn motor? Did you just characterize the values using WPILib’s SysId or did you use a different method?


The gearbox and gearing values (the first two constructor options) are known based on the hardware, so they don’t require tuning. The moment of inertia (the last argument) is trickier because we’re trying to roughly simulate the dynamics of an entire drivetrain — for example, using the theoretical moment of inertia for just the drive wheel would result in wildly unrealistic behavior. Our approach was to tune the moment of inertia values manually until the controllers could converge in a “reasonable” length of time during sudden acceleration (a fraction of a second for the turn motor and roughly a second for the drive motor). You might be able to calculate those values based on the physics of the drivetrain, but we didn’t bother because this simplified simulator is only meant to be a rough estimate in the first place.