FRC 6328 Mechanical Advantage 2022 Build Thread

6328 Kickoff Plans

As kickoff rapidly approaches, we have been finalizing our plans for this year’s kickoff. We will be making heavy use of this worksheet, which was designed to work with our kickoff system and took heavy inspiration from the 2791 Kickoff Worksheet as well as this 1678 Strategy Training Worksheet. Below is our schedule for kickoff weekend:

Worksheet optimized for online use
Worksheet optimized for printing: 6328 2022 Kickoff Worksheet.pdf (160.7 KB)

Day 1

  • Watch the kickoff livestream

  • Break into groups of 5-6 students

    • Premade groups to balance experienced students with less experienced and rookie students

    • Read the manual as a group, initially skipping over the heavy technical and dimensional parts of it

    • Complete Day 1 Part 1 of the Worksheet. This section focuses on understanding the rules of the game.

  • Regroup as a full team and work to combine all the groups’ answers into one finished set of answers for that part

  • Repeat steps 2 and 3 for Parts 2 and 3 of the kickoff worksheet

    • Part 2 focuses on extrapolating more understanding of the game beyond what is written in the manual.

    • Part 3 focuses on calculating maximum scores for both an individual robot and an entire alliance in different stages of the match.

  • All members start the 1678 rules test, with the expectation that they will pass it before the start of Sunday’s meeting.

Day 2

  • Review rules test status and anything major we missed the previous day

  • Break back into the smaller groups and create a list of actions that can be done in the game. This can be found in Day 2 of the Worksheet.

  • Regroup as a full team and work to combine all the groups’ lists into one finished list

  • Repeat steps 2 and 3 to turn the list of actions into a MoSCoW sort.

If your team wants to follow our approach to kickoff and you want more specifics about how we are doing it, feel free to shoot me a PM. We will be posting our answers to the worksheet on Saturday and our MoSCoW sort on Sunday.

Covid Precautions

As a team, we recently made the hard decision to pivot to a fully virtual kickoff. With the current state of case rates and hospital capacities in Massachusetts, having the entire team in the building at once no longer made sense. As such, we will be making heavy use of Zoom and its breakout rooms feature for kickoff weekend. To mitigate spread during our time in the shop following this weekend, we have developed policies, summarized below:

  • Masks must be worn at all times.

    • Preferably KN95/N95
  • Meetings are scheduled to avoid full meals. Snacking and drinking can be done in a room alone, outside, or in one’s car.

  • Limited attendance and attempts to minimize contact between groups when possible

  • Contact tracing tracked through sign up spreadsheet and attendance system

  • If one tests positive, they must isolate for 10 days from symptom onset or the positive test, whichever comes first

  • If one is deemed a close contact (at robotics or otherwise), they must isolate for 5 days, assuming no symptoms develop and no positive test

Going into Week 1

Following kickoff weekend, we plan to move to a hybrid approach with some subteams meeting in-person and others remaining virtual when possible. Our goals for Week 1 can be found below:

  • Prototyping
  • Building field elements
  • Attempting to finalize drivetrain CAD
  • Setting up the basic code project structure/repository
  • Starting work on the scouting app

Obviously these are subject to change if the game (or Covid) throws us a curveball. As always feel free to drop us any questions or criticisms.

Good Luck, Have Fun, and Stay Safe




Question about this point. I’ve waffled on which side I’m on in this debate.

Is it better to separate experienced from inexperienced students? Inexperienced voices may get intimidated and their ideas quashed.


This is a very good point and one that was a factor in our decision making here. A part of our group structure that I didn’t touch on is that each group has a “leader,” whose voice isn’t more meaningful than the others’; rather, they are students or mentors focused on strategy who act primarily as facilitators of discussion. The goal of these “leaders” is to encourage and ensure that quieter and more inexperienced voices are heard. However, even without these leaders, I would still encourage a variety of experience in groups, as inexperienced students often struggle to fully visualize how everything works in a game the way a veteran student can. In my opinion, it very much comes down to emphasizing to student leadership the importance of the inexperienced voices.




Kickoff Update

Today’s update will be a smaller one as today was focused on understanding the rules and we purposefully didn’t get into any action prioritization or robot mechanisms.

The team started out today, before the livestream, by going through a quick presentation on some common game themes, the mechanisms used in them, and a recent history of FRC games (thanks to @BordomBeThyName for the idea and some of the content). This was designed to give everyone, but especially new students, some baseline knowledge of what to look back on for inspiration this year. The goal for day 1 was for everyone to understand the rules and scoring of the game so that tomorrow we can start coming up with our priority of actions.

After watching the livestream, we started consuming the firehose of new information and filling out the kickoff worksheet Connor linked earlier. This was done in 7 smaller groups, coming back together at set checkpoints to combine our work into one main sheet as a whole team.

View the results of our group worksheet here.

Random Observations

  • The balls take a while to be recycled out of the HUB (7 seconds for high and 5 for low)
  • Having your intake handle bouncing balls will be very important, especially if primarily a high HUB shooter
  • The vision target seems much harder to use than in previous years

Kickoff Retrospective

  • Having the overview/history at first helped remind everyone of previous years and common themes to look for
  • The worksheet and preformed groups helped set a structure to keep things focused
  • We were still able to be efficient on Zoom; although it was less engaging, the use of breakout rooms made it easier in some ways for unique thoughts to be formed without being drowned out by the strongest voices
  • Engagement could be improved by finding more ways to get everyone participating, especially when coming back to share in the larger group

Next Steps

  • Everyone takes and passes the 1678 rules test (thanks 1678 for putting this together so quickly)
  • Tomorrow we will:
    • Come up with the list of all the actions possible in this game
    • Prioritize those actions into a MoSCoW prioritization of what our robot will do
    • Start thinking about the prototypes our team wants to start on
    • Start thinking about what other subteams will need to start working on

Expect a much more in depth update tomorrow when we actually get into the details of what our robot will and will not do.



Kickoff Weekend Summary

Today we wrapped up our initial analysis of RAPID REACT℠ presented by The Boeing Company. For the majority of today we continued yesterday’s intentional absence of mechanism talk; however, at the end of the day we started looking at some examples of past subsystems that may prove a useful resource going forward. Like yesterday, all of our findings were published in our Worksheet.

The first goal of today’s meeting was to break the game down into all the specific capabilities of a potential robot. We broke up into the same 7 groups as yesterday, and each group looked back through the game to create their own list. We then regrouped and combined them into one master list. Each group then used this master list to make their own priority list using MoSCoW Prioritization. We regrouped again and spent a very extended period of time developing one team priority list.

MoSCoW Priority List

All items in the same grouping are unsorted, as it is difficult and often a waste of time to apply such a fine sort to a game that has seen so little testing and prototyping thus far. An asterisk means that item is very loosely placed in its grouping, as we expect prototyping to have an especially large role in its feasibility. Click on each item for a brief description of what it encompasses and/or why it is placed there.

Must Have

Our robot will be doing these things

Drive, accelerate quickly, push robots in defense, Taxi


Intake 1 Cargo off the ground at a time

Cargo will be most prevalent on the floor

Hold 2 Cargo

Reduces cycle time per ball

Eject Cargo

Dispensing Cargo from the robot in a very controlled manner, often used after accidentally taking in an opposing Cargo

Scoring in Low Hub from fender

We believe that the low goal is non-optional for us, due to its low risk nature in terms of reliability, shockingly competitive point value, ability to contribute most effectively to the Cargo RP, and it being more difficult to defend.

Score cargo in lower hub from tarmac

This follows the same reasoning as scoring from the fender. In this case, the tarmac primarily refers to about one robot length away from the fender. This allows us to score over a shorter robot occupying the space in front of the fender.

Climbing Mid Rung

This is a relatively simple climber, especially since we have experience with it from 2020 and there are multiple COTS options for completing the task. Being able to do this allows us either to do an L2+L2+L1 endgame for the RP, or to be the L2 in an L3+L2 endgame for the RP.

Climbing Low Rung

More relevant to the discussion is that all climbers should be capable of L1. Fitting 3 robots on one bar is a tall ask, and 3 L2 climbers that cannot do L1 wouldn’t be able to get the RP.

1 Cargo Auto

Super easy, worth an easy 2-4 points, and contributes to the Cargo RP.

Should Have

Additional goals that are within reach but not mandatory

2 Cargo Auto

4-8 extra points, and potentially contributes to a quintet, though that is unlikely without a partner running another 2 Cargo auto.

Active Settler

Being able to aggressively deal with the bouncy nature of the game piece will be important for reducing cycle time if either you or your alliance members are focusing on the high goal

Scoring in Upper Hub from Fender

While the Lower Hub is definitely better for the RP when doing fender shots, in close matches and elims matches, assuming we can secure a high enough accuracy in our prototyping, we expect the Upper Hub to outscore the Lower Hub.

Scoring in Upper Hub from Tarmac

Similar reasoning to the Upper Hub fender shot. This, however, gives us versatility in scoring over a robot that may be occupying the fender at the time.

Using vision for targeting

This can prove useful when shooting from the tarmac, and it is heavily desired for odometry.

Go under Low Rung*

Approaching the Hangar from the front makes lining up much, much easier. From early images, it looks like the hangar is even more cramped than expected, so this may move up a tier.

Intake and Score on opposite sides*

Greatly beneficial for multi-Cargo autos, as you don’t have to turn around between intaking and shooting. Also potentially has smaller cycle-time gains in tele-op. This is flagged as highly likely to be revisited, as we do not know how much it may hinder other design aspects.

Could Have

Items that are options for additional add-ons if not already a desirable byproduct of an already incorporated design

Automatically eject opposing Cargo

Takes some effort off drive team for getting opposing cargo out of the robot.

Ejecting into Terminal

This will likely be a byproduct of another objective. However, we figure it has some use as a backup ability of the robot if a major mechanism fails and we have to scramble to make a dumper.

Intake 2 Cargo off the ground at a time

A rare situation where two same-alliance balls are right next to each other. It could be nice to deal with if it happens, but it is unlikely to get much use. Similar to intaking hatches off the ground in 2019.

3+ Cargo auto

Not a primary priority; however, if we have the time, a 3 Cargo auto is hugely beneficial for getting a Quintet.

Climb High Rung*

Valuable points wise. Makes it such that a team only has to climb to L2 with you for the RP. Likely to move either direction following prototyping.

Climb Traversal Rung*

Very valuable points wise. Makes it such that a team only has to climb to L1 with you for the RP. Likely to move either direction following prototyping.

Get fed in auto

(Take in balls from partner in auto and shoot them) Could be useful in niche situations where one partner has a conflicting auto with ours and our other partner either doesn’t score or potentially scores low.

Block shots

Could be useful as a fallback defensive ability to block opponents’ shots. It is the only effective way to defend a robot at the launchpad.

Deliver to Terminal station in auto


Catch Cargo

Could occasionally be nice to catch cargo after its first bounce from the Upper Exit. Quite niche and could cause issues with accidentally catching before first bounce.

Won’t Have

Items that we will not intentionally design to be able to do

Drive sideways



Intake Cargo from terminal

Last resort. It only has 1 Cargo in it off the bat.

Shoot into Upper Hub from Launchpad

Initially we thought this had the potential to be a viable scoring location. However, we soon realized that this is not a half-field game but a full-field game. In that situation, this shooting location is near only 1/4 of the field and still requires a large amount of precision to make reliably. Going to the fender or tarmac makes much more sense in most situations, even ignoring the difficulty of the task.

Move on Rung

A lot of effort for minimal gain in most situations. Wasn’t often worth it in 2020. Cannot imagine it being more worth it here.

Launcher targets independently from drive (turret)

Likely unnecessary complexity in a game where we aren’t shooting from every position and most of the cycle is spent in transit, acquiring Cargo, or scoring near the goal. The lineup-speed advantage of a turret over turning the robot is likely moot here.

Shoot into Lower Hub from Launchpad


Feed in auto

We don’t expect this to be a needed situation enough to compromise any other designs to allow for it.

Keep away opposing Cargo

(Intaking opposing Cargo and having a premade system to quickly send it far away to make the opponent’s cycle longer.) Similar to above, we don’t expect to use this enough to justify compromising any other subsystems. This doesn’t mean we won’t do it manually with our launcher; it just means it won’t be automated.

Score upper hub from anywhere

(This refers to anywhere not within the tarmac or launchpad.) We view this as way too complicated when, in almost all launching years, most teams end up defaulting to a few preselected positions anyway. Shooting from that range also seems minimally useful because you have to reenter the chaos of the tarmac to get more Cargo.

*= likely to revisit later on after some prototyping


Justification for many of these can be found in the above drop downs.

  • The Lower Hub is super strong in qualification matches. In tight quals matches and eliminations, however, we believe a reliable Upper Hub scorer will be stronger.
  • Scoring in either Hub is likely easiest against the fender.
  • Scoring in either Hub from the tarmac provides substantial flexibility in being able to score from behind defenders/scoring opponents/allied robots.
  • Being able to climb the Mid Rung doesn’t exempt you from having to be able to climb the Low Rung.
  • Being able to actively settle Cargo is extremely important, especially for those who plan to score in the Upper Hub.
  • Being able to go under the Low Rung makes aligning for a Mid Rung climb much easier.
  • Intaking and Scoring from opposite sides of the robot provides substantial gains in auto, however in tele-op, the gains are marginal in most cases.
  • The Terminal is not especially useful.

These conclusions and this priority list are going to change. As we develop our own prototypes alongside other Open Alliance teams, we will all but certainly move around items that were easier or harder than expected. As we further look at strategy we will also likely feel that some items on here are overrated or underrated in terms of point value. Sticking with a stationary priority list for too long is an effective way to build the wrong robot.

Moving into Week 1

We plan to hit the ground running in Week 1 with prototyping of primarily intakes, launchers, indexing methods, and climbing to the High/Traversal Rungs. We figure that climbing to the Mid Rung is similar enough to 2020 that we can hold off on prototyping it specifically for a bit. To start this process, we broke out into groups and discussed factors and questions that would make an effective subsystem for this game. We also took a look at some past examples of these subsystems and thought about how they could be used going forward. Some notes on this discussion can be found below.



General questions/comments to consider when prototyping:

  • What’s a good motor and a good ratio?

  • How well does the ball center?

    • Vectored intake wheels / mecanum wheels
  • Should the intake be more rigid or more compliant?

    • Material selection (plastics or aluminum), how it’s constructed
    • 1678 2014 - good compliant example
  • What type of wheels work best?

    • Soft wheels, harder wheels (colsons), treaded wheels

Some ideas/possibilities:

  • 2019 style - one set of rollers on the top

    • 6328 2019 offseason robot intake
    • Passive bar on the bottom
    • Compliant/sticky wheels
    • Zip Tie/surgical tubing stuff
    • Bumper cutout(?)
    • Pneumatic actuation
  • Ways to incorporate velcro?

  • Grabber-style intake

    • Goes over/around cargo
    • Hungry Hungry Hippo
  • Rubber polycord style intake

    • “CD7” intake
    • 4911 2020 at the beginning of the year
    • 5172 2020
    • 1678 2016
  • 2018-style intake with rollers on the side(?)



General questions/comments to consider when prototyping:

  • Does the launching mechanism work with the upper hub, the lower hub, or both?
  • How much does ball wear affect your specific mechanism’s shot?
  • Think of good camera placement
  • Mechanism robustness

Some ideas/possibilities:

  • Catapult:
    • How to hold 2 cargo?
    • Shoot 1 ball at a time(?)
    • Release angle
    • Catapult arm length
    • Pivot position length
    • Velocity of catapult
    • Pros: more consistently reliable/accurate
    • Cons: shoots slower than a flywheel
  • Flywheel:
    • Could have variable hood
    • Release angle
    • What amount of compression works best?
    • Which type of wheel works best?
      • Diameter (4”, 6”)
      • Soft/squishy, harder wheels
    • What hood material works best?
      • Foam, polycarbonate sheet, 3d printed part
    • Two wheeled flywheel or one wheeled flywheel?
    • Speed of flywheel
    • Inertia
      • Is the energy loss in-between shots large enough to warrant additional inertia wheels?
  • Linear-style shooter
    • Spring-loaded; belt powered/launched
    • 2014 teams
      • 118 2014



General questions/comments to consider when prototyping:

  • How prone are the cargo to jamming?
  • Should we “hold” the cargo somewhere and if so, where and how?

Some ideas/possibilities:

  • Hotdog rollers
    • 254 2017
  • Straight ballpath
    • Simple
    • Possibility of jamming
  • Looking back at our 2020 v-hopper
    • No centering required
      • Quicker pickup
    • 4414’s second iteration of hopper 2021

As always, any questions, comments, criticism, or suggestions are highly appreciated. We’ll have some prototyping content for y’all later this week.




Day 3: Prototyping Begins

Following on from our priority list, the team started prototyping, software set up the 2022 framework on an old robot, and strategy came up with their game plan for the season.


Using the priorities determined on kickoff weekend, we decided on 3 main areas to prototype. Once we learn from these, they will be refined and integrated together in a more detailed prototype, which will include adding the indexing/hopper system.


Having built a climber before, and it being a fairly known challenge, the team is focused on prototyping for the High and Traversal Rungs. To start, this is mostly an exercise in 2D sketching, with the plan to move to some 3D printed 3:40 scale models to play with ideas in the physical world (thanks to @cadandcookies for the idea here). These are very early stages, so expect more details in the coming days.


There wasn’t much work done on the shooter yesterday, as we were focused on other mechanisms. There is some initial CAD work to be done to create adjustable plates that will let us experiment with compression and release angle, and then we should start physically testing. The three leading ideas at the moment are:

  • Hooded flywheel
  • Linear flywheel
  • Pneumatic catapult

Again, many more details to come here as we get farther along.


The intake is where most of the focus was yesterday. The initial prototype used HYPE blocks and connectors mounted to an old kitbot to simulate driving around to collect balls. Since we don’t yet know what our indexing system will look like, the focus was on handling bouncing balls rather than centering them. Our initial idea was to add a second roller above ball height purely to combat the bounce.

Happy Path

The bounce is timed perfectly such that when the ball is rising it hits the upper roller and gets sucked in.


Unhappy Path

  • The ball bounces right over the intake and into the robot. This might actually be a feature if the robot can handle indexing and has a way to reject opponent balls when wanted (umbrella :thinking:).

  • The ball bounces and hits the upper roller, knocking the ball away from the robot. The velcro on the intake didn’t stick enough to hold on to the ball.


  • Leaning into having bouncing balls go over the intake may be an effective strategy. The approach to the ball is the same as when trying to suck a ball in, so catching it may save the development work of settling the ball with the intake itself, if the indexing system can already handle this. It opens up the possibility of catching an opponent ball or a 3rd ball, so the drivers would need to be aware of this and the robot would need a configuration to reject catching.
  • The top roller may not even need to be a roller. There is so little contact time (the ball is never compressed against anything) that simply having a wedge may be just as effective.
  • Compliance in the top roller/wedge would be beneficial to dampen the impact of the ball.
  • Velcro has some promise, but didn’t work great in the current implementation.

Next Steps

  • Experiment with the top roller not being a roller at all and simply a wedge.
  • Add compliance to whatever the top deflection system is to absorb some of the bounce.
  • Try running without any top system to see whether it is knocking away more balls than it settles, since large bounces would clear a single roller anyway, while some balls that hit the top deflector fly away.


The software team made huge progress on both the OI (operator interface) and drivetrain code. This is the foundational system that will allow the team to connect controllers and run the drivetrain on any robots we test with (plus eventually the competition robot). They also tried out the new Xbox controllers and upgraded the Driver Station software. The next steps will be setting up the odometry and motion profiling systems in preparation for auto routines, and starting to explore vision options as soon as we have a target to test with.


The scouting/strategy subteam worked on planning out the required work for the season and creating a schedule. Here is the high level task breakdown:

  • Understanding the game
    • This includes trying to predict the mechanics of ball availability on the field, the tipping point for shooting high vs low, the need to drive under the low rung, and much more.
    • Will have a more in depth update on this soon
  • Determine what metrics the team wants to scout for
  • Develop the layout of the scouting app for this year
  • Set up the data analytics for the outputs of the app
  • Come up with a playbook for the game to use at events
  • Train all our scouts in using the app

Field Build

We mocked up an initial high hub to allow for prototype testing until we can get a more realistic one built. This is built out of the rocket from 2019, the climbing bar from 2020, and some plastic found in a dumpster near our shop.

Future Prototype Teaser

Mysterious screenshot taken from the strategy zoom for some of our remote members showing some future prototype ideas…





Have you considered replacing the velcro with brushes?

A number of teams used brushes to great effect in 2020 including 3476:


Yep, we’ve seen some people throwing that idea around and we definitely think it’s worth a try if we can acquire some brushes in the near future - looks like we forgot to mention that in the “next steps” section of that blog post.

We’re curious as to how effective the brushes will actually be at dampening/deadening the bounce of the ball as they interact with it, which seems like the type of behavior that you’d want. When/if we do test any brushes, we’ll be sure to include our results in the blog.

@saikiranra y’all got any brushes we could borrow?? :stuck_out_tongue_winking_eye:


I believe teams used something like these brushes off of McM.


Day 4: Linear Catapult Prototype

Small update today on a first shooter prototype, a linear pneumatic catapult.

It is made up of a linear-motion 3D printed HYPE block with 8 bearings in it and a pair of 3/4" bore, 10" stroke cylinders, each powered by its own high flow solenoid from AutomationDirect. There are also some other optimizations, like having two tanks downstream of the regulator.
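
As a rough sanity check on cylinder sizing, theoretical extend force is just pressure times piston area. Assuming the FRC-standard 60 psi working pressure (an assumption; our actual regulator setting isn’t stated here):

```java
// Theoretical pneumatic cylinder extend force: pressure x piston area.
// 60 psi in the example below is an assumed working pressure, not measured.
public class CylinderForce {
    /** Extend force in lbf for a given bore (inches) and pressure (psi). */
    public static double extendForceLbf(double boreIn, double pressurePsi) {
        double pistonAreaSqIn = Math.PI * Math.pow(boreIn / 2.0, 2);
        return pressurePsi * pistonAreaSqIn;
    }
}
```

Each 3/4" bore cylinder would then provide roughly 26.5 lbf, or about 53 lbf for the pair, before friction losses.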

The cradle that holds the ball needs a lot of work, as it is currently rotating and flexing with every shot, but it is impressive that the shots are still consistent even with that. The eventual goal is to have this hold both balls at a time, side by side.



We used brushes from the 2019 feeder station in our 2021 at home robot and they worked great. 10/10 would recommend.


This would be really interesting to see. The main argument I’ve seen against catapults this year is that you can hold two cargo at once. A double-shot catapult would, in theory at least, be even better than a flywheel shooter in that regard. Good luck!


Day 5-6: Launcher Prototypes, Scouting App, & Drivetrain CAD

Scouting App

The scouting team started work on the scouting app for this year. One of the ideas they came up with was recording the exact location a team shot from by tapping on the field. This requires having an accurate field modeled in the app UI. The scouting team will do a much larger deep dive into this soon.

We figured out a way to streamline the process of figuring out where each field element should go in CanvasManager [our name for the scouting app UI framework]. To do this, we first uploaded a drawing of the field into GIMP. We resized it to the resolution of the app we are making, so that when we hover the mouse over the image we see the exact x and y coordinates of the mouse. After accounting for some offsets and other calculations, we transfer these coordinates into a website called GeoGebra (similar to Desmos). We can then rotate the shape we created in GeoGebra to find the other necessary points. Although there was some initial grunt work, this should streamline the entire workflow, giving us more time to work on debugging the app.
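
The core of that mapping is just a scale and an offset. A hypothetical sketch (this is not our actual CanvasManager code, and the class name and numbers are placeholders):

```java
// Hypothetical sketch of mapping a point measured on the field drawing
// (e.g. with the GIMP pointer) to the app canvas's resolution.
public class FieldMapper {
    private final double scaleX;
    private final double scaleY;
    private final double offsetX;
    private final double offsetY;

    public FieldMapper(double drawingWidth, double drawingHeight,
                       double canvasWidth, double canvasHeight,
                       double offsetX, double offsetY) {
        this.scaleX = canvasWidth / drawingWidth;
        this.scaleY = canvasHeight / drawingHeight;
        this.offsetX = offsetX;
        this.offsetY = offsetY;
    }

    /** Converts drawing-pixel coordinates to canvas coordinates. */
    public double[] toCanvas(double drawingX, double drawingY) {
        return new double[] {
            drawingX * scaleX + offsetX,
            drawingY * scaleY + offsetY
        };
    }
}
```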



The software team continued to bring up the baseline code for controlling the drivetrain and running motion profiles, in addition to working to finish up the prototype motor test board. There will be a much larger software update posted soon so this section is kept short for now.


Linear Catapult

The linear catapult was modified to hold two balls side by side. With this added weight the prototype struggled with range in its current form. I unfortunately don’t have any pictures/videos of this right now, so stay tuned for those over the weekend. For the next catapult revision we think we will be pivoting to a more standard rotating catapult design, but staying pneumatic.

Hooded Flywheel

This is a traditional hooded flywheel prototype with various built in adjustments. The CAD team worked on designing it and then we cut it out on our router.

Attempt 1

Maybe a little fast… turning down the speed for attempt 2

~~First~~ Second try!

This was at the very end of the meeting, so we only shot these two balls. Much more testing to come…

Prototype Specs

  • Adjustable compression by changing hood holes and thickness (currently 0.8" of compression)
  • 6" flywheel (currently 2x colsons)
  • 2x NEOs either 1:1 or 1.33:1 (currently 1 NEO 1:1)
  • Additional top flywheel attachment powered by another NEO either 1:1 or 1.33:1, similar to the GreyT Shooter V2 (not shown or tested yet)
  • 2" Kicker wheel powered by an UltraPlanetary (not used yet)
  • Mounting holes for a “ball tower” to store and feed the balls

Next Steps

  • Prototype and integrate ball tunnel into the shooter to feed balls
  • Test how rapidly two balls can be fed
  • Test the range where we can still make shots with a fixed hood angle
  • Experiment with
    • Wheel type
    • Compression
    • Hood material, potentially foam
    • Release angle
    • Upper flywheel
    • Flywheel speed(s)


The CAD team also worked on our drivetrain CAD which is shown below. There are still some final details, but it is almost there. Currently it is 29W x 30L and has an 11:66 gear ratio.
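
As a quick sanity check on that ratio, theoretical free speed is motor RPM divided by the reduction, times wheel circumference. Assuming NEOs (5676 RPM free speed) and a 4" wheel, neither of which is confirmed above:

```java
// Back-of-envelope drivetrain free speed. The motor free speed and wheel
// diameter used below are assumptions for illustration, not confirmed specs.
public class DrivetrainSpeed {
    /** Theoretical free speed in ft/s. */
    public static double freeSpeedFtPerSec(double motorFreeRpm, double reduction,
                                           double wheelDiameterIn) {
        double wheelRpm = motorFreeRpm / reduction;
        double wheelCircumferenceFt = Math.PI * wheelDiameterIn / 12.0;
        return wheelRpm * wheelCircumferenceFt / 60.0;
    }
}
```

With the 11:66 (6:1) reduction, those assumptions work out to roughly 16.5 ft/s theoretical free speed.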

We’re off today, so we should have a week 1 summary post up on Sunday with our progress so far and from tomorrow’s meeting.



gif game is 100%


Software Update #1: Setting the Foundation

Our robot code work this week was focused on setting up a solid foundation for our 2022 code base. Below are a few key takeaways; while none of this is particularly revolutionary, we hope that it might prove useful.

AdvantageKit, Logging, and Multi-Robot Support

This is our first full project to make use of AdvantageKit and our logging framework. This means that hardware interaction in each subsystem is separated into an “IO layer” (see more details here) such that data can be replayed in a simulator. An extra bonus of this system is that it’s easy to swap out the hardware implementation. Currently, the drive subsystem supports SparkMAXs, TalonSRXs, or the WPILib drive sim depending on the current robot. We can test our code on older robots just by changing a constant, which has proved useful as we develop code long before any competition robot takes shape. Each subsystem and command defines constants for each robot, meaning that we don’t rely on all of them to behave identically. We’ll continue to share our progress with logging as the robot project grows beyond just a single subsystem.
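
For anyone unfamiliar with the pattern, here’s a simplified, hypothetical sketch of the IO-layer idea (this is not AdvantageKit’s actual API): the subsystem only ever talks to an interface, so a SparkMAX, TalonSRX, or sim implementation can be swapped in behind it by changing a constant.

```java
// Simplified sketch of the IO-layer pattern -- an illustration only,
// not AdvantageKit's real classes or our actual robot code.
public class DriveIOExample {
    public interface DriveIO {
        /** Sensor data read each cycle; a logging framework can record and replay these. */
        class DriveIOInputs {
            public double leftPositionRad = 0.0;
            public double rightPositionRad = 0.0;
        }

        /** Read the latest sensor values into the inputs object. */
        void updateInputs(DriveIOInputs inputs);

        /** Command open-loop voltages to the drive motors. */
        void setVoltage(double leftVolts, double rightVolts);
    }

    /** A fake implementation standing in for the SparkMAX, TalonSRX,
     *  or WPILib drive sim versions of the IO layer. */
    public static class DriveIOFake implements DriveIO {
        private double leftVolts = 0.0;
        private double rightVolts = 0.0;

        @Override
        public void updateInputs(DriveIOInputs inputs) {
            // A real implementation would read encoders here; the fake just
            // integrates the commanded voltage so behavior is observable.
            inputs.leftPositionRad += leftVolts * 0.02;
            inputs.rightPositionRad += rightVolts * 0.02;
        }

        @Override
        public void setVoltage(double leftVolts, double rightVolts) {
            this.leftVolts = leftVolts;
            this.rightVolts = rightVolts;
        }
    }
}
```

The key design point is that the subsystem logic above the interface never changes when the hardware underneath does.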

Operator Interface

During the offseason, we built a new operator board (see below) focused on using Xbox controllers. It contains wells for the driver and operator controllers, plus a series of physical override switches. The panel under the laptop is magnetically attached, and lifts up for access to the cables. This makes it much easier to carry devices to the field, and means that we don’t need to rely on dashboard buttons for critical overrides.

Last year, our scheme for finding and using joysticks was extremely flexible (see this post). However, we felt that the capability just wasn’t worth the complexity. Rather than supporting 7+ control schemes, we’ve reduced down to just 2. During competitions, the driver and operator use separate Xbox controllers. For testing or demos, all of the controls can also be mapped to a single Xbox controller. Here’s how the code is set up internally:

  • All of the driver and operator controls are defined in this interface (it’s very minimal right now). Based on the number of Xbox controllers, either the single controller or dual controller implementations are instantiated.

  • This class handles the override switches on the operator board. It reads the value of each switch when the joystick is attached, but will default to every switch being off if we’re running without the board.

  • The OISelector class scans the connected joysticks to determine which versions of the OI classes to instantiate. When the robot is disabled, the code continuously scans for joystick changes and recreates the OI objects when necessary.

This command is responsible for converting joystick positions to drive speeds. The joystick mode is selectable on the dashboard, and we currently support 3 modes (again, simplifying from 10 modes last year).

  • Tank: Left stick controls left speed, right stick controls right speed. While not a very intuitive system, this is still useful when testing the drive train.
  • Split Arcade: Left stick controls forward speed, right stick controls the speed differential. This is a reliable and easy to use option for testing, demos, etc.
  • Curvature: This isn’t quite a traditional curvature drive, but a mix between full curvature and split arcade. It’s a scheme we used during the at-home challenges last year to overcome the key limitation of curvature drive: not functioning while stationary. While a “quick turn” button is an effective solution, we felt that it was too unintuitive. Instead, we slowly transition between split arcade and curvature drive up to 15% speed (0% is split arcade, 15% is curvature, and 7.5% averages the outputs from both modes). This provides the benefits of curvature drive at high speed while maintaining the ease of use from split arcade.
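The blend described above can be sketched as follows (a minimal Python illustration, assuming simple arcade/curvature wheel-speed formulas; our real drive code is Java and also handles deadbands, scaling, etc.):

```python
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def hybrid_curvature(throttle: float, turn: float, threshold: float = 0.15):
    """Blend split arcade and curvature drive below `threshold` speed.
    At 0% speed the output is pure split arcade (so turning in place
    works), at >=15% it is pure curvature, and in between it is a linear
    mix (7.5% averages the two)."""
    arcade = (throttle + turn, throttle - turn)
    curvature = (throttle + abs(throttle) * turn,
                 throttle - abs(throttle) * turn)
    t = clamp(abs(throttle) / threshold, 0.0, 1.0)
    left = (1 - t) * arcade[0] + t * curvature[0]
    right = (1 - t) * arcade[1] + t * curvature[1]
    # Clamp to the valid motor output range
    return clamp(left, -1.0, 1.0), clamp(right, -1.0, 1.0)
```

At zero throttle this behaves exactly like split arcade, which is what removes the need for a separate “quick turn” button.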

Odometry, Motion Profiling, and Field Constants

Our current odometry system is fairly basic right now, but will evolve as we continue to explore vision options. Odometry is handled by the drive subsystem, with getters and setters for pose. One key change from last year is that everything uses meters instead of inches. This has already made our lives much easier when it comes to setting up motion profiling.

As with other areas, we took this as an opportunity to simplify older code. Our ~800 line motion profiling command from last year is now much simpler and easier to use (see here). We focused on the most essential features while removing the less useful ones (like circular paths, which were only useful for the at-home challenges). This new command is also much more maintainable when we need to make fixes and improvements.

Before long, we’ll need to start putting together profiles for auto routines. Unfortunately, this year’s field layout is a bit of a mess when it comes to defining the positions of game elements (did the edges of the tarmac really need to be nominally tilted by 1.5°?). To save many headaches later in the season, we wrote this class with lots of useful constants. It defines four “reference points” (see the diagram below) along the tarmac. The cargo positions are defined using translations from those references. The same principle of starting at a reference and translating could be used to define robot starting positions or waypoints on a profile to collect cargo. The class also includes constants for the hub vision target.
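The reference-point idea can be illustrated like this (Python sketch; the poses and offsets below are made-up placeholders, not our real field constants):

```python
import math

def translate(ref, offset):
    """Move from a reference pose (x, y, rotation in degrees) by an
    offset (dx, dy) expressed in the reference's own rotated frame."""
    x, y, rot = ref
    theta = math.radians(rot)
    dx, dy = offset
    return (x + dx * math.cos(theta) - dy * math.sin(theta),
            y + dx * math.sin(theta) + dy * math.cos(theta))

# Hypothetical numbers, not the real constants: a reference corner on
# the tarmac, including the field's nominal 1.5 degree tilt.
reference_a = (8.23, 4.11, 114.0 + 1.5)
cargo_b = translate(reference_a, (1.0, 0.0))  # 1m "outward" from the reference
```

Because every position is derived from a reference rather than hard-coded, a single corrected measurement fixes everything downstream of it.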

SysId Integration

SysId is an incredibly useful tool, but the process of setting up a new project is quite tedious (selecting motors, encoders, conversion factors, etc.) Instead, we wrote a command that communicates with SysId but makes use of the code we’ve already set up to control each subsystem. See this example of using the command. Each subsystem just needs a method to set the voltage output (with no other processing except voltage compensation), and a method to return encoder data like position and velocity.

Other Utilities

Neither of these utilities is new this year, but they’re always worth mentioning again:

  • We use a custom Shuffleboard plugin called NetworkAlerts for displaying persistent alerts. The robot side class can be used to alert the drivers of potential problems.

  • For constants (like PID gains) that require tuning, we use our TunableNumber class. During normal operation, each acts as a constant. When the robot is in “tuning mode” (enabled via a global constant), all of the TunableNumbers are published to NT and can be changed on the fly.
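The TunableNumber concept, sketched in Python (our real class is Java and publishes to NetworkTables; the names and dashboard plumbing here are stand-ins):

```python
TUNING_MODE = False  # global constant; flip to expose values for live tuning

class TunableNumber:
    """Acts as a plain constant during normal operation; in tuning mode
    the value would instead be read back from the dashboard (stubbed
    out here as a plain attribute)."""
    def __init__(self, key: str, default: float):
        self.key = key
        self.default = default
        self._dashboard_value = default  # would live in NetworkTables

    def get(self) -> float:
        return self._dashboard_value if TUNING_MODE else self.default

# Usage: read .get() every loop so mid-match tuning takes effect immediately
kP = TunableNumber("Drive/kP", 0.05)
```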

Our next major software project is to explore vision options with the hub target (and maybe cargo too). We’ll be sure to post an update with our findings. In the meantime, we’re happy to answer any questions.



Scouting and Strategy Week 1 Update

This past week, the Scouting and Strategy Team worked to further analyze the game as well as begin work on our scouting app for this year. Below you can find a summary of what we have accomplished, what we have found useful, and what we haven’t.


Throughout the week, we looked at ways to gain further insight into the game as well as possible adjustments to our priority list. Early on, we looked at the Monte Carlo simulation made by Team 4926, which can be found here. When using it, we found that we were unable to extrapolate meaningful information from it when the distribution of performance was flat, as opposed to on a curve. We are looking to modify this in the near future, both so we can project how the game will play out with it, and also so we can generate mock data for our analysis systems with it.
We have also been looking at ways to predict the amount of cargo on the field as a match progresses with robots of certain capabilities. We attempted this with an Excel-based model, but found that it could not take in enough data to give useful results. We still think this sort of analysis has potential and are looking at Python solutions.
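As a sense of what a Python version might look like, here is a toy discrete-time model (every rate and count below is an illustrative assumption, not real data):

```python
def simulate_cargo(seconds=135, robot_cycle_times=(8, 10, 14), return_delay=6):
    """Toy model of floor cargo available to one alliance over a match.
    Each robot scores one cargo every `cycle_time` seconds; scored cargo
    re-enters the field from the hub exits after `return_delay` seconds.
    All numbers are made up for illustration."""
    floor = 11            # cargo on the floor at the start (example value)
    in_transit = []       # times at which scored cargo returns to the floor
    history = []
    for t in range(seconds):
        floor += sum(1 for r in in_transit if r == t)
        in_transit = [r for r in in_transit if r > t]
        for cycle in robot_cycle_times:
            if t > 0 and t % cycle == 0 and floor > 0:
                floor -= 1
                in_transit.append(t + return_delay)
        history.append(floor)
    return history
```

A model like this makes it easy to ask, e.g., how quickly the floor empties when three fast cyclers share one alliance’s cargo supply.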

We have also discussed and listed below our adjustments to our initial priority list. If you click on any of the changes, you can see our reasoning.

Being able to go under the low bar: Should Have -> Must Have
  • As more and more Open Alliance and Ri3D information comes out regarding the cramped nature of the Hangar, we believe it will be very important to be able to approach the hangar rungs from the middle of the field as opposed to having 2-3 robots enter from the side, turn 90 degrees and awkwardly slide into place.

  • We expect cargo to make its way over into the hangars and it will be greatly beneficial to be able to enter from either side of the hangar.

  • We expect defense to be strong against robots taller than the low rung, which enter a hangar and get trapped in an artificial chokepoint with a defender between the hangar trusses.

  • It’s only a few inches of height sacrifice.

Active Cargo Settling: Should Have -> Could Have
  • From our own early prototyping as well as other Open Alliance teams, we have found intakes that settle balls bouncing more than a couple inches off the ground to be challenging, to say the least. We currently feel it may be better to focus on having an excellent ball-on-ground intake.

  • We don’t expect bouncing cargo to be as much of a plaguing issue as we did on kickoff weekend. Pre-Champs, we expect to be able to thrive without this capability. We may assess pursuing this later in the season depending on how early events look.

Score cargo in lower hub from tarmac: Rephrased to “Score cargo from one robot’s length away from fender”
  • This was more or less the intended understanding from kickoff, just a clarification.

  • Goal is to be able to “score over defender,” similar to 254 2014.

  • Further back on the tarmac seems unnecessary and unreliable/unrealistic.

Score cargo from one robot’s length away from fender: Must Have -> Should Have

Having more than one position of scoring low may hinder other aspects of the design and we don’t believe it is absolutely necessary to have a good low scoring robot.

High Bar: Could Have -> Should Have
  • From early prototypes from other Open Alliance teams, we expect this challenge to be slightly easier than some of us originally foresaw.

  • We expect a large portion of teams to have a mid climb, and a high climb + a mid climb = bonus RP. In order to retain as much control over our destiny in the rankings as possible, we would like to not require 3 robots to climb for the ranking point.

Umbrella: Won’t Have -> Could Have

Umbrella refers to a mechanism that can selectively prevent cargo from entering an open hopper. If the goal is to allow bouncing Cargo to fall into the robot, we would potentially want to be able to prevent undesired Cargo from doing the same.

Operation Double Trouble: Unlisted -> Could Have


Wearing a Mask: Must Have -> Must Have


5 Cargo Auto: Unlisted -> Unlisted


These have likely settled into place for the most part as we are quickly approaching the main archetype discussions of the robot. We will have a post prior to our Week 1 event to share our strategies going into it.


We have been working hard to get our scouting app off the ground quickly as we share a large amount of student resources with the programming subteam. To start off, we compiled a list of data points that we want to be able to collect both in match data as well as pit scouting data. This can be found below.

Match Data
  • Alliance color
  • Start position (tap for location)
  • Taxi
  • Ratings (intake, driver, defense, avoid defense)
  • Shooting position (tap for location)
  • Upper success/failure
  • Lower success/failure
  • Climb level (attempted, success)
  • Climb time for each level (collected in background)
  • Penalties (#)
Pit Data
  • Team #
  • Picture
  • Climb level
  • Height
  • Dimensions
  • Multiple drive teams?
  • Shoot high/low/both?
  • Auto mode
  • Start preference
  • Shoot from fender/tarmac/launchpad?
  • Drivetrain type
  • Holding capacity

One of the more unique aspects of our data collection is that we are recording the “precise” location that robots scored from, as well as their accuracy from that position. We do this by having scouts tap the field image on their tablet screen at the place they believe the robot scored from. A pop-up then appears near where they tapped, prompting them to select how many cargo were scored and missed by hub height. In the background, we are collecting the timestamp of each scoring action, as well as climber timings. We are unsure how we will use this data or how accurate it will be, but we are operating on the principle of collecting all the data we have access to that doesn’t impede the collection of other data, and selecting what we want to use later down the line.
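A hypothetical shape for one of these tap records, sketched in Python (the field names, fractional-coordinate scheme, and field dimensions are assumptions, not our actual schema):

```python
import time
from dataclasses import dataclass, field

@dataclass
class ScoringEvent:
    """One tap on the field image (illustrative record shape)."""
    x_frac: float        # tap position as a fraction of the image width...
    y_frac: float        # ...and height, so it scales to any screen size
    upper_made: int
    upper_missed: int
    lower_made: int
    lower_missed: int
    timestamp: float = field(default_factory=time.monotonic)  # background data

def to_field_coords(event, field_length_m=16.46, field_width_m=8.23):
    """Convert fractional tap coordinates back to meters for later analysis."""
    return (event.x_frac * field_length_m, event.y_frac * field_width_m)
```

Storing taps as fractions of the image means the same record works across the browser and Kindle layouts, with the conversion to real field coordinates deferred to analysis time.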

We have spent more time than any of us would like figuring out the most effective ways to display the field in the app.

Getting the dimensions of field elements and scaling them into our app was annoying.

We want this degree of precision so when we collect scoring location x,y data, we can overlay it onto an image of a field and have it be accurate-ish.

Pit scouting portion of the app in the browser and on the Kindle.

App Field Status as of Saturday. You can probably see the slight imperfections in the Hub deflectors and tarmac lines. Trust me, it bothers us as much as it bothers you.

We have started looking at some ways we can use the substantial amount of data we are collecting. My 3 personal favorites are below:

Density/Heat Map of scoring locations/accuracy by team/event/driver station

Since we have access to every team’s scoring location as well as their accuracy from that position and their driver station, we can model this to our heart’s content. Below is an example of a model that looks kinda like how we imagine this could look.
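A minimal sketch of the per-cell aggregation behind such a heat map (Python, standard library only; the cell size and input format are assumptions):

```python
from collections import Counter

def bin_shot_accuracy(events, bin_size_m=1.0):
    """Aggregate shots into grid cells for a heat map. `events` is a list
    of (x_m, y_m, made) tuples, where `made` is 1 for a make, 0 for a
    miss. Returns a dict mapping each cell to its shooting accuracy."""
    attempts = Counter()
    makes = Counter()
    for x, y, made in events:
        cell = (int(x // bin_size_m), int(y // bin_size_m))
        attempts[cell] += 1
        makes[cell] += made
    return {cell: makes[cell] / attempts[cell] for cell in attempts}
```

The same grid could be filtered by team, event, or driver station before binning to produce each variant of the map.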

Accuracy of event or team over distance from the goal

This is certainly less useful for low-goal scorers, but as more robots try to score from range, we can model their accuracy over distance, potentially learn where they are most effective, and take appropriate in-match actions.

New York Times Spiral Graph

If you want to take a look at what we have going on code-wise, our repo can be found here. The 2022 specific code can be found in the Ayush branch, found here.

As always, any questions, comments, criticism, or suggestions are highly appreciated.




Software Update #2: “One (vision) ring to rule them all”

Last year, we started our vision journey by auto-aiming just with the horizontal angle from our Limelight. However, we quickly realized the utility of calculating the robot’s position based on a vision target instead. By integrating that data with regular wheel odometry, we could auto-aim before the target was in sight, calculate the distance for the shot, and ensure our auto routines ended up at the correct locations (regardless of where the robot was placed on the field).

Our main objective over the past week was to create a similar system for the 2022 vision target around the hub. This meant both calculating the position of the robot relative to the target and smoothly integrating that position information with regular odometry.

While both the Limelight and PhotonVision now support target grouping, we wanted to fully utilize the data available to us by tracking each piece of tape around the ring individually (more on the benefits of that later). Currently, only PhotonVision supports tracking multiple targets simultaneously; for our testing, we installed PhotonVision on the Limelight for our 2020 robot.

The Pipeline

This video shows our full vision/odometry pipeline in action. Each step is explained in detail below.

  1. PhotonVision runs the HSV thresholding and finds contours for each piece of tape. Using the same camera mount as last year, we found that the target remains visible from ~4ft in front of the ring to the back of the field (we’ll adjust the exact mount for the competition robot of course). In most of that range, 4 pieces of tape are visible. We’ve never seen >5 tracked consistently, and less optimal spots will produce just 2-3. Currently, PhotonVision is running at 960x720; the pieces of tape can be very small on the edges, so every pixel helps. The robot code reads the corners of each contour, which are seen on the left of the video.
  2. Using the coordinates in the image along with the camera angle and target heights, the code calculates a top-down translation from the camera to each corner. This requires separating the top and bottom corners and calculating each set with the appropriate heights. These translations are plotted in the middle section of the video.
  3. Based on the known radius of the vision target, the code fits a circle to the calculated points. This is where we see the key benefit of plotting 12+ points rather than just 2 (as we did last year). When the robot is stationary, the position of the circle stays within a range of just 0.2-0.5 inches frame-to-frame. Last year, we could easily see a range of >3 inches. While the translations to each individual corner are still noisy, the circle fit is able to average all of that out and stay in almost exactly the same location. It’s also able to continue solving even when just two pieces of tape are visible on the side of the frame; so long as the corners fall somewhere along the circumference of the vision ring, the circle fit will be reasonably accurate.
  4. Using the camera-to-target translation, the current gyro rotation, and the offset from the center of the robot to the camera, the code calculates the full robot pose. This “pure vision” pose is visible as a translucent robot to the right of the video. Based on measurements of the real robot, this pose is usually within ~2 inches of the correct position. For our purposes, this is more than enough precision.
  5. Finally, the vision pose needs to be combined with regular wheel odometry. We started by utilizing the DifferentialDrivePoseEstimator class, which makes use of a Kalman filter. However, we found that adding a vision measurement usually took ~10ms, which was impractical during a 20ms loop cycle. Instead, we put together a simpler system; each frame, the current pose and vision pose are combined with a weighted average (~4% vision). This means that after one second of vision data, the pose is composed of 85% vision. It also uses the current angular velocity to adjust this gain — the data tends to be less reliable when the robot is moving. This system smoothly brings the current pose closer to the vision pose, making it practical for use with motion profiling. The final combined pose is shown as the solid robot to the right of the video.
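One simple way to fit a known-radius circle to noisy corner points is a fixed-point iteration like the sketch below (Python; this illustrates the idea rather than our actual implementation, which is linked under Code Links):

```python
import math

def fit_circle_known_radius(points, radius, guess, iterations=50):
    """Fit the center of a circle of known radius to noisy points.
    Each point 'votes' for the center location at distance `radius` from
    it, in the direction of the current estimate; the next estimate
    averages the votes. `guess` seeds the iteration (e.g. a point
    roughly straight ahead of the camera)."""
    cx, cy = guess
    for _ in range(iterations):
        vx = vy = 0.0
        for px, py in points:
            dx, dy = cx - px, cy - py
            d = math.hypot(dx, dy) or 1e-9  # avoid division by zero
            vx += px + radius * dx / d
            vy += py + radius * dy / d
        cx, cy = vx / len(points), vy / len(points)
    return cx, cy
```

Because every corner contributes a vote, a single noisy corner barely moves the average, which is why the fitted center is so much more stable than any individual translation.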

Where applicable, these steps also handle latency compensation. New frames are detected using an NT listener, then timestamped (using the arrival time, latency provided by PhotonVision, and a constant offset to account for network delay). This timestamp is used to retrieve the correct gyro angle and run the averaging filter. Note that the visualizations shown here don’t take this latency into account, so the vision data appears to lag behind the final pose.
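The weighted-average step can be sketched like this (Python; the angular-velocity thresholds are illustrative, not our tuned values):

```python
def fuse_pose(current, vision, angular_velocity_rad_s,
              base_gain=0.04, full_trust_omega=0.0, no_trust_omega=4.0):
    """Nudge the current (x, y) pose toward the vision pose each frame.
    `base_gain` (~4% per frame) compounds, so after ~1s of frames the
    pose is mostly vision-derived. The gain scales down as the robot
    spins faster, since vision data is less reliable while moving."""
    omega = abs(angular_velocity_rad_s)
    scale = max(0.0, 1.0 - (omega - full_trust_omega) /
                (no_trust_omega - full_trust_omega))
    gain = base_gain * min(1.0, scale)
    return tuple(c + gain * (v - c) for c, v in zip(current, vision))
```

Because each update is a tiny nudge rather than a jump, the pose never snaps when a fresh vision frame arrives, which keeps motion profiles running smoothly.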

More Visualizations

This video shows the real robot location next to the calculated vision pose. The pose shown on the laptop is based on the camera and the gyro, but ignores the wheel encoders. This video was recorded with an older version of the vision algorithm, so the final result is extra noisy.

This was a test of using vision data during a motion profile - it stops quickly to intentionally offset the odometry. The first run was performed with the vision LEDs lit, meaning the robot can correct its pose and return to the starting location. The second run was performed with the LEDs off, meaning the robot couldn’t correct its pose and so it returned to the incorrect location.

This graph shows the x and y positions from pure vision along with the final integrated pose.

  • Vision X = purple
  • Vision Y = blue
  • Final X = yellow
  • Final Y = orange

The final pose moves smoothly but stays in sync with the vision pose. The averaging algorithm is also able to reject noisy data when the robot moves quickly (e.g. 26-28s).


This project has been a perfect opportunity to use our new logging framework.

For example, at one point during the week, I found that the code was only using one corner of each piece of tape and solving it four times (twice using the height of the bottom of the tape and twice using the top). I had grabbed a log file to record a video while running code with no uncommitted changes. On a whim, I checked out the original commit and replayed the code in a simulator with some breakpoints in the vision calculation. I noticed that the calculated translations looked odd, and was able to track down the issue. After fixing it locally, I could replay the same log and see that the new odometry positions were shifted by a few inches throughout the test.

Logging was also a useful resource when putting together the pipeline visualization in this post. We had a log file from a previous test, but it was a) based on an older version of the algorithm and b) didn’t log all of the necessary data (like the individual corner translations). However, we could replay the log with up-to-date code and record all of the extra data we needed. Advantage Scope then converted the data to a CSV file that Python could read and use to generate the video.

Code Links

  • NT listener to read PhotonVision data (here)
  • Calculation of camera-to-target translation (here)
  • Circle fitting class (here)
  • Method to combine vision and wheel odometry (here)

As always, we’re happy to answer any questions.


where’s our gif