This is from my experience as the former president of my old team. Right after the game reveal, the first thing we did was go through the game manual together as a full team. The goal was to make sure everyone understood the rules, the scoring, and what the game was really asking for. Once we felt we had a good handle on that, we started talking about strategy—not what the robot would look like, but what it needed to do. We focused on things like how matches would play out, what the winning alliances might look like, and which tasks were the most valuable for scoring points. This helped us narrow down the key things our robot needed to do, like scoring certain game pieces or climbing in the endgame.
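As a rough illustration of how you can weigh tasks, a quick "points per second" comparison goes a long way. Here's a minimal Python sketch; every task name and number below is invented for the example, not from any real game:

```python
# Rank hypothetical tasks by points per second of cycle time.
# All values are made up for illustration.
tasks = {
    "low goal":      {"points": 2,  "cycle_s": 8},
    "high goal":     {"points": 5,  "cycle_s": 15},
    "endgame climb": {"points": 12, "cycle_s": 20},  # once per match
}

for name, t in tasks.items():
    print(f"{name}: {t['points'] / t['cycle_s']:.2f} pts/s")
```

Even a crude table like this can reveal that one task dominates the others and deserves most of the robot's design budget.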
After that, I split the team into smaller groups, each focused on a specific mechanism for the robot—like intake, drivetrain, shooter, or elevator. Every group had a lead, usually someone with experience, who could guide the discussion and keep things moving. I’d float around between the groups, helping them work through their ideas and making sure everything fit together as a whole. For example, if the intake team came up with a design that took up a lot of space, I’d check in with the elevator team to make sure it wouldn’t cause problems for them. My job was basically to make sure the different groups weren’t designing things in silos and that everything would work together in the end.
Each group started by brainstorming ideas and sketching them out. Before they could start prototyping, I made sure they had a clear plan and some basic calculations to back up their ideas. For example, the drivetrain team would estimate speeds and turning capabilities, or the shooter team might figure out how much energy they’d need to launch game pieces a certain distance. We didn’t just jump straight into building; we wanted to make sure our ideas actually made sense before wasting time.
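For the shooter example, that sanity check can be a few lines of Python. This is only a sketch with made-up numbers: it assumes ideal projectile motion and ignores drag, spin, and release height:

```python
import math

def launch_speed(distance_m: float, angle_deg: float, g: float = 9.81) -> float:
    """Speed (m/s) needed to cover distance_m on flat ground,
    ignoring drag and release height -- fine for a first estimate."""
    theta = math.radians(angle_deg)
    return math.sqrt(g * distance_m / math.sin(2 * theta))

def launch_energy(mass_kg: float, speed_mps: float) -> float:
    """Kinetic energy (J) the shooter must impart to the game piece."""
    return 0.5 * mass_kg * speed_mps ** 2

# Hypothetical numbers: a 0.27 kg game piece launched 5 m at 45 degrees.
v = launch_speed(5.0, 45.0)
print(f"Launch speed: {v:.1f} m/s, energy: {launch_energy(0.27, v):.1f} J")
```

Ten minutes of math like this can kill a bad idea before anyone cuts material for it.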
When it came to prototyping, the focus was on speed and simplicity. The groups would build quick mockups to test specific things, like whether an intake could grab game pieces from different angles or whether a launcher could hit a target consistently. These prototypes were super basic—usually made from wood, PVC, or scrap parts—but they gave us the info we needed. We didn’t spend weeks perfecting prototypes; the idea was to test quickly, learn what worked (and what didn’t), and move on.
One thing I wish we’d done differently was use tools like a Pugh matrix to help us compare ideas. Back then, we mostly decided what to build based on discussions and experience, but I think a matrix would’ve helped us make more objective decisions. For example, each group could list out their design options, rate them on things like reliability, ease of building, and scoring potential, and then see which option scored the highest overall. It’s something I’d definitely add if I could go back.
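Here's a minimal sketch of what that could look like in Python. In a classic Pugh matrix you pick a baseline design (the "datum") and score every other concept +1, 0, or -1 against it on each criterion; the designs and scores below are hypothetical:

```python
# Minimal Pugh matrix: score each concept against a baseline ("datum")
# per criterion as +1 (better), 0 (same), or -1 (worse), then total.
# Designs and scores are hypothetical.
criteria = ["reliability", "ease of building", "scoring potential"]

scores = {  # relative to the datum (e.g., last year's intake)
    "roller intake": {"reliability": +1, "ease of building": 0,  "scoring potential": +1},
    "claw intake":   {"reliability": -1, "ease of building": +1, "scoring potential": 0},
}

for design, ratings in scores.items():
    total = sum(ratings[c] for c in criteria)
    print(f"{design}: {total:+d} vs. datum")
```

The totals aren't gospel, but writing the scores down forces the group to defend each rating instead of just arguing for their favorite concept.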
Another thing I’d change is how we collected data during prototyping. At the time, we’d just make observations and notes, but we didn’t formalize it much. If I could redo it, I’d push every group to track real numbers, like how many cycles a mechanism could do in two minutes or how often it failed. Graphing this data would’ve helped us see patterns and make better decisions. For example, if a shooter design was only accurate 50% of the time, that’d be a red flag to go back and improve it—or maybe scrap it entirely.
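This doesn't need fancy tooling; a shared spreadsheet works, or even a few lines of Python like this (the test log and the 80% cutoff are made-up examples):

```python
# Hypothetical test log for one intake prototype: each entry is
# (seconds_from_start, succeeded). All numbers are invented.
log = [(10, True), (24, True), (39, False), (52, True), (66, True),
       (81, True), (95, False), (108, True), (119, True)]

cycles_in_2min = sum(1 for t, ok in log if t <= 120 and ok)
success_rate = sum(ok for _, ok in log) / len(log)

print(f"Successful cycles in 2 min: {cycles_in_2min}")
print(f"Success rate: {success_rate:.0%}")

# A simple go/no-go threshold, like the 50% red flag mentioned above:
if success_rate < 0.8:  # 80% is a made-up target, not a magic number
    print("Below target reliability -- iterate before committing.")
```

Numbers like these also make it much easier to explain to the rest of the team why a design is getting cut.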
We also built basic versions of field elements as soon as possible. This let us test our prototypes in real game conditions. For example, the intake team could see how their design worked with obstacles or tight spaces, or the drivetrain team could test turning on field-like surfaces. Running mock cycles with these field elements helped us catch problems early and make adjustments before we were too deep into building.
One thing I always pushed for was reliability. I constantly asked, “Will this mechanism work every single match?” If a design was flashy but prone to breaking or hard to fix, it didn’t make the cut. We focused on making sure the robot could perform its core tasks consistently, because a reliable robot almost always outperforms one that’s amazing when it works but only works half the time. Once we felt solid about the core mechanisms, then we’d start thinking about extras or ways to push the robot’s performance further.
For a kickoff schedule, a good general format might look like this: start the day by watching the game reveal and immediately diving into the game manual to clarify rules, scoring, and field setup. Spend the late morning analyzing strategies and deciding what roles a competitive robot might play. After a lunch break, shift into brainstorming and splitting into mechanism-based groups to start sketching ideas and running initial calculations. The rest of the day should focus on planning what to prototype and setting priorities for the next session. I wish I could tell you how many hours to spend, but I think the only real answer is: whatever it takes.
By the end of kickoff weekend, every group had a direction and a plan for what to prototype. Looking back, I think we did a solid job, but if I could do it again, I’d add more structure with tools like the Pugh matrix and better data tracking during testing. Those things would’ve made our decisions more objective and taught newer members how to evaluate designs like engineers. Kickoff really sets the tone for the season, and being focused, efficient, and deliberate in those early days makes a huge difference down the line.