How your team goes about generating pick lists for Alliance Selection

1293 finally needed a pick list meeting this year, and while it didn’t yield banners (out in quarters at Palmetto and Smoky Mountains, both times as the captain of the lower seed) it was fundamentally sound.

First thing we did was scouting. Our spreadsheet calculated each individual robot's contributions in a match, though in hindsight it should also have had an unweighted cycles-per-match column. (We could see the cargo and hatch breakdown and calculate it; it just meant extra math.)

If you weren’t putting up 6.00 points per match (that is, crossing the auto line and getting back on Level 1), you were at the very bottom of our list. No discussion, no thought, just the bottom.
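
The hard floor described above can be sketched as a simple partition before sorting. This is a hypothetical illustration, not our actual spreadsheet; team numbers and the tuple layout are invented.

```python
# Hypothetical sketch of the "hard floor": any robot not averaging at least
# 6.00 points per match (auto line cross + Level 1 return) drops straight to
# the bottom of the list, no discussion.
MIN_POINTS = 6.00

def sort_with_floor(teams):
    """teams: list of (team_number, avg_points) tuples; returns best-first."""
    viable = [t for t in teams if t[1] >= MIN_POINTS]
    floor = [t for t in teams if t[1] < MIN_POINTS]
    viable.sort(key=lambda t: t[1], reverse=True)
    floor.sort(key=lambda t: t[1], reverse=True)
    # Floor teams are appended after every viable team, regardless of score
    return viable + floor
```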

If you were on our drive team's Does Not Play Well With Others list, you were probably just ahead of that first group. I think we judged one such team a little higher at one event, but still lower than their raw performance alone would have put them.

Then we started to discuss our main needs and where teams fit in. We were good on the cargo ship, reasonably ambidextrous between hatches and cargo, and could cargo the rocket. So our shopping list included higher-level Hab climbs and rocket scoring. Once we got the obvious top-tier out of the way, we’d sort out that middle tier we were also occupying. That often meant shifting down robots that were in the category of “quality scorer, but playing the same game as us”. With how we shifted to defense at Smoky, we probably shouldn’t have put as much of a penalty on that attribute.

From here, it’s a lot of sorting out based on the spreadsheet, and a lot of debating whether one team is better than the other. And sometimes, it really comes down to making it an A-or-B conversation, then repeating. It’s tedious, but necessary to ensure someone isn’t missed. Eventually, we would get worn out from staring at it and paste the remaining lower-mid-tier in points contribution order. (Not ideal, but hey, honesty.)

Every single team wound up on our list, even if at the bottom, to ensure we didn’t miss somebody.


This season I was the primary person creating pick lists. I'm not a huge stats person, so I mainly make judgments based on performance. However, that only really works for the first 8-10 teams, so as I went farther down the list I used scouting data to determine which teams go on the list, and in what order. If I get stuck, or have my preliminary list complete, I talk with the major scouts on our team and our scouting mentor about the list and create a final list. Of course this isn't the only way to do it, but this is how we do it.


With varying levels of success, here is my pick-list strategy:
Before the Event:

  1. Get good at scouting

The night before picking

  1. Don’t talk about individual robots/teams
  2. Identify what would be a solid elims strategy: if the two best robots teamed up, how could we beat them? Does that strategy apply to less-powerful foes?
  3. Identify what we can do in that strategy and what traits we need to fulfill our needs
  4. Pick out the traits that would make a perfect first pick and rank them from most important to least important. Repeat with the 2nd pick.
  5. Write down every team and relevant stats on an index card
  6. Start sorting teams based on ranked traits
  7. Remove teams that are clearly not a good fit
  8. Some years (2017, 2018) the first and second pick are really just one list. Some years (2019), there are more specialized roles. Sort teams as appropriate.
  9. Write down the top 24 and any teams we want more information on.

The day of

  1. Scout the teams we want more information on.
  2. At least an hour before picking, re-convene to determine if the additional information has changed your rankings
  3. Write down our finalized list(s).
  4. Trust the list.

I go into more detail and give my rationale in this post.


We’ve got all our scouts inputting data into our scouting app, which feeds into a nicely sortable database.

This year, strategy in regionals was pretty easy. We knew that we needed a partner to put up big points, and we needed a partner to hold off the other alliance. Due to robot limitations on our end, we were looking for partners with level 2 starts and level 2 climbs.

We started by filtering out any teams that were sitting dead on the field, or scoring 1-2 on our “robot stability” and “driver skill” metrics. We split the remaining teams into teams that have level 2 starts + climbs, and teams that don’t. From there, we sorted by average game pieces per match, and demonstrated defense skill.
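
That filter-split-sort flow could look something like the sketch below. The field names are invented stand-ins for whatever the scouting database actually stores.

```python
# Rough sketch of the pick ordering described above, assuming each team is a
# dict of scouting fields (all names here are hypothetical).
def build_pick_order(teams):
    # Drop robots that sat dead on the field, or scored 1-2 on the
    # "robot stability" and "driver skill" metrics
    active = [t for t in teams
              if not t["dead_on_field"]
              and t["robot_stability"] >= 3
              and t["driver_skill"] >= 3]
    # Split on Level 2 start + Level 2 climb capability
    lvl2 = [t for t in active if t["level2_start"] and t["level2_climb"]]
    rest = [t for t in active if not (t["level2_start"] and t["level2_climb"])]
    # Sort each group by average game pieces, then demonstrated defense
    key = lambda t: (t["avg_game_pieces"], t["defense_skill"])
    return sorted(lvl2, key=key, reverse=True) + sorted(rest, key=key, reverse=True)
```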

We make sure that we have enough “first picks” chosen so that even if other teams are picking directly from the same list as us, we still have a good partner ready. Second picks are similar, but a little harder to scout for because we need a longer list. We also make a list of teams to take a second look at, typically if we think they have an un-demonstrated capability or if their performance went up or down sharply over the first day.

Edit: Watching match videos is critical here. Nobody makes it onto a pick list without a room full of people watching a video of one of their matches.


Our team’s process is very similar to what @Katie_UPS mentioned in her post. Do that and you’re already ahead of so many other teams that “scout”.

It was mentioned earlier whether it's worth scouting even if you aren't going to be in a picking position. I would say that it is, for two reasons. First, scouting data is important throughout the event, not just for picking. Second, even if you aren't in a picking position at your current event, you may very well be at a future event, and practice makes perfect. Going through the motions is important, especially if you want to improve your process.

To build on reason number two: If our team is pretty sure we won’t be in a picking position, we will still create a pick list and theory craft some of the alliance selection based on our data. That way we can corroborate our list with the selection results to see how well we did based on our data. That affords us the opportunity to self correct and tweak our scouting/picking approach and process for future events/seasons.

This is actually something I would suggest all teams do regardless of their picking situation, especially if there was some oddity that stood out from your list during the selection process. You can learn a lot by reviewing things after the fact. (One word of caution: as with most data of this kind, it's not always 100% accurate and it won't tell you the whole story, especially when you get into the qualitative stuff. For example, friendships and feuds can impact a team's decision to pick another.)

Additionally, if you've had students scouting all weekend, you've got to do something with that data. If you don't use the data, scouting just becomes busy work, which could negatively impact the culture around scouting on your team.


We begin with an alliance strategy as part of kick-off discussions. It helps us focus on game play and our selected role throughout build season and practice. It also helps us create scouting data points important to our alliance plan. The alliance strategy may need adjustment after we see actual game play and competition depth.

We get opinions and input from anyone on the team that wants to contribute to the discussion. But the final list is usually generated by key scout team members with input from the drive team. Too many people involved in sorting the list usually slows down the process.

We always create a full pick list. Never assume that teams you believe will perform well will be picked.



On 1678, our scouting system provides a massive amount of data on each robot at every competition we go to, which allows for an effective picklist meeting. Our scouting whitepapers can be found here (the 2019 whitepaper is a WIP and will be released soon). Below is an in-depth explanation of our picklist process.

Here is a list of things that happen before our picklist meeting:

  • Scouting.
  • Game-specific robot evaluation in the pits (this season, we tested how good robots were at driving onto forks).
  • Throughout the first day of quals, the strategists and a couple of mentors compile a short list of likely-DNPs based on observations from qual matches and robot data collected in the pits. DNP criteria obviously varies from year to year as the game changes; this season, a big factor was the type of drivetrain robots had, which affected both their ability to play defense and drive onto our ramp for our triple climb.
  • Scouting data is imported into a spreadsheet optimized for viewing and sorting through data.
  • Robots are pre-ordered based on a first and second pick ability metric, calculated by plugging scouting data into a pre-determined equation.

Our super scouts, strategists, pit strategist, and mentors attend the picklist meeting on the night before elim matches. With this setup, we then begin the meeting and execute the following steps:

  1. We start by reviewing each robot on the likely-DNP list. A justification for DNP’ing them is given, and if no one objects, they are officially DNP’ed. However, if at least one person objects with a valid argument, the robot is not DNP’ed.

  2. Next is a round of “Hot or Not” (also known as “Robot Tinder”), where we go down the list and compare each robot with the robot above it. If we think the robot is performing worse, we leave it where it is. If we think it’s performing better, we move it up the list accordingly.

  3. By this point, the first pick list is pretty clear and is usually only 2-3 robots long for regionals, and may stretch longer for Houston champs. After “Hot or Not”, the first pick list is finalized.

  4. To sort through second pick robots, we first identify which robots stood out to strategists and mentors during matches in their notes. We watch match videos of each robot and adjust their position on the list accordingly. This is tedious and usually takes a couple hours to do, but we believe it’s fully necessary to make sure we don’t miss anything.

  5. With our picklist drafted, we’re ready to enter the second day of quals (for regionals). We also identify robots to pay close attention to, especially when they haven’t performed a certain task that we want to see (such as starting off HAB level 2).

On the second day of quals and before alliance selection, the strategists watch the robots on the second pick list and reorder them throughout matches based on their performance the second day. First pick robots may also be reordered here, but it’s rare. More robots may also be DNP’ed if they are significantly under-performing or are having technical problems. Towards the last few quals matches, we also approach our planned alliance partner to share pick lists and make any necessary changes. After the last quals match, the finalized pick list is sent to the pit strategist. The list is usually 24 robots long for regionals (including ourselves) and 32 robots for champs, to ensure that we are prepared for every scenario.

I apologize for the lengthy post, but feel free to ask me any questions you might have!

  1. Bucket teams into High/ Medium/ Low/ No/ Game-Specific Categories as a team
  2. Small groups stack rank each category
  3. Lead scout(s) make adjustments up until picking time
  4. Whiteboard or text choices to field rep

Can you shed any light on what that equation looks like, or how it’s determined? Learning how best to process the data we have is always a challenge.

I also like Robot Tinder. It sounds like a more organized version of what we already do.

Edit: note to teams: Processes like this are why it’s not helpful to go up to a high performing team and say “hey, I think you should pick us.” Team testimony isn’t a category they sort by. If you want to sell yourselves as a 2nd or 3rd to a top team, give them something more like “check out our ability to fill the cargo ship under defense in Q95” or “look how partner-compatible our climb is in match 57”. Give the scouts something concrete to chew on.


Look at the Scouting System Development videos and slides on this page for our team’s approach. It is constantly evolving and too long to discuss here.

For sure! Our equations were determined (and modified throughout the season) based on what we want in a first or second pick. For example, our first pick ability equation incorporated scoring and climbing data heavily, as we wanted a robot who could score efficiently with a reliable climb. Our second pick equation incorporated defense and driving data heavily, because our playoff match strategy relied on the third robot on our alliance as a defender. These numbers are multiplied by weight constants that can be changed at anytime.
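
A minimal sketch of that weighted-equation idea is below. All field names and weight values are invented for illustration; the real equations and weights are in the team's whitepapers.

```python
# Hypothetical first/second pick ability equations: the same scouting data,
# two different weight vectors, with constants that can be retuned anytime.
WEIGHTS = {
    # First pick: scoring and climbing weighted heavily
    "first":  {"cargo_avg": 3.0, "hatch_avg": 2.0, "climb_rate": 12.0, "defense_rating": 0.0},
    # Second pick: defense and driving weighted heavily
    "second": {"cargo_avg": 0.5, "hatch_avg": 0.5, "climb_rate": 3.0, "defense_rating": 10.0},
}

def ability(team, role):
    """team: dict of scouting averages; role: 'first' or 'second'."""
    w = WEIGHTS[role]
    return sum(w[k] * team[k] for k in w)
```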


Hanson (hli) above has done the dirty work of laying out our process in detail.

I believe the ranking equations for each year are contained in the white papers. They are always modified as the season goes on and we learn more about the game. In addition, because the field depth is so much greater at Champs, we change our criteria between Regionals and Champs. This year, we even had separate lists for our 2nd and 3rd picks, but just used our 2nd pick list for our 3rd pick because 1939 was so high on our overall list that we didn't think they would be available at that point.


In previous years 4607 has taken our paper scouting data, stayed up super late on Friday night compiling it into a spreadsheet, and then angrily discarded all of it after realizing our scouts were watching matches from some parallel universe.

Then we frantically called up 5690/2526/our other friends with great scouting data and proceeded to pick good enough alliances to beat whatever team provided us with the scouting data in the quarterfinals. It's literally happened 4 times… so now nobody shares their scouting data with us and we had to figure it out for ourselves :)

Luckily @hutchMN joined us this year and overhauled our scouting system. We have a solid scouting system through Google sheets with formulas that generate an auto-pick list which we compare to a pick list that we generate manually after reviewing footage of teams in a direct comparison. Pick-list meetings have been cut down significantly which leaves the team in a better spot going into the final day of our events. Matt can explain our system in more detail!


Here is the process 2791 used this year:

  1. Discuss our goals for the event. Every picklisting meeting, I ask what the primary goal for our alliance is, and usually get some silly answers like “score lots of game pieces” or “win the scale” or “hit four rotors”. We don’t move on until someone says “to win”. I do this little exercise to make sure we’re all on the same page about our goals here and don’t get too tunnel-visioned on any one thing.
  2. Look at our own data about ourselves: discuss our own strengths, weaknesses, etc
  3. Survey the scene: look at rankings, our matches tomorrow, data for major threats and other notable teams, and get a big picture understanding of the event
  4. Discuss viable strategies for our alliance. Using the analysis we did on ourself and the rest of the field, we try to choose a strategy that we think 1) we would be able to execute well and 2) we think would win. This year, this segment was mostly discussing “triple offense or defense?” and “split the field or use a side swap?”.
  5. Discuss desirable robot qualities/abilities for the kind of alliance we want to build, and list needs and wants for our first pick/captain and second pick.
  6. Rank teams for captain/first pick, or in general if the needs/wants of each are the same. We do this by:
    • Sorting by some relevant property in our data (in this case, average game pieces per match),
    • Taking the highest-ranked team on this metric, discussing how well they fit our requirements, and making them the start of the list.
    • Then we take the next highest team, talk about how well they fit the requirements, and decide whether they should be above or below the first team on the list.
    • Repeat this process, inserting teams into the list
    • Once we have enough teams here (how many is enough depends on what we think the worst case is for us during alliance selection), we may sort by a secondary metric to make sure we didn’t miss anyone important, especially at champs (this secondary metric was max game pieces per match this year).
  7. If the requirements for our second pick are significantly different from our captain/first pick, we will repeat the process for the second pick. This year, we split potential second picks into three buckets:
    • Plus: either played good defense or showed good offensive driver skill and has a suitable robot
    • Neutral: no information that shows they would be a good defender
    • Minus: we don’t think they’ll play great defense, but are reliable and meet the minimum requirements to play defense in playoffs. Most of these teams don’t make it to the top 23 teams we’ll actually consider picking, but we may watch them
  8. Questions. Throughout the whole process, we will brainstorm questions we want to ask teams the next day before alliance selection, usually concern breakages/lost comms/other robot failures. At the end of the night, we will go over this list and make sure we have everything we want to know recorded.
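
The Plus/Neutral/Minus bucketing in step 7 can be expressed as a simple classifier. Field names here are hypothetical stand-ins for the scouting record.

```python
# Sketch of the second-pick bucketing described above (invented field names).
def bucket_second_pick(team):
    # Plus: played good defense, or good offensive driving + a suitable robot
    if team.get("good_defense") or (team.get("offense_driver_skill")
                                    and team.get("suitable_robot")):
        return "plus"
    # Minus: not expected to defend well, but reliable and meets the
    # minimum requirements to play defense in playoffs
    if team.get("reliable") and team.get("meets_defense_minimum"):
        return "minus"
    # Neutral: no information suggesting they would be a good defender
    return "neutral"
```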

A couple weeks ago, I came up with a mnemonic for the four high-level things I like to look for in potential picks, CART:

  • Compatibility: how well do they complement us? We don’t just want the “best” robot on our alliance, we want the one that is most likely to help us win
  • Ability: what can they do? We want teams who can check off the needs/wants from our list
  • Reliability: how consistently can they perform? Consistency is generally good, but high variance picks can be key to pulling off longshot upsets
  • Trend: is their performance improving over time (good), decreasing over time (bad), or staying the same (good if they’re good, bad if they’re less good)?
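
The "Trend" item can be quantified with a simple least-squares slope over match-by-match scores. This is just one way to do it, and real data would be noisier than this sketch suggests.

```python
# Fit a least-squares line to per-match scores: positive slope = improving,
# negative = declining, near zero = steady.
def trend(scores):
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den if den else 0.0
```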

If anyone has any questions about any of this, or the process of making a picklisting in general, I am happy to chat here or in DMs.


Use tablets to collect data about the attending teams, then discard it and drive into walls.


Was that a 14 year old joke? Oh, the nostalgia.

I got it from Jared Russell here.

And I got it from :)


The question posed by the OP, like many on CD, has a wide variety of answers, and while quite varied, they are all still pertinent. Different methods can bring success to different groups.

However looking through the responses then looking at the success of those respondents may lead to a version of clarity. @Liu346, @Katie_UPS, @hli, @Richard_McCann are all involved with teams that enjoyed success in competition this season, and in many previous. They are likely onto something.

As for the RoboHawks, I trust our scout and strategy team and will always support the choices they make. Who would have guessed the #6 alliance selecting teams ranked 37, 44 and 16 (in that order) would have defied the odds?

