I have been working on a picklist system for my team as of late and was confronted with the question: how do we quantify a team’s “Pickability”? Right now we are thinking about how important auto, teleop, and endgame are to us and giving each a percentage importance. From there, we take all of our scouting data, multiply each team’s scores in those sections by the respective percentage, and add them together to get their score, higher being better. We went with this approach so we can essentially take feeling out of the equation for part of our decision. What do you all do? Is there something I’m missing? Thanks!!
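To make the idea concrete, here’s a minimal sketch of that weighted-sum approach; the weights, field names, and numbers below are hypothetical placeholders, not a real system:

```python
# Minimal sketch of the weighted-sum approach described above.
# Phase weights and field names are hypothetical; tune them per game.
WEIGHTS = {"auto": 0.30, "teleop": 0.50, "endgame": 0.20}  # should sum to 1.0

def pickability(team_matches):
    """team_matches: list of dicts with per-match 'auto'/'teleop'/'endgame' points."""
    n = len(team_matches)
    avg = {phase: sum(m[phase] for m in team_matches) / n for phase in WEIGHTS}
    return sum(WEIGHTS[phase] * avg[phase] for phase in WEIGHTS)

teams = {
    254: [{"auto": 12, "teleop": 30, "endgame": 15},
          {"auto": 10, "teleop": 28, "endgame": 15}],
    118: [{"auto": 8, "teleop": 35, "endgame": 4}],
}
ranked = sorted(teams, key=lambda t: pickability(teams[t]), reverse=True)
print(ranked)  # higher score first
```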
I highly recommend reading Your scouts hate scouting and your data is bad: Here’s why. It’s one of the best scouting guides ever and lays out the general plan for designing a scouting strategy and collecting data.
There are LOTS of resources on this forum and elsewhere on the topic of scouting, alliance selection, strategy, and pick lists so I won’t go into great detail.
You’re trying to do two impossible things here, though:
- Weighing the importance of functionality before you know the game (and scoring structure)
- Weighing the importance of functionality before you know what YOUR robot does
Strategy for picking is highly dependent on the game, the dynamics of the event, and most importantly what your team does. Find the team whose strategy complements yours – not the “best” team at the event. These may not be the same team.
Also, regarding this point: if you want quantitative data, look at OPR (Offensive Power Rating). OPR is a least-squares estimate of the points a team contributed to the alliances it played on, and it’s the most effective quantitative measurement when you just want to put a list of teams at an event in some order from “probably best” to “probably not the best”.
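For reference, here’s a rough numpy sketch of an OPR-style least-squares calculation; the match-tuple format and the toy data are assumptions, and real events have far more matches per team:

```python
import numpy as np

# Rough sketch of an OPR calculation via least squares.
# 'matches' is a hypothetical format: (team_a, team_b, team_c, alliance_score),
# one row per alliance per match.
def opr(matches, teams):
    idx = {t: i for i, t in enumerate(teams)}
    A = np.zeros((len(matches), len(teams)))  # alliance participation matrix
    b = np.zeros(len(matches))                # alliance scores
    for row, (t1, t2, t3, score) in enumerate(matches):
        for t in (t1, t2, t3):
            A[row, idx[t]] = 1.0
        b[row] = score
    x, *_ = np.linalg.lstsq(A, b, rcond=None)  # per-team contribution estimate
    return dict(zip(teams, x))

teams = [111, 222, 333, 444, 555, 666]
matches = [  # toy data; with enough matches x approximates each team's contribution
    (111, 222, 333, 60),
    (444, 555, 666, 45),
    (111, 444, 222, 58),
    (333, 555, 666, 47),
]
print(opr(matches, teams))
```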
However (and this is a big “however”), quantitative scouting is only one part of good scouting. I would not “take feeling out of the equation” if I were a scouting captain. There’s a lot more to the game every year than how many blomberiboos a team can score during auto or how high up they get their robot on the micketymack in endgame. A team’s driving style (“omg, team #### drives super aggressively” or “team #### has been pinning opponents a lot”) is also pretty important. Where they score points is important too—you don’t want to be running over your alliance partners. How their drive team acts can sometimes be a factor (“team #### has a mean drive coach who talks over alliance partners” is not something I’d ever want to hear at an event, but it does happen on occasion).
Tl;dr: pay attention to numbers, but there are quite a few other things to pay attention to as well. People have written lots of guides about all that, which you can find by searching old threads.
If you have actual scouting data, then OPR is sub-par. You should just calculate each team’s actual contribution, and use some aggregate value of that instead of OPR.
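A trivial sketch of what that aggregate could look like (mean, median, or whatever suits your data; the numbers are made up):

```python
# Sketch: with scouted per-team data you can aggregate actual contributions
# directly instead of estimating them with OPR.
from statistics import mean, median

def scouted_contribution(match_points, aggregate=mean):
    """match_points: list of a team's total points across its scouted matches."""
    return aggregate(match_points)

print(scouted_contribution([42, 55, 38]))          # average contribution
print(scouted_contribution([42, 55, 38], median))  # robust to outlier matches
```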
After seeing 1678’s pickability metric at the championship, I’m now a believer, but there are some caveats. We added a pickability metric at IRI and plan to continue it next season.
I think the biggest value is that it shortens the time to create the picklist. Instead of going through every team and putting them in tiers, the metric already gives you a rough sort, so you can jump right in and work on the final ordering of teams.
The biggest caveat, however, is that I wouldn’t trust it to create a perfect picklist: even the best scouting systems will not be able to quantify everything, so you will need to make adjustments based on more subjective input from your experienced scouts. How much you adjust the list probably depends on how experienced your scouts are and how accurate you think your data is.
In terms of how to create the metric, I know 1678 does a more mathematical (and probably better) approach outlined in their whitepaper, but we did a simpler method similar to what you explained. We first normalized the data by calculating z-scores, multiplied them by weights, and then summed those products (a sketch follows the weight list below).
For IRI, our weights were:
- 11% modified auto metric (a metric attempting to measure accuracy on 1- or 2-ball autos while ignoring any 3+ ball auto performance)
- 11% max cargo scored
- 33% average cargo
- 44% modified climbing metric (traversal climbing was extremely important to us, and we saw that at the highest levels of play some of the top teams would let their partners climb so they could score more, resulting in lower climbing averages for top teams and higher climbing averages for weaker teams. So we only looked at each team’s top two-thirds of matches.)
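A minimal sketch of that z-score weighting with the IRI weights above; the modified metrics are assumed to be precomputed per-team columns, and the numbers here are made up:

```python
import numpy as np

# Sketch of the z-score + weights approach. Each column of 'raw' is one metric,
# each row one team; the modified auto/climb metrics are assumed precomputed.
weights = np.array([0.11, 0.11, 0.33, 0.44])  # auto, max cargo, avg cargo, climb

def pickability_scores(raw):
    z = (raw - raw.mean(axis=0)) / raw.std(axis=0)  # z-score each metric column
    return z @ weights                              # weighted sum per team

raw = np.array([
    [0.8, 6, 4.2, 12.0],   # team A
    [0.5, 9, 5.1, 10.5],   # team B
    [0.9, 5, 3.8, 14.0],   # team C
])
print(pickability_scores(raw))  # higher = roughly "more pickable"
```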
This is all pretty good. I was thinking that for some games, where you need (or have more variation in) robot types, you might keep more than one weighted list, and then see which is the best robot of each type. And if you want the alliance to have the different types in different picking positions, you would move between the weighted lists.
For RR, maybe you could weight a second list that includes some defensive metric, if your strategy leaned more towards having a 3rd robot playing defense. Then you would switch towards that list in the later pick. Since the weights cover multiple variables, you can still pick a robot with strong offensive capabilities as part of the weighting of the ideal defensive robot.
There is some figuring out what the best alliance makeup will be, and during selection you also have to adjust to the best available alliance, so how it looks might change depending on the game and which robots have already been selected.
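A rough sketch of juggling two weighted lists, with hypothetical metric names and weights:

```python
import numpy as np

# Sketch of keeping more than one weighted list, e.g. an offensive list and a
# defense-leaning list for later picks. Metric order and weights are hypothetical.
metrics = ["auto", "avg_cargo", "climb", "defense"]
offense_weights = np.array([0.15, 0.45, 0.40, 0.00])
defense_weights = np.array([0.10, 0.20, 0.20, 0.50])  # still values some offense

def ranked_list(z_scores, weights, teams):
    scores = z_scores @ weights
    return [t for _, t in sorted(zip(scores, teams), reverse=True)]

teams = [111, 222, 333]
z = np.array([[ 0.5,  1.2, -0.3, -1.0],
              [-0.2,  0.1,  0.8,  1.5],
              [ 1.0, -0.5,  0.2,  0.4]])
# Consult the offense list for the first pick and the defense-leaning list for
# the second, skipping teams already selected.
print(ranked_list(z, offense_weights, teams))
print(ranked_list(z, defense_weights, teams))
```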
Where you’re picking from also has a part to play in the “pickability” score. If you’re on the first, second or third seed alliance, and perhaps a reliable scorer yourself, you’ll want a partner who can guarantee you the wins you need. So you might judge teams by their worst matches, or their scoring “floor”.
If you’re on the sixth, seventh or eighth seed alliance, you have a big hill to climb in order to advance through playoffs. You need a team who can surprise everyone by scoring way more than expected. For that reason, you might judge teams by their occasional best matches, or their scoring “ceiling”.
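As a sketch, both ideas can be expressed as percentiles over a team’s scouted match scores; the 25th/90th cutoffs here are arbitrary choices:

```python
import numpy as np

# Sketch: rank by scoring "floor" (for high seeds needing consistency) or
# "ceiling" (for low seeds needing upside).
def floor_score(points):
    return np.percentile(points, 25)   # a team's bad-match baseline

def ceiling_score(points):
    return np.percentile(points, 90)   # a team's best realistic outcome

points = [38, 42, 55, 40, 61, 35]
print(floor_score(points), ceiling_score(points))
```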
And there’s also what the rest of your alliance does. That is, assuming you’re an alliance captain, the best option for second pick will likely depend on who you end up with as your first pick.
That’s one reason that it’s so important for your team representative to have a good mind for strategy, rather than just giving someone a pick list and telling them to invite the highest remaining team on the list.
Pickability is completely different between districts, regionals, and Champs.
For example, some super teams showed well at Champs and weren’t picked. Why? Because there is a big difference between first pick, second pick, and third pick at Champs.
- First pick tends toward total point contribution.
- Second pick tends to look at the two robots you already have and fill gaps.
- Third pick tends to look for the best total contribution of what’s left, and adds reliability.
Also, your pickability will change from week 1 to week 7.