I didn’t intend to write an essay about this, but hopefully some teams find this useful.
To answer the original question:
- Collect data that you will use
- Collect only as much data as you can without losing accuracy
What data to collect:
Scouting data (generally) serves two purposes:
- Match strat
- Pick lists
Any data that isn’t applicable to one of those two should not be collected.
Any “fancy” data that stretches your scouts thin and potentially loses accuracy on key information should not be collected.
IMO, >90% of teams collect more data than is necessary. Follow the golden rule of scouting: scout within your means. 100% accuracy on one data field is better than 75% accuracy on five. Bad data is worse than useless: it can drive you to wrong conclusions and bad decisions.
So what data is necessary and how do you decide what to cut out?
- For match strat:
- Prioritize data that will affect whether you go for the RP (climbs, total # balls, control panel)
- Then data that will affect whether you go for the win (inner/outer/lower goal)
- Then data that helps with cycle planning (shooting locations, times per shot, auto paths, time to climb, etc.)
- Then data that takes into account the other alliance (how much are they affected by defense, how easily can they be blocked, etc.)
- Then any other subjective data (driver skill, etc.)
- For pick lists – first picks:
- Very similar to above
- If there are any particular things you cannot do (e.g. you can only climb when the bar isn’t already tilted), make sure you scout for robots that can do that
- For pick lists – second picks:
- Usually there are a lot of teams with similar abilities
- Pit scouting data can be very useful here: I generally look for clean, easy-to-trace wiring, solid bumpers, well-tensioned chain/belt in the drivetrain, no mecanum wheels, and a programming language we are familiar with in case we need to coordinate autos
- Subjective match scouting data is also useful, if accurate (driver skill/consistency, ability to avoid fouls when playing defense, etc.)
How to collect the data:
I am strongly against collecting any data you can get from the field. Things I do like to collect:
- Photo of robot (whole robot, drivetrain/gearboxes, electrical, bumpers)
- Programming language
- Year-dependent questions (e.g. for 2020: whether they can climb, and robot weight, in case you need ballast to balance)
Practice match scouting:
- Watching practice matches can sometimes be useful in place of pit scouting (e.g. in 2018, we asked teams if they had an auto, then tracked which ones actually crossed the line in practice matches)
- Total numbers of game pieces in auto/teleop and endgame action are almost always the most important
- If you have limited resources, auto and endgame actions can usually be pulled from the field API
- The rest of the objective data I mostly detailed out above
- Subjective data is tricky, and unless the same people watch all robots, I think it usually does more harm than good
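To make the "total game pieces are the most important" point concrete, here is a minimal sketch of turning raw match-scouting rows into per-team averages. The field names (`team`, `auto_balls`, `teleop_balls`) are placeholders, not a prescribed schema — use whatever your sheet or app records.

```python
# Aggregate match-scouting rows into per-team averages.
# Field names here are illustrative placeholders, not a required schema.
from collections import defaultdict

def team_averages(rows):
    """Average auto/teleop game-piece counts per team across scouted matches."""
    totals = defaultdict(lambda: {"auto": 0, "teleop": 0, "matches": 0})
    for row in rows:
        t = totals[row["team"]]
        t["auto"] += row["auto_balls"]
        t["teleop"] += row["teleop_balls"]
        t["matches"] += 1
    return {
        team: {
            "avg_auto": t["auto"] / t["matches"],
            "avg_teleop": t["teleop"] / t["matches"],
        }
        for team, t in totals.items()
    }

rows = [
    {"team": 254, "auto_balls": 5, "teleop_balls": 12},
    {"team": 254, "auto_balls": 3, "teleop_balls": 10},
    {"team": 118, "auto_balls": 2, "teleop_balls": 8},
]
print(team_averages(rows))
```

Even something this simple, done accurately for every match, beats a sprawling form filled out inconsistently.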
Pit scouting round 2:
- If making pick lists, I like to talk to teams towards the later end of quals matches (usually when they have ~2 matches left)
- If they have been having issues, try to understand if they can be fixed
- If there are specific things you’d like to see (especially autos) ask at this point
- For second picks, I always try to talk to them about what we would want them to do, so we don’t end up picking a team completely unwilling to play defense (especially important if you’re asking them to add/modify/remove a mechanism)
Match scouting round 2:
- If making pick lists, last-day match scouting is usually mostly subjective
- Have a list of robots based on data from the days before, and re-rank based on watching (heavy improvements, significantly better/worse under defense, etc.)
Field API data:
- I love using API data because it’s free – you don’t have to collect it
- It works very well for identifying rare actions (e.g. fuel OPR in 2017 could be used to estimate fuel counts extremely accurately)
- It works very well for auto and endgame points, but you need to note things like buddy climbs, climbs from fouls, etc.
- It can work as a sanity check for data (total numbers of balls scored must add up, etc.)
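For the OPR point above, here is a rough sketch of how OPR-style estimates fall out of alliance scores alone. OPR solves the least-squares system A·x ≈ s, where each row of A marks the teams on one alliance in one match and s is that alliance's score. The event below is a toy example (made-up team numbers and scores, two teams per alliance to keep the system small — real FRC alliances have three).

```python
# Toy OPR (offensive power rating) calculation from alliance scores only.
# Team numbers and scores are made up; real alliances have three teams.
import numpy as np

teams = [1, 2, 3, 4]
idx = {t: i for i, t in enumerate(teams)}
# (alliance members, alliance score) for each match-alliance
results = [
    ((1, 2), 30),
    ((3, 4), 20),
    ((1, 3), 25),
    ((2, 4), 25),
]

# Build the participation matrix A and score vector s
A = np.zeros((len(results), len(teams)))
s = np.zeros(len(results))
for r, (members, score) in enumerate(results):
    for t in members:
        A[r, idx[t]] = 1.0
    s[r] = score

# Least-squares solve: each team's OPR is its estimated point contribution
opr, *_ = np.linalg.lstsq(A, s, rcond=None)
print(dict(zip(teams, opr.round(2))))
```

The same idea applied to a component score (like 2017 fuel points) gives component OPRs, which is what makes rare actions visible without anyone scouting them.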
Using the data:
My probably overly complicated way of looking at both match strat and picklists is to look at each alliance’s best, average, and worst case scenarios for a given strat, and pit that against the opposing alliance’s best/avg/worst case scenarios. (Worst case is relative, I usually look at the situations of missed autos/endgame points)
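The best/average/worst comparison above can be sketched in a few lines. This assumes each team's scouted contribution is reduced to a list of per-match point totals (a simplification — in practice you'd split this by RP-relevant categories), and the team lists below are made-up numbers.

```python
# Best/average/worst alliance scenarios from per-team scouted score lists.
# The numbers are illustrative, not real scouting data.
def scenarios(alliance):
    """(worst, average, best) alliance total, given each team's match scores."""
    return (
        sum(min(scores) for scores in alliance),
        sum(sum(scores) / len(scores) for scores in alliance),
        sum(max(scores) for scores in alliance),
    )

red = [[20, 25, 30], [10, 15, 20], [5, 5, 10]]
blue = [[15, 20, 25], [15, 15, 15], [10, 10, 20]]
print(scenarios(red))
print(scenarios(blue))   # e.g. pit red's worst case against blue's best case
```

Comparing your worst case against their best case is what tells you whether a strategy is robust or only works when everything goes right.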
For picklists, I consider each possible action of each team above us, how inner picking could affect rankings, and then work through the bracket of every team we will face (repeating for all possible brackets, and sometimes for simulated rankings). This is especially important when picking from alliances 2-3, since you might want to pick a “worse” team in order to make your bracket significantly easier. It can also affect whether you want to accept or decline. Usually, most scenarios converge into 2-3 main ones (and 2-3 picklists).
This can also be done on the last day once rankings are more finalized, but I prefer to have mostly finished lists by that point. IMO, the majority of banners are lost in the picklist stage. We’ve put in the effort to collect the data, and I want it to be used well.