I agree 100%. While I'm sure this isn't true for everybody, I've always found that the majority of scouting data never really gets used to make a decision. Our scouts track just two things for each robot:
- The robot's technical ability to complete each of the game's subtasks (autonomous, scoring, defending, and endgame), rated 0 to 10; these are judged roughly on Thursday and revised as the event goes on
- Overall performance, rated 0 to 10, for each of the team's 10 matches
Each of the subtask ratings is weighted by its importance to the game (30% for endgame, 40% for scoring, etc.) and then combined, giving a single technical average to deal with.
Each team's performance scores from their 10 matches are likewise averaged together, with the first four matches weighted 5% each, the next two 10% each, and the last four 15% each (summing to 100%). This gives a single performance average to deal with.
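To make the math concrete, here's a minimal sketch of both calculations in Python. The 40% scoring and 30% endgame weights come from the example above; the other subtask weights, the function names, and the sample numbers are placeholders you'd swap for whatever fits that year's game.

```python
# Subtask importance weights (0-1, summing to 1.0). Scoring and endgame match
# the example in the post; autonomous and defending are made-up filler values.
SUBTASK_WEIGHTS = {
    "autonomous": 0.20,
    "scoring":    0.40,
    "defending":  0.10,
    "endgame":    0.30,
}

# Per-match weights for a 10-match schedule:
# first four at 5%, next two at 10%, last four at 15% (sums to 100%).
MATCH_WEIGHTS = [0.05] * 4 + [0.10] * 2 + [0.15] * 4


def technical_score(ratings: dict) -> float:
    """Weighted average of the 0-10 subtask ratings."""
    return sum(SUBTASK_WEIGHTS[task] * ratings[task] for task in SUBTASK_WEIGHTS)


def performance_score(match_scores: list) -> float:
    """Weighted average of the 0-10 per-match scores (later matches count more)."""
    return sum(w * s for w, s in zip(MATCH_WEIGHTS, match_scores))


# Example numbers for one team
ratings = {"autonomous": 7, "scoring": 8, "defending": 4, "endgame": 9}
matches = [5, 6, 6, 7, 7, 8, 8, 8, 9, 9]
print(technical_score(ratings), performance_score(matches))
```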
It's very simple math, it's not much data to go through, and it can easily be done in Excel with no special scouting software. And once we've finished, it's very easy to sort, filter, and manipulate to get a good list of teams. It's perfect for smaller teams since it only takes 2-3 scouts to come up with a single performance score for each robot, plus maybe a few comments to adjust the tech score and note each robot's predominant strategies. And they can spend their time watching the match instead of writing.
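The sort-and-filter step is equally trivial outside Excel. A hypothetical example, assuming you rank on the sum of the two averages (any ranking rule you like would work the same way):

```python
# team number -> (technical average, performance average); numbers are made up
teams = {
    1234: (7.4, 7.6),
    5678: (8.1, 6.9),
    9012: (5.5, 8.2),
}

# Sort by combined score, highest first, and keep a 15-team short list
short_list = sorted(teams, key=lambda t: sum(teams[t]), reverse=True)[:15]
print(short_list)
```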
It's not fancy, perhaps not Nate Silver levels of accuracy, but it's fast and can narrow a field of 40-80 teams down to only 10 or 15 almost instantly (with little information backlog), meaning that you can then focus your time and efforts on those teams when you start going around the pits to make connections.