I'm by no means an expert scout, and don't take this opinion as a final yes/no on the usefulness...
Quote:
> The DifficultyScores for *our own team's* matches were really useful for deciding how to allot pit resources. We spent a lot of Day 2 helping repair future alliance mates for matches we expected to be really difficult.
I think this metric is less useful for me in this way, because I (and I assume many teams) can already generate it via intuition. I typically sort the other teams at an event into "top tier", "average", and "bottom tier". If I'm up against two historically top-tier teams with two rookies working on their kit bots, I know the match will be difficult. If I'm against three mid-tier teams, I might do a little poking around to make sure one isn't a hidden top tier for the event, etc. So generally, between pre-scouting and an hour of walking the pits and just looking around, I can (and do) designate which of my matches will be more or less difficult.
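That intuition-based tiering can be sketched as a tiny calculation. The tier labels and weights below are purely illustrative assumptions of mine, not anything from the proposed metric:

```python
# Hypothetical sketch of the eyeball-tiering described above: assign each
# opposing team a tier weight and sum them per match. Weights are made up.
TIER_WEIGHT = {"top": 3, "mid": 2, "bottom": 1}

def match_difficulty(opponent_tiers):
    """Rough difficulty of a match given the opposing alliance's tiers."""
    return sum(TIER_WEIGHT[t] for t in opponent_tiers)

# Two historically top-tier teams plus a rookie kit bot:
print(match_difficulty(["top", "top", "bottom"]))  # 7
```

That's about all the resolution I need for deciding where to spend pit time.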
Quote:
> The QualsDifficulty score (sum of the match DifficultyScores) was really useful during scouting to decide if a team was getting a "free ride", or a "rough ride". This helped us prepare our selection list.
Without rigorously working through this, my confusion is how you can use qualification seed alone to generate a metric that tells you whether someone is artificially seeded too high (high seed, but low-difficulty matches). That "weak team" is going to show up in your calculations as a "difficult team" simply because they are seeded high. I think the reason a lot of your data converges to say the field was balanced is that you have a single input (the QS of each team) going through a level of abstraction that generates the output (difficulty score). I'm stretching a bit here, but that's my gut feeling.
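To make the circularity concrete, here is a toy sketch. The seed-only formula is an assumption of mine (not the poster's actual calculation); the point is only that any function of seed alone cannot separate a genuinely strong team from a weak one that lucked into an easy schedule:

```python
# If DifficultyScore is derived only from qualification seed, then a weak
# team with an easy schedule and a strong team that earned the same seed
# contribute identically to their opponents' difficulty. One plausible
# (hypothetical) seed-only formula: higher seed -> bigger contribution.
def difficulty_from_seed(seed, num_teams):
    return num_teams - seed + 1

strong_team_seed = 3  # seed earned through tough matches
lucky_team_seed = 3   # same seed, earned through an easy schedule
n = 40                # teams at the event

# True: the metric cannot tell these two teams apart.
print(difficulty_from_seed(strong_team_seed, n) ==
      difficulty_from_seed(lucky_team_seed, n))
```

Whatever the real formula is, so long as seed is the only input, this indistinguishability holds.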
Personally, I feel the standard metrics taken on a team to measure its individual contribution to a match provide a clearer picture than the current seeding anyway. I'm more interested in whether the team is consistently scoring auton points, making its shots, not causing fouls, etc. A goal of scouting is to generate a list of the genuinely best robots at the competition, which may vary widely from the official seeding. From there, you can generate your picklist, which should largely match that first list, perhaps skipping robots that don't complement your strategy well.
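As a minimal sketch of that workflow: rank teams by a blend of scouted contribution metrics, then take the ordering as a starting picklist. The team numbers, field names, and weights below are all hypothetical examples, not real scouting data:

```python
# Hypothetical scouted data: per-team averages gathered over quals.
teams = [
    {"team": 1111, "auton_avg": 8, "shot_pct": 0.65, "fouls_per_match": 0.2},
    {"team": 2222, "auton_avg": 3, "shot_pct": 0.80, "fouls_per_match": 1.5},
    {"team": 3333, "auton_avg": 6, "shot_pct": 0.50, "fouls_per_match": 0.1},
]

def score(t):
    # Made-up weighted blend: reward auton and accuracy, penalize fouls.
    return t["auton_avg"] + 10 * t["shot_pct"] - 2 * t["fouls_per_match"]

# "Best robots" list, independent of official seeding; a picklist would
# then prune teams that don't complement your strategy.
ranked = sorted(teams, key=score, reverse=True)
picklist = [t["team"] for t in ranked]
print(picklist)  # [1111, 3333, 2222]
```

The interesting part is choosing the weights to match your alliance strategy, which seed-based metrics can't capture at all.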
Kudos for trying to think outside the box, and hopefully you'll get some more discussion both for and against your idea, but it just didn't jump out at me as something I'd be interested in.