Quote:
Originally Posted by Citrus Dad
...we have 6 scouts, one per robot, tracking scoring and other measurable metrics. This is the basis for our 1st pick list. We then have 2 other "superscouts" who watch each alliance. They rank the teams within each alliance 1 to 3 across 4 parameters for qualitative measures such as evasion, blocking, speed and pushing ability. Across many matches, these rankings provide fairly good measures of relative abilities. We use this data primarily for our 2nd pick list.
This is the system we've found to work best. We have Level I scouts, Level II scouts, and defense coordinators.
Level I scouts track solely quantitative data on Windows tablets: 1 scout per tablet, 6 scouts per match. Level II scouts track qualitative data on laptops, usually set up behind the Level I scouts in the stands: 1 scout per laptop, 3 scouts per match, each watching 2 robots. Level II scouts are normally pre-selected, since they have to demonstrate that they can watch 2 robots at a time and still record accurate qualitative data. Level I scouts can be any team member not on the drive team or pit crew.
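For anyone who wants to build something similar, here's a rough sketch (in Python, with made-up field names like teleop_points and evasion_rank, not our actual schema) of the two kinds of records the scouts produce:

```python
from dataclasses import dataclass

@dataclass
class LevelOneRecord:
    """One Level I scout's quantitative data for one robot in one match."""
    match_number: int
    team_number: int
    auto_points: int       # placeholder scoring fields, not our real ones
    teleop_points: int
    endgame_points: int

@dataclass
class LevelTwoRecord:
    """One Level II scout's qualitative rankings for one robot in one match."""
    match_number: int
    team_number: int
    evasion_rank: int      # 1-3 within the alliance, like in the quoted system
    blocking_rank: int
    speed_rank: int
    pushing_rank: int
```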
We've worked out a system this season where all of the tablets and laptops are hardwired together in the stands, so the data collected on each device updates in real time across the whole system. Then, 2 or 3 matches before the one we're competing in, we print out strategy sheets that compile everything collected throughout the day about the teams we're with and against in our match. This gives our drive team an overview of what they're in for and lets us help them decide on a strategy.
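The merging itself doesn't have to be fancy. One simple way to do it over a wired LAN, assuming each tablet/laptop can append newline-delimited JSON to its own file in a shared folder (the path below is hypothetical), would be something like:

```python
import json
import time
from collections import defaultdict
from pathlib import Path

SHARED_DIR = Path("//scouting-server/shared")  # hypothetical shared folder on the wired network

def merge_device_files():
    """Merge every device's .jsonl log into one dict keyed by team number."""
    by_team = defaultdict(list)
    for device_file in SHARED_DIR.glob("*.jsonl"):
        for line in device_file.read_text().splitlines():
            if line.strip():
                record = json.loads(line)
                by_team[record["team_number"]].append(record)
    return by_team

if __name__ == "__main__":
    # Poll every few seconds so the master copy stays current during matches.
    while True:
        data = merge_device_files()
        print(f"{sum(len(v) for v in data.values())} records across {len(data)} teams")
        time.sleep(5)
```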
Both our qualitative and quantitative data are pieced together into the strategy sheets, so our strategists see a complete picture of every robot in the match.
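As a rough illustration of that piecing-together step (again, a sketch, not our actual code), a strategy sheet generator just has to join the per-team quantitative averages with the average qualitative ranks:

```python
from statistics import mean

def build_strategy_sheet(team_numbers, level_one_records, level_two_records):
    """Return one summary line per team, combining both kinds of data.

    Both record lists are lists of dicts with a "team_number" key; the other
    keys are the hypothetical fields from the sketches above.
    """
    lines = []
    for team in team_numbers:
        quant = [r for r in level_one_records if r["team_number"] == team]
        qual = [r for r in level_two_records if r["team_number"] == team]
        avg_points = mean(r["teleop_points"] for r in quant) if quant else 0.0
        avg_evasion = mean(r["evasion_rank"] for r in qual) if qual else None
        lines.append(
            f"Team {team}: avg teleop {avg_points:.1f} pts, "
            f"avg evasion rank {avg_evasion if avg_evasion is not None else 'n/a'}"
        )
    return "\n".join(lines)

# Placeholder team numbers for the 6 robots in an upcoming match.
print(build_strategy_sheet([1001, 1002, 1003, 2001, 2002, 2003], [], []))
```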