#13
02-12-2012, 01:02
dcarr
#HoldStrong
AKA: David Carr
FRC #3309 (Friarbots)
Team Role: Mentor
 
Join Date: Dec 2010
Rookie Year: 2009
Location: Anaheim
Posts: 954
Re: How to Win a Robotics Competition

Quote:
Originally Posted by F22Rapture
I agree 100%. While I'm sure this isn't true for everybody, I've always found that the majority of scouting data never really gets used to make a decision.
  • Robot's technical ability to complete each of the subtasks of autonomous, scoring, defending, and endgame, rated from 0 to 10, judged provisionally on Thursday and revised later
  • Overall performance, rated from 0 to 10, for each of the team's 10 matches

Each of the subtasks will be weighted by its importance to the game (30% for endgame, 40% for scoring, etc.) and then averaged together, giving a single technical average to deal with.
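To make the arithmetic concrete, here is a rough Python sketch of that weighted technical average. The 30% endgame and 40% scoring weights come from the description above; the category names and the remaining 15%/15% split are placeholders so the weights sum to 100%, not the poster's actual numbers.

# Weighted technical average: 0-10 ratings per subtask, weights sum to 1.0.
# 30% endgame and 40% scoring are from the post; the other weights are assumed.
TECH_WEIGHTS = {"autonomous": 0.15, "scoring": 0.40, "defending": 0.15, "endgame": 0.30}

def technical_average(ratings):
    """ratings: dict mapping each subtask to its 0-10 score from Thursday scouting."""
    return sum(TECH_WEIGHTS[cat] * ratings[cat] for cat in TECH_WEIGHTS)

print(technical_average({"autonomous": 7, "scoring": 8, "defending": 5, "endgame": 9}))  # 7.7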

A team's performances from their 10 matches are then averaged together, with the first four matches weighted 5% each, the next two weighted 10% each, and the last four weighted 15% each. This gives a single performance average to deal with.
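The same caveat applies here: this is only a sketch of the match weighting described above, with made-up example scores.

# Performance average over 10 matches: later matches count more.
MATCH_WEIGHTS = [0.05] * 4 + [0.10] * 2 + [0.15] * 4  # sums to 1.0

def performance_average(match_scores):
    """match_scores: the 10 overall-performance ratings (0-10), in match order."""
    assert len(match_scores) == len(MATCH_WEIGHTS)
    return sum(w * s for w, s in zip(MATCH_WEIGHTS, match_scores))

print(performance_average([6, 7, 5, 8, 7, 9, 8, 8, 9, 9]))  # 8.0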

It's very simple math, it's not much data to go through, and it can be easily done in Excel with no special scouting software. And once we've finished, it's very easy to sort, filter, and manipulate to get a good list of teams. It's perfect for smaller teams since it only takes 2-3 scouts to come up with a single performance score for each robot, and maybe a few comments to change their tech score and note their predominant strategies. And they can spend their time watching the match instead of writing.
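The Excel sort-and-filter step is just as short as a script, for anyone who prefers one; the team numbers and scores below are invented purely for illustration.

# Rank teams by performance average (ties broken by technical average)
# and keep a short list. All numbers here are made up.
teams = {
    1111: {"tech": 8.4, "perf": 9.1},
    2222: {"tech": 8.9, "perf": 8.7},
    3333: {"tech": 7.2, "perf": 7.8},
}

shortlist = sorted(teams, key=lambda t: (teams[t]["perf"], teams[t]["tech"]), reverse=True)
print(shortlist[:15])  # the 10-15 teams worth a closer look in the pits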

It's not fancy, perhaps not Nate Silver levels of accuracy, but it's fast and can narrow a field of 40-80 teams down to only 10 or 15 almost instantly (with little information backlog), meaning that you can then focus your time and efforts on those teams when you start going around the pits to make connections.
If your final score for each team is based on its overall performance, however, this data isn't particularly useful in discerning the strengths of different teams and how they fit into an alliance.
__________________
Team 3309
2016 Los Angeles Chairman's Award Winner
2016 Orange County Regional Winner with 3476 & 6220
Team3309.org
Orange County Robotics Alliance