08-04-2015, 16:13
MikeE
Re: How many matches are really needed to determine final rankings?

Great analysis!
To expand a little on Jon's comments, I see two main sources of Qualification Average (QA) variance throughout a competition:
  • (in)consistency of a team's own scoring
  • variation due to partner contribution, i.e. schedule effects
Great teams are both consistent and (since they score most of their alliance's points) less subject to alliance partner/schedule effects. So they should sort out fairly quickly and remain mostly stable.
District-sized events also have less schedule variance, since each team is allied with a majority of the other teams over the course of the event.
Early events would be expected to show higher inconsistency, as teams are still learning the game.
It would be interesting to see whether there is less variability in the later district events, where teams are on their 2nd or later event.
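To make the two variance sources above concrete, here's a minimal Monte Carlo sketch (my own toy model, not anything from Jon's analysis): each match score is a team's own contribution plus a random partner's contribution, so a team's average QA wobbles both from its own inconsistency and from who it happens to draw. All names and parameters are illustrative assumptions, and the alliance is simplified to a single partner.

```python
import random
import statistics

random.seed(42)

# Hypothetical field: each team has a true mean contribution and an
# "own consistency" spread (both drawn arbitrarily for illustration).
N_TEAMS = 40
own_mean = [random.uniform(10, 60) for _ in range(N_TEAMS)]
own_sd = [random.uniform(2, 10) for _ in range(N_TEAMS)]

def sim_avg(team, n_matches):
    """One simulated schedule: average qual score for `team`,
    with a different random partner each match."""
    scores = []
    for _ in range(n_matches):
        partner = random.randrange(N_TEAMS)
        while partner == team:
            partner = random.randrange(N_TEAMS)
        scores.append(random.gauss(own_mean[team], own_sd[team])
                      + random.gauss(own_mean[partner], own_sd[partner]))
    return statistics.mean(scores)

def avg_spread(team, n_matches, trials=200):
    """Std dev of a team's average QA across many simulated schedules:
    how much the schedule alone can move its ranking metric."""
    return statistics.stdev(sim_avg(team, n_matches) for _ in range(trials))

# More qual matches -> less schedule-driven spread in a team's average,
# which is why rankings stabilize as an event goes on.
for n in (6, 12, 24):
    print(n, round(avg_spread(0, n), 2))
```

Under this model the spread shrinks roughly as 1/sqrt(matches), and a team with small `own_sd` and a high `own_mean` is hurt least by the partner term, matching the point that great teams sort out quickly.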
__________________
no stranger to the working end of a pencil