Thread: Einstein 2013
  #22   29-04-2013, 21:46
DMetalKong (AKA: David K.)
Re: Einstein 2013

Quote:
Originally Posted by Citrus Dad View Post
While the OPRs from the individual divisions may not have been useful, probably because of differences in division-wide strategies (e.g. whether there are FCSs), the OPRs are very useful in predicting matches within divisions. I haven't checked our results yet, but it looks like our match predictions in Curie were correct in 80% of the matches. That's statistically well beyond significantly different from pure chance.

Also, OPRs are very useful when drawn from a common pool such as a regional or division. Using direct quantitative statistics in our two regionals, we were able to predict the winners of 6 of 7 elimination rounds in both regionals. The only exceptions were 1) the 4 vs 5 round (obviously expected to be the closest most unpredictable) and 2) when the top alliance suffered a mechanical failure for two matches.

Anyone who ignores the predictive power of OPR or other statistics does so at their own peril.
Emphasis mine.

One issue I ran into when a teammate and I built an OPR tracker for competitions was that while OPR was very good at "predicting" the outcomes of matches that had already been played, it was less powerful for predicting future matches. We attributed this to the facts that a) an alliance's performance depends on more than the sum of its individual robots' performances, and b) alliances may fare wildly differently when facing different strategies from the opposing side.
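
For anyone curious how these trackers typically work: OPR is usually estimated by least squares, treating each alliance's score as the sum of its members' unknown contributions. Here's a minimal sketch with made-up teams, match data, and a hypothetical `predict` helper (none of this is from an actual tracker):

```python
import numpy as np

# Hypothetical teams and match results: (red alliance, blue alliance,
# red score, blue score). Real OPR uses a full event's schedule.
teams = [1, 2, 3, 4, 5, 6]
team_idx = {t: i for i, t in enumerate(teams)}

matches = [
    ((1, 2, 3), (4, 5, 6), 60, 45),
    ((1, 4, 6), (2, 3, 5), 55, 50),
    ((2, 5, 6), (1, 3, 4), 40, 65),
    ((3, 5, 6), (1, 2, 4), 52, 58),
]

# Build the system A x = s: one row per alliance per match, a 1 in each
# column for a team on that alliance, and s holding the alliance score.
rows, scores = [], []
for red, blue, red_score, blue_score in matches:
    for alliance, score in ((red, red_score), (blue, blue_score)):
        row = np.zeros(len(teams))
        for t in alliance:
            row[team_idx[t]] = 1.0
        rows.append(row)
        scores.append(score)

A = np.array(rows)
s = np.array(scores, dtype=float)

# Least-squares estimate of each team's offensive contribution (its OPR)
opr, *_ = np.linalg.lstsq(A, s, rcond=None)

def predict(red, blue):
    """Predict a winner by comparing summed alliance OPRs."""
    red_sum = sum(opr[team_idx[t]] for t in red)
    blue_sum = sum(opr[team_idx[t]] for t in blue)
    return "red" if red_sum > blue_sum else "blue"
```

This is exactly where the limitation above bites: `predict` assumes alliance score is a plain sum of individual contributions, so it can't capture interactions between partners or how an alliance reacts to the opposing side's strategy.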