Quote:
Originally Posted by Citrus Dad
While the OPRs from the individual divisions may not have been useful, probably because of differences in division-wide strategies (e.g. whether there are FCSs), the OPRs are very useful in predicting matches within divisions. I haven't checked our results yet, but it looks like our match predictions in Curie were correct in 80% of the matches. That's statistically well beyond significantly different from pure chance.
Also, OPRs are very useful when drawn from a common pool such as a regional or division. Using direct quantitative statistics in our two regionals, we were able to predict the winners of 6 of 7 elimination rounds in both regionals. The only exceptions were 1) the 4 vs 5 round (obviously expected to be the closest most unpredictable) and 2) when the top alliance suffered a mechanical failure for two matches.
Anyone who ignores the predictive power of OPR or other statistics does so at their own peril.
Emphasis mine.
One issue I ran into when a teammate and I built an OPR tracker for competitions was that while OPR was very good at "predicting" the outcomes of matches it had already been fit to, it was noticeably weaker at predicting future matches. We attributed this to two factors: a) an alliance's performance depends on more than the performance of its individual robots, and b) the same alliance can fare wildly differently when facing different strategies from the opposing side.
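For anyone curious why OPR "explains" past matches so well, it helps to see how it's computed: OPR is just the least-squares solution to "each alliance's score equals the sum of its members' contributions," so it is literally fit to the measured matches. Here's a minimal sketch of that standard formulation using the normal equations. The team labels and scores below are invented for illustration, and a real tracker would use a linear-algebra library rather than hand-rolled elimination.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def opr(matches, teams):
    """Least-squares OPR.

    matches: list of (alliance_teams, alliance_score) pairs, one per
    alliance per match. Builds the normal equations (A^T A) x = A^T b,
    where each row of A has a 1 for every team on that alliance.
    """
    idx = {t: i for i, t in enumerate(teams)}
    n = len(teams)
    AtA = [[0.0] * n for _ in range(n)]
    Atb = [0.0] * n
    for alliance, score in matches:
        for t1 in alliance:
            Atb[idx[t1]] += score
            for t2 in alliance:
                AtA[idx[t1]][idx[t2]] += 1.0
    return dict(zip(teams, solve(AtA, Atb)))

# Made-up schedule: 4 teams, 2-team alliances, scores chosen so the
# model is exact (contributions 30, 20, 10, 5 recovered perfectly).
matches = [
    (("A", "B"), 50.0), (("C", "D"), 15.0),
    (("A", "C"), 40.0), (("B", "D"), 25.0),
    (("A", "D"), 35.0), (("B", "C"), 30.0),
]
ratings = opr(matches, ["A", "B", "C", "D"])
```

Note that the sketch recovers the contributions exactly only because the synthetic scores are purely additive; real match data never is, which is precisely the in-sample vs. out-of-sample gap described above.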