However, given the sheer number of discs available, the ability of teams to both human-load and floor-load, and the different climbing mechanisms, this year’s estimates could be much more accurate than the last few years’. (Especially compared to 2011, with its minibots, ubertubes, and coopertition points.)
…Which is why OPR sucked as a metric of team performance in 2011.
OPR in 2013: pretty good until you get to really high scores where multiple teams on an alliance could drain the alliance station of discs on their own.
*Still waiting for the rest of the Qual Match data from Bridgewater, but in the meantime here’s an interesting look at the OPR and CCWM based on Week6 events.
Crossroads OPR from Friday matches were 16 for 25 in predicting outcomes, which is much worse than Boilermaker’s 20 for 24.
Can you elaborate how you do the prediction?
I used OPRNet’s predictions after the Friday matches, and then kept track today.
I count only 8 matches not correctly predicted:
64
65
70
72
73
76
78
84
… what’s the 9th one?
*While we’re waiting for Ed to update his superb scouting spreadsheet…
OPR & CCWM World Rankings based on Weeks 1 thru 6 Qual Match data
Weeks1thru6 World OPR&CCWM revA.xls (259 KB)
71 was a tie.
OK, let’s call it 16 out of 24 then 
When I did counts using OPR data and match outcome “predictions”, I always counted ties as wrong.
Unless there’s a confidence interval, I’m not exactly sure how to treat a tie statistically. And labeling a match “too close to call” isn’t any fun. 
Ties are outliers in binomial situations, because you can’t have 3 outcomes with only two choices. Which is why Ether just excluded them and lowered the sample size.
Right, but hypothetically if OPR predictions said the match would be 100-50, and there was a tie 50-50, the OPR prediction is wrong and shouldn’t be excused as a tie.
hypothetically if OPR prediction said the match would be 50.001-49.999, and there was a tie 50-50, should the OPR prediction be considered wrong and not excused as a tie? 
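Hypotheticals aside, the tie policy is easy to make explicit in code. A minimal sketch (the names are my own, not from any scouting tool), assuming predictions and results are both given as signed red-minus-blue margins:

```python
# Hypothetical helper: score OPR match predictions under two tie policies.
# `predictions` and `results` are signed margins
# (predicted_red - predicted_blue, actual_red - actual_blue).

def prediction_record(predictions, results, ties="wrong"):
    """Count correct predictions.

    ties="wrong"   -> an actual tie counts against the prediction
    ties="exclude" -> ties are dropped and the sample size shrinks
    """
    correct, total = 0, 0
    for pred, actual in zip(predictions, results):
        if actual == 0:            # the match was a tie
            if ties == "exclude":
                continue           # drop it, as in the 16-of-24 count
            total += 1             # ties="wrong": counted, never correct
            continue
        total += 1
        if (pred > 0) == (actual > 0):
            correct += 1
    return correct, total
```

With the same data, `ties="wrong"` gives the 16-of-25 style count and `ties="exclude"` gives the 16-of-24 style count.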
Maybe we should start publishing the residual vector (or the covariance matrix?) along with the OPR 
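For what that might look like: a minimal sketch (my own, not anyone’s actual spreadsheet code) that computes OPR as a least-squares fit and keeps the per-alliance residual vector alongside it:

```python
import numpy as np

# Minimal OPR sketch: each row of A marks the teams on one alliance,
# b holds that alliance's score.  OPR is the least-squares solution,
# and b - A @ opr is the residual vector the post above suggests
# publishing alongside it.

def opr_with_residuals(alliances, scores, n_teams):
    A = np.zeros((len(alliances), n_teams))
    for row, teams in enumerate(alliances):
        A[row, list(teams)] = 1.0        # team indices on this alliance
    b = np.asarray(scores, dtype=float)
    opr, *_ = np.linalg.lstsq(A, b, rcond=None)
    return opr, b - A @ opr              # per-alliance prediction error
```

On a toy event with three teams whose true contributions are 10, 20, and 30, `opr_with_residuals([(0, 1), (1, 2), (0, 2)], [30, 50, 40], 3)` recovers those values with zero residuals; on real qual data the residual vector shows which alliances the model fit poorly.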
One metric I’ve used to avoid this problem is the mean and standard deviation of the distribution of alliance score residuals. I also considered using winning-margin residuals, but decided against it.
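As a sketch of that metric (my own illustration, with hypothetical names), given actual alliance scores and the scores predicted by summing OPRs:

```python
import numpy as np

# Summarize OPR fit quality by the distribution of alliance score
# residuals (actual minus predicted) instead of a win/loss tally.

def residual_summary(actual_scores, predicted_scores):
    r = np.asarray(actual_scores, float) - np.asarray(predicted_scores, float)
    return r.mean(), r.std(ddof=1)       # sample standard deviation
```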
I’m sure that integer rounding can be excused. 
*FWIW, I calculated OPR using all qual data for weeks 1 thru 5 PLUS week6 Friday, and used that to predict Saturday Qual matches at Crossroads.
It got Matches 65 and 84 right, but got Match 80 wrong.
64
70
72
73
76
78
80
I’m wondering if OPR predictions at the Championship event will be similar; Crossroads had a fairly deep field with an average OPR of about 27.6.
*Max Event OPR achieved by each of the 2,490 teams
Max EventOPR.xls (146 KB)
Wait a minute, Ether. How can the average CCWM not be zero? For each event, the sum of the CCWMs should be zero. In most cases it is, but some events have a small positive or negative sum. I think that must be due to round-off error. Is that correct? If so, how can the average CCWM be -2.11?
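FWIW, the zero-sum intuition checks out numerically. In a toy event where every team plays the same number of matches, the normal equations force the CCWMs to sum to exactly zero (summing them gives 3·Σmargins = 0, and the margins cancel pairwise), so any nonzero per-event sum should indeed be round-off. A sketch of my own, not the spreadsheet’s method:

```python
import numpy as np

# Toy CCWM check: b holds each alliance's winning margin (own score
# minus opponent score), so the two rows for a match carry +margin and
# -margin and b sums to zero.  With every team playing the same number
# of matches, the least-squares solution sums to zero up to round-off.

rng = np.random.default_rng(0)
n_teams, n_matches = 6, 10
rows, b = [], []
for _ in range(n_matches):
    order = rng.permutation(n_teams)
    red, blue = order[:3], order[3:]      # all 6 teams play every match
    margin = float(rng.integers(-40, 41))
    for alliance, m in ((red, margin), (blue, -margin)):
        row = np.zeros(n_teams)
        row[alliance] = 1.0
        rows.append(row)
        b.append(m)
ccwm, *_ = np.linalg.lstsq(np.array(rows), np.array(b), rcond=None)
print(ccwm.sum())    # effectively zero; any leftover is round-off
```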