OPR after Week Five Events

OHHHHHHHHH. Wow. Now I feel a little stupid. wow. wow. Anyway, thanks Ether and Carr!

Curious me is wondering how to actually predict the outcome of a match using OPR? I have a bit of AP Stat background, but not much in matrix math. Is there a simple, easy method (i.e., totaling OPR for the Red and Blue alliances, and the higher total is the predicted winner)? Or, if I’ve managed to miss the thread in my filtering that describes this, could someone direct me to it?

Considering that OPR is supposed to just represent the average number of points one robot will score, I assume you can just total them.
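That sum-the-OPRs approach could be sketched in a few lines. The team numbers and OPR values below are made up purely for illustration:

```python
# Hypothetical OPR table: team number -> OPR (average points per match).
opr = {111: 55.2, 222: 40.1, 333: 12.7,
       444: 60.3, 555: 22.4, 666: 18.9}

def predict(red, blue, opr):
    """Predict a winner by summing each alliance's OPRs."""
    red_score = sum(opr[t] for t in red)
    blue_score = sum(opr[t] for t in blue)
    if red_score > blue_score:
        return "red"
    if blue_score > red_score:
        return "blue"
    return "tie"

print(predict([111, 222, 333], [444, 555, 666], opr))  # -> red
```

Red totals 108.0 and Blue totals 101.6, so the prediction is Red.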

That’s what I figured. I just wasn’t sure if it was more advanced than that.

Yeah, pretty much. I haven’t seen any other approaches to estimating an alliance’s score. It isn’t perfect, especially in scoring-limited games. 2011 is a great example: 3 teams that always scored 30 minibot points would only score 50 points together, but OPR would have predicted 90. It’s a similar case for combinations of teams that each score a high logo of tubes (they aren’t going to get points for 3 high logos), or for combinations of teams that each use a lot of game pieces (this year, 3 teams that each typically score 40 discs probably aren’t going to score 120 discs together).
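One crude way to account for a scoring-limited game would be to clamp the summed prediction at a game-specific ceiling. The cap value here is invented, just to show the idea:

```python
def predict_score(oprs, cap=None):
    """Sum individual OPRs, optionally clamping at a game-specific
    ceiling for resource-limited games (cap value is hypothetical)."""
    total = sum(oprs)
    return min(total, cap) if cap is not None else total

print(predict_score([40, 40, 40]))           # naive sum -> 120
print(predict_score([40, 40, 40], cap=100))  # clamped   -> 100
```

A flat cap is obviously a blunt instrument; it fixes the "3 teams can't all drain the feeder station" problem only in aggregate, not per scoring category.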

However, given the sheer number of discs available, the ability of teams to both human-load and floor-load, and the different climbing mechanisms, this year’s estimations could be much more accurate than the last few. (Especially in relation to minibots, ubertubes, and coopertition points.)

…Which is why OPR sucked as a metric of team performance in 2011.

OPR in 2013: pretty good until you get to really high scores where multiple teams on an alliance could drain the alliance station of discs on their own.

Still waiting for the rest of the Qual Match data from Bridgewater, but in the meantime here’s an interesting look at the OPR and CCWM based on Week 6 events.

Crossroads OPR from Friday matches was 16 for 25 in predicting outcomes, which is much worse than Boilermaker’s 20 for 24.

Can you elaborate how you do the prediction?

I used OPRNet’s predictions after the Friday matches, and then kept track today.
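The bookkeeping for that kind of tally is simple enough. The predicted and actual winners below are invented, just to show the counting:

```python
# Hypothetical (predicted winner, actual winner) pairs for five matches.
predictions = ["red", "blue", "red", "red", "blue"]
results     = ["red", "blue", "blue", "red", "red"]

# A prediction counts as correct only on an exact match,
# so a predicted winner vs. an actual tie counts as a miss.
correct = sum(p == r for p, r in zip(predictions, results))
print(f"{correct} for {len(predictions)}")  # -> 3 for 5
```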

I count only 8 matches not correctly predicted:

64
65
70
72
73
76
78
84

… what’s the 9th one?

While we’re waiting for Ed to update his superb scouting spreadsheet…

OPR & CCWM World Rankings based on Weeks 1 thru 6 Qual Match data

Weeks1thru6 World OPR&CCWM revA.xls (259 KB)



71 was a tie.

OK, let’s call it 16 out of 24 then :slight_smile:

When I did counts using OPR data and match outcome “predictions”, I always counted ties as wrong.

Unless there’s a confidence interval, I’m not exactly sure how to treat a tie statistically. And labeling a match “too close to call” isn’t any fun. :stuck_out_tongue:

Ties are outliers in a binomial situation, because you can’t have three outcomes when there are only two choices. That’s why Ether just excluded the tie and lowered the sample size.

Right, but hypothetically if OPR predictions said the match would be 100-50, and there was a tie 50-50, the OPR prediction is wrong and shouldn’t be excused as a tie.

Hypothetically, if the OPR prediction said the match would be 50.001-49.999, and there was a tie 50-50, should the OPR prediction be considered wrong and not excused as a tie? :slight_smile:
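The "too close to call" idea could be made concrete with a margin threshold on the predicted scores, so a 50.001-49.999 prediction abstains instead of picking a side. The 2-point cutoff here is arbitrary:

```python
def call_match(pred_red, pred_blue, threshold=2.0):
    """Classify a predicted score pair, abstaining when the predicted
    margin is below an arbitrary threshold (2 points here)."""
    margin = pred_red - pred_blue
    if abs(margin) < threshold:
        return "too close to call"
    return "red" if margin > 0 else "blue"

print(call_match(100.0, 50.0))     # -> red
print(call_match(50.001, 49.999))  # -> too close to call
```

Abstentions would then be excluded from the sample the same way Ether excluded the tie, rather than counted as wrong.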

Maybe we should start publishing the residual vector (or the covariance matrix?) along with the OPR :slight_smile:

One metric I’ve used to avoid this problem is the mean and standard deviation of the distribution of alliance score residuals. I also considered using winning margin residuals, but decided against it.
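Given the usual least-squares OPR setup, those alliance-score residuals fall straight out of the fit. A sketch with toy data (the participation matrix and scores below are invented; a real season has two alliance-score rows per match):

```python
import numpy as np

# Toy participation matrix: A[i][j] = 1 if team j was on the alliance
# that produced alliance score i. Values are for illustration only.
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
scores = np.array([90.0, 60.0, 80.0, 75.0, 85.0, 70.0])

# Least-squares OPR solution, then the residual vector the post alludes to.
opr, *_ = np.linalg.lstsq(A, scores, rcond=None)
residuals = scores - A @ opr
print(f"mean={residuals.mean():.3f}, std={residuals.std():.3f}")
```

The standard deviation of those residuals gives a rough error bar to hang on any predicted alliance score, which is one way to decide when a predicted margin is meaningful.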