Quals OPR Predicting Elims Results: Year By Year

TLDR
I looked at OPR and CCWM from qualification matches, then summed those numbers for each elims alliance. I then looked at which alliance “should” have won each individual playoff game (based on the greater sum of its teams’ stats), and which alliance actually got the higher score. Below are the “correct guessing” rates for OPR and CCWM from 2002-2016:

YEAR   OPR(%)   CCWM(%)   NUMBER OF PLAYOFF GAMES
2002   57.836   57.090    268
2003   62.162   60.811    370
2004   62.694   62.953    386
2005   67.435   65.994    347
2006   65.709   64.368    522
2007   53.835   55.572    691
2008   71.902   67.607    815
2009   66.667   62.583    906
2010   72.398   70.362    884
2011   74.153   71.926    1033
2012   68.746   65.272    1267
2013   71.534   68.708    1486
2014   69.188   65.102    1811
2015   70.672   69.073    2189
2016   70.007   66.013    2704

Thoughts

  • I think this “OPR Prediction Percentage” is a relatively good metric to tell how “good” OPR is for a given year.
  • CCWM is almost always worse than OPR, which I found interesting.
  • I thought 2015 would have a much better OPR prediction score than 2014 did, but it turns out they were within a percent of each other.

Method
I used a Python script, pulling data from TBA to get OPRs and playoff results. For each event in a year, I got the OPRs/CCWMs for each team (which I believe TBA computes from quals matches only). I then went through each individual playoff game (not the best-of-3 series as a whole) and compared the two alliances’ scores to determine the winner (so not exactly how the tournament worked in 2015, but pretty much every other year). If the summed OPRs/CCWMs predicted the same winner as who actually won that game, it counted as a correct prediction. I decided to simply ignore ties and bad data from TBA.

# I believe Python 2.7.9 or greater is required for this script
import requests
headers = {'X-TBA-App-Id': 'frc4917:customOPRCalculator:1'}

def get_alliance_stat(stats, alliance):
    # Sum the given stat (OPR or CCWM) over every team on the alliance.
    total_stat = 0
    for team in alliance['teams']:
        total_stat += stats[team[3:]]  # strip the 'frc' prefix to match TBA's stats keys
    return total_stat

def prediction_percentage(stats, matches):
    # Count how many matches the summed stat predicted correctly.
    total = 0
    correct = 0
    for match in matches:
        redAlliance = match['alliances']['red']
        blueAlliance = match['alliances']['blue']
        if redAlliance['score'] == blueAlliance['score']:
            continue  # ignore ties
        try:
            redAllianceStat = get_alliance_stat(stats, redAlliance)
            blueAllianceStat = get_alliance_stat(stats, blueAlliance)
        except KeyError:
            continue  # ignore matches with missing/bad stat data

        if (redAllianceStat > blueAllianceStat) == (redAlliance['score'] > blueAlliance['score']):
            correct += 1
        total += 1

    return correct, total


def do_event(event_code, totals, playoffs_only=True):
    url = 'https://www.thebluealliance.com/api/v2/event/' + event_code + '/stats'
    r = requests.get(url, headers=headers)
    stats_contents = r.json()
    # Skip events with no OPR/CCWM data (use .get() so a missing key doesn't raise)
    if not (stats_contents.get('oprs') and stats_contents.get('ccwms')):
        return

    url = 'https://www.thebluealliance.com/api/v2/event/' + event_code + '/matches'
    r = requests.get(url, headers=headers)
    matches_contents = r.json()
    if playoffs_only:
        matches_contents = [x for x in matches_contents if x['comp_level'] != 'qm']

    correct, total = prediction_percentage(stats_contents['oprs'], matches_contents)
    totals['opr_correct'] += correct
    totals['num_games'] += total
    correct, total = prediction_percentage(stats_contents['ccwms'], matches_contents)
    totals['ccwm_correct'] += correct

for year in range(2002, 2017):
    url = 'https://www.thebluealliance.com/api/v2/events/' + str(year)
    r = requests.get(url, headers=headers)
    events_contents = r.json()

    totals = {'num_games': 0, 'opr_correct': 0, 'ccwm_correct': 0}
    for event in events_contents:
        do_event(event['key'], totals)

    print(year)
    print(totals)
    print('OPR ' + str(totals['opr_correct'] / float(totals['num_games'])))
    print('CCWM ' + str(totals['ccwm_correct'] / float(totals['num_games'])))

I looked around a bit and hadn’t seen this type of evaluation of OPR on a year-by-year basis - sorry if I missed an existing one!

What do you think 2017’s “OPR Prediction Percentage” will be?

Neat! I was curious to see what the accuracy would be if you just guessed that “Red” would win all playoff matches.

For 2016, across 2223 matches (this is lower than your 2704 probably because I did this on my local development server without any offseasons), guessing “Red” every time yields 66.93% accuracy. It’s better than CCWM!
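
For anyone who wants to check that baseline themselves, here’s a minimal sketch reusing the same TBA v2 match structure as the script above (the function name is mine):

def red_baseline(matches):
    # Baseline: always predict that the red alliance wins.
    correct = 0
    total = 0
    for match in matches:
        red_score = match['alliances']['red']['score']
        blue_score = match['alliances']['blue']['score']
        if red_score == blue_score:
            continue  # ignore ties, as in the script above
        if red_score > blue_score:
            correct += 1
        total += 1
    return correct, total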

Interesting thoughts. Consider looking at how well the score differentials match the score differential predictions.
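
Here is a rough sketch of one way to do that comparison, reusing get_alliance_stat from the script above (the Pearson correlation is just one possible goodness-of-fit measure, and the function name is mine):

import math

def differential_correlation(stats, matches):
    # Compare predicted differentials (sum of red stats minus sum of blue stats)
    # against the actual score differentials.
    predicted, actual = [], []
    for match in matches:
        red = match['alliances']['red']
        blue = match['alliances']['blue']
        try:
            diff = get_alliance_stat(stats, red) - get_alliance_stat(stats, blue)
        except KeyError:
            continue  # skip matches with missing stat data, as before
        predicted.append(diff)
        actual.append(red['score'] - blue['score'])
    n = float(len(predicted))
    mean_p, mean_a = sum(predicted) / n, sum(actual) / n
    cov = sum((p - mean_p) * (a - mean_a) for p, a in zip(predicted, actual))
    var_p = sum((p - mean_p) ** 2 for p in predicted)
    var_a = sum((a - mean_a) ** 2 for a in actual)
    return cov / math.sqrt(var_p * var_a)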

In my opinion, it’s really more of a metric of how closely playoff play mirrors qual play. There are better teams, different strategies, and sometimes different scoring criteria in playoffs than in quals. So be wary of using qual OPR to predict playoff matches.

Here is an estimate for how good quals OPR is at predicting future quals OPR matches. Data come from my Comparison of Statistical Prediction Models paper.

Here is the percentage of correctly predicted quals matches for each year given all previous qual matches:

YEAR   CORRECT PREDICTIONS
2009   67.2%
2010   71.9%
2011   73.9%
2012   71.0%
2013   72.9%
2014   69.7%
2015   70.1%
2016   71.6%

Both of our methods seem to provide very similar results, with a correlation coefficient of 0.89. That surprises me; I would have anticipated the difference between quals and playoff matches to be higher.
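
For reference, that correlation can be checked directly from the two tables above (assuming numpy is available); the 2009-2016 OPR playoff percentages come from the first table and the quals percentages from the one just above:

import numpy as np

playoff = [66.667, 72.398, 74.153, 68.746, 71.534, 69.188, 70.672, 70.007]  # 2009-2016 playoff OPR%
quals = [67.2, 71.9, 73.9, 71.0, 72.9, 69.7, 70.1, 71.6]                    # 2009-2016 quals correct%
print(np.corrcoef(playoff, quals)[0][1])  # ~0.89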

I’ll provide a tangential data point I calculated last year; it provides some additional insight into who wins playoff matches:

And here’s the calculation from last year:


In 2016, 82 of 134 events were won by top seeds (61.1940298507 percent)


In 2016, 134 of 82 events were won by top seeds (61.1940298507 percent)

Man, top seeds were so good last year that they were winning events they didn’t even attend. Car Nack has gotten things wrong before, but he wasn’t even close this time.

Here are a couple questions: How often do red alliances with the higher OPR win? How often do blue alliances with the higher OPR win?

Just took a look at this - really interesting question.

YEAR  BLUE FAVOURED WIN%  RED FAVOURED WIN%
2002       42.168           64.864
2003       52.381           66.037
2004       56.666           65.413
2005       54.081           72.690
2006       54.961           69.309
2007       45.209           61.904
2008       55.102           77.221
2009       55.326           72.032
2010       51.086           78.000
2011       51.063           80.952
2012       54.252           74.082
2013       58.726           76.648
2014       59.772           72.210
2015       45.681           76.958
2016       54.517           74.723
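
For anyone who wants to reproduce this split, here’s a rough sketch of one way to do it with the same data, as a variant of prediction_percentage above (not the exact code used; the function name is mine):

def favoured_win_counts(stats, matches):
    # colour -> [wins when that colour is favoured, times that colour is favoured]
    results = {'red': [0, 0], 'blue': [0, 0]}
    for match in matches:
        red = match['alliances']['red']
        blue = match['alliances']['blue']
        if red['score'] == blue['score']:
            continue  # ignore ties
        try:
            red_stat = get_alliance_stat(stats, red)
            blue_stat = get_alliance_stat(stats, blue)
        except KeyError:
            continue
        favoured = 'red' if red_stat > blue_stat else 'blue'
        winner = 'red' if red['score'] > blue['score'] else 'blue'
        results[favoured][1] += 1
        if favoured == winner:
            results[favoured][0] += 1
    return results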

Since it’s intuitive that this year CCWM should predict better than OPR, I decided to also run this script for 2017 and the first week of 2018.

YEAR   OPR(%)   CCWM(%)   NUMBER OF PLAYOFF GAMES
2017   61.440   60.788    3374
2018   70.552   70.552    326

It could be that we have a low sample size (so I will have to continue to check this as we get more competitions), but this surprised me. First, it’s strange that 2018’s OPR prediction is so close to years where it seemed quite strong, like 2015. Second, CCWM predicted exactly as many matches correctly as OPR did, even though there were 60 matches they predicted differently (so ~18.5%).

Perhaps OPR is still a good metric this year?

This is a really interesting find… as Karthik stated in another thread, OPR is useful for determining who is good at maintaining ownership of the Scale and Home Switch. The ability to do those things is very obviously critical to a winning playoff alliance.

My thought is that OPR is good for predicting the value of the top cycling robots on an alliance, but not good at predicting the value of the 2nd pick on an alliance.

To me the fact that OPR is a good predictor of Playoff Success might indicate that the 3rd robot on a playoff alliance is close to inconsequential. This makes a little sense since most alliances just used their 3rd robot to fill the Vault while the top 2 alliance members cycled cubes.

I’ll be interested to see if this metric changes as the meta of the game changes. I have a feeling the role of the 3rd robot will become more complex in future weeks, and the value that the 3rd robot adds to an alliance will go up as well.

In the end though this game is won or lost by ownership points. Climbing doesn’t matter and neither does the Vault if your alliance can’t own anything for a fair duration of the match. OPR is good for predicting ownership so it could turn out to be a pretty solid predictor.

I think the OPR will be a good predictor of relative strength within an event. It might even work better than scouting data because of the vagaries around the effectiveness of individual cubes in scoring ownership points.

However, I don’t think the OPRs are comparable at all across events. The OPR is about ownership shares, and that is largely a function of the opponents a team faces. That was not true in the past, when scoring was either linear or a step function. Even where there was a score cap (2015), alliances didn’t approach the theoretical ceiling. That’s not the case this year. So OPR will be useless for ranking the “top” teams this year.

One thing that varies year to year is the fundamental predictability of the game. This year’s game may be more predictable than previous years.

The ultimate simulation of a possible match is to take the robots involved and have them run a match. We effectively do that in eliminations by playing best of 3. If a game sees a lot of 2-1 elimination match-ups, it suggests the game itself isn’t that predictable. If it sees a lot of 2-0 match-ups, it is predictable. I would suggest adding a column with the percentage of 2-0 elimination results.

2016 and 2017 had a lot more 2-1 results in week 1 than 2018 did.

Plus, I believe the numbers are on a per-match basis, not on a who-advances basis. More 2-1 results will pull even the best predictor towards 67%. I am seeing that a perfect simulator in 2018 for week 1 would rate about 83%. In 2016 a perfect simulator would rate about 75% (week 1 only).
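
One way to see where those ceilings come from (my own back-of-the-envelope reasoning, not necessarily how they were computed): a predictor that always picks the eventual series winner goes 2-for-2 in a sweep and 2-for-3 in a 2-1 series, so on a per-match basis its ceiling is 2 / (2 + p), where p is the fraction of series that go 2-1.

def perfect_predictor_ceiling(two_one_fraction):
    # ceiling = (2*(1-p) + 2*p) / (2*(1-p) + 3*p) = 2 / (2 + p)
    return 2.0 / (2.0 + two_one_fraction)

# Back-solving from the quoted figures: ~83% implies roughly 40% of 2018 week 1
# series went 2-1, and ~75% implies roughly two-thirds of 2016 week 1 series did.
print(perfect_predictor_ceiling(0.4))      # ~0.83
print(perfect_predictor_ceiling(2 / 3.0))  # ~0.75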

Week 2 data:

YEAR   OPR(%)   CCWM(%)   NUMBER OF PLAYOFF GAMES
2018   72.272   71.887    779

OPR is now barely ahead of CCWM - I think it’s safe to say that the first week was not an anomaly.

It doesn’t really surprise me; from what I found here, OPR had more predictive power than CCWM every year from 2008-2016. My guess for the reason is that the pair of data points (red and blue scores) that OPR uses carries information the winning margin alone can’t capture. I think we would all agree that an alliance losing 249 to 250 this year is probably stronger than an alliance that wins 1-0, but CCWM would value the latter slightly more than the former, while OPR would value the former much more than the latter.
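
A tiny illustration of that point (my own toy example, not from the thread): for each match, the OPR least-squares problem tries to explain the alliance’s own score, while the CCWM problem tries to explain the winning margin, so the two hypothetical alliances look very different to OPR but nearly identical (with opposite signs) to CCWM.

# Hypothetical single-match alliances from the example above.
matches = [
    ('loses 249-250', 249, 250),
    ('wins 1-0', 1, 0),
]
for label, score, opp_score in matches:
    opr_target = score               # what OPR's least squares tries to explain
    ccwm_target = score - opp_score  # what CCWM's least squares tries to explain
    print(label + ': OPR target ' + str(opr_target) + ', CCWM target ' + str(ccwm_target))
# loses 249-250: OPR target 249, CCWM target -1
# wins 1-0: OPR target 1, CCWM target 1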