 Chief Delphi Quals OPR Predicting Elims Results: Year By Year
#1
02-24-2017, 05:39 PM
 microbuns Software + Drive Coach AKA: Sam Maier FRC #4917 (Sir Lancerbot) Team Role: Mentor Join Date: Jan 2015 Rookie Year: 2014 Location: Elmira, ON Posts: 183
Quals OPR Predicting Elims Results: Year By Year

TLDR
I looked at OPR and CCWM from qualification matches, then summed these numbers across each elims alliance. I then compared who "should" have won each individual playoff game (the alliance with the greater summed stat) against who actually got the higher score. Below are the "correct guessing" rates for OPR and CCWM from 2002-2016:
Code:
```YEAR   OPR(%)   CCWM(%)   NUMBER OF PLAYOFF GAMES
2002   57.836   57.090    268
2003   62.162   60.811    370
2004   62.694   62.953    386
2005   67.435   65.994    347
2006   65.709   64.368    522
2007   53.835   55.572    691
2008   71.902   67.607    815
2009   66.667   62.583    906
2010   72.398   70.362    884
2011   74.153   71.926    1033
2012   68.746   65.272    1267
2013   71.534   68.708    1486
2014   69.188   65.102    1811
2015   70.672   69.073    2189
2016   70.007   66.013    2704```
Thoughts
• I think this "OPR Prediction Percentage" is a relatively good metric to tell how "good" OPR is for a given year.
• CCWM is almost always worse than OPR, which I found interesting.
• I thought 2015 would have a much better OPR prediction score than 2014 did, but it turns out they were within a percent of each other.

Method
I used a Python script, pulling data from TBA to get OPRs and playoff results. For each event in a year, I got the OPRs/CCWMs for each team (which TBA calculates only from quals matches, I believe). I then went through each individual playoff game (not the best-of-3 series) and compared the two alliances' scores to determine the winner (not exactly how the tournament worked in 2015, but accurate for pretty much every other year). If the summed OPRs/CCWMs predicted the same winner as who actually won that game, it counted as a correct prediction. I decided to simply ignore ties and bad data from TBA.

Code:
```# I believe Python 2.7.9 or greater is required for this script
import requests

headers = {'X-TBA-App-Id': 'frc4917:customOPRCalculator:1'}

def get_alliance_stat(stats, alliance):
    # Sum the stat (OPR or CCWM) over every team on an alliance
    total_stat = 0
    for team in alliance['teams']:
        total_stat += stats[team[3:]]  # strip the 'frc' prefix from the team key
    return total_stat

def prediction_percentage(stats, matches):
    total = 0
    correct = 0
    for match in matches:
        redAlliance = match['alliances']['red']
        blueAlliance = match['alliances']['blue']
        if redAlliance['score'] == blueAlliance['score']:  # ignore ties
            continue
        try:
            redAllianceStat = get_alliance_stat(stats, redAlliance)
            blueAllianceStat = get_alliance_stat(stats, blueAlliance)
        except KeyError:  # ignore matches with teams missing from the stats
            continue

        if (redAllianceStat > blueAllianceStat) == (redAlliance['score'] > blueAlliance['score']):
            correct += 1
        total += 1

    return correct, total

def do_event(event_code, totals, playoffs_only=True):
    url = 'https://www.thebluealliance.com/api/v2/event/' + event_code + '/stats'
    r = requests.get(url, headers=headers)
    stats_contents = r.json()
    if not ('oprs' in stats_contents and stats_contents['oprs'] and stats_contents['ccwms']):
        return

    url = 'https://www.thebluealliance.com/api/v2/event/' + event_code + '/matches'
    r = requests.get(url, headers=headers)
    matches_contents = r.json()
    if playoffs_only:
        matches_contents = [x for x in matches_contents if x['comp_level'] != 'qm']

    correct, total = prediction_percentage(stats_contents['oprs'], matches_contents)
    totals['opr_correct'] += correct
    totals['num_games'] += total
    correct, total = prediction_percentage(stats_contents['ccwms'], matches_contents)
    totals['ccwm_correct'] += correct

for year in range(2002, 2017):
    url = 'https://www.thebluealliance.com/api/v2/events/' + str(year)
    r = requests.get(url, headers=headers)
    events_contents = r.json()

    totals = {'num_games': 0, 'opr_correct': 0, 'ccwm_correct': 0}
    for event in events_contents:
        do_event(event['key'], totals)

    print(year)
    print(totals)
    print('OPR ' + str(totals['opr_correct'] / float(totals['num_games'])))
    print('CCWM ' + str(totals['ccwm_correct'] / float(totals['num_games'])))```
I looked around a bit, and hadn't seen this type of evaluation of OPR on a year by year basis - sorry if I missed an existing one!

What do you think 2017's "OPR Prediction Percentage" will be?
#2
02-24-2017, 05:52 PM
 Eugene Fang The Blue Alliance FRC #0604 (Quixilver) Team Role: Alumni Join Date: Jan 2007 Rookie Year: 2000 Location: Bay Area, CA -> Pittsburgh, PA Posts: 1,061
Re: Quals OPR Predicting Elims Results: Year By Year

Neat! I was curious what the accuracy would be if you just guessed that "Red" would win every playoff match.

For 2016, across 2223 matches (this is lower than your 2704 probably because I did this on my local development server without any offseasons), guessing "Red" every time yields 66.93% accuracy. It's better than CCWM!
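This always-pick-Red baseline can be sketched in a few lines, using the same match structure as the script in the first post (the sample matches below are made up):

Code:
```# Baseline: how often does guessing "Red wins" get playoff matches right?
def red_baseline(matches):
    total = 0
    red_wins = 0
    for match in matches:
        red = match['alliances']['red']['score']
        blue = match['alliances']['blue']['score']
        if red == blue:  # ignore ties, as the original script does
            continue
        if red > blue:
            red_wins += 1
        total += 1
    return red_wins, total

# Made-up sample: Red wins two of three matches
sample = [
    {'alliances': {'red': {'score': 100}, 'blue': {'score': 80}}},
    {'alliances': {'red': {'score': 50},  'blue': {'score': 90}}},
    {'alliances': {'red': {'score': 70},  'blue': {'score': 60}}},
]
wins, total = red_baseline(sample)  # 2 of 3 here
```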
__________________
Eugene Fang
2010 Silicon Valley Regional Dean's List Finalist

Various FLL Teams - Student (2000-2006), Mentor (2007-2010)
FRC Team 604 - Student (2007-2010), Mentor/Remote Advisor (2011-Present)
FRC Team 1323 - Mentor/Remote Advisor (2011-2014)

The Blue Alliance | TBA GameDay | TBA Android App | TBA Blog | TBA Swag
#3
02-24-2017, 06:02 PM
 Caleb Sykes Knock-off Dr. Strange AKA: inkling16 no team (The Piztons) Join Date: Feb 2011 Rookie Year: 2009 Location: Minneapolis, Minnesota Posts: 1,770
Re: Quals OPR Predicting Elims Results: Year By Year

Interesting thoughts. Consider looking at how well the predicted score differentials match the actual score differentials.

Quote:
 Originally Posted by microbuns I think this "OPR Prediction Percentage" is a relatively good metric to tell how "good" OPR is for a given year.
In my opinion, it's really more of a metric of how closely playoff play mirrors qual play. There are better teams, different strategies, and sometimes different scoring criteria in playoffs than in quals. So be wary of using qual OPR to predict playoff matches.
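Caleb's margin suggestion could look something like this: pair each match's predicted margin (summed red OPR minus summed blue OPR) with its actual margin, then examine how well they track. The structures mirror the first post's script; the stats and match below are made up:

Code:
```# Pair predicted margins (OPR sums) with actual score margins per match
def predicted_and_actual_margins(stats, matches):
    pairs = []
    for match in matches:
        red = match['alliances']['red']
        blue = match['alliances']['blue']
        predicted = sum(stats[t[3:]] for t in red['teams']) - \
                    sum(stats[t[3:]] for t in blue['teams'])
        actual = red['score'] - blue['score']
        pairs.append((predicted, actual))
    return pairs

# Made-up OPRs and one made-up playoff match
stats = {'1': 30.0, '2': 20.0, '3': 10.0, '4': 25.0, '5': 15.0, '6': 5.0}
matches = [{
    'alliances': {
        'red':  {'teams': ['frc1', 'frc2', 'frc3'], 'score': 65},
        'blue': {'teams': ['frc4', 'frc5', 'frc6'], 'score': 40},
    },
}]
pairs = predicted_and_actual_margins(stats, matches)  # predicted +15, actual +25
```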
#4
02-24-2017, 06:30 PM
 Caleb Sykes Knock-off Dr. Strange AKA: inkling16 no team (The Piztons) Join Date: Feb 2011 Rookie Year: 2009 Location: Minneapolis, Minnesota Posts: 1,770
Re: Quals OPR Predicting Elims Results: Year By Year

Here is an estimate for how good quals OPR is at predicting future quals OPR matches. Data come from my Comparison of Statistical Prediction Models paper.

Here is the percentage of correctly predicted quals matches for each year given all previous qual matches:

Code:
```year	correct predictions
2009	67.2%
2010	71.9%
2011	73.9%
2012	71.0%
2013	72.9%
2014	69.7%
2015	70.1%
2016	71.6%```
Both of our methods seem to provide very similar results, with a correlation coefficient of 0.89. This surprises me; I would have anticipated the difference between quals and playoff matches to be higher.
#5
02-24-2017, 06:40 PM
 plnyyanks Data wins arguments. AKA: Phil Lopreiato no team (The Blue Alliance) Team Role: Engineer Join Date: Apr 2010 Rookie Year: 2010 Location: NYC Posts: 1,219
Re: Quals OPR Predicting Elims Results: Year By Year

I'll provide a tangential data point I calculated last year; it gives some additional insight into who wins playoff matches:

Quote:
 Originally Posted by plnyyanks
I was curious too, so I whipped up a quick script to find out. Script and full output are available on my GitHub: https://gist.github.com/phil-lopreia...276a608edc7de4
Code:
```Overall 266 of 435 events were won by top seeds (61.1494252874 percent)
In 2011, 41 of 62 events were won by top seeds (66.1290322581 percent)
In 2012, 46 of 73 events were won by top seeds (63.0136986301 percent)
In 2013, 46 of 81 events were won by top seeds (56.7901234568 percent)
In 2014, 51 of 102 events were won by top seeds (50.0 percent)
In 2015, 82 of 117 events were won by top seeds (70.0854700855 percent)```
So Car Nack's prediction of <25% would make this year a pretty significant outlier when compared to the past 5.
And here's the calculation from last year:

Code:
`In 2016, 82 of 134 events were won by top seeds (61.1940298507 percent)`

Last edited by plnyyanks : 02-24-2017 at 09:33 PM. Reason: I probably should fix that in the script....
#6
02-24-2017, 06:45 PM
 Caleb Sykes Knock-off Dr. Strange AKA: inkling16 no team (The Piztons) Join Date: Feb 2011 Rookie Year: 2009 Location: Minneapolis, Minnesota Posts: 1,770
Re: Quals OPR Predicting Elims Results: Year By Year

Code:
`In 2016, 134 of 82 events were won by top seeds (61.1940298507 percent)`
Man, top seeds were so good last year that they were winning events they didn't even attend. Car Nack has gotten things wrong before, but he wasn't even close this time.
#7
02-24-2017, 07:25 PM
 Brian Maher Undeserving of a fan club FRC #2791 (Shaker Robotics) Team Role: College Student Join Date: Apr 2014 Rookie Year: 2012 Location: Troy, NY Posts: 1,367
Re: Quals OPR Predicting Elims Results: Year By Year

Here are a couple questions: How often do red alliances with the higher OPR win? How often do blue alliances with the higher OPR win?
#8
02-24-2017, 08:12 PM
 microbuns Software + Drive Coach AKA: Sam Maier FRC #4917 (Sir Lancerbot) Team Role: Mentor Join Date: Jan 2015 Rookie Year: 2014 Location: Elmira, ON Posts: 183
Re: Quals OPR Predicting Elims Results: Year By Year

Quote:
 Originally Posted by Brian Maher Here are a couple questions: How often do red alliances with the higher OPR win? How often do blue alliances with the higher OPR win?
Just took a look at this - really interesting question.

Code:
```YEAR  BLUE FAVOURED WIN%  RED FAVOURED WIN%
2002       42.168           64.864
2003       52.381           66.037
2004       56.666           65.413
2005       54.081           72.690
2006       54.961           69.309
2007       45.209           61.904
2008       55.102           77.221
2009       55.326           72.032
2010       51.086           78.000
2011       51.063           80.952
2012       54.252           74.082
2013       58.726           76.648
2014       59.772           72.210
2015       45.681           76.958
2016       54.517           74.723```
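The colour split above can be computed by bucketing each prediction by which alliance the summed OPR favours; a sketch using the first post's data structures, with made-up stats and matches:

Code:
```# Bucket correct predictions by which alliance colour was OPR-favoured
def favoured_win_rates(stats, matches):
    results = {'red': [0, 0], 'blue': [0, 0]}  # [wins, games] per favoured colour
    for match in matches:
        red = match['alliances']['red']
        blue = match['alliances']['blue']
        if red['score'] == blue['score']:
            continue  # ignore ties
        red_stat = sum(stats[t[3:]] for t in red['teams'])
        blue_stat = sum(stats[t[3:]] for t in blue['teams'])
        if red_stat == blue_stat:
            continue  # no favourite
        favoured = 'red' if red_stat > blue_stat else 'blue'
        favoured_won = (red['score'] > blue['score']) == (favoured == 'red')
        if favoured_won:
            results[favoured][0] += 1
        results[favoured][1] += 1
    return results

# Made-up data: the red favourite wins its match, the blue favourite loses its
stats = {'1': 30.0, '2': 10.0}
matches = [
    {'alliances': {'red':  {'teams': ['frc1'], 'score': 50},
                   'blue': {'teams': ['frc2'], 'score': 20}}},
    {'alliances': {'red':  {'teams': ['frc2'], 'score': 40},
                   'blue': {'teams': ['frc1'], 'score': 30}}},
]
results = favoured_win_rates(stats, matches)
```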
#9
03-06-2018, 12:30 AM
 microbuns Software + Drive Coach AKA: Sam Maier FRC #4917 (Sir Lancerbot) Team Role: Mentor Join Date: Jan 2015 Rookie Year: 2014 Location: Elmira, ON Posts: 183
Re: Quals OPR Predicting Elims Results: Year By Year

Since it's intuitive that this year CCWM should predict better than OPR, I decided to also run this script for 2017 and the first week of 2018.

Code:
```YEAR   OPR(%)   CCWM(%)   NUMBER OF PLAYOFF GAMES
2017   61.440   60.788    3374
2018   70.552   70.552    326```
It could be that the sample size is low (so I will have to keep checking this as we get more competitions), but this surprised me. First, it's strange that 2018's OPR prediction is so close to years where OPR seemed quite strong, like 2015. Second, CCWM predicted exactly as many matches correctly as OPR did, even though there were 60 matches (~18.5%) where they predicted different winners.

Perhaps OPR is still a good metric this year?
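The disagreement count can be isolated with a small helper: for each match, compare which alliance OPR favours against which alliance CCWM favours. Structures follow the first post's script; the stats below are made up:

Code:
```# Count matches where summed OPR and summed CCWM favour different alliances
def predicted_winner(stats, match):
    red = match['alliances']['red']
    blue = match['alliances']['blue']
    red_stat = sum(stats[t[3:]] for t in red['teams'])
    blue_stat = sum(stats[t[3:]] for t in blue['teams'])
    return 'red' if red_stat > blue_stat else 'blue'

def disagreements(oprs, ccwms, matches):
    return sum(1 for m in matches
               if predicted_winner(oprs, m) != predicted_winner(ccwms, m))

# Made-up stats where the two metrics disagree about the one match
oprs = {'1': 20.0, '2': 10.0}
ccwms = {'1': 5.0, '2': 15.0}
matches = [{'alliances': {'red':  {'teams': ['frc1']},
                          'blue': {'teams': ['frc2']}}}]
n = disagreements(oprs, ccwms, matches)  # OPR says red, CCWM says blue
```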
#10
03-06-2018, 01:01 AM
 Ginger Power Founder of COR Robotics AKA: Ryan Swanson no team Join Date: Jan 2014 Rookie Year: 2013 Location: Minnesota Posts: 1,541
Re: Quals OPR Predicting Elims Results: Year By Year

Quote:
 Originally Posted by microbuns Since it's intuitive that this year CCWM should predict better than OPR, I decided to also run this script for 2017 and the first week of 2018. Code: ```YEAR OPR(%) CCWM(%) NUMBER OF PLAYOFF GAMES 2017 61.440 60.788 3374 2018 70.552 70.552 326``` It could be that we have a low sample size (so I will have to continue to check this as we get more competitions), but this surprised me. First, it's strange that 2018's OPR prediction is so close to years where it seemed quite strong, like 2015. Secondly, CCWM predicted exactly as many matches correct as OPR did, and there was 60 matches they predicted differently (so ~18.5%). Perhaps OPR is still a good metric this year?
This is a really interesting find... as Karthik stated in another thread, OPR is useful for determining who is good at maintaining ownership of the Scale and Home Switch. The ability to do those things is very obviously critical to a winning playoff alliance.

My thought is that OPR is good for predicting the value of the top cycling robots on an alliance, but not good at predicting the value of the 2nd pick on an alliance.

To me the fact that OPR is a good predictor of Playoff Success might indicate that the 3rd robot on a playoff alliance is close to inconsequential. This makes a little sense since most alliances just used their 3rd robot to fill the Vault while the top 2 alliance members cycled cubes.

I'll be interested to see if this metric changes as the meta of the game changes. I have a feeling the role of the 3rd robot will become more complex in future weeks, and the value that the 3rd robot adds to an alliance will go up as well.

In the end though this game is won or lost by ownership points. Climbing doesn't matter and neither does the Vault if your alliance can't own anything for a fair duration of the match. OPR is good for predicting ownership so it could turn out to be a pretty solid predictor.
#11
03-06-2018, 01:30 AM
 Citrus Dad Business and Scouting Mentor AKA: Richard McCann FRC #1678 (Citrus Circuits) Team Role: Mentor Join Date: May 2012 Rookie Year: 2012 Location: Davis Posts: 1,354
Re: Quals OPR Predicting Elims Results: Year By Year

I think the OPR will be a good predictor of relative strength within an event. It might even work better than scouting data because of the vagaries around the effectiveness of individual cubes in scoring ownership points.

However, I don't think the OPRs are comparable at all across events. The OPR is about ownership shares, and those are largely a function of the opponents a team faces. That was not true in the past, when scoring was either linear or a step function. Even when there was a score cap (2015), alliances didn't approach the theoretical ceiling. That's not the case this year. So OPR will be useless for ranking the "top" teams this year.
#12
03-06-2018, 11:23 AM
 Whatever Registered User FRC #2502 Join Date: Apr 2016 Location: MN Posts: 310
Re: Quals OPR Predicting Elims Results: Year By Year

One thing that varies year to year is the fundamental predictability of the game. This year's game may be more predictable than previous years.

The ultimate simulation of a possible match is to take the robots involved and have them run a match. We effectively do that in eliminations with best-of-3 series. If a game sees a lot of 2-1 elimination matchups, it suggests the game itself isn't that predictable; a lot of 2-0 matchups suggests it is. I would suggest adding a column with the percentage of 2-0 elimination results.

2016 and 2017 had a lot more 2-1 results in week 1 than 2018 did.

Plus, I believe the numbers are on a per-match basis, not a who-advances basis. More 2-1 results will pull even the best predictor towards 67%. I am seeing that a perfect simulator in 2018 week 1 would rate about 83%; in 2016 (week 1 only), a perfect simulator would rate about 75%.
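That ceiling follows from a little arithmetic: a predictor that always picks the eventual series winner gets both games of a 2-0 series but only 2 of 3 games in a 2-1 series. A sketch (the series counts below are made up, not actual 2016/2018 data):

Code:
```# Best possible per-match accuracy given the mix of series outcomes
def perfect_predictor_ceiling(two_zero, two_one):
    correct = 2 * two_zero + 2 * two_one  # all of a 2-0, two of three in a 2-1
    total = 2 * two_zero + 3 * two_one
    return correct / total

all_sweeps = perfect_predictor_ceiling(10, 0)  # 1.0: every series is 2-0
all_splits = perfect_predictor_ceiling(0, 10)  # 2/3: every series is 2-1
mixed = perfect_predictor_ceiling(10, 20)      # 0.75 with twice as many 2-1s
```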
#13
03-13-2018, 09:59 PM
 microbuns Software + Drive Coach AKA: Sam Maier FRC #4917 (Sir Lancerbot) Team Role: Mentor Join Date: Jan 2015 Rookie Year: 2014 Location: Elmira, ON Posts: 183
Re: Quals OPR Predicting Elims Results: Year By Year

Week 2 data:
Code:
```YEAR   OPR(%)   CCWM(%)   NUMBER OF PLAYOFF GAMES
2018   72.272   71.887    779```
OPR is now barely ahead of CCWM - I think it's now safe to say that the first week was not an anomaly.
#14
03-13-2018, 10:09 PM
 Caleb Sykes Knock-off Dr. Strange AKA: inkling16 no team (The Piztons) Join Date: Feb 2011 Rookie Year: 2009 Location: Minneapolis, Minnesota Posts: 1,770
Re: Quals OPR Predicting Elims Results: Year By Year

Quote:
 Originally Posted by microbuns Week 2 data: Code: ```YEAR OPR(%) CCWM(%) NUMBER OF PLAYOFF GAMES 2018 72.272 71.887 779``` OPR is now barely ahead of CCWM - I think it's now safe to say that the first week was not an anomaly.
It doesn't really surprise me, from what I found here, OPR had more predictive power than CCWM every year from 2008-2016. My guess for the reason is that the pair of data points (red and blue scores) you get from OPR each have value that the winning margin can't capture alone. I think we would all agree that an alliance losing 249 to 250 this year is probably stronger than an alliance that wins 1-0, but CCWM would value the latter slightly more than the former, and OPR would value the former much more than the latter.
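Putting numbers on that example: the winning margin (what CCWM rewards) slightly prefers the 1-0 winner, while the alliance's own score (what OPR rewards) strongly prefers the 249-point loser:

Code:
```# CCWM is driven by winning margin; OPR by the alliance's own scoring output
def margin(own, opp):
    return own - opp

close_loss = (249, 250)  # strong alliance that loses by one point
narrow_win = (1, 0)      # weak alliance that wins 1-0

# margin: +1 beats -1, so a CCWM-style ranking slightly prefers the winner;
# own score: 249 beats 1, so an OPR-style ranking strongly prefers the loser
ccwm_prefers = 'narrow_win' if margin(*narrow_win) > margin(*close_loss) else 'close_loss'
opr_prefers = 'close_loss' if close_loss[0] > narrow_win[0] else 'narrow_win'
```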

Last edited by Caleb Sykes : 03-13-2018 at 10:13 PM.
