OPR after Week Five Events

The OPR/CCWM numbers up through the Week 5 events have been posted; please see

http://www.chiefdelphi.com/media/papers/2174

All events up to Week 5 are now included. I also fixed a small bug on each event page in column T ("Record"): the formula was wrong for rows beyond the first 37 teams.

If you find any errors or have any questions, please let me know.

Weeks 1 thru 5 OPR & CCWM correlation to actual match results.

CCWM Column Headings & meaning:

E	event
M	Match
r1	r2	r3	b1	b2	b3	red & blue alliance teams
rs	red score
bs	blue score
crs	sum of red alliance teams' CCWMs
cbs	sum of blue alliance teams' CCWMs
rw	rs>bs?
crw	crs>cbs?
ccp	rw==crw? (CCWM correct prediction of match outcome?)

OPR Column Headings & meaning:

E	event
M	Match
r1	r2	r3	b1	b2	b3	red & blue alliance teams
rs	red score
bs	blue score
ors	OPR "expected" red alliance score
obs	OPR "expected" blue alliance score
drs	rs-ors
dbs	bs-obs
rgo	rs>ors?
bgo	bs>obs?
rw	rs>bs?
orw	ors>obs? (red win based on OPR scores?)
ocp	rw==orw? (OPR correct prediction of match outcome?)
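As a rough illustration, the OPR prediction columns above can be computed per match as follows. This is a hypothetical sketch, not the actual spreadsheet formulas; the data layout (an `opr` dict and team-number lists) is assumed.

```python
# Hypothetical sketch of how the prediction columns above are derived
# for a single match. The opr dict and team lists are assumed inputs,
# not the actual spreadsheet layout.

def predict_match(opr, red, blue, rs, bs):
    """Return (ors, obs, orw, ocp) for one qualification match."""
    ors = sum(opr[t] for t in red)    # OPR "expected" red score
    obs = sum(opr[t] for t in blue)   # OPR "expected" blue score
    rw = rs > bs                      # red actually won?
    orw = ors > obs                   # OPR predicts red win?
    ocp = rw == orw                   # correct prediction of outcome?
    return ors, obs, orw, ocp

# Example with made-up OPRs and scores:
opr = {111: 30.0, 222: 25.0, 333: 10.0, 444: 20.0, 555: 18.0, 666: 15.0}
ors, obs, orw, ocp = predict_match(opr, [111, 222, 333], [444, 555, 666], 80, 60)
```

The CCWM columns (crs, cbs, crw, ccp) follow the same pattern, with CCWM values in place of OPRs.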


Weeks 1 thru 5 OPR v Actual.xls (1.03 MB)
Weeks 1 thru 5 CCWM vs Actual.xls (823 KB)



Once again, thank you Ed and Ether for posting this.

Summary of global OPR and CCWM match win/loss predictions: 81.72% and 82.47% respectively.
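Those global percentages are just the fraction of matches whose prediction flag (ocp or ccp) is true; a minimal sketch of the tally, with names assumed for illustration:

```python
# Fraction of matches correctly predicted, given a list of booleans
# such as the ocp or ccp column (column names assumed from above).

def prediction_accuracy(flags):
    return 100.0 * sum(flags) / len(flags)

# e.g. 3 correct predictions out of 4 matches:
acc = prediction_accuracy([True, True, False, True])  # 75.0
```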

Does anyone have an idea of the number of teams yet to compete in week 6?

278 teams in Ed’s database haven’t competed yet.

That would be a rough estimate; some teams may have dropped out and won't compete in any events.

Oh, and here’s a link to updated Twitter data for weeks 1 thru 5:

http://www.chiefdelphi.com/forums/showpost.php?p=1254753&postcount=1

As a second-year non-engineering mentor, I was wondering all off-season what my role on the team would be now that the team was able to mostly sustain itself. I discovered OPR and related statistics this year, and they have kept me busy with scouting every week.

Thank you for all the work you do correlating this data each week! It has made FRC so much more interesting for me this year! I love watching the webcasts of high-OPR teams like 1114, and it doesn’t hurt that OPR has made my own team look pretty darn good this year (4th and 2nd in OPR, respectively, at their two district events, even without a banner to show for it).

So from a statistics geek from PA… thanks again!

It’s bittersweet that we have the 5th highest OPR in the world and didn’t qualify for Championship…

Thanks for this. It was a pleasure to meet and win with you last Saturday.

What data are you looking at?

The “Max OPR” column in the OPR Results sheet. It returns the highest OPR earned by that team at any single event. It can be a better number to use when teams make significant improvements to their robots midseason (adding a 7-disc auton, getting a 30-point hang working, tuning code, etc.).
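A sketch of how such a "Max OPR" column could be computed, assuming per-event OPR rows as (event, team, opr) tuples (the real sheet's layout may differ):

```python
# For each team, take the highest OPR it earned at any single event.
# The (event, team, opr) tuple layout is an assumption for illustration.
from collections import defaultdict

def max_opr(event_oprs):
    best = defaultdict(lambda: float("-inf"))
    for _event, team, opr in event_oprs:
        best[team] = max(best[team], opr)
    return dict(best)

rows = [("EV1", 624, 55.2), ("EV2", 624, 71.8), ("EV1", 999, 40.0)]
m = max_opr(rows)  # {624: 71.8, 999: 40.0}
```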

Yes, he was using Max OPR, and they have the 5th highest Max OPR out of all teams across the different regionals. That is a fact. However, some people argue that some regionals are stronger than others. I don’t know how strong Alamo is, and I do not want people to get into a debate. 624 is a great team. Their World OPR ranking is 21.

The reason Ether was questioning it is that when we compare teams across different regionals/districts, we use World OPR, which takes all interactions among teams into account. However, it ranks teams higher if they have been consistently good at different events than teams that did relatively poorly at their early events and improved a lot at later ones. Both ways are valid in considering how good a team is.

My suggestion for 624 is to push for a district model in Texas. In the district model, you don’t need to win an event to qualify; you just need to be consistently good. In fact, in the district model it is possible for the winning alliance’s second-round pick not to make it to the State Championship or World Championship if they were just lucky at one event and did poorly at the other.

Here’s a slightly different view of the data, FWIW.

I computed World OPR rankings based on Week5 data only, Week4 data only, Week3 data only, Weeks3&4 data only, Weeks3,4,&5 data only, and Weeks4&5 data only.

Results:

Weeks             624 Rank

5 only            3
4 only            N/A
3 only            3
4&5 only          4
3,4,&5 only       6
3&4 only          9

So yeah, Team 624 is doing quite well.


OPRs.zip (50.3 KB)



I guess the thing to note in the 624 discussion is that the OPR is a number which reflects past performance and does not predict future results. Every match is open to any alliance winning, whether through multiple miscellaneous technical fouls, superior play, random Murphyisms, or strategic miscellany. It sounds like 624, though playing strongly, has fallen victim to random chance.

It does not predict 100% of outcomes. But its correlation with match win/loss results is unarguably statistically significant (witness earlier posts in this thread), and it is one among many useful scouting tools.

Thanks Ed & Ether for this great resource.

However I do need to quibble about how OPR/CCWM is being discussed, as exemplified by several earlier posts. Picking one of these:

Those numbers are misleading, since OPR/CCWM are calculated from the same data that are being used to test their predictive power, i.e. the training and test sets are the same.

It’s analogous to (although not as extreme as) stating that final qualification ranking is a good predictor of performance in earlier qualifying matches, whereas obviously qualification ranking is a consequence of performance in those matches.

Good practice would use disjoint training and testing sets. I’m sure this analysis has been performed in previous seasons but I didn’t see it from a brief search of CD.

Interestingly, the simple baseline heuristic of “alliance with lower team numbers wins” has 59.1% predictive power for qualification matches this season. I assume that OPR and CCWM are better than that, but not as good as the ~82% claimed above.
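One reasonable reading of that baseline (an assumption on my part: compare the sums of the alliances' team numbers, lower sum predicted to win):

```python
# Baseline heuristic: predict a red win iff red's team numbers sum to
# less than blue's. The sum-based comparison is an assumed reading of
# "alliance with lower team numbers".

def baseline_predicts_red(red, blue):
    return sum(red) < sum(blue)

# Lower-numbered (typically veteran) red alliance is predicted to win:
p = baseline_predicts_red([33, 67, 148], [4500, 4600, 4700])  # True
```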

That’s a valid criticism, which could be addressed by using Weeks 1 thru 4 OPR numbers to predict Week5 outcomes. That would take a little more work, since Week5 may have teams competing for the first time, so they would have no OPR values; those matches would have to be omitted from the analysis.
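A sketch of that out-of-sample protocol, with all data shapes assumed: score each Week5 match with Weeks 1 thru 4 OPRs, and omit any match containing a team with no prior OPR.

```python
# Train on Weeks 1-4 OPRs, test on Week5 matches; matches containing
# a first-time team (no training OPR) are omitted. Data shapes assumed.

def out_of_sample_accuracy(opr_w14, matches):
    correct = total = 0
    for red, blue, rs, bs in matches:
        if any(t not in opr_w14 for t in red + blue):
            continue  # first-time team: no training OPR, omit match
        pred_red = sum(opr_w14[t] for t in red) > sum(opr_w14[t] for t in blue)
        correct += pred_red == (rs > bs)
        total += 1
    return correct / total if total else float("nan")

# Toy example: one correct prediction, one wrong, one omitted.
opr_w14 = {1: 50.0, 2: 10.0, 3: 10.0, 4: 20.0, 5: 20.0, 6: 20.0}
matches = [
    ([1, 2, 3], [4, 5, 6], 70, 50),  # predicted red win, red won
    ([4, 5, 6], [1, 2, 3], 80, 20),  # predicted blue win, red won
    ([1, 2, 7], [4, 5, 6], 65, 55),  # team 7 unrated -> omitted
]
acc = out_of_sample_accuracy(opr_w14, matches)  # 0.5
```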

At Boilermaker Regional I calculated the OPR of teams using their Friday match results (about 8 or 9 matches), and tracked the qualification results on Saturday. OPR predictions were 20 for 24.
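For reference, the per-event OPR fit itself is conventionally a least-squares solve: each alliance's score is modeled as the sum of its teams' contributions. A toy numpy sketch (team numbers, scores, and two-team alliances are made up for brevity; real FRC alliances have three teams):

```python
import numpy as np

# Toy data: each row is (alliance team list, alliance score).
teams = [101, 202, 303, 404]
idx = {t: i for i, t in enumerate(teams)}
alliances = [
    ([101, 202], 60.0),
    ([303, 404], 40.0),
    ([101, 303], 55.0),
    ([101, 404], 50.0),
]

# Build the design matrix: A[row, i] = 1 if teams[i] played in that
# alliance, then solve A @ opr ~= scores in the least-squares sense.
A = np.zeros((len(alliances), len(teams)))
b = np.array([score for _, score in alliances])
for row, (ts, _score) in enumerate(alliances):
    for t in ts:
        A[row, idx[t]] = 1.0

opr, *_ = np.linalg.lstsq(A, b, rcond=None)
# opr[i] is the fitted scoring contribution of teams[i]
```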

I’m guessing the easiest place to track predictive power would be the Championship event.

I’d be interested in substituting unknown values with world averages. I’ll probably do this during Crossroads until each team has played 5 matches.
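A sketch of that substitution policy, using the 5-match threshold mentioned above; all names and data shapes are assumptions:

```python
# Fall back to the world-average OPR for teams that have not yet
# played enough matches (threshold and data shapes are assumptions).

def imputed_opr(opr, team, world_avg, matches_played, min_matches=5):
    if team in opr and matches_played.get(team, 0) >= min_matches:
        return opr[team]
    return world_avg

v1 = imputed_opr({1: 40.0}, 1, 20.0, {1: 6})  # 40.0 (enough matches)
v2 = imputed_opr({1: 40.0}, 2, 20.0, {1: 6})  # 20.0 (no OPR yet)
```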

OPR and CCWM are calculated using only qualification match results, right?

So, one could test their predictive value using the elimination results of that event.

OK, here are the qual matches of the Week5 events predicted by World OPR rank based on Weeks 1 thru 4 data only: 72% correct.

This is a pessimistic estimate, since data gathered during the Week5 events for already-played qual matches could be used to supplement the weeks 1 thru 4 data to predict future qual matches in the Week5 events.

Based on the earlier post by efoote868, future qual matches at any given event are best predicted by OPR of already-played matches at that event, once a sufficient number of matches has been played.

Forgot to mention: columns V thru AA list the teams for each match for which no week 1 thru 4 OPR data is available.

OPRw4p5.xls (297 KB)

