paper: OPR-based Ranking of FRC Teams over 2008-2013

Thread created automatically to discuss a document in CD-Media.

OPR-based Ranking of FRC Teams over 2008-2013
by: Basel A

A ranking of FRC Teams based on their Average Season OPRs over the past 10 years (each year equally weighted). Includes Average Season Z-score and Average Rank. Now includes Z-score Prediction based on a time-derated average.

I created this spreadsheet for use in a discussion of the top 10 FRC teams of all time, but it may prove interesting for everyone. Using raw data from Team 2834 and Caleb Sykes, I found the Average Event OPR for each team over each season. This data was used to rank teams by their average yearly rank and their average yearly z-score.
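
A rough sketch of that calculation in Python (the input format, field names, and function names here are placeholders for illustration, not the actual layout of the spreadsheet or the raw databases):

```python
from collections import defaultdict
import statistics

# Hypothetical input: one dict per team per event, e.g.
# {"team": 1114, "year": 2012, "opr": 55.3}. The real raw data comes from
# Team 2834's scouting databases and Caleb Sykes' scrapes, in spreadsheet form.

def average_season_oprs(rows):
    """Average Event OPR for each (team, year)."""
    per_team_year = defaultdict(list)
    for row in rows:
        per_team_year[(row["team"], row["year"])].append(row["opr"])
    return {key: sum(oprs) / len(oprs) for key, oprs in per_team_year.items()}

def season_z_scores(avg_oprs):
    """Convert each season's average OPRs into z-scores within that season."""
    by_year = defaultdict(list)
    for (_, year), opr in avg_oprs.items():
        by_year[year].append(opr)
    stats = {y: (statistics.mean(v), statistics.pstdev(v)) for y, v in by_year.items()}
    return {(team, year): (opr - stats[year][0]) / stats[year][1]
            for (team, year), opr in avg_oprs.items()}

def rank_by_average_z(z_scores):
    """Rank teams by the mean of their yearly z-scores (each year weighted equally)."""
    per_team = defaultdict(list)
    for (team, _), z in z_scores.items():
        per_team[team].append(z)
    averages = {team: sum(zs) / len(zs) for team, zs in per_team.items()}
    return sorted(averages.items(), key=lambda item: item[1], reverse=True)
```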

I’ve now added future Z-Score Prediction based on a time-derated average. This is the same system Jim Zondag uses in his CMP History Results. It’s a basic guess of how good a team will be in the next year. E.g. for 2015, it’s a time-derated average of 2008-2014 z-scores.
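
The exact derating weights aren't spelled out here, so as a minimal sketch, here is one way a time-derated average can work, assuming an exponential decay where recent seasons count more (the decay factor is a placeholder, not necessarily the weighting Jim or the spreadsheet actually uses):

```python
def time_derated_average(z_by_year, predict_year, decay=0.7):
    """Weighted average of past z-scores, weighting recent seasons more heavily.

    The 0.7 decay factor is an assumed placeholder; the spreadsheet's actual
    weights may differ. A season n years before predict_year gets weight decay**(n - 1).
    """
    weighted_sum = 0.0
    weight_total = 0.0
    for year, z in z_by_year.items():
        if year >= predict_year:
            continue  # only seasons before the prediction year count
        weight = decay ** (predict_year - year - 1)
        weighted_sum += weight * z
        weight_total += weight
    return weighted_sum / weight_total if weight_total else None

# For example, a 2015 prediction from 2008-2014 z-scores (values made up):
history = {2008: 2.1, 2009: 1.5, 2010: 2.8, 2011: 2.4, 2012: 3.0, 2013: 2.6, 2014: 2.2}
print(time_derated_average(history, 2015))
```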

Remember that OPR isn’t perfect, or even close to it. Don’t take the results too seriously.

Includes filtering by location (state and/or country) or by district region. Incorrect locations or districts should be minimal to non-existent (PM me about any errors). Only the USA and Canada can be filtered by state/province. All other countries use “Other” as the state.

6_Years_of_Average_OPRs.xlsx (943 KB)
Time Derated Power Rating.xlsx (617 KB)
7_Years_of_Average_OPRs + ratings.xlsx (2.31 MB)
8_years wo calculation.xlsx (2.26 MB)
9_years_wo_calculation.xlsx (2.93 MB)
10_years_wo_calculation.xlsx (5.46 MB)
10_years_wo_calculation.xlsx (4.4 MB)

I created this spreadsheet for use in a discussion of the top 10 FRC teams of all time, but there seemed to be significant interest, so I updated it for 2013. Using raw data from Team 2834’s Yearly Scouting Databases, I found the Average Event OPR for each team over each season. This data was used to rank teams by their average yearly rank and their average yearly z-score.

If we are to trust OPR, then 1114 and 2056 are by far the best teams of the past 6 years. If you’re looking for a rough top 10, it goes something like this: 1114, 2056, 987, 67, 254, 469, 111, 1717, 25, & 33.

Remember that OPR isn’t perfect, or even close to it. Don’t take the results too seriously.

Includes filtering by location (state and/or country). Incorrect locations should be minimal to non-existent, but please let me know about any errors. Only the USA and Canada can be filtered by state/province. All other countries use “Other” as the state.

By default, the filter only shows Michigan. If you can’t find your team, set the filters to show all.

Cool resource, Basel. The ranking by state is also fun to look at.

Basel,

Pretty impressive spreadsheet. It’s a very interesting way of trying to objectively determine the top teams. I’m not sure OPR was especially representative in every year from 2008-2013, but it’s much better than nothing, that’s for sure.

Without really looking at every team and their position, it would be really hard to argue with a team’s position +/- a small (5? 10?) number of spots.

Very cool, thanks!

Updated this to include the 2014 data. Also added Z-Score Prediction based on a time-derated average (the same system Jim uses in his CMP history chart).

Top 10 got shaken up a bit, probably because the 2014 OPRs are simply awful, almost as bad as 2009. Which is fine, because this is just for fun.

P.S. Why would you download the old ones? Don’t download the old ones…

I updated this for 2017 while thinking about this thread. Lots of data, ranking and projection by location, etc. Many of you will find the results interesting, whether you’ve looked at them in the past or not. Info is available in the paper link from the first post. However, I’m posting here to start a discussion about a problem I’ve been thinking about.

I calculate an OPR Z-Score (basically, the number of standard deviations away from the season average) in order to normalize scores between years. However, the OPR distribution is different each year, so this is an imperfect normalization. This can best be seen by looking at the top Z-Score from each year*:

Year	Top Z-Score
2008	6.25
2009	4.05
2010	6.70
2011	5.15
2012	4.95
2013	5.48
2014	4.58
2015	6.89
2016	5.00
2017	3.63

So in 2017, almost every good team’s projected z-score went down, which means it’s not normalizing across years very well. What’s the best solution? Scaling every Z-Score such that the max from each year is equal? Scaling such that the 75th percentile from each year is equal? Something else?
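
To make the options concrete, here is a minimal sketch of the two rescalings in question (the target values are arbitrary reference points and the function names are just for illustration):

```python
import statistics

def rescale_by_max(z_by_team, target_max=5.0):
    """Scale one season's z-scores so that season's maximum equals target_max."""
    factor = target_max / max(z_by_team.values())
    return {team: z * factor for team, z in z_by_team.items()}

def rescale_by_75th(z_by_team, target_p75=1.0):
    """Scale one season's z-scores so that season's 75th percentile equals target_p75."""
    # statistics.quantiles(..., n=4) returns [Q1, Q2, Q3]; index 2 is the 75th percentile.
    p75 = statistics.quantiles(z_by_team.values(), n=4)[2]
    factor = target_p75 / p75
    return {team: z * factor for team, z in z_by_team.items()}
```

One consideration: scaling to the max pins each year to a single team, so one outlier season drags everyone else with it, while scaling to the 75th percentile is less sensitive to the very top but changes what a top z-score means.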

*I don’t think this is a reflection of how great the best team was each year. Looking at it, I think it’s a measure of how accurate OPR was. The years OPR was best (2015, 2008) have really high top Z-Scores, and the years OPR was trash (2009, 2017) have low ones.

Hi Basel A - Thanks so much for compiling and posting this resource. We are just getting into scouting and it looks like this will be very useful as we move forward. Thanks again! Team 3729. :slight_smile: