# Is OPR an accurate measurement system?

First off, I would like to start this with a warning. I am NOT trying to degrade any teams or say that they are not as good as they were, and I fully understand that teams do their best in this competition. I am proud to be able to associate myself with these teams.

Now, I don’t know exactly how OPR is calculated, but right now I am seeing a lot of people looking at OPR more than anything else when trying to find the best teams in FRC.

I’m looking at Northern Lights specifically because my team was there, but TBA has teams with OPRs that are significantly lower than they should be. 5232, for example, was breaching defenses with about half of the match left. The lowest OPR out of the top 15 is 28.27 points. The largest miss in my mind is 5232 (Talons). Let’s say 5232 was challenging (which they were) and breaking 3 defenses (which they were). Even outside of elims, where you get 20 points for the breach, their contribution to their alliance is 35… and 5232 isn’t even in the top 15. I get that there can be problems with a system, but in Northern Lights alone I can come up with 5 or 6 teams off the top of my head that should be ranked higher.

So if someone could explain this or help me understand how OPR is this amazing system for ranking teams when I am not seeing it accurately represent teams, I question its validity. (And I highly urge people to watch Northern Lights once the videos come out.)

OPR is not an amazing system for ranking teams. It is a statistical calculation that generally reflects the performance level of teams, but it should never be trusted for accuracy.

Ed Law in the past has created some amazing data sheets. Along with those data sheets, he also made this cool PowerPoint explaining OPR and CCWM: http://file:///home/chronos/u-8e3fa3838534cb9d61fe3aaa9032d8623e623f03/Downloads/Team_2834_Scouting_Database_Presentation_2014.pdf

Robot A is a robot that crosses 10 defenses per match and can therefore score (let’s ignore auto for now) 50 points on their own per match.

Let’s say that Robot A is far and away the best defense crosser at the event - every other team there can only cross 3 defenses per match on average.

In the matches with Robot A and two other robots, the alliance crosses 10 defenses with tons of time left over, and scores 50 points (plus whatever else during auto, from balls, and from endgame).

In the matches without Robot A, three average robots cross 9 defenses (3 each), and score 45 points (plus whatever else during auto, from balls, and from endgame).

What are the OPRs of the robots at this event with respect to defenses? If we play infinite matches (and assume there are a lot of teams), we will eventually find that the “average” robot’s defense OPR is ~1/3 of their average alliance score, so just north of 15 points (since the score is a bit higher in any matches with Robot A). Robot A, the world’s best defense-crossing robot, has an OPR of just under 20 (they account for one extra crossing per match): less than 5 points higher than the OPR of a robot that is less than 1/3 as capable at this aspect of the game.

This is obviously an oversimplification, but it goes to show that because of the finite number of crossings that can be rewarded per match, excelling at this aspect of the game does not actually get rewarded that well on the scoreboard (and it will be rewarded even less as the season goes on and drivetrains have their kinks ironed out). This of course does not factor in second-order benefits like an exceptional crosser freeing up teammates to score balls, etc.
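To make the thought experiment concrete, here is a minimal simulation sketch (Python with NumPy; the team count, match count, and schedule are invented, not real event data). It builds the standard OPR participation matrix and solves the least-squares system, showing that the dominant crosser only earns a few extra OPR points:

```python
import numpy as np

rng = np.random.default_rng(0)
n_teams = 30      # team index 0 plays the role of "Robot A"
n_matches = 300   # hypothetical schedule of 3-robot alliances

# Participation matrix: one row per alliance-match,
# with a 1 in each column for a team that played.
A = np.zeros((n_matches, n_teams))
scores = np.zeros(n_matches)
for m in range(n_matches):
    alliance = rng.choice(n_teams, size=3, replace=False)
    A[m, alliance] = 1.0
    # Only 10 crossings can count: 50 pts with Robot A, 45 without
    scores[m] = 50.0 if 0 in alliance else 45.0

# OPR is the least-squares solution of A @ opr ≈ scores
opr, *_ = np.linalg.lstsq(A, scores, rcond=None)
print(f"Robot A OPR: {opr[0]:.1f}")          # ~20
print(f"average robot OPR: {opr[1:].mean():.1f}")  # ~15
```

With enough matches the solve lands almost exactly on 20 for Robot A and 15 for everyone else, matching the hand calculation above: a robot more than 3x as capable is barely 5 OPR points ahead.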

You linked to a local file on your computer, I think you meant this.

What OPR / CCWM always excelled at was never being an absolute ranking of teams. It is, however, a better sort than average score or ranking.

In games where the scoring actions of different teammates are more separable, like in 2010 or 2013, OPR is more accurate. In games where scoring actions are less separable, like 2014, OPR is much less accurate.

OPR never will be better than actual data at ranking the quality of teams, and a team’s OPR will never exactly match the team’s actual scoring output. It’s just a rough starting point that is better than other methods in the absence of actual match data.

But my question still stands: how valid is OPR if there are teams that can be tossed out of rank just because of a poor schedule, their teammates being broken, etc.?

Karthik’s views on OPR. YMMV.

Yep, you’re right. I always forget Chromebooks do stupid things. It seems like it randomly generated a link for that PDF I downloaded, since Chromebooks do hardly anything offline.

Since OPR is calculated under the assumption that every team is playing at their normal ability every match, any situation where a team is playing below (or above) their ability is going to mess up OPR calculations, not only for them but for the other teams in their matches. The same goes for DPR (which essentially calculates how many points a team allows their opponents to score per match).

If you assume that every robot makes the same contribution to the score in every match regardless of their teammates, it is a perfect model.

The less that assumption holds, the less the model is perfect. It’s usually accurate for gross estimation of team ability (top quartile vs. bottom quartile, etc.) and for finding outliers (the rare team that is several standard deviations better than the mean), but I wouldn’t trust it too much beyond that, especially early in the season (where match-to-match contributions tend to vary a lot).

OPR is the least-squares solution to an overdetermined system of equations.

If you’ve ever done statistics at school, you can think of it sort of like a linear regression, but with many more variables. If you’ve got 3 points that form a triangle on a scatter plot, you can’t make a single line go through them all. So you fit a “best fit line,” knowing there will be some error in your regression.
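Here is a toy illustration of that matrix view (the team numbers and alliance scores below are completely invented). Each match contributes one equation of the form “sum of the alliance’s OPRs = alliance score”; with more matches than teams, no exact solution exists, so we take the best fit:

```python
import numpy as np

# Invented mini-event: 4 teams, 6 alliance scores
# (more equations than unknowns, so the system is overdetermined)
teams = [111, 222, 333, 444]          # hypothetical team numbers
matches = [
    ((111, 222, 333), 40),
    ((111, 222, 444), 46),
    ((111, 333, 444), 50),
    ((222, 333, 444), 54),
    ((111, 222, 333), 42),            # same pairing, different score:
    ((222, 333, 444), 56),            # no exact solution can exist
]

idx = {t: i for i, t in enumerate(teams)}
A = np.zeros((len(matches), len(teams)))
b = np.zeros(len(matches))
for row, (alliance, score) in enumerate(matches):
    for t in alliance:
        A[row, idx[t]] = 1.0
    b[row] = score

# The "best fit" through an inconsistent system,
# exactly like a best-fit line through scattered points
opr, residuals, *_ = np.linalg.lstsq(A, b, rcond=None)
for t in teams:
    print(t, round(opr[idx[t]], 2))
```

The leftover `residuals` term is the regression error the post describes: the amount by which no assignment of per-team OPRs can perfectly explain every match score.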

When there is a strong correlation between OPR and actual contribution, OPR is very well suited to assess a team’s point contribution in a match. We are most likely to see a strong correlation between OPR and actual point contribution in years when scoring is linear and non-excludable. For example, in 2013, if you scored a Frisbee in the high goal it was 3 points, no matter what. 2 Frisbees? 6 points. 10 Frisbees? 30 points. Additionally, one team scoring Frisbees usually did not prevent their partner from scoring Frisbees (except for some cases with FCS draining all discs from the Human Player Stations).

However, sometimes the correlation is weaker, more like this:
http://surveyanalysis.org/images/thumb/d/dc/WeakPositiveCorrelation.png/400px-WeakPositiveCorrelation.png
This is usually observed when there is some non-linearity in scoring or excludability between partners. In this year’s game, defenses are non-linear (crossings only count the first 2 times a defense is crossed) and excludable among partners (i.e., one team crossing the low bar twice excludes their partners from scoring points for doing so). Excludability, diminishing marginal returns, and plateaus for scoring are generally bad news for using OPR to predict scoring contribution. It gets more muddled when things like the incentives from the ranking system, the random pairing of alliances, etc. come into play. We have a lot of that this year.

In 2015, OPR was more useful because the limit of 3-7 Recycling Containers (depending on canburgling) was hit less commonly than a breach is this year, especially in qualification matches. Additionally, your sole ranking incentive was scoring as many points as possible, so there weren’t really reasons to deviate from scoring as many points as you could all the time.

Bottom line: understand what OPR actually is before you use it. It IS a useful tool for roughly understanding a team’s relative contribution at an event (within some margin of error). It IS NOT a reasonable justification for picking a team with an OPR of 30 instead of another team with an OPR of 29. If you’re comparing a team with an OPR of 40 to one with an OPR of 5 and there’s a reasonable sample size? Sure, there’s probably a good reason for the discrepancy.

OK, this actually makes some sense. I had just been looking at the top 15 at Northern Lights, where I had been talking to our scouters constantly, and I just didn’t understand why the OPRs on TBA were not what I saw or was told. But the linear regression idea makes a lot more sense.

OPR using match scores can be misleading.

Finding component OPR numbers can be useful depending on what you are looking for.

Would you like to learn? I can post some links to discussion threads here on CD that are written at an accessible level.

… and if you have any questions I – and others I’m sure – would be glad to answer them.

To get the right answer, you first have to ask the right question …
[INDENT]And[/INDENT]
“… all models are wrong, but some are useful.”
[INDENT]George Box[/INDENT]

OPR is what it is, and the OPR equations compute OPRs 100% accurately.

You need to ask/determine whether OPR is a useful tool for your purpose (or ask what things OPR is useful for).

I personally think that Chairman’s Award submissions are a better (but still imperfect) tool to use than OPR is, if I’m (quoting the OP) searching for “… the best teams in FRC.”

Blake

Some teams have systems where scouters track irregularities like broken robots and penalties, and include that data in the OPR calculations to make them more accurately reflect the performance of robots.

This. Too many people bash on OPR, and not enough people bash on the rankings.

Is anyone actually going to be computing component OPRs this year? I believe Ed Law is not doing that this year, so I need to find a new master scouting database to reference.

With the data that FIRST provides through the FRC Event API, we can certainly do much better than your typical OPR. For example, I can pull down that data and know exactly which defenses were on the field and which of those were crossed and damaged in every match any team played in. I can know exactly how many balls were scored in which goals, how many robots challenged, and how many robots climbed. Proper statistical analysis (think OPR, but for each individual category instead of just overall score) can get you much more detailed and specific data. It won’t be the whole story, but I would be willing to bet it would be more accurate than just the overall OPR. And more useful in assembling an eliminations alliance with the capabilities you want.
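A sketch of that per-category idea (all team numbers and component scores below are invented, not real Stronghold data): build the same participation matrix once, then solve the least-squares system with one right-hand-side column per scoring component. A handy property to check is that the component OPRs sum to the total-score OPR, because the least-squares solution is linear in the right-hand side:

```python
import numpy as np

# Invented match data: (alliance, defense pts, boulder pts, endgame pts)
matches = [
    ((11, 22, 33), 40, 20, 15),
    ((11, 22, 44), 45, 25, 10),
    ((11, 33, 44), 50, 15, 15),
    ((22, 33, 44), 40, 30, 10),
    ((11, 33, 55), 35, 20,  5),
    ((22, 44, 55), 30, 25, 10),
    ((33, 44, 55), 45, 10, 15),
    ((11, 22, 55), 40, 15, 10),
]

teams = sorted({t for alliance, *_ in matches for t in alliance})
idx = {t: i for i, t in enumerate(teams)}
A = np.zeros((len(matches), len(teams)))
B = np.zeros((len(matches), 3))       # one column per component
for row, (alliance, *components) in enumerate(matches):
    for t in alliance:
        A[row, idx[t]] = 1.0
    B[row] = components

# One solve gives a column of component OPRs per category
comp_opr, *_ = np.linalg.lstsq(A, B, rcond=None)

# Sanity check: component OPRs add up to the total-score OPR
total_opr, *_ = np.linalg.lstsq(A, B.sum(axis=1), rcond=None)
print(np.allclose(comp_opr.sum(axis=1), total_opr))  # True
```

With the real FRC Event API breakdowns in place of the made-up columns, the same solve would give per-team estimates for crossings, boulders, and endgame separately, which is far more useful for alliance selection than a single blended number.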