#1
Is OPR an accurate measurement system?
First off, I would like to start this with a warning: I am NOT trying to degrade any teams or say that they are not as good as they seem. I fully understand that teams do their best in this competition, and I am proud to be able to associate myself with these teams.

Now, I don't know exactly how OPR is calculated, but right now I am seeing a lot of people looking at OPR more than anything else when trying to find the best teams in FRC. I'm looking at Northern Lights specifically because my team was there, and TBA has teams there with OPRs that are significantly lower than they should be. 5232, for example, was breaching the defenses with about half of the match left. The lowest OPR in the top 15 is 28.27 points, and the biggest miss in my mind is 5232 (Talons). Let's say 5232 was challenging (which they were) and breaking 3 defenses (which they were). Even outside of elims (where the breach is worth 20 points), their contribution to their alliance is 35... and 5232 isn't even in the top 15. I get that any system can have problems, but at Northern Lights alone I can come up with 5 or 6 teams off the top of my head that should be ranked higher. So if someone could explain this, or help me understand how OPR is this amazing system for ranking teams when I am not seeing it accurately represent teams, I would appreciate it; right now I question its validity. (And I highly urge people to watch Northern Lights once the videos come out.)

Last edited by CJ_Elliott : 03-07-2016 at 12:50 PM.
#2
Re: Is OPR an accurate measurement system?
Quote:
#3
Re: Is OPR an accurate measurement system?
Ed Law has created some amazing data sheets in the past. Along with those data sheets, he also made this cool PowerPoint explaining OPR and CCWM: http://file:///home/chronos/u-8e3fa3...ation_2014.pdf
#4
Re: Is OPR an accurate measurement system?
Robot A is a robot that crosses 10 defenses per match, and can therefore score 50 points on its own per match (let's ignore auto for now).

Let's say that Robot A is far and away the best defense crosser at the event: every other team there can only cross 3 defenses per match on average. In the matches with Robot A and two other robots, the alliance crosses 10 defenses with tons of time left over and scores 50 points (plus whatever else from auto, balls, and the endgame). In the matches without Robot A, three average robots cross 9 defenses (3 each) and score 45 points (plus whatever else from auto, balls, and the endgame).

What are the OPRs of the robots at this event with respect to defenses? If we play infinite matches (and assume there are a lot of teams), we will eventually find that the "average" robot's defense OPR is about 1/3 of its average alliance score, so just north of 15 points (since the score is a bit higher in any match that includes Robot A). Robot A, the world's best defense-crossing robot, has an OPR of just under 20, because it only accounts for one extra crossing per match... less than 5 points higher than the OPR of a robot that is less than 1/3 as capable at this aspect of the game.

This is obviously an oversimplification, but it goes to show that, because of the finite number of crossings that can be rewarded per match, excelling at this aspect of the game is not actually rewarded that well on the scoreboard (and it will be rewarded even less as the season goes on and drivetrains have their kinks ironed out). This of course does not factor in second-order benefits, like an exceptional crosser freeing up its teammates to score balls.

Last edited by Jared Russell : 03-07-2016 at 01:09 PM.
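To make the arithmetic concrete, here is a minimal simulation sketch of the scenario described above (Python/NumPy; the team count, match count, and random seed are illustrative assumptions, not anything from the post). It builds a random schedule, scores each alliance using only crossing points, and recovers per-team crossing OPRs by least squares:

```python
# Minimal sketch of the toy scenario above (assumed values, not real event data):
# one dominant crosser ("Robot A") in a field of robots that each cross 3
# defenses per match, with points capped by the 10 rewardable crossings.
import numpy as np

rng = np.random.default_rng(0)
num_teams = 30      # illustrative event size
num_matches = 80    # enough random matches for a well-determined system
ROBOT_A = 0         # index of the dominant defense crosser

rows, scores = [], []
for _ in range(num_matches):
    teams = rng.choice(num_teams, size=6, replace=False)   # random 3v3 pairing
    for alliance in (teams[:3], teams[3:]):
        indicator = np.zeros(num_teams)
        indicator[alliance] = 1.0
        rows.append(indicator)
        # Only 10 crossings per match earn points (5 defenses x 2 each).
        # Robot A does all 10 alone (50 pts); three average robots manage
        # 3 apiece, i.e. 9 crossings (45 pts).
        scores.append(50.0 if ROBOT_A in alliance else 45.0)

A = np.array(rows)      # one row per alliance per match
b = np.array(scores)    # that alliance's crossing points
opr, *_ = np.linalg.lstsq(A, b, rcond=None)

print(f"Robot A crossing OPR:       {opr[ROBOT_A]:.1f}")   # ~20
print(f"Average crossing OPR, rest: {opr[1:].mean():.1f}") # ~15
```

Because the score model here is deterministic, the least-squares fit comes out essentially exact and the gap is right around 5 points; real match data adds noise on top, but the ceiling effect that caps Robot A's apparent advantage is the same.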
#5
Re: Is OPR an accurate measurement system?
Quote:
#6
Re: Is OPR an accurate measurement system?
What OPR / CCWM has always excelled at was never being an absolute ranking of teams; it is simply a better sort than average score or seeding rank.

In games where the scoring actions of different teammates are more separable, like 2010 or 2013, OPR is more accurate. In games where scoring actions are less separable, like 2014, OPR is much less accurate. OPR will never be better than actual data at ranking the quality of teams, and a team's OPR will never exactly match the team's actual point contribution. It is just a rough starting point, one that happens to be better than other methods in the absence of actual match data.
#7
Re: Is OPR an accurate measurement system?
Quote:
#8
Re: Is OPR an accurate measurement system?
Karthik's views on OPR. YMMV.
#9
Re: Is OPR an accurate measurement system?
Quote:
#10
Re: Is OPR an accurate measurement system?
Since OPR is calculated under the assumption that every team plays at its normal ability in every match, any situation where a team plays below (or above) that ability is going to skew the OPR calculations, not only for that team but for every other team in its matches. The same goes for DPR (which essentially estimates how many points a team allows its opponents to score per match).
#11
Re: Is OPR an accurate measurement system?
Quote:
The less that assumption holds, the less accurate the model is. It's usually good enough for gross estimation of team ability (top quartile vs. bottom quartile, etc.) and for finding outliers (the rare team that is several standard deviations better than the mean), but I wouldn't trust it much beyond that, especially early in the season, when match-to-match contributions tend to vary a lot.
#12
Re: Is OPR an accurate measurement system?
OPR is a least-squares solution to an overconstrained system of equations.

If you've ever done statistics at school, you can think of it sort of like a linear regression, but with more than two variables. If you've got 3 points that form a triangle on a scatter plot, you can't make a single line go through them all, so you draw a "best fit line" knowing there will be some error in your regression.

When there is a strong correlation between OPR and actual contribution, like in this example: http://www.mrholloman.net/SCP/Notes/...9/image006.png, OPR is very well suited to assessing a team's point contribution in a match. We are most likely to see a strong correlation between OPR and actual point contribution in years when scoring is linear and non-excludable. For example, in 2013 a Frisbee scored in the high goal was worth 3 points, no matter what. 2 Frisbees? 6 points. 10 Frisbees? 30 points. Additionally, one team scoring Frisbees usually did not prevent their partner from scoring Frisbees (except for some cases where full-court shooters drained all the discs from the Human Player Stations).

However, sometimes the correlation is weaker, more like this: http://surveyanalysis.org/images/thu...orrelation.png. This is usually observed when there is some non-linearity in scoring, or excludability between partners. In this year's game, defenses are non-linear (only the first 2 crossings of each defense count) and excludable among partners (i.e. one team crossing the low bar twice excludes its partner from scoring points for doing so). Excludability, diminishing marginal returns, and scoring plateaus are generally bad news for using OPR to predict scoring contribution. It gets even more muddled when things like the incentives from the ranking system and the random pairing of alliances come into play, and we have a lot of that this year.

In 2015, OPR was more useful because the limit of 3-7 Recycling Containers (depending on canburgling) was hit less often than a breach is this year, especially in qualification matches. Additionally, your sole ranking incentive was scoring as many points as possible, so there wasn't really any reason to deviate from scoring as many points as you could all the time.

Bottom line: understand what OPR actually is before you use it. It IS a useful tool for getting a rough sense of a team's relative contribution at an event (within some margin of error). It IS NOT a reasonable justification for picking a team with an OPR of 30 over another team with an OPR of 29. If you're comparing a team with an OPR of 40 to one with an OPR of 5 and there's a reasonable sample size? Sure, there's probably a good reason for that discrepancy.
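For anyone who wants the formulation spelled out (this is the standard construction, not something quoted from the post above): each alliance in each match contributes one equation saying that the three teams' unknown contributions add up to that alliance's score. Stacking those equations gives many more equations than teams, and OPR is the least-squares fit:

$$
A\,x \approx b, \qquad x_{\text{OPR}} \;=\; \arg\min_{x}\,\lVert A x - b \rVert_2^{2} \;=\; \left(A^{\mathsf T} A\right)^{-1} A^{\mathsf T} b,
$$

where each row of $A$ has a 1 in the columns of the three teams on one alliance in one match and 0 elsewhere, $b$ holds that alliance's score, $x$ is the vector of estimated per-team contributions, and the closed form assumes $A$ has full column rank (i.e. enough matches have been played). DPR comes from the same matrix with $b$ replaced by the opposing alliance's scores, and because the solve is linear, CCWM (the same regression on winning margin) works out to OPR minus DPR.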
#13
Re: Is OPR an accurate measurement system?
Quote:
#14
Re: Is OPR an accurate measurement system?
Simple answer: it's not a great measurement this year, but it's certainly better than the rankings.
#15
Re: Is OPR an accurate measurement system?
OPR using match scores can be misleading.
Finding component OPRs (the same calculation run against individual score components rather than the total match score) can be useful, depending on what you are looking for.